\section{Introduction} \label{sec:intro} \vspace{\baselineskip} \par The visualization of internal organs of small animals {\it in vivo} has become one of the main tasks in preclinical studies over the last decade~\cite{1,2,3}. Single-photon emission computed tomography (SPECT) makes it possible to obtain tomographic images of the biodistribution of radiolabeled compounds, both throughout the patient's body and in individual organs. In contrast to positron emission tomography (PET), SPECT uses radiopharmaceuticals labeled with gamma-emitting radionuclides ($E_{\gamma}$=80--350~keV). SPECT is currently one of the most effective and highly sensitive imaging methods for studying the function of internal organs and tissues, as well as a key tool in the development of new radiopharmaceuticals and in the search for methods of their targeted delivery~\cite{4,5}. Small-animal imaging techniques also help to reduce the number of animals in non-clinical research by providing a way to observe the {\it in vivo} distribution of radiopharmaceuticals and other drugs in a noninvasive manner. \par In the traditional gamma-ray detection device (the Anger gamma camera), the detecting unit consists of a collimator, a scintillator and photomultiplier tubes (PMTs). Gamma radiation passing through the collimator interacts with the scintillation crystal, where the deposited energy is converted into visible light, subsequently detected by the PMTs. The main disadvantage of the gamma camera is its relatively low spatial resolution, which typically amounts to several millimeters or more and is limited by the collimator design~\cite{6}. \par In studies of small animals, the region of interest is typically small, and a high spatial resolution is necessary to obtain a good image. However, increasing the spatial resolution at a given specific activity of the radiopharmaceutical results in larger statistical fluctuations in the image, especially if the detection efficiency is not high enough. This can be compensated by increasing the activity of the radiotracer and the exposure time, but only up to a certain limit determined by the maximum allowed dose and the mobility of the studied object (breathing, occasional movements, etc.)~\cite{5}. This makes good detection efficiency another key property of a SPECT system. \par With the exception of the Compton camera~\cite{7,8}, the main way to form an image in SPECT is to collimate the gamma rays. The quality of the image largely depends on the choice of collimator. The spatial resolution of a gamma camera can be improved using a pin-hole and a high-resolution coordinate detector~\cite{9}. However, this method is not efficient, since only those photons that pass through the hole are detected, while the rest are absorbed by the collimator. The larger the aperture of the pin-hole, the higher the sensitivity; however, the spatial resolution of the system deteriorates as the pin-hole aperture increases~\cite{10}. \par An alternative is to use a coded aperture (CA) mask~\cite{11}. It is a perforated plate with a number of holes (transparent elements) whose locations follow a specific pattern. The plate is made of a heavy material to achieve high gamma absorption. In contrast to a pin-hole collimator, the CA combines high spatial resolution with high sensitivity. In the case of a CA, the image of an object in the detector plane is the superposition of the images formed by each hole (a shadowgram).
The image of the real object can be reconstructed from the shadowgram of $N \times N$ dimension using convolution with a decoding function~\cite{12}: \begin{equation} \label{eq:1} I_{\rm k, l} = \sum_{\rm j=1}^{\rm N}\sum_{\rm i=1}^{\rm N}D_{\rm i, j}\cdot M_{\rm i+k, j+l}, \end{equation} where $I_{\rm k,l}$ is an element of the reconstructed image, $D_{\rm i, j}$ is an element of the shadowgram and $M$ is the decoding function (an illustrative implementation of this decoding step is sketched below). \par \section{Experimental setup} \vspace{\baselineskip} \par The goal of this work is to study the performance of a system based on a coded aperture mask and a hybrid pixel Timepix detector with a CdTe sensor~\cite{13,14,15} in SPECT applications aimed at the imaging of small animals. A series of experiments with various radiation sources was carried out with the following setup~(Fig.~\ref{fig:1}): \begin{figure}[!ht] \centering \qquad \includegraphics[scale = 0.5]{FOV.jpg} \caption {\label {fig:1} Layout of the experimental setup: 1 – source plane, 2 – coded aperture, 3 – detector, {\it FoV} – field of view, {\it f} – distance from the source to the coded aperture, {\it d} – distance from the coded aperture to the detector.} \end{figure} \begin{itemize} \item \textbf{Detector}: a hybrid pixel detector based on the Timepix readout chip with a 1~mm thick CdTe sensor. The main properties of the detector are summarized in Table~\ref{tab:1}. The detector is capable of recording the position of a gamma-ray interaction and determining the energy deposited in the sensor for every particle~\cite{16}. Each particle interaction may induce a signal in one or several adjacent detector pixels, thus forming a cluster. The weighted average of all signals in the cluster provides the coordinate of the particle interaction, and the total amplitude of all signals in the cluster is proportional to the energy deposit in the sensor. During the exposure, clusters are recorded, and only those whose energy falls within a certain range are used further. The positions of the selected clusters are used to form a 256~$\times$~256 pixel image, where every cluster enters with unit weight. \begin{table}[!ht] \centering \caption{\label{tab:1} Detector properties.} \smallskip \begin{tabular}{|l|c|} \hline Sensor material & CdTe \\ \hline Sensor size & 14.1~mm~$\times$~14.1~mm \\ \hline Sensor thickness & 1~mm \\ \hline Pixel matrix & 256~$\times$~256 \\ \hline Pixel size & 55~$\rm \mu$m~$\times$~55~$\rm \mu$m \\ \hline Energy resolution for 60~keV gamma rays~\cite{17} & 5.6\% \\ \hline Gamma-ray detection efficiency~\cite{18} & $\approx$100\% below 60~keV, 65\% at 100~keV\\ \hline \end{tabular} \end{table} \item \textbf{Collimator}: a set of identical tungsten CA masks with holes following the square MURA pattern of rank 31. The working area is 22~mm~$\times$~22~mm. The radius of each hole is 170$\pm$10~$\rm \mu$m, and each mask is 0.5~mm thick. Depending on the energy of the gamma rays, a set can consist of more than one mask to improve the absorption. However, a thick mask decreases the {\it FoV} and deteriorates the reconstruction near the edges. Therefore, the optimal thickness is a trade-off between the collimator absorption and the spatial resolution of the measuring system, which is especially important for a multiple-hole collimator with small hole diameters~\cite{19,20}.
\begin{figure}[!ht] \centering \includegraphics[scale = 0.05]{2.jpg} \centering \caption{\label{fig:2} Mounting of the CA on a turntable.} \end{figure} \par In this study, a set of three CA masks with a total thickness of 1.5~mm (enough to attenuate the intensity of 160~keV gamma rays by a factor of 30) was used, fixed on a vertically mounted turntable that ensures the rotation of the masks around their center~(Fig.~\ref{fig:2}). A distinctive feature of a MURA-type CA is that the rotation of the mask by 90$^\circ$ closes previously open mask elements and opens the closed ones. This allows the systematic background to be reduced, thereby increasing the signal-to-noise ratio~\cite{11,12}. \item \textbf{Sources}: three radiation sources have been used for the detector characterization, namely a microfocus X-ray source\footnote{X-ray source SB~120-350 by SourceRay Inc.}, a $^{241}Am$ radioactive source emitting 59.5~keV gamma rays, with an activity of 0.1~MBq and a diameter of approximately 2~mm, and various phantoms filled with~$^{99m}Tc$. \end{itemize} \par The values of {\it f} and {\it d} were chosen so that the field of view was about 3~cm~$\times$~3~cm and the shadowgram of the base mask pattern fitted completely into the detector area, which is a necessary condition for unambiguous image reconstruction from the shadowgram. A general view of the experimental setup is shown in~Fig.~\ref{fig:3}. \begin{figure}[!ht] \centering \includegraphics[scale = 0.45]{set-up.jpg} \caption{\label{fig:3} A general view of the experimental setup: 1 – the Timepix detector with the CA mask installed, 2 – an object under study.} \end{figure} \section{Measurement of the spatial resolution} \vspace{\baselineskip} \par The spatial resolution of the system was measured with the CA mask installed. Two types of sources were used. First, a point-like X-ray source and a thin linear source were used to directly measure the FWHM of the point spread function (PSF) and the line spread function (LSF), respectively. Second, small radioactive objects comparable in size to the expected spatial resolution were used as a cross-check. In the latter case, the response function was fitted by a convolution of a uniform distribution and a Gaussian, and the spatial resolution was obtained from the standard deviation $\sigma$~\cite{21}: \begin{equation} \label{eq:2} \mathrm{FWHM} = 2.35\,\sigma \end{equation} \par The X-ray source with a voltage of 60~kV and a tube current of 100~$\rm \mu$A was used to measure the PSF. The focal spot size of 175~$\rm \mu$m allows the source to be considered point-like. The photons followed a continuous spectrum with a maximum at 30~keV. The shadowgram obtained with a 1~s exposure, the reconstructed image and the response function to the point-like source are shown in Fig.~\ref{fig:4}. The FWHM of the response function equals 0.88~mm.
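\par To illustrate the reconstruction procedure of Eq.~(\ref{eq:1}), the following Python sketch builds a square MURA pattern of prime rank (the textbook construction of Gottesman and Fenimore, which may differ in detail from the exact mask layout used in this work) and decodes a shadowgram by cyclic correlation with the corresponding decoding array. The array names and the assumption that the shadowgram has been rebinned to the mask rank are illustrative, not taken from the authors' software.
\begin{verbatim}
import numpy as np

def mura(p):
    # Square MURA pattern of prime rank p: 1 = open (hole), 0 = opaque.
    is_qr = np.full(p, -1)
    is_qr[np.arange(1, p) ** 2 % p] = 1   # quadratic residues mod p
    a = np.zeros((p, p), dtype=int)
    a[1:, 0] = 1                          # column j = 0 open for i != 0
    for i in range(1, p):
        for j in range(1, p):
            a[i, j] = int(is_qr[i] * is_qr[j] == 1)
    return a                              # row i = 0 stays opaque

def decoding_function(a):
    # Decoding array M: +1 on open elements, -1 on closed ones,
    # except M[0, 0] = +1.
    g = np.where(a == 1, 1, -1)
    g[0, 0] = 1
    return g

def decode(shadowgram, g):
    # Eq. (1): I[k, l] = sum_{i, j} D[i, j] * M[i + k, j + l],
    # with indices taken cyclically modulo the mask rank.
    n = g.shape[0]
    image = np.empty((n, n))
    for k in range(n):
        for l in range(n):
            shifted = np.roll(np.roll(g, -k, axis=0), -l, axis=1)
            image[k, l] = np.sum(shadowgram * shifted)
    return image

# Example: mask = mura(31); image = decode(shadowgram, decoding_function(mask))
\end{verbatim}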
\begin{figure}[!ht] \begin{minipage}[h]{0.45\linewidth} \center{\includegraphics[scale=0.5]{15sm.jpg}} \\a) \\ \end{minipage} \hfill \begin{minipage}[h]{0.45\linewidth} \center{\includegraphics[scale=0.5]{15Point_source.jpg}} \\b) \end{minipage} \vfill \begin{minipage}[h]{1\linewidth} \center{\includegraphics[scale=0.25]{point_source.jpg}} \\c) \\ \end{minipage} \caption{The point-like source (selected energy range: 6~keV -- 60~keV): a) the shadowgram obtained using the X-ray source, b) the reconstructed image, c) the response function to the point-like source.} \label{fig:4} \end{figure} \par A 150~$\rm \mu$m thick cotton thread saturated with a solution containing $^{99m}Tc$ was used to determine the LSF. The thread was stretched in a pattern that formed intersections at 90$^\circ$ and about 45$^\circ$, as shown in Fig.~\ref{fig:5}a. Soaking and drying caused an uneven distribution of $^{99m}Tc$ along the thread (Fig.~\ref{fig:5}b). Nevertheless, the FWHM of the LSF measured from the reconstructed horizontal and vertical thread images was 0.75~mm and 0.80~mm, respectively (see Fig.~\ref{fig:5}c). \begin{figure}[!ht] \begin{minipage}[h]{0.35\linewidth} \center{\includegraphics[scale=0.45]{threads_placment.jpg}} \\a) \\ \end{minipage} \begin{minipage}[h]{0.25\linewidth} \center{\includegraphics[scale=0.5]{threads.jpg}} \\b) \\ \end{minipage} \begin{minipage}[h]{0.4\linewidth} \center{\includegraphics[scale=0.2]{thread_profile.jpg}} \\c) \\ \end{minipage} \caption{Visualization of a thread saturated with $^{99m}Tc$ (selected energy range: 90~keV -- 145~keV): a) the thread pattern, b) the reconstructed image, c) the vertical and horizontal LSFs.} \label{fig:5} \end{figure} \par A small spectrometric $^{241}Am$ gamma-ray source (Fig.~\ref{fig:6}a) was used as a cross-check. The reconstructed image is shown in Fig.~\ref{fig:6}b. The image profile was fitted by a convolution of a uniform distribution with a width of 1.46~mm and a Gaussian with a standard deviation of 0.35~mm (see Fig.~\ref{fig:6}c). The resulting spatial resolution of 0.82~mm (Fig.~\ref{fig:6}b,c) was consistent with the results obtained using the point-like and the thin linear sources. \begin{figure}[!ht] \begin{minipage}[h]{0.3\linewidth} \center{\includegraphics[scale=0.9]{OSGI_Am.jpg}} \\a) \\ \end{minipage} \begin{minipage}[h]{0.3\linewidth} \center{\includegraphics[scale=0.47]{OSGI_Am_recon.jpg}} \\b) \end{minipage} \begin{minipage}[h]{0.4\linewidth} \center{\includegraphics[scale=0.28]{OSGI_profile.jpg}} \\c) \\ \end{minipage} \caption{$^{241}Am$ spectrometric source (selected energy range: 40~keV -- 60~keV): a) the $^{241}Am$ spectrometric source, b) the reconstructed image, c) the response function to the $^{241}Am$ spectrometric source.} \label{fig:6} \end{figure} \par Another cross-check was made using a capillary filled with a solution containing $^{99m}Tc$. The internal and external diameters of the capillary were 1~mm and 1.5~mm, respectively. The $^{99m}Tc$ activity was 156~MBq and the exposure time was 2 minutes. The picture of the capillary and the reconstructed image are shown in Fig.~\ref{fig:7}a and Fig.~\ref{fig:7}b. The reconstructed image profile was fitted by a convolution of a 1~mm wide rectangular distribution with a Gaussian. A spatial resolution of 0.74~mm was derived from the fit, which confirmed the results obtained using the thin linear source (see Fig.~\ref{fig:8}).
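\par The profile fits used above admit a compact implementation: the model is the analytic convolution of a rectangular distribution of width $w$ with a Gaussian of standard deviation $\sigma$, and the resolution is quoted as $\mathrm{FWHM}=2.35\,\sigma$ (Eq.~(\ref{eq:2})). Below is a minimal sketch assuming SciPy; the variable names and starting values are illustrative rather than those of the actual analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

def box_gauss(x, amplitude, center, width, sigma):
    # Analytic convolution of a rectangle of the given width with a
    # Gaussian of standard deviation sigma, both centered at `center`.
    a = (x - center + width / 2) / (np.sqrt(2) * sigma)
    b = (x - center - width / 2) / (np.sqrt(2) * sigma)
    return amplitude * 0.5 * (erf(a) - erf(b))

# x, profile: 1D arrays taken from a slice of the reconstructed image.
# For the capillary the width can be fixed to the known 1 mm:
#   model = lambda x, A, c, s: box_gauss(x, A, c, 1.0, s)
#   (A, c, s), _ = curve_fit(model, x, profile, p0=(profile.max(), 0.0, 0.3))
#   print("FWHM =", 2.35 * s, "mm")
\end{verbatim}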
\begin{figure}[!ht] \begin{minipage}[h]{0.33\linewidth} \center{\includegraphics[scale=0.03]{Tc_cap_Image.jpg}} \\a) \\ \end{minipage} \begin{minipage}[h]{0.33\linewidth} \center{\includegraphics[scale=0.46]{Tc_cap_exp.jpg}} \\b) \end{minipage} \begin{minipage}[h]{0.33\linewidth} \center{\includegraphics[scale=0.46]{Tc_cap_sim.jpg}} \\c) \\ \end{minipage} \caption{Capillary with $^{99m}Tc$ (selected energy range: 90~keV -- 145~keV): a) the picture of the capillary, b) the reconstructed image, c) the simulation.} \label{fig:7} \end{figure} \begin{figure}[!ht] \centering \qquad \includegraphics[scale = 0.25]{Tc_cap_comparison.jpg} \centering \caption{\label{fig:8} Comparison of the experimental and simulated profiles of the capillary filled with $^{99m}Tc$.} \end{figure} \par Finally, the characteristic radiation emitted by a copper ring irradiated with X-rays was used to obtain an image of a complex shape (the K$\alpha$ line of 8.98~keV was used). The thickness of the ring was 1~mm, and the external and internal diameters were 11.9~mm and 10.2~mm, respectively. The ring was rotated by 45$^\circ$ with respect to the detector plane. The results are shown in Fig.~\ref{fig:9}. The average diameter of the ring obtained from the reconstructed image equals 11.4~mm, which is compatible with the real diameter of the ring. \begin{figure}[!ht] \begin{minipage}[h]{0.3\linewidth} \center{\includegraphics[scale=1]{10.jpg}} \\a) \\ \end{minipage} \hfill \begin{minipage}[h]{0.34\linewidth} \center{\includegraphics[scale=0.6]{11.jpg}} \\b) \end{minipage} \hfill \begin{minipage}[h]{0.3\linewidth} \center{\includegraphics[scale=0.575]{12.jpg}} \\c) \\ \end{minipage} \vfill \begin{minipage}[h]{1\linewidth} \center{\includegraphics[scale=0.45]{new_oring_profile.jpg}} \\d) \end{minipage} \caption{Visualization of the copper ring (selected energy range: 6~keV -- 9~keV): a) a picture of the copper ring, b) the shadowgram, c) the reconstructed image, d) the profile.} \label{fig:9} \end{figure} \section{Simulation study} \vspace{\baselineskip} \par A systematic study of the response of the coded-aperture system to gamma sources of different shapes, with energies up to 350~keV, was carried out using a simulation based on the Geant4 toolkit~\cite{22,23}. The experimental setup with the 1.5~mm thick CA mask was modelled. The simulation did not take into account the effects associated with the charge collection in the sensor and the readout electronics of the Timepix chip. Instead, the true position of the incident particle was used, and the simulated energy deposit was smeared with the experimental energy resolution. The low-energy Geant4 electromagnetic package~\cite{24}, based on the Livermore data libraries, was used to simulate photon interactions, with an expected cross-section precision within 10\% in the energy range of interest~\cite{25}. The experimental data obtained with $^{241}Am$ were used to adjust the geometrical parameters of the model. Other experimental data sets were used to cross-check the simulation up to a photon energy of 140~keV. Predictions at higher energies relied on the validity of the Geant4 physics models, corroborated by the cross-check results. The reconstruction algorithm was identical to the one used in the experiment, including the selection of the energy range. \par The comparison of the simulation and the experiment using two $^{241}Am$ radioactive sources is shown in Fig.~\ref{fig:10}.
The sources are clearly distinguishable when separated by a distance of 2.5~mm. The comparison of the image profiles demonstrates that the simulation is compatible with the experiment. The difference in the profiles of the reconstructed images (see Fig.~\ref{fig:11}) can be explained by small imperfections of the model: the intensity of the sources is assumed to be uniform, and the open mask elements are assumed to be identical. \begin{figure}[!ht] \begin{minipage}[h]{0.5\linewidth} \center{\includegraphics[scale=0.4]{17-1.jpg}} \\ \end{minipage} \hfill \begin{minipage}[h]{0.5\linewidth} \center{\includegraphics[scale=0.4]{17-3.jpg}} \\ \end{minipage} \vfill \begin{minipage}[h]{0.5\linewidth} \center{\includegraphics[scale=0.4]{17-2.jpg}} \\ \end{minipage} \hfill \begin{minipage}[h]{0.5\linewidth} \center{\includegraphics[scale=0.4]{17-4.jpg}} \\ \end{minipage} \caption{Visualization of two $^{241}Am$ sources (selected energy range: 50~keV -- 65~keV): shadowgrams (left) and reconstructed images (right) are shown. Simulated data are plotted in the top row and experimental data in the bottom row.} \label{fig:10} \end{figure} \begin{figure}[!ht] \centering \qquad \includegraphics[scale = 0.25]{ExpSimComparison.jpg} \centering \caption{Comparison of the simulated and experimental profiles of two $^{241}Am$ sources.} \label{fig:11} \end{figure} \par A similar comparison was made for the experiment with the capillary filled with a solution containing $^{99m}Tc$, described above. The image profiles are shown in Fig.~\ref{fig:8}. The reconstructed image profile was fitted by a convolution of a 1~mm wide rectangular distribution with a Gaussian. A spatial resolution of 0.74~mm, equal to the experimental value, was obtained from the simulated image profile. While the simulation is compatible with the experiment, the minor difference in the tails can be associated with the different noise levels in the original shadowgrams. \par Finally, the simulation was used to calculate the spatial resolution and the detection efficiency for the gamma emitters most often used in SPECT (Table~\ref{tab:4}). A point-like source was simulated. No cut on the gamma-ray energy was applied in the detection efficiency calculation.
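\par The smearing of the simulated energy deposits with the experimental resolution, mentioned above, can be sketched as follows. The constant relative FWHM, taken as the 5.6\% measured at 60~keV (Table~\ref{tab:1}), is an assumption of this illustration rather than the exact parametrization used in the study.
\begin{verbatim}
import numpy as np

def smear_energy(e_dep_kev, rng=None):
    # Replace the true simulated energy deposit with a value drawn from
    # a Gaussian whose width follows the experimental resolution.
    # A constant 5.6% relative FWHM (measured at 60 keV) is assumed
    # here purely for illustration.
    rng = rng or np.random.default_rng()
    sigma = 0.056 * np.asarray(e_dep_kev, dtype=float) / 2.35
    return rng.normal(e_dep_kev, sigma)

# Clusters whose smeared energy falls outside the selected window are
# rejected, mirroring the experimental event selection, e.g. for Tc-99m:
#   smeared = smear_energy(e_dep_kev)
#   keep = (smeared >= 90.0) & (smeared <= 145.0)
\end{verbatim}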
\begin{table}[!ht] \centering \caption{\label{tab:4} Calculated characteristics of the system for the gamma emitters most commonly used in SPECT.} \smallskip \begin{tabular}{|l|c|c|c|c|c|} \hline Isotope & \begin{tabular}[c]{@{}c@{}}Energy \cr[keV] \end{tabular} & \begin{tabular}[c]{@{}c@{}}Spatial \\ resolution \cr[mm] \end{tabular} & \multicolumn{2}{c|}{\begin{tabular}[c]{@{}c@{}}Detection efficiency, \% \\(with the collimator) \end{tabular}} & \begin{tabular}[c]{@{}c@{}} SNR\hspace{0.5cm}\end{tabular} \\ \cline{4-5} &&& CdTe~1~mm~&~CdTe~2~mm&\\ \hline $^{125}I$& 30& 0.88& 40& 40& 96\\ \hline $^{67}Ga$& 93.3& 0.89& 28& 36& 90\\ \hline $^{177}Lu$& 113& 0.89& 23& 31& 88\\ \hline $^{201}Tl$& 135& 0.89& 16& 27& 87\\ \hline $^{99m}Tc$& 140.5& 0.89& 15& 23& 87\\ \hline $^{117m}Sn$& 158.6& 0.90& 11& 20& 86\\ \hline $^{123}I$& 159& 0.90& 11& 20& 85\\ \hline $^{201}Tl$& 167& 0.90& 10& 18& 85\\ \hline $^{111}In$& 171.3& 0.90& 10& 17& 84\\ \hline $^{67}Ga$& 184.6& 0.91& 8& 16& 83\\ \hline $^{177}Lu$& 210& 0.91& 7& 12& 81\\ \hline $^{111}In$& 245.4& 0.91& 5& 10& 78\\ \hline $^{67}Ga$& 300& 0.92& 4& 7& 74\\ \hline $^{133}Xe$& 350& 0.92& 3& 6& 69\\ \hline \end{tabular} \end{table} \begin{figure}[!ht] \begin{minipage}[!ht]{1\linewidth} \center{\includegraphics[scale=0.5]{SNR.jpg}} \end{minipage} \caption{SNR calculation (the blue square corresponds to the signal area, the red square to the background area).} \label{fig:12} \end{figure} \par The signal-to-noise ratio (SNR) was defined as follows: \begin{equation} \label{eq:3} \mathrm{SNR} = S/\sqrt{S^{2}+B^{2}}, \end{equation} where $S$ is the integral intensity of the signal within 3$\sigma$ around the maximum, and $B$ is the integral intensity of the background calculated in a similar area outside the signal image, as shown in Fig.~\ref{fig:12}. \par It is noteworthy that the spatial resolution slightly worsens as the photon energy increases. The higher the energy, the lower the absorption coefficient of the collimator: more photons pass through the opaque elements of the CA mask, producing higher background counts in the detector and reducing the SNR (see Fig.~\ref{fig:13}). \begin{figure}[!ht] \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{30keV.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{140keV.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{180.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{350keV.jpg}} \end{minipage} \vfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{30_raw.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{140keV_raw.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{180_raw.jpg}} \end{minipage} \hfill \begin{minipage}[!ht]{0.22\linewidth} \center{\includegraphics[scale=0.4]{350_raw.jpg}} \end{minipage} \caption{Simulated images of a source with gamma-ray energies of (left to right) 30, 140, 180 and 350~keV. The reconstructed images are shown in the top row, and the shadowgrams in the bottom row.} \label{fig:13} \end{figure} \section{Conclusion} \par A system based on a coded aperture and the hybrid pixel Timepix detector with a CdTe sensor has been used to obtain images of different gamma-ray sources.
The spatial resolution is shown to be 0.8--0.9~mm with a field of view of 3~cm~$\times$~3~cm in the energy range typical for SPECT. The experimental data, supported by the simulation, demonstrate that a 1.5~mm thick tungsten coded aperture is sufficient to image distributed radioactive sources with gamma-ray energies at least up to 140.5~keV without significant reconstruction artifacts. At higher energies, the image quality starts to deteriorate due to the lower detection efficiency and the lower absorption in the collimator. This increases the reconstruction artifacts, which become evident at gamma-ray energies of about 180~keV. The high spatial resolution combined with a sufficiently large field of view makes this system suitable for SPECT studies of small animals. \section{Acknowledgements} \par The reported study was funded jointly by RFBR and CITMA, project number 18-52-34005.
\section{Introduction} In this paper we prove potential automorphy theorems for $n$-dimensional $l$-adic and residual representations of the absolute Galois group of an imaginary CM field. The precise statement of the theorem for residual representations is as follows. \begin{theorem} \label{1.1} Suppose $F$ is a CM number field, $F^{\mathrm{av}}$ is a finite extension of $F$ and $n\geq 2$ is a positive integer. Let $l$ be a prime number and suppose that $$ \overline{r}: \mathrm{Gal}(\overline{F}/F)\rightarrow GL_n(\mathbb{F}_{l^{s}}) $$ is a continuous semisimple representation. Then there exists a finite CM Galois extension $F'/F$ linearly disjoint from $F^{\mathrm{av}}$ over $F$ such that $\overline{r}\mid_{\mathrm{Gal}(\overline{F}/F')}$ is ordinarily automorphic. \end{theorem} We first briefly recall the definitions of the terms appearing in the theorem. Recall that for any CM (or totally real) field $E$, we can attach to any regular algebraic cuspidal automorphic representation $\pi$ of $GL_{n}(\mathbb{A}_{E})$ an $l$-adic Galois representation of $G_{E}$ satisfying a certain local-global compatibility condition, by the main theorem of \cite{HLLT}. More precisely, fix an isomorphism $\iota: \overline{\mathbb{Q}}_{l}\rightarrow \mathbb{C}$. For such a $\pi$, there is a unique continuous semisimple representation $$ r_{l,\iota}(\pi):G_{E}\rightarrow GL_{n}(\overline{\mathbb{Q}}_{l}) $$ such that, if $p\neq l$ is a rational prime above which $\pi$ and $E$ are unramified and if $v|p$ is a prime of $E$, then $r_{l,\iota}(\pi)$ is unramified at $v$ and $$ r_{l,\iota}(\pi)|_{W_{E_{v}}}^{\mathrm{ss}}=\iota^{-1}\mathrm{rec}_{E_{v}}(\pi_{v}|\mathrm{det}|_{v}^{(1-n)/2}), $$ where $\mathrm{rec}_{E_{v}}$ denotes the local Langlands correspondence for $E_{v}$ and the superscript $\mathrm{ss}$ denotes semisimplification. \begin{definition} For a $p$-adic local field $L$ and a continuous representation $\rho: G_L\rightarrow GL_n(\overline{\mathbb{Q}}_p)$, we say it is ordinary with regular Hodge-Tate weights if there exists a weight $\lambda=(\lambda_{\tau, i})\in (\{(a_1, \ldots, a_n)\in\mathbb{Z}^n \mid a_1\geq\cdots\geq a_n\})^{\mathrm{Hom}(L,\overline{\mathbb{Q}}_{p})}=:(\mathbb{Z}_{+}^{n})^{\mathrm{Hom}(L,\overline{\mathbb{Q}}_{p})}$ such that there is an isomorphism: \[ \rho\sim\left( \begin{matrix} \psi_{1}&\ast&\ast&\ast\\ 0&\psi_{2}&\ast&\ast\\ \vdots&\ddots&\ddots&\ast\\ 0&\cdots&0&\psi_{n} \end{matrix} \right) \] where for each $i=1,\ldots,n$ the character $\psi_{i}: G_{L}\rightarrow\overline{\mathbb{Q}}_{p}^{\times}$ agrees with the character $$ \sigma\in I_{L}\mapsto\prod_{\tau\in \mathrm{Hom}(L,\overline{\mathbb{Q}}_{p})}\tau(\mathrm{Art}_{L}^{-1}(\sigma))^{-(\lambda_{\tau,n-i+1}+i-1)} $$ on an open subgroup of the inertia group $I_{L}$. \end{definition} \begin{definition} For a Galois representation $r: G_{E}\rightarrow GL_{n}(\overline{\mathbb{Q}}_{l})$, we say it is automorphic if there exists a regular algebraic cuspidal automorphic representation $\pi$ such that $r\cong r_{l,\iota}(\pi)$. For a residual representation $\overline{r}: G_{E}\rightarrow GL_{n}(\overline{\mathbb{F}}_{l})$, we say it is automorphic if there exists a lift $r$ of $\overline{r}$ that is automorphic. We say it is ordinarily automorphic if there exists an automorphic lift $r$ which is potentially semistable and ordinary with regular Hodge-Tate weights as a $G_{E_v}$-representation for all $v\mid l$.
\end{definition} We also remark here that restricting to some $G_{F'}$ for a Galois extension $F'/F$ that avoids a prescribed finite extension $F^{\text{av}}$ of $F$ ensures that the image $\overline{r}(G_{F'})$ does not shrink. Combining our main theorem for residual representations (Theorem \ref{1.1}) with the automorphy lifting theorem (Theorem 6.1.2 of \cite{tap}) and the main result of \cite{qian}, we obtain a potential automorphy theorem for a single $l$-adic Galois representation into $GL_n$. \begin{theorem} \label{1.4} Suppose $F$ is a CM number field, $F^{\mathrm{av}}$ is a finite extension of $F$ and $n\geq 2$ is a positive integer. Let $l$ be a prime number. Fix an isomorphism $\iota: \overline{\mathbb{Q}}_{l}\rightarrow \mathbb{C}$ and suppose that \[ r: G_F\rightarrow GL_n(\overline{\mathbb{Q}}_l) \] is a continuous representation satisfying the following conditions: \begin{itemize} \item $r$ is unramified almost everywhere. \item For each place $v|l$ of $F$, the representation $r|_{G_{F_{v}}}$ is potentially semistable and ordinary with regular Hodge-Tate weights $\lambda\in(\mathbb{Z}_{+}^{n})^{\mathrm{Hom}(F,\overline{\mathbb{Q}}_{l})}$. \item $\overline{r}$ is absolutely irreducible and decomposed generic (see \cite{tap}, Definition 4.3.1). The image of $\overline{r}|_{G_{F(\zeta_{l})}}$ is enormous (see \cite{tap}, Definition 6.2.28). \item There exists $\sigma\in G_{F}-G_{F(\zeta_{l})}$ such that $\overline{r}(\sigma)$ is a scalar. \end{itemize} Then there exists a finite CM Galois extension $F'/F$ linearly disjoint from $F^{\mathrm{av}}$ over $F$ such that $r\mid_{G_{F'}}$ is ordinarily automorphic. \end{theorem} Previously, there were potential automorphy results for $\overline{r}$ and $r$ taking values in $GSp_{2n}$ (\cite{HSBT}) or, more generally, in the subgroup of $GL_{n}$ preserving a nondegenerate form up to a scalar (\cite{BLGHT}). The strategy for proving the main theorems in this paper is based on the strategy of those papers, but there are also many crucial differences. The main idea of the proof of Theorem \ref{1.1} is the following. The prime $l$ is given, but we will choose a positive integer $N$ and another prime $l'$ with good properties. This choice makes certain arguments for the $l'$-related objects easier than for their $l$-related counterparts. Consider the Dwork family $Y\subset \mathbb{P}^{N-1}\times(\mathbb{P}^{1}\backslash (\mu_{N}\cup \{\infty\}))$ defined by the equation $$ X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N}=NtX_{1}X_{2}\cdots X_{N} $$ for the good $N$ we will choose. The variety comes equipped with an action of the group $$ H_{0}:=\{(\xi_{1},\ldots,\xi_{N})\in\mu_{N}^{N}:\xi_{1}\cdots\xi_{N}=1\}/\mu_{N} $$ (see Section \ref{3}). After picking a character $\chi$ of $H_{0}$, we may consider the motives whose $l$- (or $l'$-)adic realisation is the $\chi$-eigenspace of the $(N-2)$-th (middle degree) \'etale cohomology of any fibre of this family with coefficients in $\overline{\mathbb{Q}}_{l}$ (or $\overline{\mathbb{Q}}_{l'}$). We will denote such $l$- (or $l'$-)adic cohomology of the fibre over the point $t$ by $V_{\lambda, t}$ (or $V_{\lambda', t}$), where $\lambda$ and $\lambda'$ are places of $\mathbb{Q}(\zeta_N)$ above $l$ and $l'$ respectively. Note that we will choose a $\chi$ whose shape is artificially designed to break the self-duality of the motive. However, the self-dual shape of the Hodge-Tate weights will be preserved; in fact, they form a string of consecutive integers.
We will try to find a point $t$ on the base, defined over an extension field $F'$, such that the mod-$l$ residual Galois representation $V[\lambda]_t$ given by the fibre of the motive over $t$ is isomorphic to the $\overline{r}$ in the theorem, while the mod-$l'$ residual Galois representation $V[\lambda']_t$ given by the fibre of the motive over $t$ is isomorphic to $\overline{r}_{l',\iota}(\pi)$ for some known ordinarily automorphic representation $r_{l',\iota}(\pi)$, both as representations of $G_{F'}$. If such a point exists, then we can apply the ordinary automorphy lifting theorem (Theorem 6.1.2 of \cite{tap}) to see that $V_{\lambda', t}$ is automorphic, and conclude that $V_{\lambda, t}$ is automorphic. Hence $\overline{r}$ is automorphic. \ The above is a very rough summary of what we do in this paper. Let us be more precise now. The first problem that is worth more explanation is the existence of such a point $t$. Assume $\overline{r}$ and $\overline{r}_{l',\iota}(\pi)$ can be defined as representations over $k(\lambda)$ and $k(\lambda')$, the residue fields of the places $\lambda$ and $\lambda'$ of $\mathbb{Z}[\zeta_N]$. The existence of such a point $t$ is guaranteed by a careful study of the moduli scheme that detects the isomorphisms between $\overline{r}\times \overline{r}_{l',\iota}(\pi)$ and the varying $V[\lambda]_t\times V[\lambda']_t$ as representations over $k(\lambda)\times k(\lambda')$, such that the top wedge of the isomorphism is fixed to be an a priori choice. The main property we use to prove the existence of such a point $t$ is the geometric connectedness of the moduli variety. The geometric connectedness is in turn deduced from the result that the geometric monodromy map of this family surjects onto $SL_{n}(k(\lambda))\times SL_{n}(k(\lambda'))$, over which the fiber of the moduli scheme over $t$ is a torsor. The proof of this surjectivity result involves combinatorial arguments that make precise use of the shape of the character $\chi$ we choose. In contrast, we know that if we had chosen $\chi$ to be of some nice self-dual form, then the image of the geometric monodromy map would be contained in some symplectic group $Sp_{n}(k(\lambda))\times Sp_{n}(k(\lambda'))$. We remark that in previous work, other authors have considered the moduli variety parametrizing similar isomorphisms but with the condition that certain alternating forms on both representation spaces be preserved, where the alternating form on the varying cohomology is induced by Poincar\'e duality and the self-dual shape of the $\chi$ they chose. In the above procedure, after showing that the geometric monodromy has image in $SL_{n}(k(\lambda))\times SL_{n}(k(\lambda'))$ (using the shape of $\chi$), we see that the space $\wedge^n(V[\lambda]_t\times V[\lambda']_t)$, as a character of $G_F$, does not depend on the base point $t$. Thus, to construct the moduli scheme, it suffices to construct a fixed isomorphism between $\wedge^n(\overline{r}\times \overline{r}_{l',\iota}(\pi))$ and $\wedge^n(V[\lambda]_t\times V[\lambda']_t)$ for any chosen $F$-point $t$ of the base. However, this isomorphism does not a priori exist.
We get around this problem by restricting to a smaller $G_{F'}$ and twisting $\overline{r}\times \overline{r}_{l',\iota}(\pi)$ by a character $\overline{\chi}_1\times \overline{\chi}_2: G_{F'}\rightarrow k(\lambda)^\times\times k(\lambda')^\times$, so that we construct the moduli scheme as the one detecting isomorphisms between $\left(\overline{r}\times \overline{r}_{l',\iota}(\pi)\right)\otimes\left(\overline{\chi}_1\times \overline{\chi}_2\right)$ and $V[\lambda]_t\times V[\lambda']_t$. Note that we need $\overline{\chi}_1\times \overline{\chi}_2$ to take values in exactly $k(\lambda)^\times\times k(\lambda')^\times$ because we want the fiber of the moduli scheme to be a torsor under the image $SL_{n}(k(\lambda))\times SL_{n}(k(\lambda'))$ of the geometric monodromy map, which is crucial to show the geometric connectedness of the moduli scheme. Now, choosing $t=0$, constructing an isomorphism between $\det\left(\left(\overline{r}\times \overline{r}_{l',\iota}(\pi)\right)\otimes\left(\overline{\chi}_1\times \overline{\chi}_2\right)\right)$ and $\det\left(V[\lambda]_t\times V[\lambda']_t\right)$ amounts to taking an ``$n$-th root'' of the character $(\det V[\lambda]_0\times V[\lambda']_0)^{-1}\otimes\det(\overline{r}\times \overline{r}_{l',\iota}(\pi))$ as a character valued in $k(\lambda)^\times\times k(\lambda')^\times$, where $V[\lambda]_0, V[\lambda']_0$ denote the mod $l$ and mod $l'$ cohomology of the fibre over $0$. The first step to make this adjustment work is that we need the following: \begin{itemize} \item $(\det V[\lambda]_0\times V[\lambda']_0)^{-1}\otimes\det(\overline{r}\times \overline{r}_{l',\iota}(\pi))$ has image in $(k(\lambda)^\times)^n\times (k(\lambda')^\times)^n$ as a $G_{F'}$ representation. \end{itemize} We remark that this condition is proved by a computation for the fibre over $0$, where there is a good description; the computation is done in \ref{npower}. Then we use Lemma \ref{2.1} to deduce that the condition above enables us to construct such an ``$n$-th root'' of the character while also ensuring that $F'$ satisfies certain linear disjointness properties. \ The second problem is that, in order to apply ordinary automorphy lifting theorems, we also need to show that $V_{\lambda, t}$ and $V_{\lambda', t}$ are both ordinary. The proof that $V_{\lambda', t}$ is ordinary is relatively easy. We just pick $t\in \mathbb{A}^1(F')$ that is $l'$-adically close to $0$. Applying Lemma \ref{2.3}, we may check ordinarity via an examination of $D_{\mathrm{cris}}(V_{\lambda', t})$. The comparison theorem identifies $D_{\mathrm{cris}}(V_{\lambda', t})$ with $D_{\mathrm{cris}}(V_{\lambda', 0})$, and hence reduces the proof to the case $t=0$. In that case, $V_{\lambda', 0}$ actually splits into characters as a $G_{F'}$ representation. To prove that $V_{\lambda, t}$ is ordinary is harder and relies heavily on the machinery of log geometry. This is the result of \cite{qian}. The idea is to choose $t\in \mathbb{A}^1(F')$ that has $l$-adic valuation $<0$. The intuition is that, via the construction of a semistable model, the comparison theorems and an application of the log crystalline cohomology theory of Hyodo-Kato, the operator $N$ acting on $D_{\mathrm{st}}(V_{\lambda, t})$ should be identified with the residue at $\infty$ of the Gauss-Manin connection of $V_B$ (in the notation of Section \ref{3}), upon identifying the underlying spaces they act on.
The latter is a sort of logarithm of the monodromy of $V_B\otimes\mathbb{C}$ around the point $\infty$, and hence $N$ is maximally nilpotent because the monodromy is maximally unipotent by \ref{maxun}. Ordinarity then follows from a $p$-adic Hodge theoretic lemma. The difference between the cases of $l$ and $l'$ arises partially from the fact that $l$ is given, while $l'$ may be chosen freely. \ Lastly, the $\pi$ we use, for which $r_{l',\iota}(\pi)$ is ordinarily automorphic, is such that $r_{l',\iota}(\pi)$ is a symmetric tensor power of the Tate module of an elliptic curve over $\mathbb{Q}$. \ With the above input and a somewhat artificial choice of the character $\chi$ of $H_{0}$, plus several technical algebraic number theory lemmas listed in Section 2, we finally prove the theorem in Section 4. \subsection*{Acknowledgement} I would like to first thank Richard Taylor for encouraging me to think about the subject of this paper. I also want to thank him for all the helpful comments on the draft of this paper. I benefited a lot from the many interesting conversations with Richard Taylor, Brian Conrad, Weibo Fu, Ravi Vakil, Jun Su and Jack Sempliner during the preparation of this text. \section{Several Lemmas}\label{2} Let us first state the properties regarding the notion of linearly disjoint fields that we will use throughout the paper. \begin{itemize} \item If $A$ and $B$ are extensions of $C$, then $A$ and $B$ linearly disjoint over $C$ implies $A \cap B =C$, and the converse is true if $A$ or $B$ is finite Galois over $C$. \item If $A \supset B \supset C$ and $D\supset C$ with $A$ and $D$ linearly disjoint over $C$, then $A$ and $BD$ are linearly disjoint over $B$. In particular, $A \cap BD = B$. \end{itemize} \begin{lemma} \label{2.1} {\it For a CM field $F$, a finite CM Galois extension $M/F$, a finite Galois extension $F_{0}/\mathbb{Q}$, a finite field $\mathbb{F}_{l^{r}}$ containing all $n$-th roots of unity, and a character $\chi: G_{M}\rightarrow(\mathbb{F}_{l^{r}}^{\times})^{n}$, there exists a finite totally real Galois extension $L/\mathbb{Q}$, linearly disjoint from $F_0$ over $\mathbb{Q}$, such that if we denote $F_{1}=LM$, there exists a character $\psi: G_{F_{1}}\rightarrow \mathbb{F}_{l^{r}}^{\times}$ such that $\psi^{n}=\chi|_{G_{F_{1}}}$.} \end{lemma} \begin{proof} Consider the long exact sequence associated to the following short exact sequence of $G_{M}$-modules with trivial action: \[ \begin{tikzcd} 0\arrow[r]& \mathbb{Z}/m\mathbb{Z}\arrow[r]& \mathbb{F}_{l^{r}}^{\times}\arrow[r, "(\cdot)^n"]&(\mathbb{F}_{l^{r}}^{\times})^{m}\arrow[r]& 0 \end{tikzcd} \] where we write $n=l^{a}m$ with $l\nmid m$. We obtain: \[ \begin{tikzcd} H^{1}(G_{M},\mathbb{F}_{l^{r}}^{\times})\arrow[r, "(\cdot)^n"]& H^{1}(G_{M},(\mathbb{F}_{l^{r}}^{\times})^{m})\arrow[r, "\delta"]& H^{2}(G_{M},\mathbb{Z}/m\mathbb{Z}) \end{tikzcd} \] Now $\chi\in H^{1}(G_{M}, (\mathbb{F}_{l^{r}}^{\times})^{m})$. If we let $\tilde{\chi}=\delta(\chi)$, we are reduced to finding a Galois CM extension $F_{1}\supset M$ of the form $F_1=LM$, for some $L$ linearly disjoint from $F_{0}$ over $\mathbb{Q}$, such that the obstruction $\tilde{\chi}$ is killed by the restriction map $H^{2}(G_{M},\mathbb{Z}/m\mathbb{Z})\rightarrow H^{2}(G_{F_{1}}, \mathbb{Z}/m\mathbb{Z})$. Consider the map $H^{2}(G_{M}, \mathbb{Z}/m\mathbb{Z})\rightarrow\prod_{v}H^{2}(G_{M_{v}}, \mathbb{Z}/m\mathbb{Z})$ given by restriction.
The image actually lands in $\bigoplus_{v}H^{2}(G_{M_{v}}, \mathbb{Z}/m\mathbb{Z})$, because any element in $H^{2}(G_{M}, \mathbb{Z}/m\mathbb{Z})$ is inflated from some $\phi\in H^{2}(\mathrm{Gal}(M'/M), \mathbb{Z}/m\mathbb{Z})$ for some finite extension $M'/M$, and for those primes $v$ of $M$ that are unramified in $M'$, the image of $\phi$ in $H^{2}(G_{M_{v}}, \mathbb{Z}/m\mathbb{Z})$ under restriction actually lands in $H^{2}(\mathrm{Gal}(M_{v}^{\mathrm{nr}}/M_{v}), \mathbb{Z}/m\mathbb{Z})$, which is $0$ since the cohomological dimension of $\hat{\mathbb{Z}}$ is 1. \ The first step is to take a CM extension $F_{2}/M$ of the form $L_2M$, for a totally real $L_2$ Galois over $\mathbb{Q}$ and linearly disjoint from $F_{0}$ over $\mathbb{Q}$, such that in the following commutative diagram the image of $\tilde{\chi}$ in the upper right corner is $0$: \[ \begin{tikzcd} H^{2}(G_{F_{2}}, \mathbb{Z}/m\mathbb{Z})\arrow[r]& \bigoplus_{w}H^{2}(G_{F_{2,w}}, \mathbb{Z}/m\mathbb{Z})\\ H^{2}(G_{M}, \mathbb{Z}/m\mathbb{Z})\arrow[r]\arrow[u]& \bigoplus_{v}H^{2}(G_{M_{v}}, \mathbb{Z}/m\mathbb{Z})\arrow[u] \end{tikzcd} \] Let $\bigoplus\tilde{\chi}_{v}$ be the image of $\tilde{\chi}$ in $\bigoplus_{v}H^{2}(G_{M_{v}}, \mathbb{Z}/m\mathbb{Z})$. If we can take a CM extension $F_{2}/M$ of the above form such that, for any $v$ with $\tilde{\chi}_{v}\neq 0$ and any place $w|v$ of $F_{2}$, we have $\zeta_{m}\in F_{2,w}$ and $m\mid[F_{2,w}: M_{v}(\zeta_{m})]$, then the restriction of $\tilde{\chi}_{v}$ to $H^{2}(G_{F_{2,w}}, \mathbb{Z}/m\mathbb{Z})$ is 0, since $H^{2}(G_{M_{v}(\zeta_{m})},\mathbb{Z}/m\mathbb{Z})\cong H^{2}(G_{M_{v}(\zeta_{m})},\mu_{m})\cong\frac{1}{m}\mathbb{Z}/\mathbb{Z}$, and the restriction map $\frac{1}{m}\mathbb{Z}/\mathbb{Z}\cong H^{2}(G_{M_{v}(\zeta_{m})}, \mu_{m})\rightarrow H^{2}(G_{F_{2,w}},\mu_{m})\cong\frac{1}{m}\mathbb{Z}/\mathbb{Z}$ is multiplication by $[F_{2,w}:M_{v}(\zeta_{m})]$. We can construct such an extension $F_{2}/M$, coming from an $L_2/\mathbb{Q}$ linearly disjoint from $F_{0}$ over $\mathbb{Q}$, with prescribed local behavior at a finite number of primes $v$ of $M$, as follows. Let $S_{1}$ be the set of rational primes lying under the primes $v$ of $M$ such that $\widetilde{\chi}_{v}\neq 0$. Let $S_{2}=\{\infty\}$ and $S=S_{1}\cup S_{2}$. For each $q\in S_{1}$, let $M_{q}$ denote the composite of the images of all embeddings $\tau: M\hookrightarrow\overline{\mathbb{Q}}_{q}$. We fix an extension $E_{q}/M_{q}(\zeta_{m})$ of degree divisible by $m$ and Galois over $\mathbb{Q}_q$. Now we apply Lemma 4.1.2 of \cite{CHT} to $F_0/\mathbb{Q}$ and the set of primes $S$ with the prescribed local behavior: \begin{itemize} \item $E_{q}/\mathbb{Q}_{q}$ for all $q\in S_{1}$, \item the trivial extension $\mathbb{R}/\mathbb{R}$ for $\infty\in S_{2}$. \end{itemize} We get a finite totally real Galois extension $L_2/\mathbb{Q}$ that is linearly disjoint from $F_0$ over $\mathbb{Q}$ and such that, for any place $w$ of $L_2$ over some $q\in S_1$, $(L_2)_{w}\cong E_q$ as extensions of $\mathbb{Q}_q$. Take $F_{2}=ML_2$. For any prime $v$ of $M$ with $\tilde{\chi}_{v}\neq 0$ and any prime $v'$ of $F_{2}$ over $v$, we have $q=v'|_{\mathbb{Q}}\in S_{1}$, thus $F_{2,v'}\supset (L_2)_{v'\mid_{L_2}}\cong E_{q}$ and so $\mathrm{Gal}(F_{2,v'}/M_{v}(\zeta_{m}))$ has order divisible by $m$. So we have constructed the desired $L_2$ and $F_2$.
\ Now for any number field $F$ and $G_{F}$-module $A$, let $\sha^{i}(F, A)$ be $$ \ker(H^{i}(G_{F}, A)\rightarrow\prod_{v}H^{i}(G_{F_{v}}, A)), $$ where the product is over all places $v$ of $F$ (as is every product that follows). Thus the first step yields a finite CM Galois extension $F_{2}/F$ containing $M$, coming from some $L_2/\mathbb{Q}$ as described above, such that the image $\tilde{\chi}_{1}$ of $\tilde{\chi}$ in $H^{2}(G_{F_{2}}, \mathbb{Z}/m\mathbb{Z})$ actually lies in $\sha^{2}(F_{2}, \mathbb{Z}/m\mathbb{Z})$. \ The second step is to analyze $\sha^{2}(F_{2}, \mathbb{Z}/m\mathbb{Z})$ and kill it after some further CM extension $F_{1}/F_{2}$, where $F_1=L_1L_2M$, with $L_1$ totally real and Galois over $\mathbb{Q}$, to be specified later, and such that $L:=L_1L_2$ is linearly disjoint from $F_0$ over $\mathbb{Q}$. Poitou-Tate duality (cf. \cite{Neu}, Theorem 8.6.7) gives a perfect pairing $$ \langle\cdot, \cdot\rangle:\sha^{2}(F_{2}, \mathbb{Z}/m\mathbb{Z})\times \sha^{1}(F_{2}, \mu_{m})\rightarrow \mathbb{Q}/\mathbb{Z} $$ satisfying the following compatibility for any finite extension $F_{1}/F_{2}$ and $x\in \sha^{1}(F_{1}, \mu_{m})$, $y\in \sha^{2}(F_{2}, \mathbb{Z}/m\mathbb{Z})$: $$ \langle {\rm Res}(y), x\rangle=\langle y, \mathrm{Cor}(x)\rangle. $$ So we now choose an extension $F_{1}/F_{2}$ such that $\mathrm{Cor}(x)=0$ for all $x\in \sha^{1}(F_{1}, \mu_{m})$; then, by the perfectness of the pairing, ${\rm Res}(y)=0$ for all $y\in \sha^{2}(F_{2}, \mathbb{Z}/m\mathbb{Z})$. Write $m=2^{r} \prod_{i=1}^{s}p_{i}^{r_{i}}$ and decompose $\sha^{1}(F_{2}, \mu_{m})=\sha^{1}(F_{2}, \mu_{2^{r}})\times\prod_{i=1}^{s}\sha^{1}(F_{2}, \mu_{p_{i}^{r_{i}}})$. The following lemma is basically Theorem 9.1.9 of \cite{Neuk}. \begin{lemma} \label{2.2} {\it For any number field $F$, $\sha^{1}(F, \mu_{p^{r}})=0$ or $\mathbb{Z}/2\mathbb{Z}$, and the latter case can happen only when $p=2$. In any case, $\sha^{1}(F, \mu_{p^{r}}) =\sha^{1}(F(\mu_{p^{r}})/F, \mu_{p^{r}})$ (defined in the proof).} \end{lemma} \begin{proof} Set $K=F(\mu_{p^{r}})$. We have the following commutative diagram, in which each row and each column (except the left column) is exact: \[ \begin{tikzcd} &0\arrow[r] &H^{1}(G_{K}, \mu_{p^{r}})\arrow[r] &\prod_{w}H^{1}(G_{K_{w}}, \mu_{p^{r}})\\ 0\arrow[r] &\sha^{1}(F, \mu_{p^{r}})\arrow[r]\arrow[u] &H^{1}(G_{F}, \mu_{p^{r}})\arrow[r]\arrow[u] &\prod_{v}H^{1}(G_{F_{v}}, \mu_{p^{r}})\arrow[u]\\ 0\arrow[r] &\sha^{1}(K/F, \mu_{p^{r}})\arrow[r]\arrow[u, hook] &H^{1}(\mathrm{Gal}(K/F), \mu_{p^{r}})\arrow[r]\arrow[u, hook] &\prod_{v}H^{1}(\mathrm{Gal}(K_{w}/F_{v}), \mu_{p^{r}})\arrow[u, hook] \end{tikzcd} \] where $\sha^{1} (K/F, \mu_{p^{r}})$ is defined by the exactness of the bottom row, and the top row is exact because an element of the kernel corresponds to a cyclic extension of $K$ of order dividing $p^r$ that splits at all primes $w$ of $K$, which must be trivial. (Again, $\mu_{p^r}=\mathbb{Z}/p^r\mathbb{Z}$ as a $G_{K}$-module and $H^1$ is just $\mathrm{Hom}$.) A diagram chase gives $\sha^{1} (F, \mu_{p^{r}}) =\sha^{1}(K/F, \mu_{p^{r}})$.
By Proposition 9.1.6 of \cite{Neuk}, $H^{1}(\mathrm{Gal}(K/F), \mu_{p^{r}}) =0$ except when \begin{itemize} \item $p=2$, $r\geq 2$, \item and $-1$ is in the image of $\mathrm{Gal}(K/F)\rightarrow(\mathbb{Z}/2^{r}\mathbb{Z})^{\times}$. \end{itemize} In this case, $H^{1}(\mathrm{Gal}(K/F), \mu_{2^{r}})=\mathbb{Z}/2\mathbb{Z}$. As a subspace of $H^{1}(\mathrm{Gal}(K/F), \mu_{2^{r}})$, $\sha^{1}(F, \mu_{p^{r}}) =\sha^{1}(K/F, \mu_{p^{r}})=0$ or $\mathbb{Z}/2\mathbb{Z}$. \end{proof} Recall the relation $\mathrm{Cor}\circ{\rm Res}=[F_{1}:F_{2}]$ and the commutative diagram: \[ \begin{tikzcd} H^{1}(G_{F_{2}}, \mu_{2^{r}})\arrow[r, "\mathrm{Res}"] &H^{1}(G_{F_{1}}, \mu_{2^{r}})\\ H^{1}(\mathrm{Gal}(F_{2}(\mu_{2^{r}})/F_{2}), \mu_{2^{r}}) \arrow[r, "\mathrm{Res}"]\arrow[u, hook] &H^{1}(\mathrm{Gal}(F_{1}(\mu_{2^{r}})/F_{1}), \mu_{2^{r}})\arrow[u, hook] \end{tikzcd} \] The bottom row is an isomorphism if we pick $F_{1}$ linearly disjoint from $F_{2}(\mu_{2^{r}})$ over $F_{2}$. If this is the case and $2\mid [F_{1}:F_{2}]$, then by Lemma \ref{2.2} \begin{equation} \begin{split} \mathrm{Cor}(\sha^{1}(F_{1}, \mu_{m}))&=\mathrm{Cor}(\sha^{1}(F_{1}, \mu_{2^{r}}))\\ &=\mathrm{Cor}(\sha^{1}(F_{1}(\mu_{2^{r}})/F_1, \mu_{2^{r}}))\\ &\subset \mathrm{Cor}(H^{1}(\mathrm{Gal}(F_{1}(\mu_{2^{r}})/F_{1}), \mu_{2^{r}}))\\ &=\mathrm{Cor}({\rm Res}(H^{1}(\mathrm{Gal}(F_{2}(\mu_{2^{r}})/F_{2}), \mu_{2^{r}})))\\ &=[F_{1}:F_{2}]\cdot H^{1}(\mathrm{Gal}(F_{2}(\mu_{2^{r}})/F_{2}), \mu_{2^{r}})\\ &=0 \label{cor} \end{split} \end{equation} Here, when we apply $\text{Cor}$ to some group, we always mean $\text{Cor}$ applied to the image of this group in $H^1(G_{F_1}, \mu_{2^r})$. Now we construct an $L_1$ such that the associated extension $F_{1}/F_{2}$ satisfies the property that $F_{1}$ is linearly disjoint from $F_{2}(\mu_{2^{r}})$ over $F_{2}$ and $2\mid [F_{1}:F_{2}]$. Choose a rational prime $p$ and a local field $E_{p}$, Galois over $\mathbb{Q}_p$, that contains $M'_p$, the composite of the images of all embeddings $\tau: F_2\hookrightarrow \overline{\mathbb{Q}}_p$, and has degree divisible by $2$ over it. We again apply Lemma 4.1.2 of \cite{CHT} to the extension $F_0F_2(\mu_{2^r})/\mathbb{Q}$ and the set of primes $S$ consisting of $p$ and $\infty$, with the prescribed local behavior: \begin{itemize} \item $E_{p}/\mathbb{Q}_p$, \item the trivial extension $\mathbb{R}/\mathbb{R}$ for the place $\infty$. \end{itemize} We get a totally real Galois extension $L_1/\mathbb{Q}$ linearly disjoint from $F_0F_2(\mu_{2^r})$ over $\mathbb{Q}$, and we set $F_1=F_2L_1=ML_2L_1$. Then $2\mid [F_1:F_2]$ because, for any place $v$ of $F_1$ above $p$, $(F_1)_v\supset (L_1)_{v\mid_{L_1}}\cong E_p\supset (F_2)_{v\mid_{F_2}}$ and the last inclusion has degree divisible by $2$. The property that $F_1=F_2L_1$ and $F_2(\mu_{2^r})$ are linearly disjoint over $F_2$ follows from the fact that $L_1$ and $F_2(\mu_{2^r})$ are linearly disjoint over $\mathbb{Q}$. Now, the fact that $L_1$ and $F_0F_2$ are linearly disjoint over $\mathbb{Q}$ implies that $L_1L_2$ and $F_0F_2$ are linearly disjoint over $L_2$. Hence $L_{1}L_2\cap F_{0}=L_1L_2\cap F_{0}F_{2}\cap F_{0}=L_{2}\cap F_{0}=\mathbb{Q}$.
We conclude that the image of $\tilde{\chi}$ in $H^{2}(G_{F_{1}}, \mathbb{Z}/m\mathbb{Z})$ is $0$ by (\ref{cor}), and thus we can take an $n$-th root of $\chi|_{G_{F_{1}}}$ for $F_{1}=LM\supset M$, where $L=L_1L_2$ constructed above is finite, totally real, Galois over $\mathbb{Q}$ and linearly disjoint from $F_{0}$ over $\mathbb{Q}$. \end{proof} \begin{lemma} \label{2N} {\it Let $l$ be a rational prime. Given any positive integer $s$ and a finite set of rational primes $S$, we can find a positive integer $N$, not divisible by $l$ or by any prime in $S$, satisfying:} \begin{itemize} \item {\it Let $r$ be the smallest positive integer such that $N\mid l^{r}-1$; then $s\mid r$.} \item {\it When $r$ is even, $N\nmid l^{r/2}+1$.} \end{itemize} \end{lemma} \begin{proof} Factorize $s$ as $s=2^{a_{0}}\displaystyle \prod_{i=1}^{m}p_{i}^{a_{i}}$ and write $p_{0}=2$. We will construct a sequence of pairwise coprime integers $M_{i}$ (not divisible by any rational prime in $S$) and a sequence of integers $t_{i}$ with $t_{i}\geq a_{i}$ for $i=0,1,\ldots, m$, such that $M_{i}\mid l^{r}-1$ if and only if $p_{i}^{t_{i}}\mid r$. Setting $N=\displaystyle \prod_{i=0}^{m}M_{i}$ and considering the order of $l$ in $(\mathbb{Z}/N\mathbb{Z})^{\times}\cong \displaystyle \prod_{i=0}^{m}(\mathbb{Z}/M_{i}\mathbb{Z})^{\times}$, we see that the first condition is satisfied. For the second condition (if $a_{0}>0$), we need to make $M_{0}$ satisfy the following extra property: $$ M_{0}\nmid l^{2^{t_{0}-1}}+1. $$ We work on $i=0$ first. Take $t_{0}>a_{0}$ large enough such that $t_{0}>2$ and, for each rational prime $q\in S\cup\{2\}$, one of the following holds: (1) $q \nmid l^{2^{k}}-1$ for any $k>0$; (2) $q\mid l^{2^{t_{0}-3}}-1$. The fact that $t_{0}>2$ gives $l^{2^{t_{0}-2}}+1\equiv 2$ mod $4$ and $l^{2^{t_{0}-1}}+1\equiv 2$ mod $4$ (or both are odd when $l=2$). Thus we may choose an odd prime divisor $A$ of $l^{2^{t_{0}-2}}+1$ and an odd prime divisor $B$ of $l^{2^{t_{0}-1}}+1$. We deduce that $$ AB\mid l^{2^{t_{0}}}-1,\ B\nmid l^{2^{t_{0}-1}}-1,\ A\nmid l^{2^{t_{0}-1}}+1. $$ Take $M_{0}=AB$. Thus the smallest $r$ such that $M_{0}\mid l^{r}-1$ is $2^{t_0}$, and $M_{0}\nmid l^{2^{t_{0}-1}}+1$. Also, for $q\in S$, if $q\mid M_{0}$, then $q\mid l^{2^{t_{0}-2}}+1$ or $q\mid l^{2^{t_{0}-1}}+1$. In either case (1) cannot hold, so $q\mid l^{2^{t_{0}-3}}-1$; combined with $q\mid l^{2^{t_{0}-2}}+1$ or $q\mid l^{2^{t_{0}-1}}+1$, this forces $q\mid 2$, i.e.\ $q=2$, contradicting the fact that $AB$ is odd. We have thus constructed an $M_{0}$ with the property stated above. Now we inductively construct $M_{i}$ and $t_{i}\geq a_{i}$ such that \begin{itemize} \item $M_{i}$ is not divisible by any rational prime in $S_{i}=\{p_{i}\}\cup S \cup \{$rational prime divisors of $M_{j}$ for $j<i\}\cup\{l\}\cup \{2\}$, \item the order of $l$ in $(\mathbb{Z}/M_{i}\mathbb{Z})^{\times}$ is $p_{i}^{t_{i}}$. \end{itemize} Choose $t_{i}>a_{i}$ large enough such that $l^{p_i^{t_i-2}}>p_{i}$ and, for each rational prime $q\in S_{i}$, one of the following holds: \begin{enumerate} \item $q\nmid l^{p_{i}^{k}}-1$ for any $k>0$, \item $q\mid l^{p_{i}^{t_{i}-2}}-1$. \end{enumerate} If $q\in S_{i}$ and $q\mid l^{p_{i}^{t_{i}-1}(p_i-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1$, then $q\mid l^{p_{i}^{t_{i}}}-1$ and so (1) cannot hold. Hence $q\mid l^{p_{i}^{t_{i}-2}}-1$. Thus $l^{p_{i}^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1\equiv p_{i}$ mod $q$, and we see that $q=p_{i}$.
In this case, $l^{p_{i}^{t_{i}-1}}\equiv (l^{p_{i}^{t_{i}-2}})^{p_i}\equiv (1+p_iu)^{p_i}\equiv 1$ mod $p_{i}^{2}$ for some integer $u$, so $l^{p_i^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1\equiv p_{i}$ mod $p_{i}^{2}$ and $p_{i}^{2}\nmid l^{p_{i}^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1$. Thus the only prime in $S_{i}$ that divides $l^{p_{i}^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1$ is $p_{i}$, and only to the first order. We may take an odd prime divisor $M_{i}$ of $l^{p_{i}^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1$ (which is $>p_i$) with $M_{i}\notin S_{i}$. Now $M_{i}\mid l^{p_{i}^{t_{i}}}-1$, but if $M_{i}\mid l^{p_i^{t_i-1}}-1$, then $l^{p_{i}^{t_{i}-1}(p_{i}-1)}+\cdots +l^{p_{i}^{t_{i}-1}}+1\equiv p_{i}$ mod $M_{i}$, so $M_{i}=p_{i}$, giving a contradiction. The two conditions on $M_{i}$ are thus satisfied. Now take $N=\prod_{i=0}^{m}M_{i}$ as promised. The smallest positive integer $r$ such that $N\mid l^{r}-1$ is $2^{t_0}\prod_{i=1}^{m}p_{i}^{t_{i}}$, a multiple of $s$. For the second condition, if $m>0$, i.e.\ if there are odd prime divisors of $s$, then $M_{1}\mid l^{2^{t_0-1}\prod_{i=1}^{m}p_{i}^{t_{i}}}-1$ yields a contradiction with $N\mid l^{r/2}+1$. If $m=0$, then the construction stops at the first step, and we have seen that $M_{0}=N\nmid l^{2^{t_{0}-1}}+1$. \end{proof} The following lemmas are taken from \cite{BLGHT}, Lemma 2.2, with a little modification for the second one. These lemmas will be used to prove that certain representations coming from the Dwork motive are ordinary. \begin{lemma} \label{2.3} {\it Suppose that $a\in(\mathbb{Z}^{n})^{\mathrm{Hom}(F,\overline{\mathbb{Q}}_{l}),+}$ and that $$ r: \mathrm{Gal}(\overline{F}/F)\rightarrow GL_{n}(\overline{\mathbb{Q}}_{l}) $$ is crystalline at all primes $v\mid l$. We think of $v$ as a valuation $v: F_{v}^{\times}\twoheadrightarrow \mathbb{Z}$. If $\tau: F\rightarrow\overline{\mathbb{Q}}_{l}$ lies above $v$, suppose that $$ \dim_{\overline{\mathbb{Q}}_{l}}\mathrm{gr}^{i}(r\otimes_{\tau,F_{v}}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{F}_{v}/F_{v})}=0 $$ unless $i=a_{\tau,j}+n-j$ for some $j=1,\ldots,n$, in which case $$ \dim_{\overline{\mathbb{Q}}_{l}}\mathrm{gr}^{i}(r\otimes_{\tau,F_{v}}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{F}_{v}/F_{v})}=1. $$ For $v\mid l$, let $\alpha_{v,1},\ldots, \alpha_{v,n}$ denote the roots of the characteristic polynomial of $\phi^{[F_{v}^{0}:\mathbb{Q}_{l}]}$ on $$ (r\otimes_{\tau,F_{v}^{0}}B_{\mathrm{cris}})^{\mathrm{Gal}(\overline{F}_{v}/F_{v})} $$ for any $\tau: F_{v}^{0}\hookrightarrow\overline{\mathbb{Q}}_{l}$. (Here $F_v^0$ is the maximal unramified subextension of $F_v$; this characteristic polynomial is independent of the choice of $\tau$.) Let $\mathrm{val}_{v}$ denote the valuation on $\overline{\mathbb{Q}}_{l}$ normalized by $\mathrm{val}_{v}(l)=v(l)$. (Thus $\mathrm{val}_{v}\circ\tau=v$ for any $\tau:F_{v}\hookrightarrow\overline{\mathbb{Q}}_{l}$.)}
{\it Arrange the} $\alpha_{v,i}$'{\it s such that} $$ \mathrm{val}_{v}(\alpha_{v,1})\geq \mathrm{val}_{v}(\alpha_{v,2})\geq\ldots\geq \mathrm{val}_{v}(\alpha_{v,n}) $$ {\it Then} $r$ {\it is ordinary of weight} $a$ {\it if and only if for all} $v\mid l$ {\it and all} $i=1,\ldots, n$ {\it we have} $$ \mathrm{val}_{v}(\alpha_{v,i})=\sum_{\tau}(a_{\tau,i}+n-i) $$ {\it where} $\tau$ {\it runs over the embeddings} $F\hookrightarrow\overline{\mathbb{Q}}_{l}$ {\it above} $v$. \end{lemma} \begin{remark} We will use $D_{\mathrm{cris},\tau}(r)$, $D_{\text{st},\tau}(r)$ to denote $(r\otimes_{\tau,F_{v}^{0}}B_{\mathrm{cris}})^{\mathrm{Gal}(\overline{F}_{v}/F_{v})}$, $(r\otimes_{\tau, F_v^0}B_{\text{st}})^{\mathrm{Gal}(\overline{F}_v/F_v)}$ respectively, for any $p$-adic representation $r$ and embedding $\tau$ as above. \end{remark} \section{Dwork Motives} \label{3} In this section, $l$ can be any prime, $n$ is any integer $\geq 2$, and $N$ is an integer that is \begin{itemize} \item odd and not divisible by any prime factor of $ln$; \item greater than $100n+100$; \end{itemize} note, however, that the cases $n>2$ and $n=2$ differ slightly, in that the category in which the objects we consider lie will change. We assume in this section that $F$ is a CM number field containing $\zeta_{N}$. We will modify the construction and argument of section 4 of \cite{BLGHT} to fit the situation where no self-duality holds. Let $T_{0}=\mathbb{P}^{1}-(\{\infty\}\cup\mu_{N})/\mathbb{Z}[1/N]$ with coordinate $t$ and let $Y\subset \mathbb{P}^{N-1}\times T_{0}$ be the projective family defined by the equation: $$ X_{1}^{N}+X_{2}^{N}+\cdots+X_{N}^{N}=NtX_{1}X_{2}\cdots X_{N} $$ Then $\pi:Y\rightarrow T_{0}$ is smooth of relative dimension $N-2$. We will write $Y_{s}$ for the fiber of this family at a point $s$. Let $H=\mu_{N}^{N}/\mu_{N}$, where the second $\mu_N$ embeds diagonally, and $$ H_{0}:=\{(\xi_{1},\ldots, \xi_{N})\in\mu_{N}^{N}:\xi_{1}\cdots\xi_{N}=1\}/\mu_{N}\subset H $$ Over $\mathbb{Z}[1/N, \zeta_{N}]$ there is an $H$ action on $Y$ by: $$ (\xi_{1},\ldots, \xi_{N})(X_{1},\ldots, X_{N}, t)=(\xi_{1}X_{1},\ldots, \xi_{N}X_{N}, (\xi_{1}\cdots\xi_{N})^{-1}t) $$ Thus $H_{0}$ acts on every fibre $Y_{s}$, and $H$ acts on $Y_{0}$. Fix a character $\chi: H_{0}\rightarrow\mu_{N}$ of the form: $$ \chi((\xi_{1},\ldots, \xi_{N}))=\prod_{i=1}^{N}\xi_{i}^{a_{i}} $$ where \[ (a_{1},\ldots, a_{N})=(1,2,4,5,\ldots, (N-n+2)/2, (N+n-2)/2,\ldots, N-5, N-4, N-3,0,0,\ldots, 0) \] when $n>2$ is odd, \[ (a_{1},\ldots, a_{N})=(1,2,3,4,5,7,8,\ldots, (N-n+3)/2, (N+n-3)/2, (N+n-1)/2,\ldots, N-4,0,0,\ldots, 0) \] when $n>2$ is even, and \[ (a_{1},\ldots, a_{N})=(1,2,\ldots (N-3)/2, 0, 0, 0, (N+3)/2, \ldots, N-1) \] when $n=2$. Note that $3, N-2, N-1$ do not occur in $(a_1,\ldots, a_N)$ when $n>2$ is odd, that $6, N-3, N-2, N-1$ do not occur in $(a_1,\ldots, a_N)$ when $n>2$ is even, and that there are $n+1$ zeros in $(a_1,\ldots, a_N)$ in these cases ($n>2$). This character is well-defined because $\sum_{i=1}^{N}a_{i}\equiv 0$ mod $N$. Let $(b_1, \ldots, b_n)$ be the mutually distinct residue classes in $\mathbb{Z}/N\mathbb{Z}$ such that $b_i+a_j\neq 0$ in $\mathbb{Z}/N\mathbb{Z}$ for any $j\in\{1,\ldots, N\}$. Hence we have the following expression: \[ \{b_{1},\ldots, b_{n}\}=\left\{ \begin{array}{ll} 1, 2, (N-n+4)/2, (N-n+6)/2,\ldots, (N+n-6)/2, (N+n-4)/2, N-3& n>2 \ \mathrm{odd}\\ 1, 2, 3, (N-n+5)/2, (N-n+7)/2,\ldots, (N+n-7)/2, (N+n-5)/2, N-6& n>2\ \mathrm{even}\\ (N-1)/2, (N+1)/2& n=2 \end{array}\right.
\] (When $n=3,4$, this is interpreted as $\{b_{1},\ldots, b_{n}\}=\{1,2, N-3\}$ and $\{1,2,3, N-6\}$ respectively.) We have the following combinatorial property of the set $\{b_1, \ldots, b_n\}\subset \mathbb{Z}/N\mathbb{Z}$. \begin{lemma} \label{3.5} Let $n>2$. {\it Consider} $\{b_{1},\ldots, b_{n}\}$ {\it as a subset of} $\{0,1,\ldots,N-1\}\cong \mathbb{Z}/N\mathbb{Z}$. {\it If for some} $\alpha\in(\mathbb{Z}/N\mathbb{Z})^{\times}$, $\{\alpha b_{1},\ldots,\alpha b_{n}\}=\{b_{1},\ldots,b_{n}\}$ {\it holds, then} $\alpha=1.$ \end{lemma} \begin{proof} If $n$ is even, then $1\in\{b_{1},\ldots, b_{n}\}$ and so $\alpha\in\{b_{1},\ldots, b_{n}\}$. If $\alpha=2$, then $2\in\{b_{1},\ldots, b_{n}\}$ and so $4\in\{b_{1},\ldots, b_{n}\}$. But by the assumption $N>100n+100$, $3<4<(N-n+5)/2$, so that $4\notin\{b_{1},\ldots, b_{n}\}$. Thus $\alpha\neq 2$. The same argument ($3<9, 36<(N-n+5)/2$) shows that $\alpha\neq 3, N-6$. If $n=4$, we are done. For $n\geq 6$, if $\alpha=(N+x)/2$ for some odd $x$ with $x\in[-n+5, n-5]$, then $2\alpha=x\in\{b_{1},\ldots, b_{n}\}$; but by the assumptions on $N$ and $n$, we have $(N+n-5)/2<N-n+5\leq N-1$ and $1\leq n-5< (N-n+5)/2$. Therefore $\{1, 3,\ldots, n-5, N-n+5, N-n+7,\ldots, N-1\}\cap\{b_{1},\ldots, b_{n}\}\subset \{1, 3\}$ and so $x=1$ or $3$. In either case, if $\alpha=(N+x)/2\in\{b_{1},\ldots, b_{n}\}$, then $(N-x)/2\in\{b_{1},\ldots, b_{n}\}$, so $-x^{2}/4\equiv(N+x)/2\cdot(N-x)/2\in\{b_{1},\ldots, b_{n}\}$, and thus $N-1\in\{4b_{1},\ldots, 4b_{n}\}$ or $N-9\in\{4b_{1},\ldots, 4b_{n}\}$. Viewed as a subset of the representatives $\{0,1,\ldots, N-1\}$, $\{4b_{1},\ldots, 4b_{n}\}\subset\{2, 6,\ldots,2n-10, N-(2n-10), N-(2n-14), \ldots, N-2\}\cup \{4,8,12, N-24\}$. But $N-1, N-9>2n-10, 12$, so they would have to lie in $\{N-(2n-10), N-(2n-14),\ldots, N-2\}\cup\{N-24\}$, a contradiction. If $n$ is odd, again $1\in\{b_{1},\ldots, b_{n}\}$ and so $\alpha\in\{b_{1},\ldots, b_{n}\}$. If $\alpha=2$, then again $4\in\{b_{1},\ldots, b_{n}\}$. But $2<4<(N-n+4)/2$ gives a contradiction. Similarly $\alpha\neq N-3$ because $2<9<(N-n+4)/2$. If $n=3$ we are done. For $n\geq 5$, we have $\alpha=(N+x)/2$ for some odd $x$ with $x\in [-n+4, n-4]$. Then $2\alpha=x\in\{b_{1},\ldots, b_{n}\}$; but by the assumption on $N$, we have $(N+n-4)/2<N-n+4\leq N-1$ and $1\leq n-4< (N-n+4)/2$. Therefore $\{1, 3,\ldots, n-4, N-n+4, N-n+6,\ldots, N-1\}\cap\{b_{1},\ldots, b_{n}\}\subset\{1, N-3\}$ and so $x=1$ or $-3$. In either case, if $\alpha=(N+x)/2\in\{b_{1},\ldots, b_{n}\}$, then $(N-x)/2\in\{b_{1},\ldots, b_{n}\}$, so $-x^{2}/4\equiv(N+x)/2\cdot(N-x)/2\in\{b_{1},\ldots, b_{n}\}$, and thus $N-1\in\{4b_{1},\ldots, 4b_{n}\}$ or $N-9\in\{4b_{1},\ldots, 4b_{n}\}$. Viewed as a subset of the representatives $\{0,1,\ldots, N-1\}$, $\{4b_{1},\ldots, 4b_{n}\}\subset\{2, 6, \ldots,2n-8, N-(2n-8), N-(2n-12), \ldots, N-2\}\cup\{4,8,N-12\}$. But $N-1, N-9>2n-8, 8$, so they would have to lie in $\{N-(2n-8), N-(2n-12), \ldots, N-2\}\cup\{N-12\}$, a contradiction. \end{proof} One reason for this choice is that when $n>2$ we want to avoid making the set $\{a_{1},\ldots, a_{N}\}$ self-dual, so that Lemma \ref{3.2} and Lemma \ref{3.4} hold; the above lemma will be crucial in avoiding self-duality. By contrast, if the set $\{a_{1},\ldots, a_{N}\}$ were chosen to be self-dual, the $V[\lambda]$ defined below would be a Galois representation taking values in $GSp_n$.
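\begin{remark} As a sanity check, independent of the proofs, the rigidity property of Lemma \ref{3.5} can be verified by machine for small admissible parameters. The following sketch assumes only Python 3; the choice $N=100n+101$ is simply an illustrative odd value with $N>100n+100$. \begin{verbatim}
from math import gcd

def b_set(N, n):
    # Reproduces the displayed choice of {b_1,...,b_n} in Z/NZ (N odd).
    if n == 2:
        return {(N - 1) // 2, (N + 1) // 2}
    if n % 2 == 1:                                        # n > 2 odd
        mid = range((N - n + 4) // 2, (N + n - 4) // 2 + 1)
        return {1, 2, N - 3} | set(mid)
    mid = range((N - n + 5) // 2, (N + n - 5) // 2 + 1)   # n > 2 even
    return {1, 2, 3, N - 6} | set(mid)

def stabilizers(N, n):
    # Units alpha with alpha * {b_i} = {b_i}; Lemma 3.5 predicts none
    # besides alpha = 1 when n > 2.
    B = b_set(N, n)
    return [a for a in range(2, N) if gcd(a, N) == 1
            and {(a * b) % N for b in B} == B]

for n in (3, 4, 5, 6):
    N = 100 * n + 101
    assert len(b_set(N, n)) == n and not stabilizers(N, n)
# n = 2 is genuinely excluded: alpha = -1 fixes {(N-1)/2, (N+1)/2}.
\end{verbatim} \end{remark}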
For any prime $\lambda$ of $\mathbb{Z}[1/2N, \zeta_{N}]$ of residue characteristic $l$, we define the lisse sheaf $V_{\lambda}/(T_{0}\times{\rm Spec}\ \mathbb{Z}[1/2Nl, \zeta_{N}])_{et}$ by: $$ V_{\lambda}=(R^{N-2}\pi_{\ast}\mathbb{Z}[\zeta_{N}]_{\lambda})^{\chi,H_{0}} $$ when $n>2$. When $n=2$, we use the same formula to define the object $U_\lambda$ for a prime $\lambda$ of $\mathbb{Z}[1/2N, \zeta_{N}]^+$, following the notation of \cite{BLGHT}. Similarly, for any nonzero ideal $\mathfrak{n}$ of $\mathbb{Z}[1/2N, \zeta_{N}]$ of norm $M$, we can define the lisse sheaf $V[\mathfrak{n}]/(T_{0}\times{\rm Spec}\ \mathbb{Z}[1/2NM, \zeta_{N}])_{et}$ by: $$ V[\mathfrak{n}]=(R^{N-2}\pi_{\ast}(\mathbb{Z}[\zeta_{N}]/\mathfrak{n}))^{\chi,H_{0}} $$ when $n>2$, and we use the same formula to define the object $U[\mathfrak{n}]$ for a nonzero ideal $\mathfrak{n}$ of $\mathbb{Z}[1/2N, \zeta_{N}]^+$. Since $H$ acts on $Y_{0}$, we have the following decompositions: $$ V_{\lambda,0}=\bigoplus_{i=1}^{N}V_{\lambda,i}, \ V[\mathfrak{n}]_{0}=\bigoplus_{i=1}^{N}V_{i}[\mathfrak{n}] $$ where $V_{\lambda,i}$ and $V_{i}[\mathfrak{n}]$ are the subspaces of $V_{\lambda,0}$, $V[\mathfrak{n}]_{0}$ on which $H$ acts by the character $\chi_{i}$: $$ \xi\mapsto\prod_{j=1}^{N}\xi_{j}^{a_{j}+i} $$ Again, in the case $n=2$ we write the same decomposition, into $U_{\lambda, i}$ and $U_i[\mathfrak{n}]$. Fix an embedding $\tau:\mathbb{Q}(\zeta_{N})\hookrightarrow\mathbb{C}$ such that $\tau(\zeta_{N})=e^{2\pi i/N}$. Let $\tilde{\pi}: Y(\mathbb{C})\rightarrow T_{0}(\mathbb{C})$ denote the base change of $\pi$ along $\tau$, viewed as a map of complex analytic spaces, and let $V_{B}$ be the locally constant sheaf over $T_{0}(\mathbb{C})$: $$ V_{B}=(R^{N-2}\tilde{\pi}_{\ast}\mathbb{Z}[\zeta_{N}])^{\chi,H_{0}} $$ when $n>2$; we denote the same object in the case $n=2$ by $U_B$. Let $\tau$ also denote the induced base change $(T_{0})_{\mathbb{C}}\rightarrow T_{0}\times \text{Spec} \ \mathbb{Z}[1/2NMl,\zeta_N]$. In the previous notation, $V_{B}\otimes_{\mathbb{Z}[\zeta_{N}]}\mathbb{Z}[\zeta_{N}]_{\lambda}$ corresponds to $\tau^{*}V_{\lambda}$ under the equivalence between locally constant analytic $\mathbb{Z}[\zeta_N]_\lambda$-sheaves on $T_0(\mathbb{C})$ and locally constant etale $\mathbb{Z}[\zeta_N]_\lambda$-sheaves on $(T_{0})_{\mathbb{C}}$. Similarly, $V_{B}\otimes_{\mathbb{Z}[\zeta_{N}]}\mathbb{Z}[\zeta_{N}]/\mathfrak{n}$ corresponds to $\tau^{*}V[\mathfrak{n}]$ under the equivalence between locally constant analytic $\mathbb{Z}[\zeta_N]/\mathfrak{n}$-sheaves on $T_0(\mathbb{C})$ and locally constant etale $\mathbb{Z}[\zeta_N]/\mathfrak{n}$-sheaves on $(T_{0})_{\mathbb{C}}$. A similar relation holds when $n=2$; see section 4 of \cite{BLGHT}. \ Let $T_{0}'=\mathbb{P}^{1}-\{0,1, \infty\}$ with coordinate $t'$ and let $Y'\subset \mathbb{P}^{N-1}\times T_{0}'$ be the projective family defined by the equation: $$ X_{1}^{\prime N}+X_{2}^{\prime N}+\cdots+t^{\prime-1}X_{N}^{\prime N}=NX_{1}'X_{2}'\cdots X_{N}' $$ Then $t'=t^{N}$ gives an $N$-fold Galois covering $T_{0}-\{0\}\rightarrow T_{0}'$, and $X_{1}'\mapsto X_{1}, X_{2}'\mapsto X_{2},\ldots, X_{N}'\mapsto tX_{N}$ identifies the pullback of $\pi': Y'\rightarrow T_{0}'$ along this covering with $\pi: Y-Y_{0}\rightarrow T_{0}-\{0\}$. Over $\mathbb{Z}[1/N, \zeta_{N}]$, $H_{0}$ acts on $Y'$ by $$ (\xi_{1},\ldots,\xi_{N})(X_{1}',\ldots, X_{N}', t')=(\xi_{1}X_{1}',\ldots, \xi_{N}X_{N}', t') $$ This $H_{0}$ action is compatible with the $H_{0}$ action on $Y-Y_{0}$.
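For the reader's convenience, we record the verification of the identification above: substituting $X_{i}'=X_{i}$ for $i<N$, $X_{N}'=tX_{N}$ and $t'=t^{N}$ into the defining equation of $Y'$ gives $$ X_{1}^{N}+\cdots+X_{N-1}^{N}+t^{-N}(tX_{N})^{N}=NX_{1}\cdots X_{N-1}(tX_{N}), $$ which is exactly the defining equation $X_{1}^{N}+\cdots+X_{N}^{N}=NtX_{1}\cdots X_{N}$ of $Y$ over $T_{0}-\{0\}$.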
\ Let $\widetilde{\pi}': Y'(\mathbb{C})\rightarrow T_{0}'(\mathbb{C})$ be the base change of $\pi'$ along $\tau$, viewed as a map of complex analytic spaces, and let $V_{B}'=(R^{N-2}\widetilde{\pi}_{\ast}'\mathbb{Z}[\zeta_{N}])^{\chi,H_{0}}$ be the corresponding locally constant sheaf over $T_{0}'(\mathbb{C})$. Then the pullback of $V_{B}'$ along the covering $T_{0}(\mathbb{C})-\{0\}\rightarrow T_{0}'(\mathbb{C})$ is naturally identified with $V_{B}$ over $T_{0}(\mathbb{C})-\{0\}$. Fix a nonzero base point $t\in T_{0}(\mathbb{C})$ and let $t'$ be its image in $T_{0}'(\mathbb{C})$. We now study the image of the monodromy representation: $$ \rho_{t}:\pi_{1}(T_{0}(\mathbb{C}), t)\rightarrow GL(V_{B,t}) $$ To do so, we in turn consider the monodromy representation: $$ \rho_{t'}:\pi_{1}(T_{0}'(\mathbb{C}), t')\rightarrow GL(V_{B,t'}') $$ \begin{proposition} {\it The sheaves} $V_{\lambda}, V[\mathfrak{n}], V_{B}, V_{B}'$ {\it are locally free over} $\mathbb{Z}[\zeta_{N}]_{\lambda}, \mathbb{Z}[\zeta_{N}]/\mathfrak{n}$, $\mathbb{Z}[\zeta_{N}], \mathbb{Z}[\zeta_{N}]$ {\it of rank} $n$ {\it respectively}. \end{proposition} \begin{proof} Local freeness follows because the family is smooth and proper. For the rank, one only needs to check at the fibre over $0$ and apply Proposition I.7.4 of \cite{DMOS}. \end{proof} A similar relation holds in the case $n=2$; see section 4 of \cite{BLGHT}. The difference in the case $n=2$ is that we may use the locally constant sheaves $V_\lambda, V[\mathfrak{n}], V_B$ defined there, with coefficients $\mathbb{Z}[\zeta_N]^+_\lambda, \mathbb{Z}[\zeta_N]^+/\mathfrak{n}, \mathbb{Z}[\zeta_N]^+$ respectively, such that $V_\lambda\otimes_{\mathbb{Z}[\zeta_N]^+}\mathbb{Z}[\zeta_N]$, $V[\mathfrak{n}]\otimes_{\mathbb{Z}[\zeta_N]^+}\mathbb{Z}[\zeta_N]$, $V_B\otimes_{\mathbb{Z}[\zeta_N]^+}\mathbb{Z}[\zeta_N]$ are isomorphic to $U_\lambda$, $U[\mathfrak{n}]$, $U_B$ respectively. From now on we consistently work with $V_\lambda, V[\mathfrak{n}], V_B$, regardless of whether $n>2$ or $n=2$. We already have the counterpart of Lemma \ref{3.4} for $n=2$, provided by Corollary 4.7 of \cite{BLGHT}, i.e. $\overline{\rho}_{t}(\pi_{1}(T_{0}(\mathbb{C}), t))=SL(V_{B,t}/\lambda)$. So we focus on the case $n>2$ until the end of the proof of Lemma \ref{3.4}. Let $\gamma_{0}, \gamma_{1}, \gamma_{\infty}$ be the loops around $0,1, \infty$, generating $\pi_{1}(T_{0}'(\mathbb{C}), t')$ subject only to the relation $\gamma_{0}\gamma_{1}\gamma_{\infty}=1$. Here we orient $\gamma_{0}$ so that its image in $\mathrm{Gal}((T_{0}(\mathbb{C})-\{0\})/T_{0}'(\mathbb{C}))$ is $e^{2\pi i/N}=\tau(\zeta_{N})$. \begin{lemma} \label{3.2} \begin{enumerate} \item $\rho_{t'}(\gamma_0)$ {\it has characteristic polynomial} $\prod_{j=1}^{n}(X-\zeta_{N}^{b_{j}})$. \item $\rho_{t'}(\gamma_{\infty})$ {\it has characteristic polynomial} $(X-1)^{n}.$ \item $\rho_{t'}(\gamma_{1})$ {\it is a transvection, i.e. it is unipotent and} $\mathrm{Ker}(\rho_{t'}(\gamma_{1})-1)$ {\it has dimension} $n-1$. \end{enumerate} \end{lemma} \begin{proof} (1) The action of $\gamma_{0}$ on $V_{B,t'}'$ is equivalent to the $\zeta_{N}$ action on $V_{B,0}$, which is scalar multiplication by $\zeta_{N}^{i}$ on the $\chi_{i}$-eigenspace of $V_{B,0}$. By Proposition I.7.4 of \cite{DMOS}, the $\chi_{i}$-eigenspaces are nonzero if and only if $0\notin\{i+a_1,\ldots,i+a_N\}$, i.e. $i\in\{b_{1},\ldots, b_{n}\}$, in which case the eigenspaces are all of rank 1.
Hence the expression for the characteristic polynomial of $\rho_{t'}(\gamma_{0})$ follows. \ (2) Let $Z_0$ be the variety $T(X_1^N+X_2^N+\cdots+X_N^N)=X_1X_2\cdots X_N$ contained in $\mathbb{P}^{N-1}\times \mathbb{A}^1$, and let $p$ denote the projection $Z_0\rightarrow \mathbb{A}^1$. It suffices to show that the monodromy around $0$ of the larger local system $R^{N-2}p_\ast\mathbb{C}$ has characteristic polynomial a power of $(X-1)$. We apply Lemma 2.1 of \cite{qian}, base changed to $\mathbb{C}$ via $W(k)[T, U^\pm]\rightarrow \mathbb{C}[T]$, $U\mapsto 1$, $T\mapsto T$, and a fixed isomorphism $\overline{W(k)[\frac{1}{p}]}\cong \mathbb{C}$, to conclude that there is a blowup $\mathbb{X}$ of $Z_0$ that is an isomorphism outside the fiber over $0$ and is semistable over the base $\mathbb{A}^1$. (Throughout this paper we call a map semistable if the corresponding normal crossing divisor is reduced and has no self-crossings.) Thus, by the vanishing cycle technique used to prove the local monodromy theorem (cf. \cite{ill} 2.1), we see that this monodromy is unipotent. Note that this makes use of the fact that our normal crossing divisor is reduced. \ (3) The proof is the same as that of part 2 of Lemma 4.3 in \cite{BLGHT}. \end{proof} Now we study the image of the monodromy map. Let $\lambda$ be a prime of $\mathbb{Z}[\zeta_{N}]$ (of $\mathbb{Z}[\zeta_{N}]^+$ when $n=2$) of characteristic $l$, and let $\overline{\rho}_{t'}: \pi_{1}(T_{0}'(\mathbb{C}), t')\rightarrow GL(V_{B,t'}'/\lambda)$, $\overline{\rho}_{t}: \pi_{1}(T_{0}(\mathbb{C}), t)\rightarrow GL(V_{B,t}/\lambda)$ be the reductions of $\rho_{t'}$, $\rho_{t}$ mod $\lambda$ respectively. We first give a description of $\rho_{t'}$, by Lemma \ref{3.2} and the following lemma. \begin{lemma} \label{3.3} {\it Let} $\rho$ {\it be the representation} $\rho:\pi_{1}(T_{0}'(\mathbb{C}), t')\rightarrow GL_{n}(\mathbb{Z}[\zeta_{N}])$ {\it sending} $\gamma_{0}$ {\it to} $B^{-1}, \gamma_{\infty}$ {\it to} $A$, {\it and} $\gamma_{1}$ {\it to} $BA^{-1}$, {\it where} $$ A=\ \left(\begin{array}{lllll} 0 & 0 & \cdots & 0 & -A_{n}\\ 1 & 0 & \cdots & 0 & -A_{n-1}\\ 0 & 1 & \cdots & 0 & -A_{n-2}\\ & & \ddots & & \\ 0 & 0 & \cdots & 1 & -A_{1} \end{array}\right) $$ $$ B=\ \left(\begin{array}{lllll} 0 & 0 & \cdots & 0 & -B_{n}\\ 1 & 0 & \cdots & 0 & -B_{n-1}\\ 0 & 1 & \cdots & 0 & -B_{n-2}\\ & & \ddots & & \\ 0 & 0 & \cdots & 1 & -B_{1} \end{array}\right) $$ {\it and} $A_{i}, B_{i}\in \mathbb{Z}[\zeta_{N}]$ {\it are the coefficients of the expansions}: $$ (X-1)^{n}=X^{n}+A_{1}X^{n-1}+\cdots+A_{n} $$ $$\prod_{i=1}^{n}(X-\zeta_{N}^{-b_i})=X^{n}+B_{1}X^{n-1}+\cdots+B_{n} $$ {\it Then, as representations into} $GL_{n}(\mathbb{C})$, $\rho_{t'}$ {\it and} $\rho$ {\it are equivalent}. \end{lemma} \begin{proof} See Theorem 3.5 of \cite{BH}. Observe also that $\rho(\gamma_{1})=BA^{-1}$ has the form $$\left(\begin{array}{lllll} C_{n} & 0 & \cdots & 0 & 0\\ C_{n-1} & 1 & \cdots & 0 & 0\\ & & \ddots & & \\ C_{2} & 0 & \cdots & 1 & 0\\ C_{1} & 0 & \cdots & 0 & 1 \end{array}\right) $$ with all the $C_{i}\in \mathbb{Z}[\zeta_{N}]$. \end{proof} Note that the matrix $A$ actually has minimal polynomial $(X-1)^n$ and is conjugate to $\rho_{t'}(\gamma_{\infty})$. We have the following corollary. \begin{cor} \label{maxun} $\rho_{t'}(\gamma_{\infty})$ has minimal polynomial $(X-1)^n$, and hence so does the image of the monodromy around $\infty$ under $\rho_t$.
\end{cor} Let $\overline{\rho}:\pi_{1}(T_{0}'(\mathbb{C}), t')\rightarrow GL_{n}(\mathbb{Z}[\zeta_{N}]/\lambda)$ be the reduction of $\rho$ with respect to $\lambda$. Following the argument of Proposition 3.3 of \cite{BH}: if $\overline{\rho}$ had block upper-triangular form when base changed to $\overline{k(\lambda)}$, then $\overline{\rho}(\gamma_{1})-1$ would vanish on one of the two blocks, since $\overline{\rho}(\gamma_{1})$ is a transvection, so that the eigenvalues of $\overline{\rho}(\gamma_{0})$ and $\overline{\rho}(\gamma_{\infty})$ would coincide on that block; this gives a contradiction because none of the $b_i$ is $0$. Thus $\overline{\rho}$ is absolutely irreducible. Let $\overline{\rho}_{t'}:\pi_{1}(T_{0}'(\mathbb{C}), t')\rightarrow GL(V_{B,t'}'/\lambda)$ be the reduction of $\rho_{t'}$ mod $\lambda$. It has the same trace as $\overline{\rho}$ by Lemma \ref{3.3}. So their semisimplifications are equivalent; since $\overline{\rho}$ is irreducible, they are in fact equivalent, and $\overline{\rho}_{t'}$ is absolutely irreducible. \begin{lemma} \label{3.4} {\it Assume the residue field} $k(\lambda)$ {\it of} $\lambda$ {\it is} $\mathbb{F}_{l^{r}}$ ({\it so} $r$ {\it is the smallest integer such that} $N\mid l^r-1$). {\it Under the assumption that} $N\nmid l^{r/2}+1$ {\it if} $r$ {\it is even and} $n>2$, {\it we have that} $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))=SL(V_{B,t'}'/\lambda)$ {\it and} $\overline{\rho}_{t}(\pi_{1}(T_{0}(\mathbb{C}), t))=SL(V_{B,t}/\lambda)$. \end{lemma} \begin{proof} The case $n=2$ is already resolved by Lemma 4.6 of \cite{BLGHT}, so we focus on the case $n>2$. Let $H$ be the normal subgroup of $\pi_{1}(T_{0}'(\mathbb{C}), t')$ generated by $\gamma_{1}$. Then $\pi_{1}(T_{0}'(\mathbb{C}), t')/H$ is cyclic, generated by $\gamma_{0}H$ or by $\gamma_{\infty}H$. Therefore the index $[\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t')) : \overline{\rho}_{t'}(H)]$ divides both the order of $\overline{\rho}_{t'}(\gamma_{0})$ and the order of $\overline{\rho}_{t'}(\gamma_{\infty})$. The former is a divisor of $N$ and the latter is an $l$-power; thus $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))=\overline{\rho}_{t'}(H)$. So $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$ is generated by transvections, and hence, by the main theorem of \cite{SZ}, $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$ is conjugate in $GL_{n}(k(\lambda))$ to one of the groups $SL_{n}(k)$, $Sp_{n}(k)$ or $SU(n, k)$ for some subfield $k\subset k(\lambda)$. Here $SU(n, k)$ is defined when $[k:\mathbb{F}_{l}]$ is even: $$ SU(n, k):=\{g\in SL_{n}(k): \sigma(g)^{t}g=1_{n}\} $$ where $\sigma$ is the unique order $2$ element in $\mathrm{Gal}(k/\mathbb{F}_{l})$. We want to show $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))= SL_{n}(k(\lambda))$ by excluding the other cases. If $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$ is conjugate in $GL_{n}(k(\lambda))$ to one of the groups $SL_{n}(k)$, $Sp_{n}(k)$ or $SU(n, k)$ for some proper subfield $k\subsetneqq k(\lambda)$, then there exists a nontrivial $\sigma\in \mathrm{Gal}(k(\lambda)/\mathbb{F}_{l})$ that preserves the eigenvalues of every element of $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$. Considering $\overline{\rho}_{t'}(\gamma_{0})$, this contradicts Lemma \ref{3.5}. \ If $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$ is conjugate to $Sp_{n}(k(\lambda))$, then by Proposition 6.1 of \cite{BH} we have $$ \{b_{1},\ldots, b_{n}\}=\{-b_{1},\ldots, -b_{n}\} $$ which contradicts Lemma \ref{3.5}.
\ If $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))$ is conjugate to $SU(n, k(\lambda))$, then we are in the situation where $[k(\lambda): \mathbb{F}_{l}]$ is even; assume $k(\lambda)=\mathbb{F}_{l^{2s}}$. Taking the eigenvalues of both sides of the equation $\sigma(\overline{\rho}_{t'}(\gamma_{0}))=({}^{t}\overline{\rho}_{t'}(\gamma_{0}))^{-1}$, we have $$ \{l^{s}b_{1},\ldots, l^{s}b_{n}\}=\{-b_{1},\ldots, -b_{n}\} $$ By Lemma \ref{3.5}, we must then have $l^s\equiv -1$ mod $N$, which contradicts our assumption. Thus $\overline{\rho}_{t'}(\pi_{1}(T_{0}'(\mathbb{C}), t'))=SL_{n}(k(\lambda))$. View $\overline{\rho}_t$ as defined on $\pi_{1}(T_{0}(\mathbb{C})- \{0\}, t)$ via the surjection $\pi_{1}(T_{0}(\mathbb{C})-\{0\}, t)\rightarrow\pi_{1}(T_{0}(\mathbb{C}), t)$. Since $\pi_{1}(T_{0}(\mathbb{C})-\{0\}, t)\vartriangleleft \pi_{1}(T_{0}'(\mathbb{C}), t')$ with cyclic quotient of order $N$, we have $\overline{\rho}_{t}(\pi_{1}(T_{0}(\mathbb{C})-\{0\}, t))\vartriangleleft SL_{n}(k(\lambda))$ with cyclic quotient of order dividing $N$. Now, as any cyclic quotient of $SL_{n}(k(\lambda))$ has order dividing $n$, and $N$ is prime to $n$, we see $\overline{\rho}_{t}(\pi_{1}(T_{0}(\mathbb{C}), t))=\overline{\rho}_{t}(\pi_{1}(T_{0}(\mathbb{C})-\{0\}, t))=SL_{n}(k(\lambda))$. \end{proof} Given any nonzero ideal $\mathfrak{n}$ of $\mathbb{Z}[1/2N, \zeta_{N}]$ (of $\mathbb{Z}[1/2N, \zeta_{N}]^+$ when $n=2$) and any finite free rank-$n$ module $W$ over $\mathbb{Z}[\zeta_{N}]/\mathfrak{n}$ (over $\mathbb{Z}[\zeta_{N}]^+/\mathfrak{n}$ when $n=2$) with a continuous $G_{F}$-action, we can view $W$ as a lisse sheaf on $(\text{Spec}\ F)_{et}$. Now $\wedge^{n}V[\mathfrak{n}]$ is a lisse sheaf over $(T_{0})_{F}$ of rank 1, and the associated monodromy representation $\det\rho:\pi_{1}(T_{0}, t)\rightarrow GL(\wedge^{n}V[\mathfrak{n}]_{t})$ restricted to $\pi_{1}^{\mathrm{geom}}(T_{0}, t)$ is trivial, by Abhyankar's lemma together with $\det(\gamma_{0})=\det(\gamma_{1})=\det(\gamma_{\infty})=1$. Thus $\det\rho$ factors through $\pi_{1}(\text{Spec}\ F)=G_{F}$. Suppose we are given an isomorphism of lisse sheaves over $(T_{0})_{F}$, via some prescribed isomorphism of $G_F$-characters: $$ \phi:\bigwedge^n W_{(T_{0})_{F}}\rightarrow\bigwedge^n V[\mathfrak{n}] $$ Let $\phi_{S}$ denote the base change of $\phi$ to a scheme $S$ over $(T_{0})_{F}$. Define the moduli functor $T_{W}$ as follows: \[ T_{W}: (\mathrm{Sch}/(T_{0})_{F}) \rightarrow (\mathrm{Sets}) \] \[ S\mapsto\{\psi\in \mathrm{Isom}_{S}(W_{S}, V[\mathfrak{n}]_{S}): \wedge^{n}\psi=\phi_{S}\} \] It is representable by a scheme $T_{W}$ smooth over $(T_{0})_{F}$. \begin{proposition} \label{3.6} {\it Under the notation and assumptions above, suppose that} $\mathfrak{n}=\mathfrak{P}_{1}\mathfrak{P}_{2}$, {\it where} $\mathfrak{P}_{1}, \mathfrak{P}_{2}$ {\it are two prime ideals of} $\mathbb{Z}[\zeta_{N}]$ {\it of different residue characteristics} $l_{1}, l_{2}$ ({\it prime to} $N$) {\it respectively, and that each} $l_{i}$ {\it satisfies the following condition}: \begin{itemize} \item {\it if} $n>2$ {\it and the smallest positive} $r$ {\it such that} $N\mid l_{i}^{r}-1$ {\it is even, then} $N\nmid l_{i}^{r/2}+1$. \end{itemize} {\it If moreover} $\mathrm{max}\{l_1, l_2\}>10$, {\it then} $T_{W}$ {\it is geometrically connected}.
\end{proposition} \begin{proof} Since $\pi_{1}(T_{0}(\mathbb{C}), t)\rightarrow SL(V_{B,t}/\mathfrak{P}_1)$ and $\pi_{1}(T_{0}(\mathbb{C}), t)\rightarrow SL(V_{B,t}/\mathfrak{P}_2)$ are surjective by Lemma \ref{3.4} and our condition, Goursat's lemma shows that there exist isomorphic quotients $\phi: SL_{n}(\mathbb{F}_{l_{1}^{r}})/H_1\cong SL_{n}(\mathbb{F}_{l_{2}^{s}})/H_2$ such that the image of $\pi_{1}(T_{0}(\mathbb{C}), t)$ in $SL(V_{B,t}/\mathfrak{n})$ is the preimage of the diagonal $\{(t, \phi(t))\in SL_{n}(\mathbb{F}_{l_{1}^{r}})/H_1\times SL_{n}(\mathbb{F}_{l_{2}^{s}})/H_2\}$ under the natural quotient map. Here we let the residue fields of $\mathfrak{P}_{1}, \mathfrak{P}_{2}$ be $\mathbb{F}_{l_{1}^{r}}, \mathbb{F}_{l_{2}^{s}}$ respectively. Assume without loss of generality that $l_1>10$. Then the only proper normal subgroups of $SL_{n}(\mathbb{F}_{l_1^{r}})$ are contained in its center, and the quotient group $PSL_{n}(\mathbb{F}_{l_1^{r}})$ is a simple group. Thus if $SL_{n}(\mathbb{F}_{l_{1}^{r}})/H_1$ is not trivial, it must have a Jordan-H\"older factor isomorphic to $PSL_{n}(\mathbb{F}_{l_1^{r}})$. Since $l_1>10$, no Jordan-H\"older factor of $SL_{n}(\mathbb{F}_{l_{2}^{s}})$ with $l_2\neq l_1$ is isomorphic to $PSL_{n}(\mathbb{F}_{l_1^{r}})$, as one checks from the duplication relations in the case $A_n$ of the classification of finite simple groups (cf. \cite{spgp}). This contradiction gives us $SL_{n}(\mathbb{F}_{l_{1}^{r}})/H_1=1$, and the map $\pi_{1}(T_{0}(\mathbb{C}), t)\rightarrow SL(V_{B,t}/\mathfrak{n})$ is surjective. Hence for any $t\in T_{0}(\mathbb{C})$ and any two geometric points of $T_W$ above it, corresponding to two isomorphisms $\psi_{1}, \psi_{2}: W\rightarrow V[\mathfrak{n}]_{t}$ that respect $\phi$ (not necessarily respecting any Galois action, because the points are geometric; hence such points always exist), we can pick a path $\gamma\in\pi_{1}^{\mathrm{geom}}(T_{0}, t)$ whose image under the monodromy map is $\psi_{2}\circ\psi_{1}^{-1}$. Going along $\gamma$ induces a path in $T_{W}(\mathbb{C})$ connecting $\psi_{1}$ and $\psi_{2}$ (viewed as points in $T_{W}(\mathbb{C})$), and geometric connectedness follows. \end{proof} \begin{lemma} \label{npower} Let $\zeta_N\in F$, let $\lambda$ be a prime of $\mathbb{Q}(\zeta_N)$ (of $\mathbb{Q}(\zeta_N)^+$ when $n=2$) over the fixed prime $l$, and let $k(\lambda)$ be the residue field. Then, viewing $\det V[\lambda]$ as a $G_F$ representation as explained above, we have $\det V[\lambda](G_F)\subset (\mathbb{F}_{l^{2}}^{\times}k(\lambda)^{\times})^{n}$.
\end{lemma} \begin{proof} By the preceding analysis, it suffices to calculate the $G_{F}$ action on $V_{\lambda,0}=\oplus_{i=1}^{N}V_{\lambda,i}=\oplus_{i=1}^{N}H^{N-2}(Y_{0}, \mathbb{Z}[\zeta_{N}]_{\lambda})^{\chi_{i},H}$, where $Y_{0}$ is the Fermat hypersurface $X_{1}^{N}+\cdots+X_{N}^{N}=0$ in $\mathbb{P}^{N-1}$ and $\chi_{i}: H\rightarrow\mu_{N}$ is the character defined by: $$ \xi=(\xi_{1},\ldots, \xi_{N})\mapsto\prod_{j=1}^{N}\xi_{j}^{a_{j}+i} $$ By Proposition I.7.10 of \cite{DMOS}, $V_{\lambda,i}\neq 0$ (in fact $1$-dimensional) only when $i\in\{b_{1},\ldots, b_{n}\}$, and $\mathrm{Frob}_{v}$ acts on it by the scalar $q^{-1}\displaystyle \prod_{j=1}^{N}g(v, a_{j}+i)$, where $v$ is any place of $F$ whose residue characteristic does not divide $N$ or $l$, $q=\# k(v)$, and $g(v, a)$ (for $a\in \mathbb{Z}/N\mathbb{Z}$) is the Gauss sum defined with respect to a fixed additive character $\psi: \mathbb{F}_{q}\rightarrow(\overline{\mathbb{Q}}_{l})^{\times}$: \[ g(v, a)=-\sum_{x\in \mathbb{F}_{q}^{\times}}t(x^{\frac{1-q}{N}})^{a}\psi(x) \] Here we also fix an isomorphism $t$ between the group of $N$-th roots of unity in $\mathbb{F}_{q}^\times$ and the group of $N$-th roots of unity in $\overline{\mathbb{Q}}_{l}$. We remark that each $g(v, a)$ depends on the choice of $\psi$, but $q^{-1}\displaystyle \prod_{j=1}^{N}g(v, a_{j}+i)$ does not. Thus $\mathrm{Frob}_{v}$ acts as $$ q^{-n}\prod_{j=1}^{n}\prod_{i=1}^{N}g(v, a_{i}+b_{j}) $$ on $\det V_{\lambda,0}$. Considering the choice of the $a_{i}$ and $b_{j}$, the product can be rewritten as \begin{equation} \begin{split}\label{4.1} \prod_{j=1}^{n}\prod_{i=1}^{N}g(v, a_{i}+b_{j})&=(\prod_{j=1}^{n}g(v, b_{j}))^{n}\prod_{j=1}^{n}\prod_{s\neq -b_{k},\forall k}g(v, s+b_{j})\\ &=(\prod_{j=1}^{n}g(v, b_{j}))^{n}(\prod_{s\neq 0}g(v, s))^{n}/\prod_{i,j\in\{1,\ldots,n\},i\neq j}g(v, b_{j}-b_{i})\\ &=(\prod_{j=1}^{n}g(v, b_{j}))^{n}(\prod_{s\neq 0}g(v, s))^{n}/\prod_{i,j\in\{1,\ldots,n\},i<j}q\\ &=(\prod_{j=1}^{n}g(v, b_{j}))^{n}(q^{(N-1)/2})^{n}/q^{n(n-1)/2} \end{split} \end{equation} for $v$ of odd residue characteristic, where $s$ always ranges through the residue classes in $\mathbb{Z}/N\mathbb{Z}$. In the last two steps, we use that for any nonzero $a\in \mathbb{Z}/N\mathbb{Z}$, $$ g(v,a)g(v,-a)=(-1)^{a\frac{1-q}{N}}q=q $$ We further verify that $\displaystyle \prod_{j=1}^{n}g(v, b_{j})\in \mathbb{Q}_l(\zeta_{N})$ by checking that for all $\sigma\in G_{\mathbb{Q}_l(\zeta_{N})}$, if $\sigma(\zeta_{p})=\zeta_{p}^{a}$ ($p$ being the residue characteristic of $v$), then \begin{equation} \begin{split}\label{4.2eq} \sigma(\prod_{j=1}^{n}g(v, b_{j}))&=\sigma(\prod_{j=1}^{n}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{b_{j}}\psi(x))\\ &=\prod_{j=1}^{n}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{b_{j}}\psi(x)^{a}\\ &=\prod_{j=1}^{n}\sum_{x\in \mathbb{F}_{q}^{\times}}-t((a^{-1}x)^{\frac{1-q}{N}})^{b_{j}}\psi(x)\\ &=\prod_{j=1}^{n}t(a^{\frac{q-1}{N}})^{b_{j}}\prod_{j=1}^{n}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{b_{j}}\psi(x)\\ &=\prod_{j=1}^{n}g(v, b_{j}) \end{split} \end{equation} since $\sum_{j=1}^{n}b_{j}\equiv 0$ mod $N$. This suffices when $n>2$. When $n=2$, we have to show $\displaystyle \prod_{j=1}^{n}g(v, b_{j})\in \mathbb{Q}_l(\zeta_{N})^+$.
For this, it suffices to take $\sigma\in G_{\mathbb{Q}_l(\zeta_{N})^+}$ such that $\sigma(\zeta_N)=\zeta_N^{-1}$ and $\sigma(\zeta_p)=\zeta_p$; then \begin{equation} \begin{split} \sigma(\prod_{j=1}^{2}g(v, b_{j}))&=\sigma(\prod_{j=1}^{2}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{b_{j}}\psi(x))\\ &=\prod_{j=1}^{2}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{-b_{j}}\psi(x)\\ &=\prod_{j=1}^{2}\sum_{x\in \mathbb{F}_{q}^{\times}}-t(x^{\frac{1-q}{N}})^{b_{j}}\psi(x)\\ &=\prod_{j=1}^{2}g(v, b_{j}) \end{split} \end{equation} because $\{b_1, b_2\}=\{-b_1, -b_2\}$. Therefore, we deduce that $\det V[\lambda](G_{F})$ lands in $(\mathbb{F}_{l^{2}}^{\times}k(\lambda)^{\times})^{n}$. \end{proof} \ We now use the comparison theorems to deduce some $p$-adic Hodge theoretic properties of certain $V_{\lambda,t}$. Before doing that, let us fix some notation, following \cite{BLGHT}. Let $\mathcal{H}_{\mathrm{DR}}$ denote the degree $N-2$ relative de Rham cohomology of $Y$: $$ \mathcal{H}_{\mathrm{DR}}=\mathcal{H}_{\mathrm{DR}}^{N-2}(Y/(T_{0}\times \mathbb{Q}(\zeta_{N}))) $$ It is a locally free sheaf over $T_{0}\times \mathbb{Q}(\zeta_{N})$ with a decreasing filtration $F^{j}\mathcal{H}_{\mathrm{DR}}$ by local direct summands. For $\sigma\in \mathrm{Gal}(\mathbb{Q}(\zeta_{N})/\mathbb{Q})$, let $\mathcal{H}_{\mathrm{DR},\sigma}$, $F^{j}\mathcal{H}_{\mathrm{DR},\sigma}$ be the ``twists'' of $\mathcal{H}_{\mathrm{DR}}, F^{j}\mathcal{H}_{\mathrm{DR}}$ respectively: $$ \mathcal{H}_{\mathrm{DR},\sigma}=\mathcal{H}_{\mathrm{DR}}\otimes_{\sigma^{-1},\mathbb{Q}(\zeta_{N})}\mathbb{Q}(\zeta_{N})\text{ , }F^{j}\mathcal{H}_{\mathrm{DR},\sigma}=F^{j}\mathcal{H}_{\mathrm{DR}}\otimes_{\sigma^{-1},\mathbb{Q}(\zeta_{N})}\mathbb{Q}(\zeta_{N}) $$ $H_{0}$ acts on $\mathcal{H}_{\mathrm{DR}}, F^{j}\mathcal{H}_{\mathrm{DR}}$ in the usual way. Let $V_{\mathrm{DR},\sigma}$, $F^{j}V_{\mathrm{DR},\sigma}$ denote the $\chi$ eigenspaces of $\mathcal{H}_{\mathrm{DR},\sigma}, F^{j}\mathcal{H}_{\mathrm{DR},\sigma}$: $$ V_{\mathrm{DR},\sigma}=(\mathcal{H}_{\mathrm{DR},\sigma})^{\chi,H_{0}},\ F^{j}V_{\mathrm{DR},\sigma}=(F^{j}\mathcal{H}_{\mathrm{DR},\sigma})^{\chi,H_{0}} $$ where we view $\mathcal{H}_{\mathrm{DR},\sigma}$, $F^{j}\mathcal{H}_{\mathrm{DR},\sigma}$ as $\mathbb{Q}(\zeta_{N})$-vector spaces via the action on the right. Let $\mathrm{gr}^{j}V_{\mathrm{DR},\sigma}=F^{j}V_{\mathrm{DR},\sigma}/F^{j+1}V_{\mathrm{DR},\sigma}$ be the associated graded pieces. Again $H$ acts on $\mathcal{H}_{\mathrm{DR},\sigma, 0}$, $F^{j}\mathcal{H}_{\mathrm{DR},\sigma,0}$, and we have: $$ V_{\mathrm{DR},\sigma,0}=\bigoplus_{i=1}^{N}V_{\mathrm{DR},\sigma,i},\ F^{j}V_{\mathrm{DR},\sigma,0}=\bigoplus_{i=1}^{N}F^{j}V_{\mathrm{DR},\sigma,i} $$ where $V_{\mathrm{DR},\sigma,i}$, $F^{j}V_{\mathrm{DR},\sigma,i}$ are the subspaces of $V_{\mathrm{DR},\sigma,0}$, $F^{j}V_{\mathrm{DR},\sigma,0}$ respectively on which $H$ acts by: $$ \xi\mapsto\prod_{j=1}^{N}\xi_{j}^{a_{j}+i} $$ and we let $\mathrm{gr}^{j}V_{\mathrm{DR},\sigma,i}=F^{j}V_{\mathrm{DR},\sigma,i}/F^{j+1}V_{\mathrm{DR},\sigma,i}$ be the associated graded pieces.
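\begin{remark} As a numerical aside (not used in the arguments), the Gauss-sum identity $g(v,a)g(v,-a)=q$ for $a\neq 0$, which entered the computation (\ref{4.1}), is easy to check by machine when $q=p$ is prime. The following sketch assumes only Python 3; the parameters $p=31$, $N=5$ and the primitive root $3$ mod $31$ are illustrative choices. \begin{verbatim}
import cmath

def gauss_sum(p, N, a, g):
    # g(v,a) = -sum over x in F_p^* of t(x^((1-p)/N))^a * psi(x),
    # realized with t(g^((p-1)/N)) = exp(2*pi*i/N),
    # psi(x) = exp(2*pi*i*x/p), and x = g^k for k = 0, ..., p-2.
    total, x = 0.0, 1
    for k in range(p - 1):
        total += cmath.exp(-2j * cmath.pi * a * k / N) \
               * cmath.exp(2j * cmath.pi * x / p)
        x = (x * g) % p
    return -total

p, N, g = 31, 5, 3          # N | p - 1 and 3 is a primitive root mod 31
for a in range(1, N):
    prod = gauss_sum(p, N, a, g) * gauss_sum(p, N, -a, g)
    assert abs(prod - p) < 1e-6   # g(v,a) g(v,-a) = q for nonzero a
\end{verbatim} \end{remark}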
Let $\lambda$ and $v$ be primes of $\mathbb{Z}[\zeta_N]$, both of residue characteristic $l$. Here $\lambda$ is the prime of the coefficient field, as before, and $v$ is the place we will restrict to in the $p$-adic Hodge theory setting. If $F$ is a finite extension of $\mathbb{Q}(\zeta_{N})_{v}$ and $t\in T_{0}(F)$, then for an embedding $\sigma:F\hookrightarrow\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}$, the etale comparison theorem gives \[ ((H^{N-2}_{et}(Y_{t}\times\overline{F}, \mathbb{Z}[\zeta_{N}]_{\lambda})\otimes_{\mathbb{Z}[\zeta_{N}]_{\lambda}}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda})\otimes_{\sigma,F}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{F}/F)}\cong H_{\mathrm{DR}}^{N-2}(Y_{t}/F)\otimes_{F,\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda} \] as filtered vector spaces. Taking the $\chi$ eigenspace of the $H_{0}$ action on both sides gives (notice the twist): $$ ((V_{\lambda,t}\otimes_{\mathbb{Z}[\zeta_{N}]_{\lambda}}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda})\otimes_{\sigma,F}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{F}/F)}\cong V_{\mathrm{DR},\sigma_{|\mathbb{Q}(\zeta_{N})},t}\otimes_{F,\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda} $$ as filtered vector spaces. Similarly, for $i\in\{1,2,\ldots, N\}$ and $\sigma:\mathbb{Q}(\zeta_{N})_{v}\hookrightarrow\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}$, viewing $0\in T_{0}(\mathbb{Q}(\zeta_{N})_{v})$, we have: \[ ((V_{\lambda,i}\otimes_{\mathbb{Z}[\zeta_{N}]_{\lambda}}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda})\otimes_{\sigma,\mathbb{Q}(\zeta_{N})_{v}}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{\mathbb{Q}(\zeta_{N})_{v}}/\mathbb{Q}(\zeta_{N})_{v})}\cong V_{\mathrm{DR},\sigma_{|\mathbb{Q}(\zeta_{N})},i}\otimes_{\mathbb{Q}(\zeta_{N})_{v},\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda} \] For $a\in \mathbb{Z}/N\mathbb{Z}$, we write $\overline{a}$ for the representative of $a$ in the range $\{1, 2, \ldots, N\}$. Let $\tau_{0}:\mathbb{Q}(\zeta_{N})\hookrightarrow\mathbb{C}$ be the embedding $\zeta_N\mapsto e^{2\pi i/N}$. Assume $\sigma^{-1}(\zeta_{N})=\zeta_{N}^{a}$. \begin{lemma} \label{3.7} {\it Under the notation and assumptions above, we have} \begin{enumerate} \item $V_{DR,\sigma,i}\neq(0)$ {\it only when} $i\in\{b_{1},\ldots, b_{n}\}$. {\it For each such} $i$, $V_{DR,\sigma,i}$ {\it is a one-dimensional} $\mathbb{Q}(\zeta_N)$-{\it vector space, and} $\mathrm{gr}^{j}V_{DR,\sigma,i}\neq 0$ {\it only when} $$ j=M(a)+\#\{b\in\{b_{1},\ldots, b_{n}\}: \overline{ab}<\overline{ai}\} $$ {\it where} $M(a)$ {\it is a constant determined by} $a$. \item $\mathrm{gr}^{j}V_{DR,\sigma}$ {\it is locally free of rank} 1 {\it over} $T_{0}\times \mathbb{Q}(\zeta_{N})$ {\it when} $M(a)\leq j\leq M(a)+ n-1$ {\it and is} (0) {\it otherwise}. $V_{DR,\sigma}$ {\it is a locally free sheaf over} $T_{0}\times \mathbb{Q}(\zeta_{N})$ {\it of rank} $n$. \end{enumerate} \end{lemma} \begin{proof} Base change to $\mathbb{C}$ gives $$ \mathrm{gr}^{j}V_{\mathrm{DR},\sigma,i}\otimes_{\mathbb{Q}(\zeta_{N}),\tau_{0}\sigma^{-1}}\mathbb{C}\cong H^{j,N-2-j}(Y_{0}(\mathbb{C}), \mathbb{C})_{(a(a_{1}+i),\ldots,a(a_{N}+i))} $$ where we define $Y(\mathbb{C})$ via the embedding $\tau_{0}$, and the right-hand side of the isomorphism is defined as the eigenspace of $H^{j,N-2-j}(Y_{0}(\mathbb{C}), \mathbb{C})$ on which $H$ acts by $\xi\mapsto \prod_{j=1}^{N}\xi_{j}^{a(a_{j}+i)}$.
Propositions I.7.4 and I.7.6 of \cite{DMOS} give that the right-hand side is nonzero if and only if \begin{itemize} \item the indices $a(a_{1}+i),\ldots, a(a_{N}+i)$ are all nonzero mod $N$, \item and $$ j+1=(\overline{a(a_{1}+i)}+\ldots+\overline{a(a_{N}+i)})/N $$ \end{itemize} i.e. $i\in\{b_{1},\ldots, b_{n}\}$, and we derive the formula for $j$ for a fixed such $i$ as follows. For $1\leq d\leq N$, let $$ j(d)=(\overline{aa_{1}+d}+\ldots +\overline{aa_{N}+d})/N-1 $$ Note that each nonzero $a_{i}$ appears only once in the sum. Then $j(d+1)=j(d)+1$ if $d\equiv ab_{j}$ mod $N$ for some $b_{j}$ (in this case, none of the $\overline{aa_{i}+d}$ is $N$), and $j(d+1)=j(d)$ otherwise (in this case, exactly one of the $\overline{aa_{i}+d}$ is $N$). Using this formula inductively, we see that taking $M(a)=j(1)$ gives the formula in (1). Since $\mathrm{gr}^{j}V_{\mathrm{DR},\sigma}$ is locally free, it suffices to look at the fibre over $0$. Lining up the $\overline{ab_{i}}$ in order, we see that the $j$ with $\mathrm{gr}^{j}V_{\mathrm{DR},\sigma,0}\neq 0$ are precisely $M(a),\ldots, M(a)+ n-1$. (2) follows immediately. \end{proof} \begin{lemma} \label{3.8} {\it Under the notation and assumptions above, let} $t\in F$, {\it viewed as a point in} $T_{0}(F)$, {\it where} $\zeta_N\in F$. {\it We have:} \begin{enumerate} \item $V_{\lambda,t}$ {\it is a de Rham representation of} $G_{F}$. {\it For} $\sigma:F\hookrightarrow\overline{\mathbb{Q}(\zeta_{N})}_{\lambda}$, {\it if} $\sigma^{-1}(\zeta_{N})=\zeta_{N}^{a}$, {\it then the Hodge-Tate weights of} $V_{\lambda,t}\otimes_{\mathbb{Z}[\zeta_{N}]_{\lambda}}\overline{\mathbb{Q}(\zeta_{N})_{\lambda}}$ {\it with respect to} $\sigma$ {\it are} $$ \{M(a),\ M(a)+1,\ldots, M(a)+n-1\}. $$ \item {\it If} $t\in \mathcal{O}_{F}$ {\it and} $t^{N}-1\not\in \mathfrak{m}_{F}$, {\it then} $V_{\lambda,t}$ {\it is crystalline}. \item {\it If} $l\equiv 1$ {\it mod} $N$ {\it and} $t\in \mathfrak{m}_{F}$, {\it then} $V_{\lambda,t}$ {\it is ordinary of weight} $(\lambda_{\sigma,i})$ {\it with} $\lambda_{\sigma,i}=M(a_{\sigma})$ {\it for all} $i$, {\it where} $a_{\sigma}$ {\it satisfies} $\sigma^{-1}(\zeta_{N})=\zeta_{N}^{a_{\sigma}}$. \item {\it If} $v(t)<0$, {\it then} $V_{\lambda,t}$ {\it is regular and ordinary of weight} $(\lambda_{\sigma,i})$ {\it with} $\lambda_{\sigma,i}=M(a_{\sigma})$ {\it for all} $i$, {\it where} $a_{\sigma}$ {\it satisfies} $\sigma^{-1}(\zeta_{N})=\zeta_{N}^{a_{\sigma}}$. \end{enumerate} \end{lemma} \begin{proof} (1) is clear from the comparison theorem and Lemma \ref{3.7}. (2) follows from the fact that these $Y_{t}$ have good reduction modulo the maximal ideal of $F$. (3) We observe that, since $t\in \mathfrak{m}_{F}$, $$ (V_{\lambda,0}\otimes_{\sigma, F}B_{\mathrm{cris}})^{\mathrm{Gal}(\overline{F}/F)}\cong(V_{\lambda,t}\otimes_{\sigma, F}B_{\mathrm{cris}})^{\mathrm{Gal}(\overline{F}/F)} $$ as $\phi$-modules, because both sides can be written as the $\chi$-eigenspace of the crystalline cohomology of the reduction of $Y_{0}$. Moreover, the Hodge-Tate weights of $V_{\lambda,0}$ and $V_{\lambda,t}$ are the same by (1). Thus by Lemma \ref{2.3}, $V_{\lambda,t}$ is ordinary of weight $(M(a_\sigma))$ if and only if $V_{\lambda,0}$ is ordinary of weight $(M(a_\sigma))$. Recall $V_{\lambda,0}=\oplus_{i=1}^{N}V_{\lambda,i}$ as $G_{\mathbb{Q}(\zeta_{N})_{v}}=G_{\mathbb{Q}_{l}}$ representations; both are $\mathbb{Q}(\zeta_{N})_{\lambda}=\mathbb{Q}_{l}$ vector spaces.
Since \[ ((V_{\lambda,i}\otimes_{\mathbb{Z}[\zeta_{N}]_{\lambda}}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda})\otimes_{\sigma,\mathbb{Q}(\zeta_{N})_{v}}B_{\mathrm{DR}})^{\mathrm{Gal}(\overline{\mathbb{Q}(\zeta_{N})_{v}}/\mathbb{Q}(\zeta_{N})_{v})}\cong V_{\mathrm{DR},\sigma_{|\mathbb{Q}(\zeta_{N})},i}\otimes_{\mathbb{Q}(\zeta_{N})_{v},\sigma}\overline{\mathbb{Q}(\zeta_{N})}_{\lambda} \] $V_{\lambda,i}$ is $1$-dimensional when $i\in\{b_{1},\ldots, b_{n}\}$, with Hodge-Tate weight $M(a_\sigma)+ \#\{b\in\{b_{1},\ldots, b_{n}\}: \overline{a_\sigma b}<\overline{a_\sigma i}\}$ in this case. Thus $$ V_{\lambda,0}\otimes_{\mathbb{Q}_{l}}\overline{\mathbb{Q}}_{l}\cong\bigoplus_{i=0}^{n-1}\overline{\mathbb{Q}}_{l}(-M(a_\sigma)-i) $$ as $I_{\mathbb{Q}(\zeta_{N})_{v}}=I_{\mathbb{Q}_{l}}$-representations. Therefore (3) follows. (4) This is the main theorem of \cite{qian}. \end{proof} \section{Proof of the Main Result, Theorem \ref{1.1}} We fix a non-CM elliptic curve $E/\mathbb{Q}$. For any prime $l'$, let $\overline{r}_{E, l'}$ be the $G_\mathbb{Q}$ representation $H^1_{et}(E, \mathbb{F}_{l'})$. Write $n=l^{a}m$ with $l\nmid m$. By Lemma \ref{2N}, we can find a positive integer $N$ satisfying the following properties, which involve only $\overline{r}, F^{\mathrm{av}}$ and $n$ as given in Theorem \ref{1.1}. \begin{itemize} \item $N$ is odd, and is not divisible by any prime factor of $ln$, by any prime that is ramified in $F^{\mathrm{av}}$ or $F^{\mathrm{ker}\overline{r}}$, or by any prime at which the elliptic curve $E$ has bad reduction. \item $N>100n+100$. \item $\mathbb{F}_{l^{2}}\mathbb{F}'\subset \mathbb{F}_{l}(\zeta_{N})$, where $\mathbb{F}'$ is the finite field generated by all the $m$-th roots (hence $n$-th roots) of elements of the field $\mathbb{F}_{l^{s}}$, the latter chosen so that the residual representation takes the form $\overline{r}: G_F\rightarrow GL_n(\mathbb{F}_{l^{s}})$. When $n=2$, we further require that $\mathbb{F}_l(\zeta_N)=\mathbb{F}_{l^r}$ for some even $r$ and that $\mathbb{F}_{l^{2}}\mathbb{F}'\subset \mathbb{F}_{l}(\zeta_{N})^+$. These requirements all amount to the condition that the smallest positive integer $r$ with $N\mid l^r-1$ be divisible by certain integers. \item Writing $\mathbb{F}_l(\zeta_N)=\mathbb{F}_{l^r}$: when $r$ is even, $N\nmid l^{r/2}+1$. \end{itemize} Set $F^{\mathrm{avoid}}$ to be the normal closure over $\mathbb{Q}$ of $F^{\mathrm{av}}\overline{F}^{\mathrm{Ker}\overline{r}}(\zeta_l)$. By the conditions above, $\mathbb{Q}(\zeta_N)$ and $F^{\text{avoid}}$ are linearly disjoint over $\mathbb{Q}$: any rational prime $p$ ramified in their intersection would have to divide $N$ while also being ramified in $F^\text{avoid}$, and no such prime exists, so their intersection is unramified over $\mathbb{Q}$ and thus must be $\mathbb{Q}$. Hence $F^{\mathrm{avoid}}$ and $F(\zeta_{N})$ are linearly disjoint over $F$. Following the proof of Corollary 7.2.4 of \cite{tap}, we can prove the following statement: \begin{proposition} \label{po4.1} {\it In the above notation, there exists a rational prime $l'$ such that}: \begin{itemize} \item $l'\equiv 1$ {\it mod} $N$. \item $l'>2ln+5$ {\it and} $l'$ {\it is unramified in} $F$. \item $\overline{r}_{E,l'}(G_{\widetilde{F}})=GL_{2}(\mathbb{F}_{l'})$, {\it where} $\widetilde{F}$ {\it is the normal closure of} $F$ {\it over} $\mathbb{Q}$.
\item $\exists\sigma\in G_{F}-G_{F(\zeta_{l'})}$ {\it such that} $\overline{r}_{E,l'}(\sigma)$ {\it is a scalar}. \item $E$ {\it has good ordinary reduction at} $l'$. \end{itemize} {\it Moreover, there exist a finite Galois extension} $F_{2}^{\mathrm{avoid}}/\mathbb{Q}$ {\it and a finite totally real Galois extension} $F^{\mathrm{suff}}/\mathbb{Q}$, {\it unramified above the prime divisors of} $N$, {\it such that}: \begin{itemize} \item $F_2^\text{avoid}\cap F^\text{avoid}=\mathbb{Q}$; \item $F^\text{suff}\cap F^\text{avoid} F_2^\text{avoid}=\mathbb{Q}$; \item $\overline{\mathbb{Q}}^{\mathrm{ker}\overline{r}_{E,l'}}\subset F_{2}^{\mathrm{avoid}}$; \item $F^{\mathrm{avoid}}$ {\it and} $F_{2}^{\mathrm{avoid}}$ {\it are unramified above the prime divisors of} $N$; \end{itemize} {\it and such that for any finite totally real extension} $F'/F^{\mathrm{suff}}$ {\it with} $F'\cap F_{2}^{\mathrm{avoid}}=\mathbb{Q}$, $\mathrm{Symm}^{n-1}r_{E,l'}|_{G_{F'}}$ {\it is automorphic}. \end{proposition} \begin{proof} We first pick an $l'$ satisfying the listed properties. This can be done because the first condition gives a set of primes of positive density, while the second, third and fifth conditions exclude a set of primes of density $0$ (the third by \cite{Ser72} and the fifth by Theorem 20 of \cite{Ser81}); the fourth condition follows from the second and the third (just pick $u\in \mathbb{F}_{l'}^\times$ with $u^2\neq 1$ and $\sigma\in G_{\widetilde{F}}$ such that $\overline{r}_{E, l'}(\sigma)=u$). Carry out the proof of Corollary 7.2.4 of \cite{tap} with $F^{\mathrm{avoid}}= F^{\mathrm{avoid}}$, $E=E$, $\mathcal{M}=\{n-1\}$, $\mathcal{L}=\{$the prime divisors of $N\}$, and take the $l$ in that proof to be the rational prime $l'$ we just picked. Note that the properties of $l'$ listed in our proposition imply all the properties of $l$ needed in the first paragraph of the proof of Corollary 7.2.4 of \cite{tap}. Inspecting the proof closely shows that the additional properties (the third and fourth of the second bullet list) also hold: the third property follows from the choice of $F_{2}^{\mathrm{avoid}}=F_{1}^{\mathrm{avoid}}L_{3}$ in the 12th line of page 183 and $L_{3}=L_{2}\overline{\mathbb{Q}}^{\mathrm{ker}\overline{r}_{E,l'}}$ in the 14th line of page 182.
For the fourth property, $F^{\mathrm{avoid}}$ being unramified above $\mathcal{L}$ follows from the choice of $N$, and $F_{2}^{\mathrm{avoid}}$ being unramified above $\mathcal{L}$ follows from: \begin{enumerate} \item $L_{3}$ is unramified above $\mathcal{L}$: indeed, $\overline{\mathbb{Q}}^{\mathrm{ker}\overline{r}_{E,l'}}$ is unramified above $\mathcal{L}$, and the fact that each $\overline{\mathbb{Q}}^{\mathrm{ker}\,\mathrm{Ind}_{G_{\mathbb{Q}}}^{G_{L}}\overline{\psi}_{m}}$ is unramified above $\mathcal{L}$ ($\psi_{m}$ is unramified above $\mathcal{L}$, see the properties of $\psi_m$ at the beginning of page 182, and $L$ is also unramified above $\mathcal{L}$, see the penultimate paragraph of page 181) gives that their composite $L_{2}$ (page 182) is unramified above $\mathcal{L}$. \item $F_{1}^{\mathrm{avoid}}$ is obtained by applying Proposition 7.2.3 of \cite{tap} to $F=F_{0}=\mathbb{Q}$ (the $F$ of that proposition, not our $F$), $\{r_{m}, m\in \mathcal{M}\}$, $\mathcal{L}$ and $F^{\mathrm{avoid}}L_{3}$ (see the second paragraph of page 183). In terms of the proof of Proposition 7.2.3 of \cite{tap}, $F_{1}^{\mathrm{avoid}}=\overline{\mathbb{Q}}^{\mathrm{ker}\prod_{i}\overline{r}_{i}'}(\zeta_{l'})$ with $\overline{r}_{i}'$ unramified above $\mathcal{L}$ and $l'\notin \mathcal{L}$ (see the last line of page 180). \end{enumerate} \end{proof} Note that the conditions on $N, l, l'$ guarantee that the hypotheses on $N, l, l'$ in section \ref{3} are all satisfied. Now apply Lemma \ref{2.1} to $F=F$, $M=F(\zeta_{N})$, $F_{0}=F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}(\zeta_N)$, to see that we may take a finite CM Galois extension $E/F$ with $E=LM$ for some totally real Galois extension $L/\mathbb{Q}$ such that $L$ and $F_0$ are linearly disjoint over $\mathbb{Q}$, and that we may find characters $\chi_{1}: \mathrm{Gal}(\overline{F}/E)\rightarrow(\mathbb{Z}[\zeta_{N}]/\lambda)^{\times}$ and $\chi_{2}: \mathrm{Gal}(\overline{F}/E)\rightarrow(\mathbb{Z}[\zeta_{N}]/\lambda')^{\times}$ such that $(\overline{\chi_{1}}\times\overline{\chi_{2}})^{n}\cong(\det V[\lambda\lambda'])\otimes\det(\overline{r}\times \mathrm{Symm}^{n-1}r_{E,l'})^{\vee}$ as $G_{E}$-modules. The conditions of Lemma \ref{2.1} are verified as follows. On the side of characteristic $l$, fix a prime $\lambda$ of $\mathbb{Q}(\zeta_N)$ (of $\mathbb{Q}(\zeta_N)^+$ when $n=2$) and denote the residue field by $k(\lambda)$. Note that the conditions on $N$ at the beginning of this section give that $\det\overline{r}$ actually has image landing in $(k(\lambda)^{\times})^{n}$; we also arranged that $\mathbb{F}_{l^2}\subset k(\lambda)$ and $n\mid \#(k(\lambda)^\times)$. Hence, applying Lemma \ref{npower} to the field $F(\zeta_N)$, we see that on the characteristic $l$ side, $\det V[\lambda]\otimes (\det \overline{r})^\vee$ (as a $G_{F(\zeta_N)}$ representation) has image in $(k(\lambda)^{\times})^{n}$.
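\begin{remark} Membership in $(k(\lambda)^{\times})^{n}$ is a concrete condition: $k(\lambda)^{\times}$ is cyclic of order $q-1$, where $q=\#k(\lambda)$, so an element $x$ is an $n$-th power if and only if $x^{(q-1)/\gcd(n,q-1)}=1$. A minimal sketch of this test (Python 3, written for a prime residue field and purely illustrative): \begin{verbatim}
from math import gcd

def is_nth_power(x, q, n):
    # In the cyclic group (Z/qZ)^* of order q - 1 (q prime),
    # x is an n-th power iff x^((q-1)/gcd(n, q-1)) == 1.
    return pow(x, (q - 1) // gcd(n, q - 1), q) == 1

# Cubes in F_31^*: exactly (31 - 1)/gcd(3, 30) = 10 of the 30 units.
assert sum(is_nth_power(x, 31, 3) for x in range(1, 31)) == 10
\end{verbatim} \end{remark}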
On the other side, of characteristic $l'$: fixing a prime $\lambda'$ of $\mathbb{Z}[\zeta_{N}]$ (of $\mathbb{Z}[\zeta_{N}]^+$ when $n=2$) over $l'$, we have $\det \mathrm{Symm}^{n-1}\overline{r}_{E,l'}=(\chi_{\mathrm{cyc}})^{n(n-1)/2}$, which has image in $\mathbb{F}_{l'}^{\times}$. Applying Lemma \ref{npower} to the prime $l'$, we see that $(\det\text{Symm}^{n-1}\overline{r}_{E,l'})^{\vee}\otimes (\det V[\lambda'])$ (as a $G_{F(\zeta_N)}$ representation) has image in $(k(\lambda')^{\times})^{n}$. Let $W$ be the $\mathbb{Z}[\zeta_{N}]/\lambda\lambda'$-module with $G_E$ action given by the representation $(\overline{\chi_{1}}\otimes\overline{r})\times(\overline{\chi_{2}}\otimes \mathrm{Symm}^{n-1}\overline{r}_{E,l'})$. The isomorphism $(\overline{\chi_{1}}\times\overline{\chi_{2}})^{n}\cong(\det V[\lambda\lambda'])\otimes\det(\overline{r}\times \mathrm{Symm}^{n-1}r_{E,l'})^{\vee}$ induces an isomorphism $$ \phi:\bigwedge^n W_{(T_{0})_{E}}\rightarrow\bigwedge^n V[\lambda\lambda'] $$ In this way, the moduli functor $T_{W}$ is well-defined by $\phi$ over $E$. We see that the conditions of Proposition \ref{3.6} are satisfied for $N$ and $l, l'$; thus $T_{W}$ is geometrically connected. Note that $F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}$ and $M=F(\zeta_N)$ are linearly disjoint over $F$, because $F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}$ and $\mathbb{Q}(\zeta_{N})$ are linearly disjoint over $\mathbb{Q}$, which in turn follows from $F^{\mathrm{avoid}}, F_{2}^{\mathrm{avoid}}, F^{\mathrm{suff}}$ all being unramified above the prime divisors of $N$. Since $L$ is linearly disjoint from $F_0$ over $\mathbb{Q}$ and $M\subset F_0$, we have that $E=LM$ and $F_0$ are linearly disjoint over $M$. Now $E$ and $F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}$ are linearly disjoint over $F$, because $E\cap F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}=LM\cap F_0\cap F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}=M\cap F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}=F$. We will need a theorem of Moret-Bailly from \cite{HSBT}. \begin{proposition} \label{4.2} {\it Let} $F$ {\it be a number field and let} $S=S_{1}\amalg S_{2}\amalg S_{3}$ {\it be a finite set of places of} $F$, {\it such that every element of} $S_{2}$ {\it is non-archimedean. Suppose that} $T/F$ {\it is a smooth, geometrically connected variety. Suppose also that} \begin{itemize} \item {\it for} $v\in S_{1}$, $\Omega_{v}\subset T(F_{v})$ {\it is a non-empty open subset} ({\it for the} $v$-{\it topology}); \item {\it for} $v\in S_{2}$, $\Omega_{v}\subset T(F_{v}^{\text{nr}})$ {\it is a non-empty open} $\mathrm{Gal}(F_{v}^{nr}/F_{v})$-{\it invariant subset};
\item {\it for} $v\in S_{3}$, $\Omega_{v}\subset T(\overline{F}_{v})$ {\it is a non-empty open} $\mathrm{Gal}(\overline{F}_{v}/F_{v})$-{\it invariant subset}. \end{itemize} {\it Suppose finally that} $H/F$ {\it is a finite Galois extension. Then there is a finite Galois extension} $F'/F$ {\it and a point} $P\in T(F')$ {\it such that}: \begin{itemize} \item $F'/F$ {\it is linearly disjoint from} $H/F$; \item {\it every place} $v$ {\it of} $S_{1}$ {\it splits completely in} $F'$, {\it and if} $w$ {\it is a prime of} $F'$ {\it above} $v$, {\it then} $P\in\Omega_{v}\subset T(F_{w}')$; \item {\it every place} $v$ {\it of} $S_{2}$ {\it is unramified in} $F'$, {\it and if} $w$ {\it is a prime of} $F'$ {\it above} $v$, {\it then} $P\in\Omega_{v}\cap T(F_{w}')$; \item {\it if} $w$ {\it is a prime of} $F'$ {\it above some} $v\in S_{3}$, {\it then} $P\in\Omega_{v}\cap T(F_{w}')$. \end{itemize} \end{proposition} \ Let $F^{+}\subset F$ and $E^{+}\subset E$ be the maximal totally real subfields, respectively. We apply Proposition \ref{4.2} to the smooth geometrically connected variety $T={\rm Res}_{\mathbb{Q}}^{EF^{\mathrm{suff}}}T_{W}$ defined over $\mathbb{Q}$. We take $H=F_0L=F_0E$, $S_{1}=\{\infty\}$, $S_{2}=\emptyset$ and $S_{3}=\{l, l'\}$. For $v\in S_1$, we take $\Omega_{v}={\rm Res}_{\mathbb{Q}}^{EF^{\mathrm{suff}}}T_{W}(\mathbb{R})$, i.e. the whole set, which is clearly open and non-empty since each copy of $T_W(\mathbb{C})$ is non-empty. For $v\in S_{3}$, there is an algebraic morphism $p: T\rightarrow \mathrm{Res}^{EF^\text{suff}}_{\mathbb{Q}}T_0$, and we define \[ \Omega_{l,0}=\{t=(t_\tau)\in \mathrm{Res}^{EF^\text{suff}}_{\mathbb{Q}}T_0(\overline{\mathbb{Q}}_l)=\prod_{\tau: EF^\text{suff}\hookrightarrow\overline{\mathbb{Q}}_l}T_{0, \tau}(\overline{\mathbb{Q}}_l) \ |\ v_l(t_\tau)<0, \forall \tau\} \] \[ \Omega_{l', 0}=\{t=(t_\tau)\in \mathrm{Res}^{EF^\text{suff}}_{\mathbb{Q}}T_0(\overline{\mathbb{Q}}_{l'})=\prod_{\tau: EF^\text{suff}\hookrightarrow\overline{\mathbb{Q}}_{l'}}T_{0, \tau}(\overline{\mathbb{Q}}_{l'})\ |\ v_{l'}(t_\tau)>0, \forall \tau\} \] and set $\Omega_l=p^{-1}(\Omega_{l, 0})$, $\Omega_{l'}=p^{-1}(\Omega_{l', 0})$. Both sets are clearly open, non-empty and Galois invariant. Hence we get a finite totally real Galois extension $L'/\mathbb{Q}$, linearly disjoint from $F_0L$ over $\mathbb{Q}$, and a point $t\in T_{0}(L'EF^\text{suff})$ (because $L'$ and $EF^\text{suff}$ are linearly disjoint over $\mathbb{Q}$) such that, denoting $L'EF^\text{suff}$ by $F'$ and $L'E^+F^\text{suff}$ by $(F')^+$: \begin{itemize} \item $F'\supset EF^{\mathrm{suff}}$ is a CM Galois extension over $F$; \item $\overline{\chi_{1}}^{-1}V[\lambda]_{t}\cong\overline{r}|_{G_{F'}}$; \item $\overline{\chi_{2}}^{-1}V[\lambda']_{t}\cong \overline{\mathrm{Symm}^{n-1}r_{E,l'}}|_{G_{F'}}$; \item $v(t)<0$ for all primes $v\mid l$ of $F'$; \item $v(t)>0$ for all primes $v\mid l'$ of $F'$. \end{itemize} Now $F'\cap F_2^\text{avoid}=L'EF^\text{suff}\cap F_0L\cap F_2^\text{avoid}=EF^\text{suff}\cap F_2^\text{avoid}$, because $L'$ and $F_0L$ are linearly disjoint over $\mathbb{Q}$. Moreover, $EF^\text{suff}\cap F_2^\text{avoid}=LMF^\text{suff}\cap F_0\cap F_2^\text{avoid}=MF^\text{suff} \cap F_2^\text{avoid}$, because $L$ and $F_0$ are linearly disjoint over $\mathbb{Q}$.
Furthermore, $MF^\text{suff} \cap F_2^\text{avoid}=F^\text{suff} F(\zeta_N)\cap F^\text{suff} F^\text{avoid} F_2^\text{avoid}\cap F_2^\text{avoid}=F^\text{suff} F\cap F_2^\text{avoid}$, because $F^{\mathrm{avoid}}F_{2}^{\mathrm{avoid}}F^{\mathrm{suff}}$ and $\mathbb{Q}(\zeta_{N})$ are linearly disjoint over $\mathbb{Q}$. Finally, $F^\text{suff} F\cap F_2^\text{avoid}=F^\text{suff} F\cap F^\text{avoid} F_2^\text{avoid}\cap F_2^\text{avoid}=F\cap F_2^\text{avoid}\subset F^\text{avoid}\cap F_2^\text{avoid}=\mathbb{Q}$, because $F^\text{suff}$ and $F^\text{avoid} F_2^\text{avoid}$ are linearly disjoint over $\mathbb{Q}$. Therefore we conclude that $F'\supset F^\text{suff}$ and that $F'$ is linearly disjoint from $F_2^\text{avoid}$ over $\mathbb{Q}$, so we see by Proposition \ref{po4.1} that $\mathrm{Symm}^{n-1}r_{E,l'}|_{G_{(F')^+}}$, and hence $\mathrm{Symm}^{n-1}r_{E,l'}|_{G_{F'}}$, is automorphic. Since $F'$ and $\overline{\mathbb{Q}}^{\mathrm{Ker}\overline{r}_{E,l'}}$ $(\subset F_{2}^{\mathrm{avoid}})$ are linearly disjoint over $\mathbb{Q}$, we again have \begin{itemize} \item $\overline{r}_{E,l'}(G_{F'})\supset SL_{2}(\mathbb{F}_{l'})$; \item $\exists\sigma\in G_{F'}-G_{F'(\zeta_{l'})}$ such that $\overline{r}_{E,l'}(\sigma)$ is a scalar. \end{itemize} Note that by reasoning similar to the previous paragraph, we have $F'\cap F^\text{avoid} F_2^\text{avoid}=L'EF^\text{suff}\cap F_0L\cap F^\text{avoid} F_2^\text{avoid}=EF^\text{suff}\cap F^\text{avoid} F_2^\text{avoid}=LMF^\text{suff}\cap F_0\cap F^\text{avoid} F_2^\text{avoid}=MF^\text{suff} \cap F^\text{avoid} F_2^\text{avoid}=F^\text{suff} F(\zeta_N)\cap F^\text{suff} F^\text{avoid} F_2^\text{avoid}\cap F^\text{avoid} F_2^\text{avoid}=F^\text{suff} F\cap F^\text{avoid} F_2^\text{avoid}=F$, and thus $F'$ is linearly disjoint from $F^{\mathrm{av}}$ over $F$, as required in the main theorem. Let $\chi_2: G_E\rightarrow \mathbb{Q}(\zeta_N)^\times$ be the Teichm\"uller lift of $\overline{\chi}_2$. We would like to apply Theorem 6.1.2 of \cite{tap} to $p=l'$, $\rho=V_{\lambda', t}\otimes \chi_2^{-1}$ and $r_{\iota}(\pi)=\text{Symm}^{n-1}r_{E,l'}|_{G_{F'}}$. Clearly $\overline{\rho}\cong \overline{r_{\iota}(\pi)}$. For the residual representation $\text{Symm}^{n-1}\overline{r}_{E,l'}|_{G_{F'}}$, the two properties of $\overline{r}_{E,l'}$ listed just above give that it is absolutely irreducible and that condition (4) of Theorem 6.1.2 of \cite{tap} is satisfied. Now apply Lemma 7.1.5 (2) of \cite{tap} to $F=F$, $F_1=F'$, $l=l'$, $\overline{r}=\overline{r}_{E,l'}$, together with the fact that $F'$ and $\overline{\mathbb{Q}}^{\ker \overline{r}_{E,l'}}$ are linearly disjoint over $\mathbb{Q}$, to see that $(\text{Symm}^{n-1}\overline{r}_{E, l'})(G_{F'(\zeta_{l'})})=(\text{Symm}^{n-1}\overline{r}_{E, l'})(G_{F(\zeta_{l'})})$ is enormous. Apply Lemma 7.1.5 (4) of \cite{tap} to the same situation ($H\subset \widetilde{F}F\overline{\mathbb{Q}}^{\ker \overline{r}_{E, l'}}\subset \widetilde{F}F_2^\text{avoid}\subset F^\text{avoid} F_2^\text{avoid}$, thus $H'\subset F^\text{avoid} F_2^\text{avoid}$ since $F^\text{avoid}$ and $F_2^\text{avoid}$ are both Galois over $\mathbb{Q}$, and then the requirement that $F'$ be linearly disjoint from $F^\text{avoid} F_2^\text{avoid}$ over $F$ is satisfied) to see that $\overline{\rho}$ is decomposed generic.
Now $V_{\lambda', t}$ is a regular ordinary representation for any place $v_{l'}\mid l'$ of $F'$ by Lemma \ref{3.8} (3) ($v_{l'}(t)>0$) and thus so is $\rho$. $\text{Symm}^{n-1}r_{E, l'}|_{G_{F'}}$ is ordinarily automorphic by our choice of $l'$. Theorem 6.1.2 of \cite{tap} thus gives that $V_{\lambda', t}$ is automorphic as a $G_{F'}$ representation, and so $V_{\lambda, t}$ is automorphic as a $G_{F'}$ representation. Now by Lemma \ref{3.8} (4) ($v_l(t)<0$ for any $v_l\mid l$), we see that $V_{\lambda, t}\otimes \chi_1^{-1}$ is regular ordinary as a $G_{F'}$ representation and hence $\overline{r}\mid_{G_{F'}}$ is ordinarily automorphic. This finishes the proof of Theorem \ref{1.1}. Since there are different versions of \cite{tap} with different Lemma 7.1.5, we state it as follows. The proof is taken from an old version of \cite{tap}: \begin{lemma} Suppose $F/\mathbb{Q}$ is a finite extension with normal closure $\widetilde{F}/\mathbb{Q}$ and $n\in\mathbb{Z}_{>0}$. Suppose also that $l>2n+5$ is a rational prime and that $\overline{r}:G_F\rightarrow GL_2(\overline{\mathbb{F}}_l)$ is a continuous representation such that $\overline{r}(G_{\widetilde{F}})\supset SL_2(\mathbb{F}_l)$. Finally, assume $F_1/F$ is a finite extension that is linearly disjoint from $\overline{F}^{\ker \overline{r}}$ over $F$. Then: (2) $(\text{Symm}^{n-1}\overline{r})(G_{F(\zeta_l)})$ is enormous. (4) If $F_1/F$ is Galois and linearly disjoint over $F$ from the normal closure $H'$ of $H=\widetilde{F}\overline{F}^{\ker \mathrm{ad}\overline{r}}$ over $\mathbb{Q}$, then $\text{Symm}^{n-1}\overline{r}|_{G_{F_1}}$ is decomposed generic. \end{lemma} \begin{proof} (2) This is Lemma 7.1.5(2) of \cite{tap}. (4) It suffices to show $\text{Symm}^{n-1}\overline{r}|_{G_{F_2}}$ is decomposed generic for some finite extension $F_2/F_1$. We may assume without loss of generality that $F$ is Galois over $\mathbb{Q}$. Now by \cite{DDT} Theorem 2.47(b), $\mathrm{ad}\overline{r}(G_F)=PGL_2(k)$ or $PSL_2(k)$ for some finite extension $k/\mathbb{F}_l$. We first take an at most quadratic extension $E/F$ such that $\mathrm{ad}\overline{r}(G_E)=PSL_2(k)$. Then by Goursat's Lemma, $\mathrm{Gal}(\widetilde{E}/E)=(\mathbb{Z}/2\mathbb{Z})^r$ for some $r\geq 0$. Hence $\overline{F}^{\ker \mathrm{ad}\overline{r}}$ and $\widetilde{E}$ are linearly disjoint over $E$ by an analysis of the simple factors of each Galois group. Thus $\mathrm{ad}\overline{r}(G_{\widetilde{E}})=PSL_2(k)\supset PSL_2(\mathbb{F}_l)$. Now since $E\subset\overline{F}^{\ker\mathrm{ad}\overline{r}}$, $\widetilde{E}\overline{\widetilde{E}}^{\ker\mathrm{ad}\overline{r}}\subset\widetilde{\overline{F}^{\ker\mathrm{ad}\overline{r}}}=H'$, and so its normal closure over $\mathbb{Q}$ is still $H'$. Now the fact that $F_1$ and $H$ are linearly disjoint over $F$ implies that $E_1:=\widetilde{E}F_1$ is linearly disjoint from $H'$ over $\widetilde{E}$ as well. We can therefore assume (replacing $F$ by $\widetilde{E}$ and $F_1$ by $E_1$) without loss of generality that $\mathrm{ad}\overline{r}(G_F)=PSL_2(k)$ and $F$ is Galois over $\mathbb{Q}$. Now we choose a sequence of subfields $F=F_0'\subset F_1'\subset \cdots\subset F_s'=F_1$ such that $F_i'$ is Galois over $F_{i-1}'$ and $\mathrm{Gal}(F_i'/F_{i-1}')$ is simple. Set $\widetilde{F}_i'$ to be the normal closure of $F_i'$ over $\mathbb{Q}$.
Hence $\mathrm{Gal}(\widetilde{F}_i'/\widetilde{F}_{i-1}')$ is trivial if and only if $F_i'\subset\widetilde{F}_{i-1}'$, and is of the form $\Delta_i^m$ for some $m>0$, where $\Delta_i=\mathrm{Gal}(F_i'/F_{i-1}')$, otherwise (Goursat's Lemma). Now if $H\cap\widetilde{F}_s'=F$, then we may apply Lemma 7.1.5 (3) of \cite{tap} to conclude. Otherwise, there must exist a minimal $i$ such that $H\cap \widetilde{F}_i'\neq F$. Since $\mathrm{Gal}(H/F)=PSL_2(k)$ is simple, we see that $H\subset \widetilde{F}_i'$. Now minimality gives $\widetilde{F}_{i-1}'\cap H=F$, so that $\Delta_i^m=\mathrm{Gal}(\widetilde{F}_i'/\widetilde{F}_{i-1}')\twoheadrightarrow \mathrm{Gal}(H/F)$, thus $\Delta_i=PSL_2(k)$, $m>0$. We claim there exists $\sigma\in G_\mathbb{Q}$ such that $\sigma H\subset F_i'\widetilde{F}_{i-1}'$. We may write $\widetilde{F}_i'$ as the composite of the $\sigma_jF_i'\widetilde{F}_{i-1}'$, where $\sigma_1,\ldots, \sigma_m$ are elements of $G_\mathbb{Q}$, each field $\sigma_jF_i'\widetilde{F}_{i-1}'$ is Galois over $\widetilde{F}_{i-1}'$ with Galois group $PSL_2(k)$, and for any two disjoint subsets $I, J$ of $\{1,\ldots, m\}$, the composite of the $\sigma_jF_i'\widetilde{F}_{i-1}'$ for $j\in I$ and the composite of the $\sigma_jF_i'\widetilde{F}_{i-1}'$ for $j\in J$ are linearly disjoint over $\widetilde{F}_{i-1}'$. Now because $H\subset \widetilde{F}_i'$ and $\widetilde{F}_{i-1}'\cap H=F$, we see that $H\widetilde{F}_{i-1}'$ is a Galois subextension of $\widetilde{F}_i'/\widetilde{F}_{i-1}'$ with Galois group $PSL_2(k)$. We may pick the smallest $j$ such that $E_j:=(\sigma_1F_i')\cdots(\sigma_jF_i')\widetilde{F}_{i-1}'\supset H\widetilde{F}_{i-1}'$. The minimality gives that $E_{j-1}$, as a Galois extension of $\widetilde{F}_{i-1}'$, does not contain $H\widetilde{F}_{i-1}'$, whose Galois group over $\widetilde{F}_{i-1}'$ is simple. It follows that $E_{j-1}$ is linearly disjoint from $H\widetilde{F}_{i-1}'$ over $\widetilde{F}_{i-1}'$. Therefore, the restriction map takes $\mathrm{Gal}(E_j/E_{j-1})$ onto $\mathrm{Gal}(H\widetilde{F}_{i-1}'/\widetilde{F}_{i-1}')$. Observe that $E_{j-1}$ and $\sigma_jF_i'\widetilde{F}_{i-1}'$ are linearly disjoint over $\widetilde{F}_{i-1}'$, hence $\mathrm{Gal}(E_j/\widetilde{F}_{i-1}')=\mathrm{Gal}(E_j/E_{j-1})\times \mathrm{Gal}(E_j/\sigma_jF_i'\widetilde{F}_{i-1}')$. The latter group $\mathrm{Gal}(E_j/\sigma_jF_i'\widetilde{F}_{i-1}')$ commutes with $\mathrm{Gal}(E_j/E_{j-1})$ inside $\mathrm{Gal}(E_j/\widetilde{F}_{i-1}')$. Hence, under the restriction map, by the surjectivity proved above, $\mathrm{Gal}(E_j/\sigma_jF_i'\widetilde{F}_{i-1}')$ maps into the center of $\mathrm{Gal}(H\widetilde{F}_{i-1}'/\widetilde{F}_{i-1}')$, which is trivial. This gives us that $\sigma_jF_i'\widetilde{F}_{i-1}'\supset H\widetilde{F}_{i-1}'$. Thus taking $\sigma=\sigma_j$ suffices for our claim. Consider the image of $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/F_i')$ in $\mathrm{Gal}(\sigma H/F)$ under the natural restriction map. The fact that $m>0$ gives that $F_i'$ and $\widetilde{F}_{i-1}'$ are linearly disjoint over $F_{i-1}'$, and so $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/F_i')$ and $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/\widetilde{F}_{i-1}')$ are commuting subgroups of $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/F_{i-1}')$.
Under the restriction map to $\mathrm{Gal}(\sigma H/F)$, since $\sigma H\cap\widetilde{F}_{i-1}'=F$, $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/\widetilde{F}_{i-1}')$ surjects onto $\mathrm{Gal}(\sigma H/F)$, so the image of $\mathrm{Gal}(\widetilde{F}_{i-1}'F_i'/F_i')$ lies in the center of $\mathrm{Gal}(\sigma H/F)\cong PSL_2(k)$, and hence is trivial. Therefore, $\sigma H\subset F_i'$, which contradicts the condition that $H'$ and $F_1$ are linearly disjoint over $F$. \end{proof} For the proof of Theorem \ref{1.4}, we know from above that $\overline{\chi_{1}}^{-1}V[\lambda]_{t}\cong\overline{r}|_{G_{F'}}$ and $V_{\lambda, t}\otimes \chi_1^{-1}$ is regular ordinary as a $G_{F'}$ representation. Thus, in order to apply Theorem 6.1.2 of \cite{tap}, it suffices to verify that conditions (3) and (4) of that theorem hold for $\overline{r}|_{G_{F'}}$. Since $F'$ is linearly disjoint from $\overline{F}^{\ker \overline{r}}\subset F^\text{avoid}$ over $F$, all conditions except decomposed genericity follow from the corresponding conditions on $\overline{r}$. Now $F'=L'EF^\text{suff}=F L'LF^\text{suff}(\zeta_N)$ is Galois over $F$. And the Galois closure of $\overline{F}^{\ker \overline{r}}(\zeta_l)$ is linearly disjoint from $F'$ over $\mathbb{Q}$, because this Galois closure is contained in $F^\text{avoid}$ and $ L'LF^\text{suff}(\zeta_N)\cap F^\text{avoid}=L'LF^\text{suff}(\zeta_N)\cap LF_0\cap F^\text{avoid}=LF^\text{suff}(\zeta_N)\cap F^\text{avoid}=LF^\text{suff}(\zeta_N)\cap F_0\cap F^\text{avoid}=F^\text{suff}(\zeta_N)\cap F^\text{avoid}=F^\text{suff}(\zeta_N)\cap F^\text{suff} F^\text{avoid} F_2^\text{avoid}\cap F^\text{avoid}=F^\text{suff} \cap F^\text{avoid}=\mathbb{Q}$. Now we may apply Lemma 7.1.6 of \cite{tap} to see decomposed genericity. This also finishes the proof of Theorem \ref{1.4}. \bibliographystyle{amsalpha}
\section{Introduction} The COVID-19 disease was initially spotted in December of 2019 in Wuhan, China, and was detected worldwide shortly after that \cite{i1}. In January 2020, the World Health Organization (WHO) declared its outbreak a public health emergency of international concern and, later, a pandemic in March of 2020 \cite{i2}. COVID-19 is caused by SARS-CoV-2, a novel variety of coronavirus that had not been identified in humans before \cite{i3i4}. Coronaviruses are common among animals, and some can infect humans \cite{i3i4,i5}. Bats are the natural hosts of these viruses, and several other species of animals have also been identified as sources \cite{i6,i7,nnone}. For example, MERS-CoV is transmitted from camels to humans, while SARS-CoV-1 is transmitted from intermediate hosts such as civet cats \cite{i8,i9}. The new coronavirus is genetically closely related to the SARS-CoV-1 virus \cite{i10}. The SARS-CoV-2 virus is transmitted mainly through respiratory droplets and aerosols from an infected person while sneezing, coughing, talking, or breathing in the presence of others \cite{i11,i12}. The virus can survive on various surfaces from a few hours to several days, and prior research has estimated the incubation period of this disease to be between 1 and 14 days \cite{i13}. However, the amount of live virus decreases over time and may not always be present in sufficient quantities to cause infection \cite{i14,i15}. The most frequent symptoms of COVID-19 include fever, dry cough, and fatigue. Pain, diarrhea, headache, sore throat, conjunctivitis, and loss of taste or smell are other, more variable symptoms of the virus \cite{i16,i17}. Nevertheless, the most severe symptoms seen in COVID-19 patients include difficulty breathing or shortness of breath, chest pain, and loss of movement or ability to speak \cite{i16,i17,i18}. Early diagnosis of this disease is vital. So far, various screening methods have been introduced for the diagnosis of COVID-19. At present, nucleic acid-based molecular diagnosis (the RT-PCR test) is considered the gold standard for early detection of COVID-19 \cite{i19,i20}. According to a WHO report, all diagnoses of COVID-19 must be verified by RT-PCR \cite{i21}. However, performing the RT-PCR test requires specialized equipment and well-equipped laboratories that are not available in most countries, and it takes at least 24 hours to determine the test outcome. Also, the test result may not be accurate and may require repeating the RT-PCR or performing other tests. Therefore, X-ray and CT-Scan imaging can be used as primary diagnostic methods for screening people suspected of having COVID-19 \cite{i22,i23}. X-ray imaging is one of the medical imaging techniques used to diagnose COVID-19. Its benefits include low cost and a low dose of radiation, which is otherwise dangerous to human health \cite{i24,i25}. In this imaging technique, the detection of COVID-19 is a relatively complicated task, and a physician reading X-rays may also confuse COVID-19 with diseases such as pulmonary tuberculosis \cite{i26,i27}. CT-Scan imaging is used to reduce the COVID-19 detection error. CT-Scans have very high contrast and resolution and are very successful in diagnosing lung diseases such as COVID-19 \cite{i28,i29}. CT findings can also be used as clinical features of COVID-19 patients: CT scans of subjects with COVID-19 have shown marked destruction of the pulmonary parenchyma, such as interstitial inflammation and extensive consolidation \cite{i30}.
During CT-Scan imaging of patients, multiple slices are recorded to diagnose COVID-19. This high number of CT-Scan images requires high accuracy from specialists for a correct diagnosis of COVID-19. Factors such as eye fatigue or the massive number of patient CT-Scans to interpret may lead to misdiagnosis of COVID-19 by specialists \cite{i31}. Due to the stated challenges, the use of artificial intelligence (AI) methods for accurate diagnosis of COVID-19 on CT-Scan or X-Ray imaging modalities is of utmost importance. The design of computer aided diagnosis systems (CADS) based on AI using CT-Scan or X-Ray images for precise diagnosis of COVID-19 has been highly regarded by researchers \cite{i32,i33,abc4}. Deep learning (DL) is one of the fields of AI, and many research papers have been published on its application to diagnosing COVID-19 \cite{i34,i35}. In this paper, a new method of diagnosing COVID-19 from CT-Scan images using DL is presented. First, CT-Scan images of people with COVID-19 and of normal people were recorded in Gonabad Hospital (Iran). Next, three expert radiologists labeled the patients' images. They also selected informative slices from each scan. Then, after preprocessing the data with a Gaussian filter, various deep learning networks were trained to separate COVID-19 patients from healthy ones. In this step, a CycleGAN \cite{r10,abc5} architecture was first used for data augmentation of the CT-Scan data; after that, a number of pre-trained deep networks \cite{abc3} such as DenseNet \cite{r4}, ResNet \cite{r2}, ResNest \cite{r5}, and ViT \cite{r6} were used to classify the CT-Scan images. Figure \ref{figone} shows the block diagram of the method. The results show that the proposed method of this study has promising results in detecting COVID-19 from CT-Scan images of the lung. \begin{figure}[h] \includegraphics[width=\textwidth]{pics/cover.jpg} \caption{Overall diagram of the proposed method.\label{figone}} \end{figure} The rest of the paper is organized as follows. In the next section, we present a review of previous research on the diagnosis of COVID-19 from CT-Scan images using DL techniques. In Section 3, the proposed method of this research is presented. In Section 4, the evaluation process and the results of the proposed method are presented. Section 5 includes the discussion of the paper, and finally, the paper ends with the conclusion and future directions. \section{Related Works} Prior research papers on the diagnosis of the COVID-19 disease using machine learning can be divided according to the algorithms used or the underlying modalities. Figure \ref{figab} shows various types of methods that can be used for the diagnosis of COVID-19. As can be seen in this figure, the methods based on medical imaging can be divided into two groups: CT scan and X-ray. The focus of this article is on the CT scan modality. Also, machine learning algorithms can be divided into two categories: DL \cite{ndeep,abc2} and conventional machine learning methods \cite{nbishop,nntwo,abc1}. Due to the large number of machine learning papers for diagnosing COVID-19 disease from the CT modality, we have only reviewed papers that have used deep learning methods for this imaging modality. Table \ref{tab1} provides an overview of these papers, the datasets used by them, the components of the methods, and finally, their performance.
\begin{figure}[h] \includegraphics[width=\textwidth]{pics/ab_1.pdf} \caption{Various criteria used for COVID-19 detection and their categories.\label{figab}} \end{figure} \clearpage \begin{center} \tiny \setlength\LTleft{-123pt} \setlength\LTright{0pt} \begin{longtable}{|c|c|c|c|c|c|c|c|c|c|} \caption{\centering Review of related works.} \label{tab1}\\ \hline \textbf{Ref} & \textbf{Dataset} & \textbf{Modality} & \textbf{Number of Cases} & \textbf{Pre-Processing} & \textbf{DNN} & \textbf{Post-Processing} & \textbf{Toolbox} & \textbf{K Fold} & \makecell{ \textbf{Performance}\\\textbf{Criteria}}\\ \hline \cite{a1} &Clinical &CT &\makecell{3000 COVID-19 Images,\\3000 Non-COVID-19 Images} &\makecell{Patches\\Extraction} &\makecell{VGG-16,\\GoogleNet,\\ResNet-50} &\makecell{Feature Fusion,\\Ranking\\Technique,\\SVM} &-- &-- &\makecell{Acc=98.27\\Sen=98.93\\Spec=97.60\\Prec=97.63}\\ \hline \cite{a2} &\makecell{Datasets from\\ \cite{a17} \& \cite{a8}} &CT &\makecell{460 COVID-19 Images,\\397 Healthy Control (HC) Images} &\makecell{Data\\Augmentation\\(DA)} & \makecell{CNN\\Based on \\SqueezeNet} &\makecell{Class Activation\\Mapping (CAM)} &Matlab 2020a &10 &\makecell{Acc=85.03\\Sen=87.55\\Spec=81.95\\Prec=85.01}\\ \hline \cite{a3} &Various Datasets &CT &\makecell{2373 COVID-19 Images,\\2890 Pneumonia Images,\\3193 Tuberculosis Images,\\3038 Healthy Images} &-- &\makecell{Ensemble\\DCCNs} &-- &Matlab 2020b &-- &\makecell{Acc=98.83\\Sen=98.83\\Spec=98.82\\F1-Score=98.30}\\ \hline \cite{a4} &Clinical &CT &\makecell{98 COVID-19 Patients,\\103 Non-COVID-19 Patients} &\makecell{Visual\\Inspection} &BigBiGAN &-- &TensorFlow &-- &\makecell{Sen=80\\Spec=75}\\ \hline \cite{a5} &Clinical &CT &\makecell{148 Images from 66 COVID-19\\Patients, 148 Images from\\66 HC Subjects} &\makecell{Visual\\Inspection} &ResGNet-C &-- &-- &5 &\makecell{Acc=96.62\\Sen=97.33\\Spec=95.91\\Prec=96.21}\\ \hline \cite{a6} & \makecell{COVID-CT\\Dataset} & CT & \makecell{349 COVID-19 Images,\\397 Non-COVID-19 Images} & \makecell{Scaling Process,\\DA} & \makecell{Multiple\\Kernels-ELM\\-based DNN} & -- & Matlab & 10 & \makecell{Acc=98.36\\Sen=98.28\\Spec=98.44\\Prec=98.22}\\ \hline \cite{a7} & Clinical & CT & \makecell{210,395 Images From 704\\ COVID-19 Patients and\\498 Non-COVID-19 Subjects} & DA & \makecell{U-net\\ \hline Dual-Branch\\Combination\\Network} & Attention Maps& PyTorch & 5 &\makecell{Acc=92.87\\Sen=92.86\\Spec=92.91}\\ \hline \cite{a8} & Various Dataset & CT & \makecell{2933 COVID-19 Images} & \makecell{Deleting Outliers,\\Normalization,\\Resizing} & \makecell{Ensemble\\DNN} & -- & Matlab R2019a & 5 & \makecell{Acc= 99.054\\Sen= 99.05\\Spec=99.6\\F1-Score= 98.59}\\ \hline \cite{a9} & Clinical & CT & \makecell{320 COVID-19 Images,\\320 Healthy Control Images} & \makecell{Histogram\\Stretching,\\Margin Crop,\\Resizing,\\Down Sampling} & FGCNet & \makecell{Gradient-\\Weighted CAM\\ (Grad-CAM)} & -- &-- &\makecell{Acc=97.14\\Sen=97.71\\Spec=96.56\\Prec=96.61}\\ \hline \cite{a10} &Clinical &CT &\makecell{180 Viral Pneumonia,\\94 COVID-19 Cases} &\makecell{ROIs\\Extraction} &\makecell{Modified\\Inception} &-- &-- &-- &\makecell{Acc=89.5\\Sen=88\\Spec=87\\F1-Score=77}\\ \hline \cite{a11} &Clinical &CT &\makecell{3389 COVID-19 Images,\\1593 Non-COVID-19 Images} &\makecell{Segmentation,\\ Generating\\Lung Masks} & \makecell{3D ResNet34\\with\\Online\\Attention} &Grad-CAM &PyTorch &5 &\makecell{Acc=87.5\\Sen=86.9\\Spec=90.1\\F1-Score=82.0}\\ \hline \cite{a12} &\makecell{COVIDx-CT\\Dataset} &CT &\makecell{104,009 Images 
From\\1,489 Patient Cases} &\makecell{Automatic\\Cropping\\Algorithm, DA} &COVIDNet-CT &-- &TensorFlow &-- &\makecell{Acc= 99.1\\Sen=97.3\\PPV=99.7}\\ \hline \cite{a13} &Various Datasets &CT &\makecell{349 COVID-19 Images,\\397 Non-COVID-19 Images} & \makecell{Resizing,\\Normalization,\\Wavelet-Based\\DA} &ResNet18 &\makecell{Localization of\\Abnormality} &Matlab 2019b &-- &\makecell{Acc=99.4\\Sen=100\\Spec=98.6}\\ \hline \cite{a14} &COVID-CT &CT &\makecell{345 COVID-19 Images,\\397 Non-COVID-19 Images} &Resizing, DA &\makecell{Conditional\\GAN\\ \hline ResNet50} &-- &\makecell{TensorFlow,\\Matlab} &-- &\makecell{Acc=82.91\\Sen=77.66\\Spec=87.62}\\ \hline \cite{a15} &Clinical &CT &\makecell{151 COVID-19 Patient,\\498 Non-COVID-19 Patient} &\makecell{Resizing,\\Padding, DA} &3D-CNN &\makecell{Interpretation\\by Two\\Radiologists} &-- &-- &AUC=70\\ \hline \cite{a16} & \makecell{SARS-CoV-2\\CT-Scan Dataset} & CT & \makecell{1252 CT COVID-19 Images,\\1230 CT non-COVID-19 Images} & -- & \makecell{GAN with\\Whale\\Optimization\\Algorithm} & -- & Matlab 2020a & 10 & \makecell{Acc=99.22\\Sen=99.78\\Spec=97.78\\F1-score=98.79}\\ \hline \cite{a17} &Various Datasets &CT &\makecell{1,684 COVID-19 Patient,\\1,055 Pneumonia,\\914 Normal Patients} &Resizing &Inception V1 & \makecell{Interpretation by\\6 Radiologists,\\t-SNE Method} &-- &10 &\makecell{Acc=95.78\\AUC=99.4}\\ \hline \cite{a18} &Clinical &CT &\makecell{2267 COVID-19 CT Images,\\1235 HC CT Images} & \makecell{Compressing,\\Normalization,\\Cropping,\\Resizing} &ResNet50 &-- &Keras &-- &\makecell{Acc=93\\Sen=93\\Spec=92\\F1-Score=92}\\ \hline \cite{a19} &Clinical &CT &\makecell{108 COVID-19 Patients,\\86 Non-COVID-19 Patients} &\makecell{Visual\\Inspection,\\Grey-Scaling,\\Resizing} &\makecell{Various\\Networks} &-- &-- &-- &\makecell{Acc=99.51\\Sen=100\\Spec=99.02}\\ \hline \cite{a20}& Various Datasets &CT &\makecell{413 COVID-19 Images,\\439 Non-COVID-19 Images} & \makecell{Feature Extraction\\with ResNet-50} & 3D-CNN &-- &-- &10 &\makecell{Acc=93.01\\Sen=91.45\\Spec=94.77\\Prec=94.77}\\ \hline \cite{a21} &Clinical &CT &\makecell{150 3D COVID-19 Chest CT,\\CAP and NP Patients\\(450 Patient Scans)} &\makecell{Sliding\\Window, DA} &\makecell{Multi-View\\U-Net\\ \hline 3D-CNN} & \makecell{Weakly\\Supervised\\Lesions\\Localization,\\CAM} &TensorFlow &5 &\makecell{Acc=90.6\\Sen=83.3\\Spec=95.6\\Prec=74.1}\\ \hline \cite{a22} &Various Datasets &CT &\makecell{449 COVID-19 Patients,\\425 Normal, 98 Lung Cancer,\\397 Different Kinds of\\Pathology} &\makecell{Resizing,\\Intensity\\Normalization} & \makecell{Autoencoder\\Based\\DNN} &-- &\makecell{Keras,\\TensorFlow} &-- &\makecell{Dice=88\\Acc=94.67\\Sen=96\\Spec=92}\\ \hline \cite{a23} &\makecell{COVID-19 CT\\from \cite{a17}} &CT &746 Images &-- &GAN &-- &Matlab &-- &\makecell{Acc=84.9\\Sen=85.33\\Prec=85.33}\\ \hline \cite{a24} & \makecell{COVID-19 CT\\Datasets, Cohen} &CT &\makecell{345 COVID-19 CT Images,\\375 Non-COVID-19 CT Image} &\makecell{2D Redundant\\Discrete WT\\(RDWT) Method,\\Resizing} &ResNet50 &\makecell{Grad-CAM,\\Occlusion\\Sensitivity\\Technique} &Matlab &10 &\makecell{Acc=92.2\\Sen=90.4\\Spec=93.3\\F1-Score=91.5}\\ \hline \cite{a25} & \makecell{SARS-CoV-2\\CT Scan Dataset} &CT &\makecell{1262 COVID-19 Images,\\1230 HC Images} &-- &\makecell{Convolutional\\Support Vector\\Machine\\(CSVM)} &-- &Matlab 2020a &-- &\makecell{Acc=94.03\\Sen=96.09\\Spec=92.01\\Pre=92.19}\\ \hline \cite{a26} &\makecell{Chest CT\\and X-ray} &\makecell{X-Ray,\\CT} &\makecell{5857 Chest X-Rays,\\767 Chest CTs} 
&-- &\makecell{Various\\Networks} &Heat Map &Keras &-- &\makecell{Acc=75\\(CT)}\\ \hline \cite{a27} & \makecell{medseg\\DlinRadiology} & CT & \makecell{10 Axial Volumetric CTS\\(Each Containing 100 Slices\\of COVID-19 Images)} & Resizing & \makecell{VGG16,\\Resnet-50\\ \hline U-net} & -- & -- & -- & \makecell{Acc=99.4\\Spec=99.5\\Sen=80.83\\Dice=72.4\\IOU=61.59}\\ \hline \cite{a28} &BasrahDataset &CT &50 Cases, 1425 Images &\makecell{Gray-Scaling,\\Resizing} &VGG 16 &-- &Keras &-- &\makecell{Acc=99\\F1-Score=99}\\ \hline \cite{a29} &Kaggle &CT &\makecell{1252 COVID CT Images,\\1240 non-COVID CT Images} &\makecell{Resizing,\\Normalization,\\DA} &Covid CT-net &Heat Map &\makecell{TensorFlow,\\Keras} &-- &\makecell{Acc=95.78\\Sen=96\\Spec=95.56}\\ \hline \cite{a30} &COVID-CT &CT &\makecell{708 CTs, 312 with COVID-19,\\396 Non-COVID-19} &Normalization &LeNet-5 &-- &-- &5 &\makecell{Acc=95.07\\Sen=95.09\\Prec=94.99}\\ \hline \end{longtable} \end{center} \section{Materials and Methods} This section of the paper is devoted to discussing the applied method and its components. In this paper, we first collected a new CT scan dataset from COVID-19 patients; then, from each scan, the informative slices were selected by physicians. After that, several convolutional neural networks pre-trained on the ImageNet dataset \cite{r1} were fine-tuned to the task at hand. Here, we trained a Resnet-50 architecture \cite{r2}, an EfficientNet B3 architecture \cite{r3}, a Densenet-121 architecture \cite{r4}, a ResNest-50 architecture \cite{r5}, and a ViT architecture \cite{r6}. Several data augmentation techniques alongside a CycleGAN model were applied to improve the performance of each network further. Here, the details of each step are presented; first, the applied dataset is described. Specifications of each applied deep neural network (DNN) are discussed afterward. Finally, CycleGAN is explained in the last part alongside the overall proposed method. \subsection{Dataset} In this paper, a new CT scan dataset of COVID-19 patients was collected from Gonabad Hospital in Iran; all data were recorded by radiologists between June 2020 and December 2020. The number of subjects with COVID-19 is 90, and 99 of the subjects are normal. It is worth noting that the normal subjects are patients with suspicious symptoms and not merely a control group; this makes this dataset unique compared to prior ones, which usually used scans of other diseases for the control group. The subjects (both COVID-19 and normal classes) range in age from 2 to 88 years; 69 are female and 120 are male. A total of 1766 slices of these scans were finally selected by specialist physicians from the abnormal class and 1397 from the normal class; the labeling of each CT image was performed by three experienced radiologists along with two infectious disease physicians. In addition, an RT-PCR test was taken from each subject to confirm the labeling. All ethical approvals have been obtained from the hospital to use CT scans of COVID-19 patients and normal individuals for research purposes. Figure \ref{figdata} illustrates a few CT scans of healthy individuals and patients with COVID-19.
\begin{figure}[h] \centering \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n1.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n2.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n3.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n4.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n5.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n6.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n7.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n8.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n9.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/n10.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a1.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a2.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a3.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a4.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a5.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a6.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a7.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a8.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a9.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.17\textwidth]{pics/a10.jpg} \end{subfigure} \caption{Examples from the dataset; the first two rows contain images from healthy subjects, whereas the last two rows contain images from COVID-19 patients.\label{figdata}} \end{figure} \subsection{Deep Neural Networks} \subsubsection{Resnet} The ResNet architecture \cite{r2} was introduced in 2015 with a depth of 152 layers; it was the deepest architecture up to that year and is still considered one of the deepest. There are various versions of the architecture with different depths that are used depending on the need. This network's main idea was to use a block called the residual block, which addresses the problem of vanishing gradients, allowing the network to go deeper without reducing performance. It proved its capabilities by winning the ImageNet challenge in 2015, and its ideas have been applied in many other networks ever since. In this paper, a version of this network with a depth of 50 has been used, a reasonable choice given the considerably smaller amount of data compared to the ImageNet database. \subsubsection{EfficientNet} Three different criteria must be balanced when designing a convolutional neural network: the depth and width of the network and the resolution of the input images. Choosing the proper values of the three criteria in such a way that they form a suitable network together is a challenging task.
Increasing the depth can lead to finding complex patterns, but it can also cause problems such as vanishing gradients. More width can increase the quality of the learned features, but the accuracy of such networks tends to saturate quickly. Also, the accuracy gain from increasing the input resolution diminishes at very high resolutions. EfficientNet was introduced in \cite{r3} together with a study of how to scale a network properly in all three criteria. Using a step-by-step scheme, the authors first find the best structure at a small scale and then scale that structure up for the task at hand. The network has been used for many tasks, including diagnosing autism \cite{no} and schizophrenia \cite{ns}. \subsubsection{Densenet} Introduced by Huang et al. \cite{r4}, DenseNet (densely connected convolutional networks) has improved the baseline performance on benchmark computer vision tasks and shown its efficiency. Utilizing residual connections in a better way has allowed this network to use fewer parameters and go deeper. Also, by feature reuse, the number of parameters is reduced dramatically. Its building blocks are dense blocks and transition layers. Compared to ResNet, DenseNet concatenates the features in its residual connections rather than summing them up. To make this possible, the feature vectors of all layers within a dense block are chosen to have the same size; also, training these networks has been shown to be easier than prior ones \cite{r4}. This is arguably due to the implicit deep supervision, where the gradient flows back more quickly. The capability to have thin layers is another remarkable difference of DenseNet compared to other state-of-the-art techniques. The parameter $K$, the growth rate, determines the number of feature maps each layer contributes within a dense block. These feature vectors are then concatenated with the preceding ones and given as input to the subsequent layer. Eliminating optimization difficulties when scaling up to hundreds of layers is another strength of DenseNet. \subsubsection{ViT} Arguably, the main problem with convolutional neural networks is their failure to encode relative spatial information. In order to overcome this issue, the researchers in \cite{r6} adopted the self-attention mechanism from natural language processing (NLP) models. Basically, attention can be defined as trainable weights that model the importance of each part of an input sentence. Moving from NLP to computer vision, pixels could be picked as the parts of the image on which to train the attention model. Nevertheless, pixels are very small parts of an image; thus, one can pick a bigger segment of the image as one of its parts, e.g., a 16 by 16 block of pixels. ViT uses exactly this idea, dividing the image into small patches and training the attention model on them. ViT-Large has 24 layers with a hidden size of 1,024 and 16 attention heads. Examination shows not only superior results but also significantly reduced training time and a lower demand for hardware resources \cite{r6}. \subsubsection{ResNest} Developed by researchers from Amazon and UC Davis, ResNest \cite{r5} is another attention-based neural network that adopts the ideas behind the ResNet structure. In its first appearance, this network showed significant performance improvements without a large increase in the number of parameters, surpassing prior adaptations of ResNet such as ResNeXt and SENet. In their paper, the authors proposed a modular Split-Attention block that can distribute attention across several feature-map groups.
The split-attention block is composed of feature-map grouping and split-attention operations; then, by stacking those split-attention blocks in the style of ResNet, the researchers were able to produce this new variant. The novelty of their paper is not merely the new structure; they also introduced a number of training strategies. \subsection{Data Augmentation and Training Process} Generative adversarial networks were first introduced in 2014 \cite{ngan} and found their way into various fields shortly after \cite{ndeep}. They have also previously been used for data augmentation and network pretraining \cite{nngan}. A particular type of these networks is CycleGAN \cite{r10}, a network created mainly for unpaired image-to-image translation. In this particular form of image-to-image translation, there is no need for a dataset containing paired images, whose collection is itself a challenging task. CycleGAN comprises two generator--discriminator pairs trained simultaneously. One generator uses the first group of images as input and creates data for the second group, and the other generator does the opposite. Discriminator models are then utilized to distinguish the generated data from the real ones and subsequently feed the gradients to the generators. The CycleGAN used in this paper has a structure similar to the one presented in the original paper \cite{r10}. Compared to other GAN paradigms, CycleGAN uses image-to-image translation, which simplifies the training process, especially where training data is limited, and makes it easy to create data of the desired class. However, using other GAN paradigms, such as conditional GAN \cite{nnfive}, one can also create data of a specific class, yet training those methods is more complicated. A diagram of the CycleGAN is presented in Figure \ref{figtwo}, and a few samples of generated data are illustrated in Figure \ref{figgen}.
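For reference, the full objective optimized by CycleGAN combines the two adversarial losses with a cycle-consistency term; in the notation of \cite{r10}, with generators $G: X\rightarrow Y$, $F: Y\rightarrow X$ and discriminators $D_X$, $D_Y$,
\[
\mathcal{L}(G,F,D_X,D_Y)=\mathcal{L}_{\mathrm{GAN}}(G,D_Y,X,Y)+\mathcal{L}_{\mathrm{GAN}}(F,D_X,Y,X)+\lambda\,\mathcal{L}_{\mathrm{cyc}}(G,F),
\]
\[
\mathcal{L}_{\mathrm{cyc}}(G,F)=\mathbb{E}_{x}\big[\|F(G(x))-x\|_{1}\big]+\mathbb{E}_{y}\big[\|G(F(y))-y\|_{1}\big],
\]
where, in our setting, $X$ and $Y$ stand for the normal and COVID-19 slice domains, and $\lambda$ weights the cycle-consistency term ($\lambda=10$ in \cite{r10}).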
\begin{figure}[h] \includegraphics[width=\textwidth]{pics/fig2.jpg} \caption{Overall diagram of the applied CycleGAN.\label{figtwo}} \end{figure} \begin{figure}[!ht] \centering \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/norm_main_1.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ag1.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ab_main_1.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ng1.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/norm_main_2.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ag2.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ab_main_2.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ng2.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/norm_main_3.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ag3.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ab_main_3.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ng3.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/norm_main_4.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ag4.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ab_main_4.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ng4.jpg} \end{subfigure} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/norm_main_5.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ag5.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ab_main_5.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure} \centering \includegraphics[width=0.2\textwidth]{pics/ng5.jpg} \end{subfigure} \caption{Examples of generated data; the first column shows normal data from the main dataset, the second column shows the abnormal data generated from those images, the third column shows abnormal data from the main dataset, and the fourth column shows the normal data generated from those images.\label{figgen}} \end{figure} In this paper, to train the networks properly, we first preprocessed the images by applying a Gaussian filter. Then, we applied several data augmentation techniques \cite{r9}, namely random flips, rotations, zooms, warps, and lighting transforms, as well as presizing \cite{r7}. We also studied our models' performance by training them on an augmented dataset generated by means of a CycleGAN model implemented using the UPIT library \cite{nnfour}. \section{Results} \subsection{Environment Setup and Hyper Parameter Selection} All models were trained using the FastAI library \cite{r7}, fine-tuning the pre-trained models available in the timm repository \cite{nnthree} on an Nvidia RTX 2080 Ti GPU with 11 GB of RAM. As for the CycleGAN implementation, the UPIT library \cite{nnfour} was used.
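To make this setup concrete, a minimal FastAI sketch of such a fine-tuning pipeline is given below; the folder layout, backbone, epoch count, and learning rate are illustrative assumptions rather than our exact configuration, and the Gaussian smoothing is assumed to have been applied to the slices beforehand (e.g., with scipy.ndimage.gaussian\_filter).
\begin{verbatim}
from pathlib import Path
from fastai.vision.all import *

# Assumed layout: data/covid/*.jpg and data/normal/*.jpg (illustrative);
# labels are inferred from the parent folder names.
path = Path("data")
dls = ImageDataLoaders.from_folder(
    path, valid_pct=0.15, seed=42,
    item_tfms=Resize(460),               # presizing: resize to a large crop first
    batch_tfms=aug_transforms(size=224,  # random flips, rotations, zooms,
                              max_rotate=10.0,  # warps and lighting transforms
                              max_zoom=1.1,
                              max_lighting=0.2,
                              max_warp=0.2))

learn = vision_learner(dls, resnet50, metrics=accuracy)  # pretrained backbone
learn.lr_find()                    # inspect the curve to choose a base rate
learn.fine_tune(10, base_lr=1e-3)  # frozen head first, then unfreeze everything
\end{verbatim}
The same loop is repeated for each backbone; in the CycleGAN-augmented experiments, the generated slices would simply be added to the training folder before building the data loaders.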
To find the best hyperparameters, such as the learning rate, for the task at hand, and to evaluate our models properly, we divided the data into three parts: the first for training, the second for validation, and the last for testing. This division was done using a 70/15/15 scheme; to make the results trustworthy, no two slices of any patient are present in two different parts simultaneously. To set the learning rate for the architectures, we employed a two-stage procedure similar to the one presented in \cite{r3}; lastly, we applied early stopping in all the architectures to avoid overfitting. The final selected values for the batch size and hyperparameters are all available in Table \ref{tabhyper}. \begin{table}[h] \caption{Selected hyperparameters for each network.\label{tabhyper}} \centering \resizebox{0.6\textwidth}{!}{ \begin{tabular}{|c|c|c|} \hline \textbf{Network} & \textbf{Batch Size} & \textbf{Learning Rate}\\ \hline Densenet-121 &16 &1.00E-03\\ \hline EfficientNet-B3 &16 &1.00E-03\\ \hline Resnet-50 &16 &1.00E-03\\ \hline ResNeSt-50 &16 &1.00E-04\\ \hline ViT &16 &1.00E-05\\ \hline \end{tabular}} \end{table} \subsection{Evaluation Metrics} Each network's performance is evaluated with several different statistical metrics, since relying merely on a single accuracy measure cannot capture all the different aspects of a network's performance. The metrics used in this article are accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic (ROC) curve (AUC) \cite{n1}. How to calculate these metrics is shown in Table \ref{tabmet}. In this table, TP is the number of positive cases that have been correctly classified, TN is the number of negative cases that have been correctly classified, FP is the number of negative cases misclassified as positive, and FN is the number of positive cases misclassified as negative. In addition, for each network, a learning curve is plotted that shows the speed of learning and the convergence behavior. \begin{table}[h] \caption{Statistical metrics for performance evaluation.\label{tabmet}} \centering \resizebox{0.6\textwidth}{!}{ \begin{tabular}{|c|c|} \hline \makecell{\textbf{Performance Evaluation}\\ \textbf{Parameter}}& \makecell{\textbf{Mathematical }\\ \textbf{Equation}}\\ \hline Accuracy & $\frac{TP + TN}{FP + FN + TP + TN}$ \\ \hline Precision & $\frac{TP}{FP + TP}$ \\ \hline Recall & $\frac{TP}{FN + TP}$ \\ \hline F1-Score & $2\frac{Prec \times Rec}{Prec + Rec}$ \\ \hline AUC & \makecell{Area Under\\ROC Curve}\\ \hline \end{tabular}} \end{table} \subsection{Performances} This part of the paper is dedicated to showing the results of the networks. Each network is first trained without using the CycleGAN, and then the effect of adding CycleGAN is measured. Tables \ref{tabnogan} and \ref{tabgan} demonstrate the network results without and with CycleGAN, and Figures \ref{fignnogan} and \ref{figngan} show the networks' learning curves. To make the results reliable, each network is evaluated ten times, and the mean performance, with confidence intervals, is presented. As observable in these tables, CycleGAN has improved the performance of EfficientNet, Resnet, and ResNeSt dramatically. Nevertheless, the ViT results show no sign of improvement in the presence of CycleGAN; this is arguably due to its robustness, or to the wrongly classified samples being indistinguishable from the other class. The ROC curve for one run of the networks is also plotted in Figure \ref{figroc}.
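For reproducibility, all of the reported metrics can be computed with scikit-learn from the predicted labels and the class-1 probabilities; a minimal sketch is given below (the function name and the toy arrays are purely illustrative).
\begin{verbatim}
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

def report(y_true, y_pred, y_score):
    # y_true, y_pred: 0/1 labels; y_score: predicted probability of class 1
    return {"accuracy":  accuracy_score(y_true, y_pred),
            "precision": precision_score(y_true, y_pred),
            "recall":    recall_score(y_true, y_pred),
            "f1":        f1_score(y_true, y_pred),
            "auc":       roc_auc_score(y_true, y_score)}

# Toy usage with made-up predictions:
y_true  = np.array([1, 0, 1, 1, 0, 0])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6])
print(report(y_true, (y_score >= 0.5).astype(int), y_score))
\end{verbatim}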
\begin{table}[h] \caption{Results without CycleGAN.\label{tabnogan}} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Network} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} &\textbf{Recall (\%)} &\textbf{F1-score (\%)} & \textbf{AUC (\%)}\\ \hline Densenet-121 & 88.05 ± 3.81 & 80.94 ± 5.55 & 93.18 ± 3.70 & 87.36 ± 3.63 & 96.71 ± 2.80 \\ \hline EfficientNet-B3 & 94.69 ± 2.04 & 92.55 ± 3.40 & 96.77 ± 1.57 & 94.09 ± 2.19 & 99.03 ± 0.66 \\ \hline Resnet-50 & 94.69 ± 1.15 & 90.31 ± 2.21 & 98.74 ± 0.48 & 94.25 ± 1.15 & 99.43 ± 0.20 \\ \hline ResNeSt-50 & 96.30 ± 2.31 & 93.96 ± 3.00 & 98.02 ± 2.43 & 95.97 ± 2.53 & 99.60 ± 1.18 \\ \hline ViT & 99.60 ± 0.79 & 99.46 ± 1.39 & 99.64 ± 0.38 & 99.55 ± 0.88 & 99.99 ± 0.10 \\ \hline \end{tabular}} \end{table} \begin{table}[h] \caption{Results with CycleGAN.\label{tabgan}} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Network} & \textbf{Accuracy (\%)} & \textbf{Precision (\%)} &\textbf{Recall (\%)} &\textbf{F1-score (\%)} & \textbf{AUC (\%)}\\ \hline Densenet-121 & 89.24 ± 3.78 & 81.67 ± 5.84 & 96.23 ± 2.45 & 88.64 ± 3.55 & 97.22 ± 1.89 \\ \hline EfficientNet-B3 & 98.25 ± 2.57 & 97.03 ± 3.51 & 99.28 ± 2.20 & 98.05 ± 2.80 & 99.79 ± 0.90 \\ \hline Resnet-50 & 96.20 ± 0.79 & 94.09 ± 1.33 & 97.49 ± 1.11 & 95.78 ± 0.87 & 99.43 ± 0.42 \\ \hline ResNeSt-50 & 98.89 ± 1.09 & 98.58 ± 1.34 & 99.10 ± 1.70 & 98.75 ± 1.24 & 99.95 ± 0.22 \\ \hline ViT & 99.20 ± 2.91 & 98.92 ± 3.97 & 98.92 ± 2.41 & 99.10 ± 3.19 & 99.95 ± 0.92 \\ \hline \end{tabular}} \end{table} \clearpage \begin{landscape} \setlength\LTleft{-108pt} \setlength\LTright{0pt} \pagestyle{empty} \begin{figure}[h] \centering \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/resnest.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/efficientnet.jpg} \end{subfigure} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/densenet.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/vit.jpg} \end{subfigure} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/resnest.jpg} \end{subfigure} \caption{Learning curve of networks without CycleGAN for (a) DenseNet, (b) EfficientNet, (c) ResNet, (d) ViT, and (e) ResNeSt.\label{fignnogan}} \end{figure} \begin{figure}[!h] \centering \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/resnest-cyclegan.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/efficientnet-cyclegan.jpg} \end{subfigure} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/densenet-cyclegan.jpg} \end{subfigure} \hspace{2pt} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/vit-cyclegan.jpg} \end{subfigure} \begin{subfigure}[] \centering \includegraphics[width=0.6\textwidth]{pics/resnest-cyclegan.jpg} \end{subfigure} \caption{Learning curve of networks with CycleGAN for (a) DenseNet, (b) EfficientNet, (c) ResNet, (d) ViT, and (e) ResNeSt.\label{figngan}} \end{figure} \end{landscape} \begin{figure}[!h] \centering \begin{subfigure}[] \centering \includegraphics[width=0.8\textwidth]{pics/nogan_1.jpg} \end{subfigure} \begin{subfigure}[] \centering \includegraphics[width=0.8\textwidth]{pics/gan_1.jpg} \end{subfigure} \caption{ROC curve of networks without CycleGAN (a) and with it (b).\label{figroc}} \end{figure} \section{Discussion} In 
recent years, convolutional neural networks have revolutionized the field of image processing. Medical diagnosis is no exception, and today numerous research papers in this field use these networks to achieve the best accuracy. Diagnosis of COVID-19 disease from CT images is also one of the applications of these networks. In this article, the performance of different networks in this task was examined, and by applying a new method, an attempt was made to improve the performance of these networks. The networks used in this paper were Resnet, EfficientNet, Densenet, ViT, and ResNest, and the data augmentation method was based on CycleGAN. Table \ref{tab:newt} summarizes the proposed methods of previous papers. By comparing this table with our current work, the advantages of our work can be listed as using ViT, a transformer-based architecture that has achieved state-of-the-art performances; collecting a new dataset; and finally using CycleGAN for data augmentation. \begin{table}[h] \caption{Summary of related works.} \label{tab:newt} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Ref} & \textbf{Dataset} & \textbf{Number of Cases (Images)} & \textbf{Pre-Processing} & \textbf{DNN} & \textbf{Performance (\%)}\\ \hline \cite{a1} & SIRM & 3000 COVID-19, 3000 HC & Patches Extraction & PreTrain Networks & Acc=98.27 \\ \hline \cite{a2} & Zhao et al & 460 COVID-19, 397 HC & DA & SqueezeNet & Acc=85.03 \\ \hline \cite{a3} & Indian & 2373 COVID-19, 6321 HC & -- & Ensemble DCCNs & Acc=98.83 \\ \hline \cite{a4} & Different Datasets & -- & Visual Inspection & BigBiGAN & -- \\ \hline \cite{a5} & Clinical & 148 COVID-19, 148 HC & Visual Inspection & ResGNet-C & Acc=96.62 \\ \hline \cite{a6} & Clinical & 349 COVID-19, 397 HC & Scaling Process, DA & MKs-ELM-DNN & Acc=98.36 \\ \hline \cite{a7} & COVID-CT & -- & DA & U-net + DCN & Acc=92.87 \\ \hline \cite{a8} & Public Dataset & 2933 COVID-19 & Normalization, Resizing & EDL\_COVID & Acc= 99.054 \\ \hline \cite{a9} & Clinical & 320 COVID-19, 320 HC & HS, Margin Crop, Resizing & FGCNet & Acc=97.14 \\ \hline \cite{a10} & Clinical & -- & ROIs Extraction & Modified Inception & Acc=89.5 \\ \hline \cite{a11} & Clinical & 3389 COVID-19, 1593 HC & Standard Preprocessing & 3D ResNet34 & Acc=87.5 \\ \hline \cite{a12} & COVIDx-CT & 104,009 & DA & COVIDNet-CT & Acc= 99.1 \\ \hline \cite{a13} & Different Datasets & 349 COVID-19, 397 HC & Resizing, Normalization, DA & ResNet18 & Acc=99.4 \\ \hline \cite{a14} & COVID-CT & 345 COVID-19, 397 HC & Resizing, DA & CGAN + ResNet50 & Acc=82.91 \\ \hline \cite{a15} & Clinical & -- & Resizing, Padding, DA & 3D-CNN & AUC=70 \\ \hline \cite{a16} & SARS-CoV-2 & 1252 COVID-19, 1230 HC & -- & GAN with WOA + InceptionV3 & Acc=99.22 \\ \hline \cite{a17} & Different Datasets & -- & Resizing & Inception V1 & Acc=95.78 \\ \hline \cite{a18} & Clinical & 2267 COVID-19, 1235 HC & Normalization, Cropping, Resizing & ResNet50 & Acc=93 \\ \hline \cite{a19} & Clinical & -- & \makecell{Visual Inspection, ROI, Cropping\\and Resizing} & ResNet 101 & Acc=99.51 \\ \hline \cite{a20} & Different Datasets & 413 COVID-19, 439 HC & -- & ResNet-50 + 3D-CNN & Acc=93.01 \\ \hline \cite{a21} & Clinical & -- & DA & Multi-View U-Net + 3D-CNN & Acc=90.6 \\ \hline \cite{a22} & Different Datasets & -- & Resized, Intensity Normalized & FCN & Acc=94.67 \\ \hline \cite{a23} & Zhao et al & 746 & -- & GAN + ShuffleNet & Acc=84.9 \\ \hline \cite{a24} & COVID-CT & 345 COVID-19, 375 HC & 2D RDWT, Resizing & ResNet50 & Acc=92.2 \\ \hline \cite{a25} & SARS-CoV-2 & 1262 COVID-19, 1230 HC & -- & CSVM & Acc=94.03 \\ \hline \cite{a26} & Different Datasets & 767 & -- & Different PreTrain Methods & Acc=75 \\ \hline \cite{a27} & MedSeg DII & -- & Resizing & U-Net + VGG16 and Resnet-50 & Acc=99.4 \\ \hline \cite{a28} & Basrah & 1425 & Resizing & VGG 16 & Acc=99 \\ \hline \cite{a29} & Kaggle & 1252 COVID, 1240 HC & Resizing, Normalization, DA & Covid CT-net & Acc=95.78 \\ \hline \cite{a30} & COVID-CT & 312 COVID-19, 396 HC & Normalization & LeNet-5 & Acc=95.07 \\ \hline Ours & Clinical & 1766 COVID-19, 1397 HC & Filtering, DA using CycleGAN & Different PreTrain Methods & Acc=99.60\\ \hline \end{tabular}} \end{table} Eventually, the ViT network reached an accuracy of 99.60\%, which shows its state-of-the-art performance and proves that it can be used as the heart of a CADS. Comparing the performance of our method to previous works in Table \ref{tab1}, its superiority is readily observable. The advantages of adding CycleGAN were also clearly displayed, and it was shown that this data augmentation method can be used for this task to improve the performance of most deep neural networks. Finally, this article's achievements can be summarized as: first, introducing a new database and releasing it publicly; second, examining the performance of various neural networks on this database; and finally, evaluating the use of CycleGAN for data augmentation and its impact on network performance. Additionally, the performance of ViT had never been studied for this task before; it was investigated in this paper as well. To evaluate the method, a CT scan dataset was collected by physicians, which we have also made publicly available to researchers. Also, because this dataset was collected from people suspected of having COVID-19, the normal class data, unlike in many previous datasets in this field, were collected from patients with suspicious symptoms and not from other diseases. \section{Conclusion and Future Works} In the past year or so, nearly all people have found their lives changed due to the COVID-19 outbreak. Researchers in image processing and machine learning have not been an exception, as is evident from the many research papers published on a variety of automatic diagnostic methods using medical imaging modalities and machine learning. Building an accurate diagnostic system in these pandemic conditions can relieve many of the burdens on physicians and also help to improve the situation. In this paper, the use of convolutional neural networks for the task at hand was investigated, and the effect of adding CycleGAN for data augmentation was examined as another novelty of the paper. Finally, our method reached state-of-the-art performance and outperformed prior works, which shows its superiority. For future work, several different paths can be considered; first, more complicated methods in deep neural networks can be used, such as deep metric learning, few-shot learning, or feature fusion solutions. Also, the combination of different datasets to improve the accuracy and evaluate its impact on the training of different networks can be examined. Finally, combining different modalities to increase accuracy can also be a direction for future research.
\section{Introduction} The physical characteristics of hadrons are encoded in various correlation functions of the corresponding quark currents. One of the most important characteristics is the hadron mass. The calculation of a hadron mass from first principles consists in finding the relevant pole of the two-point correlator $\left\langle JJ\right\rangle$, where the current $J$ is built from the quark and gluon fields and interpolates the given hadron. In real QCD, straightforward calculations of correlators are possible only in the framework of lattice simulations, which are still rather restricted. It is usually believed that confinement in QCD leads to approximately linear radial Regge trajectories (see, e.g.,~\cite{phen}). The most important quantity in this picture is the slope of the trajectories. The slope is expected to be nearly universal as arising from flavor-independent non-perturbative gluodynamics, which thereby sets a mass scale for the light hadrons. Among the phenomenological approaches to hadron spectroscopy, the method of spectral sum rules~\cite{svz} is probably the one most closely related to QCD. In many cases, it permits a reliable calculation of the masses of the ground states on the radial trajectories. This method exploits some information from QCD via the Operator Product Expansion (OPE) of correlation functions~\cite{svz}. On the other hand, one assumes a certain spectral representation for the correlator in question. Typically, the representation is given by the ansatz "one infinitely narrow resonance + perturbative continuum". Such an approximation is very rough but works well phenomenologically in many cases. Theoretically, the zero-width approximation arises in the large-$N_c$ limit of QCD~\cite{hoof}. In this limit, the only singularities of the two-point correlation function of quark currents $J$ are one-hadron states. For instance, the two-point correlator of the scalar currents $J=\bar{q}q$ has the following form to lowest order in $1/N_c$ (in momentum space), \begin{equation} \label{20} \Pi_S(q^2)=\left\langle J^S(q)J^S(-q)\right\rangle=\sum_n\frac{G_n^2M_S^2(n)}{q^2-M_S^2(n)}, \end{equation} where the residues appear from the definition of the matrix element $\langle0|J^S|n\rangle=G_nM_S(n)$. The OPE of the correlator~\eqref{20} in the large-$N_c$ limit and to lowest order in perturbation theory reads~\cite{rry} \begin{equation} \label{27} \Pi_S(Q^2)=\frac{3Q^2}{16\pi^2}\log{\frac{Q^2}{\mu^2}}+ \frac{3}{2Q^2}m_q\langle\bar{q}q\rangle -\frac{\alpha_s}{16\pi}\frac{\langle G^2\rangle}{Q^2} -\frac{11}{3}\pi\alpha_s\frac{\langle\bar{q}q\rangle^2}{Q^4}+\dots, \end{equation} where $\langle G^2\rangle$ and $\langle\bar{q}q\rangle$ denote the gluon and quark vacuum condensates, respectively. According to the main assumption of classical QCD sum rules~\cite{svz}, these vacuum characteristics are universal, i.e., their values do not depend on the quantum numbers of a quark current $J$ (the method is not applicable otherwise). In the present talk we will demonstrate how all these ideas can be used for the calculation of the large-$N_c$ masses of light scalar mesons. \section{Scalar sum rules: Some results} We will assume a linear radial spectrum with universal slope \begin{equation} \label{21} M_S^2(n)=\Lambda^2(n+m_s^2), \qquad n=0,1,2,\dots, \end{equation} and, for consistency with the OPE, $G_n=G$.
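The summation over the equidistant poles of \eqref{21} is conveniently carried out with the digamma function, using the standard identities
\[
\sum_{n=0}^{\infty}\left(\frac{1}{n+a}-\frac{1}{n+b}\right)=\psi(b)-\psi(a),
\qquad
\psi(x)=\ln x-\frac{1}{2x}-\frac{1}{12x^2}+O\!\left(x^{-4}\right)\quad(x\to\infty),
\]
so that the resonance sum can be expanded at large Euclidean momentum and matched to the OPE term by term.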
With the linear ansatz~\eqref{21} for the radial mass spectrum, the expression~\eqref{20} can be summed analytically, expanded at large $Q^2=-q^2$ and compared with the corresponding OPE in QCD. Thus one obtains a set of sum rules. Similar large-$N_c$ sum rules were considered many times in the past for vector, axial, scalar and pseudoscalar channels (see, e.g., Refs. in~\cite{sr}). As we do not reliably know {\it a priori} the radial Regge behavior of scalar masses, two simple possibilities can be considered: (I) The ground $n=0$ state lies on the linear trajectory~\eqref{21}; (II) The state $n=0$, below called $\sigma$, is not described by the linear spectrum~\eqref{21}. The second assumption looks more physical. Within the latter assumption, the mass of the $\sigma$-meson can be derived as a function of the intercept parameter $m_s^2$ (we refer to Ref.~\cite{we1} for details; the chiral limit is considered), \begin{equation} \label{33} M_\sigma^2=\frac{\frac{1}{16\pi^2}\Lambda^6m_s^2\left(m_s^2+\frac12\right)\left(m_s^2+1\right)+ \frac{11}{3}\pi\alpha_s\langle\bar{q}q\rangle^2} {\frac{3}{32\pi^2}\Lambda^4\left(m_s^4+m_s^2+\frac16\right)+\frac{\alpha_s}{16\pi}\langle G^2\rangle}. \end{equation} Substituting the physical values of the vacuum condensates and the numerical value of the slope $\Lambda^2$ obtained from a solution of QCD sum rules, we display the mass function~\eqref{33} in Fig.~1~\cite{we1}. \begin{figure}[ht] \center{\includegraphics[width=0.7\linewidth]{Plot2}} \vspace{-0.3cm} \caption{\small The values of $M_\sigma$, $G_\sigma$, $G$, and the first state on the scalar trajectory $M_S(1)$ as functions of the dimensionless intercept $m_s^2$.} \end{figure} The mass of the first radially excited state $M_S(1)$ is rather stable and seems to reproduce the mass of the $a_0(1450)$ meson, $M_{a_0(1450)}=1474\pm19$~MeV~\cite{pdg}. Its isosinglet partner (the candidate is $f_0(1370)$) should be degenerate with $a_0(1450)$ in the planar limit. The plot in Fig.~1 demonstrates that the actual prediction for $M_\sigma$ is rather sensitive to the intercept of the scalar linear trajectory, even though $M_\sigma$ itself is not described by the linear spectrum~\eqref{21}. And {\it vice versa}, the expected value of $M_\sigma$ (around $0.5$~GeV~\cite{pdg}) imposes a strong bound on the allowed values of the intercept $m_s^2$. The plot in Fig.~1 shows that $m_s^2$ is likely close to zero. Thus, interpolating the scalar states by the simplest quark bilinear current, we predict (in the large-$N_c$ limit!) a light scalar resonance with mass about $500\pm100$~MeV which could be a reasonable candidate for the scalar sigma-meson $f_0(500)$~\cite{pdg}. \section{Borelized scalar sum rules: Some results} The original QCD sum rules made use of the Borel transformation~\cite{svz}, \begin{equation} \label{6} L_M\Pi(Q^2)=\lim_{\substack{Q^2,n\rightarrow\infty\\Q^2/n=M^2}}\frac{1}{(n-1)!}(Q^2)^n\left(-\frac{d}{dQ^2}\right)^n\Pi(Q^2). \end{equation} The borelized version has a number of advantages and can be applied to our large-$N_c$ case. The details are contained in Ref.~\cite{we2}. In short, the mass of the ground scalar meson $m_0\equiv M_S(0)$ as a function of the Borel parameter is shown in Fig.~2. It is seen that there are two solutions with a "Borel window" extending to infinity.
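For orientation, with the definition \eqref{6} the Borel transform acts elementarily on all the structures appearing above: a resonance pole is exponentially damped, the perturbative logarithm of \eqref{27} becomes a constant, and the power corrections are factorially suppressed,
\[
L_M\!\left[\frac{1}{Q^2+m^2}\right]=\frac{e^{-m^2/M^2}}{M^2},
\qquad
L_M\!\left[Q^2\ln\frac{Q^2}{\mu^2}\right]=M^2,
\qquad
L_M\!\left[\frac{1}{(Q^2)^k}\right]=\frac{1}{(k-1)!\,(M^2)^k},
\]
which improves both the convergence of the OPE side and the relative weight of the low-lying states in the spectral sum.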
\begin{figure}[ht] \center{\includegraphics[scale=0.7]{plot_scalar.eps}} \vspace{-0.3cm} \caption{\small The mass of the ground scalar meson $m_0\equiv M_S(0)$ as a function of the Borel parameter at $\Lambda^2=1.38\,\text{GeV}^2$~\cite{we2}.} \end{figure} The corresponding asymptotic values are given by \begin{equation} \label{11} M_S^2(0)=\frac{\Lambda^2}{2}\pm\frac{1}{2}\sqrt{\frac{\Lambda^4}{3}-64\pi^2\left(m_q\langle\bar{q}q\rangle + \frac{\alpha_s}{24\pi}\langle G^2\rangle\right)}. \end{equation} The heavier state corresponds to the ground scalar mass in the standard QCD sum rules. Normalizing this mass to the value $m_{f_0}=1.00\pm0.03$~GeV extracted from these canonical sum rules~\cite{rry}, we predict the value of the slope of the scalar trajectory, $\Lambda^2_{f_0}=1.38\pm0.07$~GeV$^2$, which is used in Fig.~2. We then obtain the mass of the lightest scalar state, $M_\sigma\approx0.62$~GeV. We thus arrive at the conclusion that our method predicts two parallel scalar trajectories. The ground state on the first trajectory can be identified with $f_0(980)$ and on the second one with $f_0(500)$~\cite{pdg}. The existence of two parallel radial scalar trajectories seems to agree with the experimental data~\cite{phen}. The masses of the predicted radial states, together with a tentative comparison with the observed scalar mesons, are displayed for the two trajectories in Tables~1 and~2, respectively. \begin{table}[ht] \caption{\small The radial spectrum of the first $f_0$-trajectory for the slope $\Lambda^2=1.38\pm0.07\,\text{GeV}^2$. The first 5 predicted states are tentatively assigned to the resonances $f_0(980)$, $f_0(1500)$, $f_0(2020)$, $f_0(2200)$, and $X(2540)$~\cite{pdg}.} \begin{center} $\begin{array}{|c|c|c|c|c|c|} \hline n & 0 & 1 & 2 & 3 & 4\\ \hline m_{f_0}\,\text{(th 1)} & 1000\pm30 & 1540\pm20 & 1940\pm40 & 2270\pm50 & 2560\pm50 \\ \hline m_{f_0}\,\text{(exp 1)} & 990 \pm 20 & 1504 \pm 6 & 1992 \pm 16 & 2189 \pm 13 & 2539 \pm 14^{+38}_{-14} \\ \hline \end{array}$ \end{center} \end{table} \begin{table}[ht] \caption{\small The radial spectrum of the second $f_0$-trajectory for the slope $\Lambda^2=1.38\pm0.07\,\text{GeV}^2$. The first 5 predicted states are tentatively assigned to the resonances $f_0(500)$, $f_0(1370)$, $f_0(1710)$, $f_0(2100)$, and $f_0(2330)$~\cite{pdg}.} \begin{center} $\begin{array}{|c|c|c|c|c|c|} \hline n & 0 & 1 & 2 & 3 & 4\\ \hline m_{f_0}\,\text{(th 2)} & 620 & 1330\pm30 & 1780\pm40 & 2130\pm50 & 2430\pm60 \\ \hline m_{f_0}\,\text{(exp 2)} & 400\text{--}550 & 1200\text{--}1500 & 1723^{+6}_{-5} & 2101 \pm 7 & 2300\text{--}2350 \\ \hline \end{array}$ \end{center} \end{table} \section{Discussions and conclusions} We have put forward new extensions of SVZ sum rules making use of the large-$N_c$ (planar) limit and assuming for the radial excitations a linear Regge spectrum with a universal slope. The choice of spectrum is motivated by hadron string models and related approaches and also by the meson spectroscopy. The considered ansatz allows one to solve the arising sum rules with a minimal number of inputs. The prediction of the second scalar trajectory is a rather surprising feature of the borelized planar sum rules~\cite{we2}. The ground state on the second radial trajectory turns out to be significantly lighter than that on the first trajectory. It looks tempting to identify this state with the elusive $\sigma$ (also called $f_0(500)$) meson~\cite{pdg}.
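For orientation, Eq.~\eqref{11} is easy to evaluate numerically. The following sketch (Python; the chiral limit $m_q\langle\bar{q}q\rangle\to0$ and the standard SVZ value $\frac{\alpha_s}{\pi}\langle G^2\rangle\approx0.012\ \text{GeV}^4$ are our illustrative assumptions, not inputs quoted from Ref.~\cite{we2}) reproduces the two asymptotic masses discussed above:
\begin{verbatim}
import math

Lam2 = 1.38    # slope Lambda^2 [GeV^2], fixed by m_{f_0} = 1.00 GeV
G2   = 0.012   # (alpha_s/pi)<G^2> [GeV^4], standard SVZ value (assumed)
mqqq = 0.0     # m_q<qq>, chiral limit

# Eq. (11): M^2 = Lam2/2 +- (1/2) sqrt(Lam2^2/3 - 64 pi^2 (mqqq + G2/24))
disc = Lam2**2 / 3 - 64 * math.pi**2 * (mqqq + G2 / 24)
for sign in (+1, -1):
    M2 = Lam2 / 2 + sign * 0.5 * math.sqrt(disc)
    print(f"M_S(0) = {math.sqrt(M2):.2f} GeV")
# prints ~0.99 GeV (f_0(980) trajectory) and ~0.64 GeV (sigma trajectory)
\end{verbatim}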
The lightest scalar state in the standard SVZ sum rules lies near 1~GeV~\cite{rry} and cannot be made significantly lighter within this method~\cite{sigma}. Our extension of the SVZ method thus leads to a new result. It is interesting to check whether a similar result appears in the framework of unborelized planar sum rules. Our analysis in the first part gives a positive answer. A light scalar state near 0.5~GeV, however, emerges in a different way~\cite{we1}. When one predicts some quark-antiquark state, it is important to indicate its place on the angular Regge trajectory as well. In other words, what are the $f_2,\,f_4,\,\dots$ companions of $f_0(500)$ on this trajectory? In order to answer this question we must know the slope $a$ of the (assumed linear) angular trajectory under consideration, $m_J^2=m_\sigma^2+a\,J$. According to the analysis of the first paper in Ref.~\cite{phen}, the slope of the $f_0$ trajectory most likely lies in the interval $a\simeq1.1\div1.2$~GeV$^2$. Several independent estimations made in further papers of Ref.~\cite{phen} seem to confirm this value. Consider, for example, the estimate of the $\sigma$-meson mass, $m_\sigma\approx390$~MeV~\cite{we1}. Then we obtain $m_{f_2}\approx1.53\div1.60$~GeV. The PDG contains a well established resonance $f_2(1565)$~\cite{pdg} with mass $m_{f_2(1565)}=1562\pm13$~MeV. It is a natural companion of the $\sigma$-meson on the corresponding angular Regge trajectory. The next state would have the mass $m_{f_4}\approx2.13\div2.23$~GeV. The discovery of the predicted tensor meson $f_4$ (and perhaps of the next companion $f_6$ with $m_{f_6}\approx2.60\div2.71$~GeV) would confirm our conjecture about the form of the Regge trajectory with the $\sigma$-meson on the top. A tentative candidate for our $f_4$ in the Particle Data Group listings is the resonance $f_J(2220)$, whose spin is still undetermined --- its value is either $J=2$ or $J=4$~\cite{pdg}. Our model would favor the second possibility. It is interesting to note that the predicted trajectory is drawn in the first paper of Ref.~\cite{phen} among numerous angular Regge trajectories for isosinglet $P$-wave states of even spin. However, the resonance $f_2(1565)$ is replaced there by $f_2(1525)$ (and is absent on other trajectories). As a result, $m_{f_0}^2$ acquires a very small negative value, leading to the disappearance of a scalar state from this trajectory. The predicted $f_4$-companion is labelled there as $f_4(2150)$. The modern PDG contains the state $f_2'(1525)$, but this resonance is typically produced in reactions with $K$-mesons, which evidently indicates a dominant strange component. For this reason we should exclude it from our estimates. Our prediction of the Regge trajectory containing the $\sigma$-meson on the top seems to contradict studies of the $\sigma$-state on the complex Regge trajectory, which claim that, because of its very large width, the corresponding state cannot belong to usual Regge trajectories~\cite{pelaez}. It is not excluded, however, that this observation may simply indicate limitations of the usual methods applied to the description of $\pi\pi$-scattering. These methods are based on the analyticity and unitarity of the $S$-matrix and do not contain serious dynamical inputs. The generation of a huge width for $f_0(500)$ most likely represents some dynamical effect. For this reason, uncovering the genuine nature of the $\sigma$-meson requires dynamical approaches. Such approaches should simultaneously describe all resonances on the radial scalar trajectory and resonances in other channels (vector, tensor {\it etc.}).
The planar QCD sum rules effectively take this into account, and the appearance of the $\sigma$ is directly related to the existence of all these resonances. This property constitutes a conceptual difference between the considered method and the dispersive approaches. Since the used sum rule method is based on the narrow-width approximation, a direct translation of our predictions into the physical parameters of a broad resonance may look questionable. As a matter of fact, we claim only that a scalar isoscalar pole in the range 400--600~MeV can naturally exist in the large-$N_c$ limit. Another pertinent question is why the $\sigma$-meson lies below the linear radial Regge trajectory, as do the ground vector states. In the latter case, one can propose a simple qualitative explanation. The ground vector states are $S$-wave, so they represent relatively compact hadrons. In this case, a contribution from the Coulomb part of the Cornell confinement potential, $V(r)=-\frac43\frac{\alpha_s}{r} + \sigma r$, is not small, effectively ``decreasing'' the tension $\sigma$ at smaller distances and, hence, the masses of the ground $S$-wave states. In the case of the $\sigma$-meson, one can imagine the following situation: this state represents a tetraquark, but the admixture of the additional $q\bar{q}$-pair is relatively small and gives a non-dominant contribution to the mass. For this reason we may use the large-$N_c$ limit as the first approximation. However, due to the extra $q\bar{q}$-pair, the $\sigma$-meson (originally representing a scalar $P$-wave state) can exist as an $S$-wave state. Due to this phenomenon, on the one hand, the decay of this state becomes OZI-superallowed, thereby explaining its abnormally large width; on the other hand, its mass decreases similarly to the masses of the ground $S$-wave vector mesons. In conclusion, our analysis demonstrates that the existence of a light scalar state is well compatible with the structure of the planar sum rules in the scalar channel and may follow in a natural way from the Regge phenomenology in the large-$N_c$ limit.
\section{Main results} We start by listing our main results. To simplify the definitions, here we concentrate on the case of CSS codes; more general results are given later in the text along with the corresponding proofs. \subsection{Definitions} \ A classical binary linear code\cite{MS-book} $\mathcal{C}$ with parameters $[n,k,d]$ is a $k$-dimensional subspace of the vector space $\mathbb{F}_{2}^{n}$ of all binary strings of length $n$. The code distance $d$ is the minimal weight (number of non-zero elements) of a non-zero string in the code. Rows of the binary \emph{generator matrix} $G$ of the code $\mathcal{C}\equiv \mathcal{C}_G$ are formed by its $k$ basis vectors. A linear code can also be specified by the binary \emph{parity check matrix} $H$, $\mathcal{C}=\{\mathbf{c}\in\mathbb{F}_{2}^{n}|H\mathbf{c}^T=0\}$. This implies that $H$ and $G$ are mutually orthogonal, $H G^T=0$, and also \begin{equation} \label{eq:dual-code} \mathop{\rm rank} H+\mathop{\rm rank} G=n. \end{equation} The parity check matrix is a generator matrix of the code ${\cal C}^\perp={\cal C}_H$ \emph{dual} to ${\cal C}$; respectively, the matrix $H$ is an exact dual to $G$, $H\equiv G^*$. Note that here and throughout this work we assume that all linear algebra is done modulo $2$, as appropriate for the vector space $\mathbb{F}_2^n$. Given a binary matrix $\Theta$ with dimensions $N_s\times N_\mathrm{b}$, we define a generalized Wegner-type \cite{Wegner71} partition/correlation function with multi-spin bonds $R_b\equiv \prod_r S_r ^{\Theta_{r,b}}$ corresponding to the columns of $\Theta$ and Ising spin variables $S_r=\pm1$, $r=1,\ldots,N_s$: \begin{equation} \label{eq:generalized-wegner} \mathscr{Z}_{\mathbf{e},\mathbf{m}}(\Theta;\{K\})\equiv {1\over 2^{N_g}} \sum_{\{S_r=\pm1\}}\prod_{b=1}^{N_\mathrm{b}}R_b^{m_b}{\exp \biglb(K_b (-1)^{e_b} R_b\bigrb) \over 2\cosh K_b}, \end{equation} where we assume the couplings to be positive, $K_b\equiv \beta J_b>0$, with $\beta$ being the inverse temperature, the length-$N_\mathrm{b}$ binary vectors $\mathbf{e}$, $\mathbf{m}$ respectively specify the electric and magnetic disorder, and $N_g\equiv N_s-\mathop{\rm rank}\Theta$ is the number of linearly-dependent rows in $\Theta$. A quantum CSS code\cite{Calderbank-Shor-1996,Steane-1996} with parameters $[[n,k,d]]$ can be specified in terms of two $n$-column binary generator matrices ${\cal G}_X$, ${\cal G}_Z$ with mutually orthogonal rows, ${\cal G}_X {\cal G}_Z^{T}=0$. Such a code encodes $k=n-\mathop{\rm rank} {\cal G}_X-\mathop{\rm rank} {\cal G}_Z$ qubits in a block of $n$ qubits. A CSS code can be thought of as a pair of binary codes, one correcting $X$-type errors and the other $Z$-type errors. However, it turns out that any two errors $\mathbf{e}$ and $\mathbf{e}'$ of, e.g., $Z$-type differing by a linear combination of rows of ${\cal G}_Z$ have exactly the same effect on the quantum code---such errors are called \emph{degenerate}. The corresponding \emph{equivalence} is denoted $\mathbf{e}\simeq\mathbf{e}'$. A \emph{detectable} $Z$-type error $\mathbf{e}=\mathbf{e}_Z$ has a non-zero \emph{syndrome} $\mathbf{s}_Z={\cal G}_X \mathbf{e}^T$. An undetectable and \emph{non-trivial} $Z$-type error has a zero syndrome and is not degenerate with an all-zero error; we will call such an error a (non-zero $Z$-type) \emph{codeword} $\mathbf{c}=\mathbf{c}_Z$. The distance $d$ of a CSS code is the minimal weight of a $Z$- or an $X$-type codeword.
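These definitions are easy to exercise on a small example. The sketch below (Python with numpy; the Steane code is chosen purely for illustration) verifies the CSS orthogonality condition ${\cal G}_X{\cal G}_Z^T=0\bmod 2$ and computes $k=n-\mathop{\rm rank}{\cal G}_X-\mathop{\rm rank}{\cal G}_Z$ over $\mathbb{F}_2$:
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2), by Gaussian elimination."""
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]
        M[(M[:, c] == 1) & (np.arange(M.shape[0]) != r)] ^= M[r]
        r += 1
    return r

# Steane [[7,1,3]] code: G_X = G_Z = check matrix of the [7,4,3] Hamming code
H = np.array([[0,0,0,1,1,1,1],
              [0,1,1,0,0,1,1],
              [1,0,1,0,1,0,1]])
GX = GZ = H
assert not (GX @ GZ.T % 2).any()           # rows mutually orthogonal mod 2
n = H.shape[1]
print(n - gf2_rank(GX) - gf2_rank(GZ))     # k = 7 - 3 - 3 = 1 encoded qubit
\end{verbatim}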
For each error type, we introduce a spin glass partition function: \begin{equation} \label{eq:Z0} Z_0^{(\mu)}(\mathbf{e};\beta)\equiv\mathscr{Z}_{\mathbf{e},\mathbf{0}}({\cal G}_\mu;\{K_b=\beta\}),\;\mu=X,Z. \end{equation} The normalization is such that for a model of independent $X$ or $Z$ errors with equal probability $p$ (the probabilities of $e_b=1$ are independent of each other and equal to $p$), at the Nishimori line \cite{Nishimori-1981,Nishimori-1980,Morita-Horiguchi-1980,Nishimori-book}, \begin{equation} \label{eq:nishimori-temperature} \beta=\beta_p,\quad e^{-2\beta_p}=p/(1-p), \end{equation} the partition function \eqref{eq:Z0} equals the total probability of a $\mu$-type error equivalent to $\mathbf{e}$. We also define the partition function with an \emph{extended defect} of additionally flipped bonds at the support of the codeword $\mathbf{c}$, \begin{equation} \label{eq:Zc} Z_{\bf c}^{(\mu)}(\mathbf{e};\beta)\equiv Z_{0}^{(\mu)}(\mathbf{e}+\mathbf{c};\beta), \end{equation} as well as the partition function corresponding to all errors with the same syndrome $\mathbf{s}$ as $\mathbf{e}\equiv \mathbf{e}_\mathbf{s}$, \begin{equation} \label{eq:Ztot} Z_{\rm tot}^{(\mu)}(\mathbf{s};\beta)\equiv \sum_\mathbf{c} Z_{\bf c}^{(\mu)}(\mathbf{e}_\mathbf{s};\beta)=\mathscr{Z}_{\mathbf{e}_\mathbf{s},\mathbf{0}}({\cal G}_{\bar\mu}^{*};\{K_b=\beta\}), \end{equation} where the summation is over all $2^k$ mutually non-degenerate $\mu$-type codewords $\mathbf{c}$, such that ${\cal G}_{\bar\mu}\mathbf{c}^T=0$, and ${\bar\mu}=X$ if $\mu=Z$ and vice versa. The second form uses a matrix ${\cal G}_{\bar\mu}^*$ exactly dual to ${\cal G}_{\bar\mu}$, cf.\ Eq.\ \eqref{eq:dual-code}. Note that Eq.\ \eqref{eq:Ztot} at $\beta=\beta_p$ gives the correctly normalized probability to encounter the syndrome $\mathbf{s}$, $\sum_\mathbf{s}Z_\mathrm{tot}(\mathbf{s};\beta_p)=1$. Here and below we omit the error-type index $\mu$ to simplify the notation. Syndrome-based decoding is a classical algorithm to recover the error equivalence class from the measured syndrome. In maximum-likelihood (ML) decoding, one picks the codeword $\mathbf{c}=\mathbf{c}_\mathrm{max}(\mathbf{e})$ corresponding to the largest contribution $Z_\mathrm{max}(\mathbf{s};\beta)\equiv Z_{\mathbf{c}_\mathrm{max}(\mathbf{e})}(\mathbf{e};\beta)$ to the partition function~\eqref{eq:Ztot} at $\beta=\beta_p$. Given some unknown error with the syndrome $\mathbf{s}$, the conditional probabilities of successful and of failed ML recovery are, respectively, \begin{equation} \label{eq:Psucc} P_\mathrm{succ}(\mathbf{s}) ={Z_\mathrm{max}(\mathbf{s};\beta_p)\over Z_\mathrm{tot}(\mathbf{s};\beta_p)}, \quad P_\mathrm{fail}(\mathbf{s})=1-P_\mathrm{succ}(\mathbf{s}). \end{equation} The corresponding average over errors can be written as a simple sum over allowed syndrome vectors, \begin{equation} \label{eq:Psucc-ave} P_\mathrm{succ} \equiv \left[{Z_\mathrm{max}(\mathbf{s}_\mathbf{e};\beta_p)\over Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta_p)}\right] =\sum_\mathbf{s}Z_\mathrm{max}(\mathbf{s};\beta_p), \end{equation} where the square brackets $[\,\cdot\,]$ denote an average over the errors $\mathbf{e}$. For a given infinite family of CSS codes, asymptotically certain ML decoding implies $ P_\mathrm{succ}^{(X)}\to1$ and $ P_\mathrm{succ}^{(Z)}\to1$ in the limit of large $n$.
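As a toy illustration of Eqs.~\eqref{eq:Psucc} and \eqref{eq:Psucc-ave} (a sketch: the three-qubit bit-flip code with brute-force enumeration, not a scalable decoder), the ML success probability can be assembled syndrome by syndrome:
\begin{verbatim}
from itertools import product

p  = 0.05                      # single-qubit X-error probability
GZ = [(1,1,0), (0,1,1)]        # Z-type checks of the 3-qubit bit-flip code
L  = (1,1,1)                   # the only non-trivial X-type codeword

prob = lambda e: p**sum(e) * (1 - p)**(3 - sum(e))
syn  = lambda e: tuple(sum(g*x for g, x in zip(row, e)) % 2 for row in GZ)

P_succ = 0.0
for s in product((0,1), repeat=2):                # all four syndromes
    e  = next(v for v in product((0,1), repeat=3) if syn(v) == s)
    e2 = tuple((a + b) % 2 for a, b in zip(e, L))  # the competing class
    P_succ += max(prob(e), prob(e2))               # P_max(s), summed over s
print(P_succ, (1 - p)**3 + 3*p*(1 - p)**2)         # 0.99275 both ways
\end{verbatim}
Asymptotically certain decoding corresponds to such a sum approaching one as the code family grows.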
In terms of the spin glass model \eqref{eq:Z0}, this corresponds to a phase where, in the thermodynamic limit, each likely disorder configuration $\mathbf{e}$ corresponds to a unique defect configuration $\mathbf{c}=\mathbf{c}_\mathrm{max}(\mathbf{e})$: \begin{definition}{ (CSS)} A \emph{fixed-defect phase} of the spin glass model \eqref{eq:Z0} corresponding to an infinite family of CSS codes has \begin{equation} [Z_\mathrm{max}^{(\mu)}(\mathbf{s}_\mathbf{e};\beta)/ Z_\mathrm{tot}^{(\mu)}(\mathbf{s}_\mathbf{e};\beta)]\to1,\quad n\to\infty. \label{eq:fixed-defect-CSS} \end{equation} \end{definition}\ignorespaces It is also useful to define a special case of such a phase where any likely disorder configuration does not introduce any defects: \begin{definition}{ (CSS)} A \emph{defect-free phase} of the spin glass model \eqref{eq:Z0} corresponding to an infinite family of CSS codes has \begin{equation} [Z_0^{(\mu)}(\mathbf{e};\beta)/ Z_\mathrm{tot}^{(\mu)}(\mathbf{s}_\mathbf{e};\beta)]\to1,\quad n\to\infty. \label{eq:defect-free-CSS} \end{equation} \end{definition}\ignorespaces \subsection{Results:\ ordered phases} \ We first prove that the only allowed ordered phase on the Nishimori line is the defect-free phase: \begin{theorem} For an infinite family of quantum stabilizer codes, successful decoding with probability one implies that on the Nishimori line the corresponding spin model is in the defect-free phase, i.e., for any likely configuration $\mathbf{e}$ of flipped bonds the largest $Z_\mathbf{c}(\mathbf{e};\beta_p)$ corresponds to $\mathbf{c}_\mathrm{max}(\mathbf{e})=\mathbf{0}$. \label{th:Nishimori-line-defect-free} \end{theorem} Definitions \ref{def:fixed} and \ref{def:defect-free} are formulated in terms of the average ratios of partition functions. As an alternative, we introduce the free energy increment associated with adding an extended defect $\mathbf{c}$ to a most likely configuration at the given disorder $\mathbf{e}$ with the syndrome $\mathbf{s}={\cal G}_{\bar\mu} \mathbf{e}^T$, \begin{equation} \label{eq:defect-free-energy-max-CSS} \Delta F_\mathbf{c}^{\mathrm{max},\mu}(\mathbf{\mathbf{s}};\beta)\equiv \beta^{-1}\log{ Z_\mathrm{max}^{(\mu)}(\mathbf{s};\beta)\over Z_{\mathbf{c}_\mathrm{max}(\mathbf{e})+\mathbf{c}}^{(\mu)}(\mathbf{e};\beta) }, \;\,\mu=X,Z. \end{equation} We prove \begin{theorem}\label{th:divergent-defect-energy-max} For an infinite family of disordered spin models~\eqref{eq:Z0} (or Eq.\ \eqref{eq:partition0}), in a fixed-defect phase the disorder-averaged free energy increment for an additional defect corresponding to a non-trivial codeword $\mathbf{c}\not\simeq\mathbf{0}$ diverges at large $n$, $[\Delta F_\mathbf{c}^{\mathrm{max}}(\mathbf{s}_\mathbf{e};\beta)]\to \infty$. \end{theorem} In the defect-free phase, the relevant analogous quantity is the free energy increment with respect to a given error $\mathbf{e}$, \begin{equation} \label{eq:defect-free-energy-zero-CSS} \Delta F_\mathbf{c}^{(0,\mu)}(\mathbf{\mathbf{e}};\beta)\equiv \beta^{-1}\log{ Z_0^{(\mu)}(\mathbf{e};\beta)\over Z_{\mathbf{c}}^{(\mu)}(\mathbf{e};\beta)}. \end{equation} The corresponding average over disorder diverges in the defect-free phase, where $\mathbf{c}_\mathrm{max}(\mathbf{e})=\mathbf{0}$ for every likely error configuration $\mathbf{e}$.
Theorem \ref{th:Nishimori-line-defect-free} then leads to \begin{corollary} \label{th:corollary} On the Nishimori line, the disorder-averaged free energy increment $[\Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta_p)]$ corresponding to any non-trivial codeword $\mathbf{c}\not\simeq\mathbf{0}$ diverges at large $n$ for $p<p_c$, where $p_c$ is the error probability corresponding to the ML decoding transition on the Nishimori line. \end{corollary} \noindent We also introduce a \emph{tension} \begin{equation} \lambda_\mathbf{c}\equiv {[\Delta F_\mathbf{c}^{\rm max}]\over d_\mathbf{c}}, \quad d_\mathbf{c}\equiv \min_{\boldsymbol\sigma}\mathop{\rm wgt}(\mathbf{c}+{\boldsymbol\sigma} {\cal G}_\mu),\label{eq:line-tension-c} \end{equation} an analog of the domain wall line tension for the extended defects, and prove \begin{theorem} \label{th:tension-average} For disordered spin models \eqref{eq:Z0} (or Eq.\ \eqref{eq:partition0}) corresponding to an infinite family of quantum codes with asymptotic rate $R=k/n$, in a fixed-defect phase, the defect tension ${\overline\lambda}$ averaged over all non-trivial defect classes at large $n$ satisfies the inequality $\beta{\overline\lambda}\ge R\ln 2$. \end{theorem} \subsection{Results: order parameter} \ The spin models corresponding to families of quantum codes include the analogs of the regular Ising model (e.g., the regular Ising model on the square lattice for the toric codes) as well as various gauge models, see Example \ref{ex:gauge}. In general, there is no local order parameter that can be used for an alternative definition of the ordered phase. In addition, while an analog of the \emph{Wilson loop} operator can be readily constructed for these models and has the usual low- and high-temperature asymptotics, it remains an open question whether it can be used to distinguish between specific disordered phases. However, we constructed a set of non-local \emph{indicator} spin correlation functions which must all be asymptotically equal to one in the defect-free phase, while some of them change sign in the presence of extended defects. Using these, and the standard inequalities from the gauge theory of spin glasses, we prove the following bound on the location of the defect-free phase (this is an extension of Nishimori's result\cite{Nishimori-1980,Nishimori-1981} on the possibly reentrant phase diagram for Ising models): \begin{theorem} \label{th:boundary} A defect-free phase cannot exist at any $\beta$ for $p$ exceeding that at the decoding transition, $p>p_c$. \end{theorem} \subsection{Results:\ phase transition} \ For zero-$R$ codes, the only mechanism of a continuous transition is for $\lambda_\mathbf{c}$ to vanish for some set of codewords $\mathbf{c}$. On the other hand, for finite-rate codes, Theorem \ref{th:tension-average} implies that there is also a possibility that at the transition point the tension remains finite, $\lambda_\mathbf{c}\ge \lambda_\mathrm{min}>0$, for every codeword $\mathbf{c}$. This corresponds to a transition driven by the entropy of extended defects. While generically the transition in models with multi-spin couplings is of the first order, it is continuous along the Nishimori line since the corresponding internal energy is known exactly and is a continuous function of $p$.
Moreover, the specific heat remains finite at the transition point along the Nishimori line since the same inequality as for regular spin glasses applies\cite{Nishimori-1980,Morita-Horiguchi-1980,Nishimori-1981,Nishimori-book}, \begin{equation} \label{eq:specific-heat-bound} [C(p;\beta_p)]\le N_\mathrm{b} {\beta_p^2\over \cosh^2\beta_p}, \end{equation} where $N_\mathrm{b}=2n$ for the model \eqref{eq:partition0}, and $N_\mathrm{b}=n$ for the models \eqref{eq:Z0} corresponding to a half of a CSS code each. Thus, as in the usual spin models, we expect that the transition point $p=p_c$ at the Nishimori line is a multicritical point where several phases come together. Spin models corresponding to non-CSS zero-rate families of stabilizer codes are exactly self-dual. The same is true for CSS codes where the two generator matrices ${\cal G}_X$, ${\cal G}_Z$ can be mapped to each other, e.g., by column permutations, as is the case for the toric codes and, more generally, for the hypergraph-product (HP) codes\cite{Tillich-Zemor-2009}. For many such models, the transition point at the Nishimori line can be obtained to a high numerical accuracy using the strong-disorder self-duality conjecture\cite{Nishimori-1979,Nishimori-Nemoto-2003,Nishimori-2007,Nishimori-Ohzeki-2006,Ohzeki-Nishimori-Berker-2008,Ohzeki-2009,Bombin-PRX-2012,Ohzeki-Fujii-2012} \begin{equation} \label{eq:self-duality-disordered} H_2(p_c)=1/2, \end{equation} where $H_2(p)\equiv -p\log_2p-(1-p)\log_2(1-p)$ is the binary entropy function. While, strictly speaking, there is no exact self-duality in the presence of disorder\cite{Aharony-Stephen-1980}, we have confirmed numerically that this expression is also valid, at least approximately, for several models constructed here, e.g., models with the bond structure as in Example~\ref{ex:882-18-12}. However, for code families with a finite rate, the decoding transition must be below the Shannon limit \begin{equation} R\le 1-H_2(p).\label{eq:shannon-threshold-CSS} \end{equation} Thus, Eq.\ \eqref{eq:self-duality-disordered} must be violated for $R\ge1/2$. On general grounds, we actually expect it to fail for any code family with a finite rate, $R>0$. \section{Background} \label{sec:background} \subsection{Stabilizer codes} \label{sec:background-codes} \ An $n$-qubit quantum code\cite{shor-error-correct,gottesman-thesis,Calderbank-1997,Nielsen-book} is a subspace of the $n$-qubit Hilbert space $\mathbb{H}_{2}^{\otimes n}$. The idea is to choose a subspace such that a likely error shifts any state from the code to a linearly-independent subspace, to be detected with a suitable set of measurements. Any error, an operator acting on $\mathbb{H}_{2}^{\otimes n}$, can be expanded as a linear combination of the elements of the $n$-qubit Pauli group $\mathscr{P}_{n}$ formed by tensor products of single-qubit Pauli operators $X$, $Y$, $Z$ and the identity operator $I$: $\mathscr{P}_{n}=i^m \{I,X,Y,Z\}^{\otimes n}$, where $m=0,1,2,3$. The \emph{weight} of a Pauli operator is the number of non-trivial terms in the tensor product. An $n$-qubit quantum \emph{stabilizer code} $\mathcal{Q}$ $[[n,k,d]]$ is a $2^k$-dimensional subspace of $\mathbb{H}_2^{\otimes n}$, a common $+1$ eigenspace of all operators in the code's \emph{stabilizer}, an Abelian group $\mathscr{S}\subset\mathscr{P}_{n}$ such that $-\mathbbm{1}\not\in\mathscr{S}$. The stabilizer is typically specified in terms of its generators, $\mathscr{S}=\left\langle S_{1},\ldots,S_{n-k}\right\rangle $.
Any operator proportional to an element of the stabilizer $\mathscr{S}$ acts trivially on the code and can be ignored. A non-trivial error proportional to a Pauli operator $E\not\in\mathscr{S}$ is detectable iff it anticommutes with at least one stabilizer generator $S_i$; such an error takes a vector from the code, $\ket\psi\in{\cal Q}$, to the state $E\ket\psi$ from an orthogonal subspace $E{\cal Q}$ where the corresponding eigenvalue $(-1)^{s_i}$ is negative. Measuring all $n-k$ generators $S_i$ produces the binary syndrome vector $\mathbf{s}\equiv \{s_1,\ldots,s_{n-k}\}$. Two errors (Pauli operators) that differ by an element of the stabilizer and a phase, $E_2=E_1 S e^{i\phi}$, $S\in\mathscr{S}$, are called mutually degenerate; they have the same syndrome and act identically on the code. Operators commuting with the stabilizer act within the code; they have zero syndrome. A non-trivial undetectable error $E$ is proportional to a Pauli operator which commutes with the stabilizer but is not a part of the stabilizer. These are the operators that damage quantum information; the minimal weight of such an operator is the distance $d$ of the stabilizer code. A quantum or classical code of distance $d$ can detect any error of weight up to $d-1$, and correct any error of weight up to $\lfloor d/2\rfloor$. A Pauli operator $E\equiv i^{m'}X^{\mathbf{v}}Z^{\mathbf{u}}$, where $\mathbf{v},\mathbf{u}\in\{0,1\}^{\otimes n}$ and $X^{\mathbf{v}}=X_{1}^{v_{1}}X_{2}^{v_{2}}\ldots X_{n}^{v_{n}}$, $Z^{\mathbf{u}}=Z_{1}^{u_{1}}Z_{2}^{u_{2}}\ldots Z_{n}^{u_{n}}$, can be mapped, up to a phase, to a binary vector $\mathbf{e}\equiv (\mathbf{v},\mathbf{u})$. A product of two Pauli operators corresponds to a sum ($\mod2$) of the corresponding vectors. Two Pauli operators commute if and only if the \emph{trace inner product} of the corresponding binary vectors is zero, $\mathbf{e}_1\star \mathbf{e}_2\equiv \mathbf{u}_{1}\cdot\mathbf{v}_{2}+\mathbf{v}_{1}\cdot\mathbf{u}_{2}=0 \bmod 2$. With this map, generators of a stabilizer group are mapped to rows of the binary generator matrix \begin{equation} G=(G_{X},G_{Z}),\label{eq:generator-matrix} \end{equation} with the condition that the trace inner product of any two rows vanishes \cite{Calderbank-1997}. This commutativity condition can also be written as $G_X G_Z^T+G_Z G_X^T=0$. For a more narrow set of CSS codes stabilizer generators can be chosen so that they contain products of only $X_i$ or $Z_i$ single-qubit Pauli operators. The corresponding generator matrix has the form \begin{equation} G=\mathop{\rm diag}({\cal G}_X,{\cal G}_Z), \label{eq:CSS} \end{equation} where the commutativity condition simplifies to ${\cal G}_{X}{\cal G}_{Z}^{T}=0 \bmod2$. The number of encoded qubits is $k=n-\mathop{\rm rank} G$; for CSS codes this simplifies to $k=n-\mathop{\rm rank} {\cal G}_X-\mathop{\rm rank} {\cal G}_Z$. Two errors are mutually degenerate iff the corresponding binary vectors differ by a linear combination of rows of $G$, $\mathbf{e}'=\mathbf{e}+\boldsymbol{\alpha} G$. It is convenient to define the conjugate matrix $\widetilde G\equiv (G_Z,G_X)$ so that $G\star G^T\equiv G \widetilde G^T=0$. Then, the syndrome of an error $\mathbf{e}\equiv (\mathbf{v},\mathbf{u})$ can be written as the product with the conjugate matrix, $\mathbf{s}=\widetilde G \mathbf{e}^T$. A vector with zero syndrome is orthogonal to the rows of $\widetilde G$; we will call any such vector which is not a linear combination of rows of $G$ a non-zero \emph{codeword} $\mathbf{c}\not\simeq\mathbf{0}$.
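The binary map is straightforward to code up. A minimal sketch (Python with numpy; the two-qubit stabilizer $\langle XX,\,ZZ\rangle$ is chosen purely as an illustration) checks commutation via the trace inner product and computes a syndrome as $\mathbf{s}=\widetilde G\mathbf{e}^T$:
\begin{verbatim}
import numpy as np

n = 2                                   # qubits; e = (v|u) represents X^v Z^u
star = lambda e1, e2: (e1[:n] @ e2[n:] + e1[n:] @ e2[:n]) % 2

G  = np.array([[1,1,0,0],               # stabilizer generator XX
               [0,0,1,1]])              # stabilizer generator ZZ
Gt = np.hstack([G[:, n:], G[:, :n]])    # conjugate matrix (G_Z | G_X)
assert star(G[0], G[1]) == 0            # the two generators commute

e = np.array([1,0,0,0])                 # error: X on the first qubit
print(Gt @ e % 2)                       # syndrome [0 1]: anticommutes with ZZ
\end{verbatim}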
Two codewords that differ by a linear combination of rows of $G$ are equivalent, $\mathbf{c}_1\simeq\mathbf{c}_2$; the corresponding Pauli operators are mutually degenerate. Non-equivalent codewords represent different cosets of the degeneracy group in the binary code with the check matrix $\widetilde G$. For an $[[n,k,d]]$ code, any non-zero codeword has weight $\mathop{\rm wgt}(\mathbf{c})\ge d$, and there are exactly $2k$ \emph{independent} codewords which can be chosen to correspond to the $2k$ operators $\bar X_i$, $\bar Z_i$, $i=1,\ldots,k$ (with the usual commutation relations) acting on the logical qubits. \subsection{LDPC codes} \ A binary low density parity-check (LDPC) code is a linear code with a sparse parity check matrix\cite{Gallager-1962,Richardson-Shokrollahi-Amin-Urbanke-2001,Richardson-Urbanke-2001,Chung-Forney-Richardson-Urbanke-2001}. These have fast and efficient (capacity-approaching) decoders. Over the last ten years classical LDPC codes have become a significant component of industrial standards for satellite communications, Wi-Fi, and gigabit ethernet, to name a few. \emph{Quantum LDPC codes}\cite{Postol-2001,MacKay-Mitchison-McFadden-2004} are just stabilizer codes\cite{gottesman-thesis,Calderbank-1997}, but with stabilizer generators which involve only a few qubits each compared to the number of qubits used in the code. Such codes are most often degenerate: some errors have a trivial effect and do not require any correction. Compared to general quantum codes, with a quantum LDPC code, each quantum measurement involves fewer qubits, measurements can be done in parallel, and also the classical processing could potentially be enormously simplified. One apparent disadvantage of quantum LDPC codes is that, until recently\cite{Bravyi-Hastings-2013}, there were no known families of such codes that have finite relative distance $\delta\equiv d/n$ for large $n$. This is in contrast to regular quantum codes where the existence of ``good'' codes with finite asymptotic rates $R\equiv k/n$ and finite $\delta$ has been proved\cite{Calderbank-Shor-1996,Feng-Ma-2004}. With such codes, and within a model where errors happen independently on different qubits with probability $p$, for $p<\delta/2$ all errors can be corrected with probability one. On the other hand, many quantum LDPC code families have a power-law scaling of the distance with $n$, $d\propto n^{\alpha}$, with $\alpha\le 1/2$. Examples include code families in Refs.~\cite{Tillich-Zemor-2009,Kovalev-Pryadko-2012,Kovalev-Pryadko-Hyperbicycle-2013,Andriyanova-Maurice-Tillich-2012}; a single-qubit-encoding code family suggested in Ref.~\cite{Freedman-Meyer-Luo-2002} has the distance scaling as $d\propto (n\log n)^{1/2}$. An infinite quantum LDPC code family with sublinear power-law distance scaling has a finite error correction threshold, including the fault-tolerant case where the measured syndromes may have errors, as long as each stabilizer generator involves a limited number of qubits, and each qubit is involved in a limited number of stabilizer generators\cite{Kovalev-Pryadko-FT-2013}. This makes quantum LDPC codes the only code family where a finite rate is known to coexist with a finite fault-tolerant error-correction threshold, potentially leading to a substantial reduction of the overhead for scalable quantum computation\cite{Gottesman-overhead-2013}.
Note that the quantum LDPC codes in Ref.~\cite{Bravyi-Hastings-2013} have finite rate and finite relative distance, at the price of the stabilizer generator weight scaling like a power-law, $w\propto n^{\gamma}$, $\gamma\le 1/2$; it is not known whether a fault-tolerant error-correction protocol exists for such codes. An example of a large code family containing quantum LDPC codes is the hypergraph-product (HP) codes \cite{Tillich-Zemor-2009} generalizing the toric code. Such a code can be constructed from two binary matrices, $\mat{H}_{1}$ (dimensions $r_{1}\times n_{1}$) and $\mat{H}_{2}$ (dimensions $r_{2}\times n_{2}$), as a CSS code with the generator matrices \cite{Kovalev-Pryadko-2012} \begin{equation} {\cal G}_{X}=(E_{2}\otimes\mathcal{H}_{1},\mathcal{H}_{2}\otimes E_{1}),\;\, {\cal G}_{Z}=(\mathcal{H}_{2}^{T}\otimes\widetilde{E}_{1}, \widetilde{E}_{2}\otimes\mathcal{H}_{1}^{T}). \label{eq:Till} \end{equation} Here each matrix is composed of two blocks constructed as Kronecker products (denoted with ``$\otimes$''), and $E_{1}$, $\widetilde{E}_{1}$, $E_{2}$, $\widetilde{E}_{2}$ are unit matrices of dimensions given by $r_{1}$, $n_{1}$, $r_{2}$ and $n_{2}$, respectively. Let us denote the parameters of the classical codes using $\mat{H}_i$, $\mat{H}_i^T$ as parity check matrices, $\mat{C}^\perp_{\mat{H}_{i}}=[n_{i},k_{i},d_{i}]$, $\mat{C}^\perp_{\mat{H}_{i}^{T}}= [{\widetilde{n}}_{i},\widetilde{k}_{i},\widetilde{d}_{i}]$, $i=1,2$, with the convention\cite{Tillich-Zemor-2009} that the distance $d=\infty$ if the corresponding $k=0$. Then the parameters of the HP code are $n=n_{2}r_{1}+n_{1}r_{2}$, $k=k_{1}\tilde{k}_{2}+\tilde{k}_{1}k_{2}$, while the distance $d$ satisfies\cite{Tillich-Zemor-2009} a lower bound $d\ge\min(d_{1},d_{2},\widetilde{d}_{1},\widetilde{d}_{2})$ and two upper bounds: if $\widetilde{k}_{2}>0$, then $d\le d_{1}$; if $\widetilde{k}_{1}>0$, then $d\le d_{2}$. Particularly simple is the case when both binary codes are \emph{cyclic}, with the property that all cyclic shifts of a code vector also belong to the code\cite{MS-book}. A parity check matrix of such a code can be chosen \emph{circulant}, with the first row using the coefficients of the \emph{check} polynomial $h(x)\equiv c_{0}+c_{1}x+\ldots+c_{n-1}x^{n-1}$ which is a factor of $x^n-1$. Then, we can choose both circulant matrices $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ in Eq.\ \eqref{eq:Till} to be square, $n_i\times n_i$, which gives a CSS code with the parameters $[[2n_1n_2,2k_1k_2,\min(d_1,d_2)]]$. In particular, the toric codes\cite{kitaev-anyons,Dennis-Kitaev-Landahl-Preskill-2002} are obtained when the circulant matrices $\mathcal{H}_{1}$, $\mathcal{H}_{2}$ are generated by the polynomial $h(x)=1+x$, with $k_i=1$ and $d_i=n_i$, $i=1,2$. \section{Statistical mechanics of decoding.} \subsection{Maximum likelihood decoding} \ Let us consider one of the simplest error models, where the bit flip and phase flip errors happen independently and with equal probability $p$. The corresponding transformation of the single-qubit density matrix can be written as \begin{equation} \rho\mapsto p_{I}\rho+p_{x}X\rho X+p_{y}Y\rho Y+p_{z}Z\rho Z\:, \label{eq:decoh} \end{equation} where $p_{I}=(1-p)^2$, $p_x=p_z=p(1-p)$, $p_y=p^2$. After relabeling the axes ($y\leftrightarrow z$) this can be interpreted in terms of the amplitude/phase damping model with some constraint on the decoherence times $T_1$, $T_2$. Our goal, however, is not to consider the most general case, but to construct a simple statistical mechanical model.
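As a quick sanity check (a sketch, not part of the derivation), sampling independent $X$ and $Z$ flips indeed reproduces the single-qubit Pauli probabilities in Eq.~\eqref{eq:decoh}:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
p, trials = 0.1, 200_000
v = rng.random(trials) < p            # X component of the error
u = rng.random(trials) < p            # Z component of the error
pI = np.mean(~v & ~u); px = np.mean(v & ~u)
pz = np.mean(~v & u);  py = np.mean(v & u)
print(pI, px, pz, py)  # ~ (1-p)^2, p(1-p), p(1-p), p^2 = 0.81, 0.09, 0.09, 0.01
\end{verbatim}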
For the uncorrelated errors described by the completely-positive trace-preserving map~\eqref{eq:decoh}, the probability of an error described by the binary vector $\mathbf{e}=(\mathbf{v},\mathbf{u})$ (see the \textbf{Background} section) is \begin{equation} \label{eq:error-probability} P(\mathbf{\mathbf{e}})=\prod_{i=1}^{N_{\rm b}} p^{e_i}(1-p)^{1-e_i}= p^w(1-p)^{N_{\rm b}-w}, \end{equation} where $N_{\rm b}=2n$ and $ w\equiv \mathop{\rm wgt}(\mathbf{e})= \mathop{\rm wgt}(\mathbf{v})+\mathop{\rm wgt}(\mathbf{u})$ is the regular binary weight. Now, with a stabilizer code, all degenerate errors have the same effect and cannot be distinguished. Thus, one considers the net probability of an error having the same effect as $\mathbf{e}$, \begin{equation} P_0(\mathbf{e})={1\over 2^{N_{g}}}\sum_{\boldsymbol{\sigma}} p^w (1-p)^{N_{\rm b}-w},\quad w\equiv \mathop{\rm wgt}(\mathbf{e}+\boldsymbol{\sigma} G), \label{eq:prob0} \end{equation} where the generator matrix $G$ (see Eq.\ \eqref{eq:generator-matrix}) has dimensions $N_s\times N_{\rm b}$, and a non-zero $N_g\equiv N_s-\mathop{\rm rank} G$ accounts for possible linearly-dependent rows of $G$, cf.\ Eq.\ \eqref{eq:generalized-wegner}. The errors in Eq.\ \eqref{eq:prob0} are exactly degenerate with $\mathbf{e}$, but they do not exhaust all the errors with the same syndrome as $\mathbf{e}$. It is thus convenient to introduce the probability of an error equivalent to $\mathbf{e}$ shifted by a codeword $\mathbf{c}$, \begin{equation} P_\mathbf{c}(\mathbf{e})\equiv P_0(\mathbf{e}+\mathbf{c}), \label{eq:prob-c} \end{equation} and the total probability of an error with the syndrome $\mathbf{s}\equiv \widetilde G\mathbf{e}^T$, \begin{equation} \label{eq:prob-tot} P_{\rm tot}(\mathbf{s})=\sum_{\mathbf{c}}P_\mathbf{c}(\mathbf{e}), \end{equation} where $\mathbf{e}$ is any vector that gives the syndrome $\mathbf{s}$, and the summation is done over all $2^{2k}$ inequivalent codewords: length-$N_{\rm b}$ zero-syndrome vectors, $\widetilde G \mathbf{c}^T=0$, that are linearly independent of the rows of $G$, see the \textbf{Background} section. When combined with the summation over the degeneracy vectors generated by the rows of $G$, see Eqs.\ \eqref{eq:prob0} and \eqref{eq:prob-c}, the summation in Eq.\ \eqref{eq:prob-tot} can be rewritten as that over all zero-syndrome vectors, \begin{equation} \label{eq:prob-tot-mod} P_\mathrm{tot}(\mathbf{s})=\!\!\!\!\sum_{\mathbf{x}:\widetilde G \mathbf{x}^T=0}\!\!\!\!p^w (1-p)^{N_{\rm b}-w},\;\, w\equiv \mathop{\rm wgt}(\mathbf{e}+\mathbf{x}). \end{equation} The probability \eqref{eq:prob-tot-mod} is normalized properly, so that the summation over all allowed syndrome vectors gives 1, \begin{equation} \label{eq:prob-tot-summed} \sum_\mathbf{s}P_\mathrm{tot}(\mathbf{s})=1. \end{equation} When decoding is done, only the measured syndrome $\mathbf{s}$ is known. For \emph{maximum likelihood} (ML) decoding, the inferred error vector corresponds to the most likely configuration given the syndrome.
To find it, we can start with some error configuration $\mathbf{e}\equiv \mathbf{e}_\mathbf{s}$ corresponding to the syndrome $\mathbf{s}$, and find a codeword $\mathbf{c}=\mathbf{c}_\mathrm{max}(\mathbf{e})$ such that the corresponding equivalence class $\mathbf{e}+\mathbf{c}$ has the largest probability, \begin{equation} P_{\mathbf{c}_\mathrm{max}(\mathbf{e})}(\mathbf{e})= P_\mathrm{max}(\mathbf{s})\equiv \max_\mathbf{c} P_\mathbf{c}(\mathbf{e}).\label{eq:P-max} \end{equation} Unlike the codeword $\mathbf{c}_\mathrm{max}(\mathbf{e})$, which depends on the choice of $\mathbf{e}$, the maximum probability $P_\mathrm{max}(\mathbf{s})$ depends only on the syndrome $\mathbf{s}\equiv \widetilde G\mathbf{e}^T$. The conditional probabilities of successful and of failed recovery given some unknown error with the syndrome $\mathbf{s}$ become \begin{equation} \label{eq:P-recovery-of-s} P_\mathrm{succ}(\mathbf{s}) ={P_\mathrm{max}(\mathbf{s})\over P_\mathrm{tot}(\mathbf{s})},\quad P_\mathrm{fail}(\mathbf{s})\equiv 1-P_\mathrm{succ}(\mathbf{s}). \end{equation} The net probability of successful recovery averaged over all errors can be written as \begin{equation} \label{eq:prob-recovery} P_\mathrm{succ}\equiv [ P_\mathrm{succ}(\mathbf{s}_\mathbf{e})] =\sum_\mathbf{s}P_\mathrm{max}(\mathbf{s}). \end{equation} Here and in the following $[f(\mathbf{e})]\equiv \sum_\mathbf{e}P(\mathbf{e}) f(\mathbf{e})$ denotes the averaging over the errors with the probability \eqref{eq:error-probability}. The result on the r.h.s.\ was obtained by partial summation over all errors with the same syndrome, cf.\ the syndrome probability \eqref{eq:prob-tot}. Asymptotically successful recovery with probability one for an infinite family of QECCs implies that in the limit of large $n$, $P_\mathrm{succ}\to1$ while $P_\mathrm{fail}\to0$. Alternatively, in this limit Eqs.\ \eqref{eq:P-recovery-of-s} and \eqref{eq:prob-recovery} give \begin{equation}\label{eq:max-phase} \left[\dfrac{P_{\rm max}(\mathbf{s}_{\bf e})}% {P_{\rm tot}(\mathbf{s}_{\bf e})}\right]\to1. \end{equation} Comparing Eqs.\ \eqref{eq:prob-tot-summed} and \eqref{eq:prob-recovery}, we see that asymptotically, for each error that is likely to happen, the sum \eqref{eq:prob-tot} is dominated by a single term with $\mathbf{c}=\mathbf{c}_\mathrm{max}(\mathbf{e})$. We can state this formally as \begin{lemma} \label{lemma:upside-down} For an infinite family of quantum codes, successful decoding with probability one implies that asymptotically at large $n$, the ratio $$ r(\mathbf{e})\equiv { P_\mathrm{max}(\mathbf{s}_\mathbf{e})\over P_\mathrm{tot}(\mathbf{s}_\mathbf{e})}= {P_{\mathbf{c}_\mathrm{max}(\mathbf{e})}({\bf e})\over \sum_{\bf c} P_{\bf c}({\bf e})}\to1 $$ for any error configuration $\mathbf{e}$ likely to happen. \end{lemma} \begin{proof} Note that $r(\mathbf{e})< 1$. Indeed, the summation in the denominator is over all $\mathbf{c}$; one of the terms corresponds to $\mathbf{c}_\mathrm{max}(\mathbf{e})$, while the remaining terms are positive. Now, let us choose an arbitrarily small $\epsilon>0$ and separate the errors into ``good'' where $1-r(\mathbf{e})<\epsilon$ and ``bad'' where $1-r(\mathbf{e})\ge \epsilon$. Use the following Bayesian expansion for the successful decoding probability: \begin{equation} \label{eq:Bayesian-expansion} P_\mathrm{succ}=(1-P_\mathrm{bad}) \left[ r(\mathbf{e})\right]_\mathrm{good}\!\!\!\!\!\!\! +P_\mathrm{bad}\left[ r(\mathbf{e})\right]_\mathrm{bad}\!\!\!\!\!\!, \end{equation} where the averaging in each term is limited to a particular type of errors as indicated.
The first term can be bounded from above by $1-P_\mathrm{bad}$, while the second one by $P_\mathrm{bad}(1-\epsilon)$, which gives \begin{equation} P_\mathrm{succ}\le 1-\epsilon P_\mathrm{bad}. \label{eq:P-succ} \end{equation} Since $P_\mathrm{fail}=1-P_\mathrm{succ}\to0$ at large $n$, the probability $P_\mathrm{bad}$ can be made arbitrarily small by choosing large enough $n$. \end{proof} Generally, given an infinite family of codes, asymptotically certain recovery is possible for sufficiently small $p<p_c\le1/2$, as well as in the symmetric region $p>1-p_c$, while it is not guaranteed in the remaining interval $p_c\le p\le 1-p_c$. This defines the ML decoding transition. \subsection{Random bond spin model} \label{sec:spin-model} \ Given the well-established parallel between Wegner's models and binary codes\cite{Sourlas-1989,Nishimori-book}, it is straightforward to come up with a spin model matching the probabilities defined in the previous section. We use the binary error $\mathbf{e}$ to introduce the bond disorder, flipping the signs of the couplings according to $(-1)^{e_b}$, and consider Wegner's partition function \eqref{eq:generalized-wegner} with $\Theta=G$, \begin{equation} \label{eq:partition0} Z_0(\mathbf{e};\beta)\equiv \mathscr{Z}_{\mathbf{e},\mathbf{0}}(G;\{K_b=\beta\}). \end{equation} The normalization is such that the probability in Eq.\ \eqref{eq:prob0} is recovered on the Nishimori line \eqref{eq:nishimori-temperature}, \begin{equation} \label{eq:nishimori-line} P_0(\mathbf{e})=Z_0(\mathbf{e};\beta_p),\quad e^{-2\beta_p}=p/(1-p). \end{equation} To shorten the notation, we will omit the inverse temperature $\beta$ whenever it is not likely to cause confusion, $ Z_0(\mathbf{e})\equiv Z_0(\mathbf{e};\beta)$, and use $P_0(\mathbf{e})$ at the Nishimori line, $\beta=\beta_p$. We also define the partition function with an \emph{extended defect} of flipped bonds at the support of the codeword $\mathbf{c}$, $Z_\mathbf{c}(\mathbf{e};\beta)\equiv Z_\mathbf{0}(\mathbf{e}+\mathbf{c};\beta)$ [cf.\ Eq.\ \eqref{eq:prob-c}], the corresponding maximum $Z_\mathrm{max}(\mathbf{s};\beta)\equiv Z_{\mathbf{c}_\mathrm{max}}(\mathbf{e};\beta)$ [the maximum is reached at $\mathbf{c}_\mathrm{max}\equiv\mathbf{c}_\mathrm{max}(\mathbf{e};\beta)$ which may differ from that in Eq.\ \eqref{eq:P-max} depending on the temperature], as well as an analog of $P_\mathrm{tot}(\mathbf{s})$ [Eq.\ \eqref{eq:prob-tot}], \begin{equation} \label{eq:Z-tot} Z_\mathrm{tot}(\mathbf{s};\beta)=\mathscr{Z}_{\mathbf{e},\mathbf{0}}(\widetilde G^*;\{K_b=\beta\}), \end{equation} where the binary matrix $\widetilde G^*$ is exactly dual to $\widetilde G$, namely $\widetilde G^*\widetilde G^{T}=0$ and $\mathop{\rm rank} \widetilde G+\mathop{\rm rank} \widetilde G^*=N_{\rm b}$ (cf.\ Eq.\ \eqref{eq:dual-code}), and we used the fact that $\widetilde G^*$ is a generating matrix for all vectors $\mathbf{x}$ in Eq.\ \eqref{eq:prob-tot-mod}. Except for disorder, the partition function \eqref{eq:Z-tot} is related to Eq.\ \eqref{eq:partition0} by Wegner's duality transformation \cite{Wegner71}, \begin{equation} \dfrac{2^{(N_g-N_{s})/2}\mathscr{Z}_{\mathbf{e},\mathbf{0}}(\Theta,\{K\})}{ \prod_{b}\sqrt{(\tanh K_{b})^{2}+1}}= \dfrac{2^{(N_g^*-N_{s}^{*})/2}\mathscr{Z}_{\mathbf{0},\mathbf{e}}(\Theta^*,\{K^{*}\})}{ \prod_{b}\sqrt{(\tanh K_{b}^{*})^{2}+1}}, \label{eq:duality} \end{equation} where bonds are defined by the columns of an $N_s^*\times N_\mathrm{b}$ binary matrix $\Theta^*$ exactly dual to $\Theta$, see Eqs.\ \eqref{eq:dual-code} and \eqref{eq:generalized-wegner}.
The dual model has the same number of bonds, $N_{\rm b}^{*}=N_{\rm b}$, $N_s^*$ spins, and its ground-state degeneracy parameter is $N_{g}^{*}=N_{s}^{*}-\mathop{\rm rank} \Theta^{*}$. The coupling parameters of mutually dual bonds are related by $\tanh K_{b}=\exp(-2K_{b}^{*})$. The conjugation in Eq.\ \eqref{eq:Z-tot} just rearranges the order of bonds and therefore leaves the partition/correlation function invariant, except for the corresponding permutation of bond-specific variables: coupling parameters $K_b$ and electric and magnetic charges, \begin{equation} \mathscr{Z}_{\mathbf{e},\mathbf{m}}(\widetilde G^*,\{K\}) =\mathscr{Z}_{\widetilde{\mathbf{e}},\widetilde{\mathbf{m}}} (G^*,\{\widetilde K\}). \label{eq:conjugation} \end{equation} We note in passing that the binary matrices $\Theta$ and $\Theta^*$ defining the mutually dual partition functions in Eq.\ \eqref{eq:duality} can also be thought of as the generating matrices of the two dual binary codes [Eq.\ \eqref{eq:dual-code}], with some additional linearly dependent rows. In fact, Wegner's duality has long been known in coding theory as the MacWilliams identities between weight generating polynomials of dual codes\cite{MacWilliams-1963,MS-book}. For a CSS code with the generator matrix in the form \eqref{eq:CSS} the partition function \eqref{eq:partition0} splits into a product of those for two non-interacting models corresponding to the matrices ${\cal G}_{X}$ and ${\cal G}_{Z}$, see Eq.\ \eqref{eq:Z0}. In addition, the two models defined by ${\cal G}_{X}$ and ${\cal G}_{Z}$ are dual to each other modulo logical operators. We can find the ground state degeneracies $2^{N_g^\mu}$, $\mu=X,Z$, of the corresponding models from $N_g^\mu=N_s^\mu-\mathop{\rm rank} {\cal G}_\mu$, where $N_s^\mu$, $\mu=X,Z$, denotes the number of rows in the matrix ${\cal G}_\mu$. For hypergraph-product codes in Eq.\ \eqref{eq:Till} the ground state degeneracy is given by\cite{Kovalev-Pryadko-2012} $N_g^X=\tilde{k}_{1}\tilde{k}_{2}$ and $N_g^Z=k_{1} k_{2}$. \begin{example} \label{ex:CSS} For a CSS code with the generator matrix \eqref{eq:CSS}, the partition function \eqref{eq:partition0} is a product of those for two mutually decoupled spin models defined by the matrices $\Theta={\cal G}_X$ and $\Theta={\cal G}_Z$, respectively, see Eq.\ \eqref{eq:Z0}. Since ${\cal G}_X {\cal G}_Z^T=0$, in the absence of disorder these models are mutually dual, modulo logical operators. \end{example} \begin{example} \label{ex:TZ-vanilla} HP codes in Eq.\ \eqref{eq:Till} are CSS codes. In the special case $\mathcal{H}_{1}=\mathcal{H}_{2}^T$, the matrices ${\cal G}_X$ and ${\cal G}_Z$ can be mapped to each other by permutations of rows and columns; the two spin models \eqref{eq:Z0} are identical. In the absence of disorder both models are self-dual, modulo logical operators. \end{example} \begin{example} \label{ex:extended-cyclic} Suppose the matrices $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ in Eq.\ \eqref{eq:Till} are square and circulant, corresponding to two cyclic codes with generally different check polynomials $h_1(x)$ and $h_2(x)$. Then the matrices ${\cal G}_X$ and ${\cal G}_Z$ can be mapped to each other by permutations of rows and columns, and thus in the absence of disorder the corresponding spin models \eqref{eq:Z0} are self-dual modulo logical operators. This map is generally different from that in the previous example.
This case has a nice layout on a square lattice with periodic boundary conditions, with the horizontal and vertical bonds $R_b$ in Eq.\ \eqref{eq:generalized-wegner} formed according to the pattern of coefficients in the polynomials $h_1(x)$ and $h_2(x)$. In particular, with $h_1(x)=h_2(x)=1+x$, the hypergraph-product code is a toric code, while Eq.\ \eqref{eq:Z0} gives two mutually decoupled Ising models. \end{example} \begin{example} \label{ex:DT} Debierre and Turban \cite{Debierre-Turban-1983} suggested a model that corresponds to a CSS code in the previous example with the check polynomials $h_{1}(x)=1+x$ and $h_{2}(x)=1+x+\ldots+x^{l-1}$ for some positive integer $l$. The two binary codes have $k_1=1$ (codewords are the all-one or all-zero vectors), and, with $n_2$ divisible by $l$, $k_{2}=l-1$ ($2^{l-1}$ codewords given by the repetitions of all length-$l$ even-weight vectors). With $l=3$, each of the two equivalent spin models~\eqref{eq:Z0} has four ground states in a pattern of stripes given by the repetitions of the vectors $[1,1,0]$, $[0,1,1]$, $[1,0,1]$ or $[0,0,0]$. A boundary between two distinct ground states produces a pattern of ``unhappy'' bonds that corresponds to an extended defect $\mathbf{c}$ in Eq.\ \eqref{eq:prob-tot}, see Fig.~1, Right. \end{example} \begin{figure}[tbp] \centering \includegraphics[width=1.\columnwidth]{Fig1} \caption{Left and Center: two basis ground states of the spin model in Example \protect\ref{ex:DT}, with black squares corresponding to flipped spins. An arbitrary ground state of this spin model is a linear combination of these two. Right: a domain wall between two such ground states. Green squares show the pattern of vertical and horizontal bonds involving interactions of two or three spins, respectively. A column of ``unhappy'' bonds forming the domain wall is shown with red.} \label{fig:fig1} \end{figure} \begin{figure}[tbp] \centering \includegraphics[width=1.\columnwidth]{Fig2} \caption{Left: nine ground states of the spin model corresponding to the ${\cal G}_X$ matrix of the HP code~\eqref{eq:Till} generated by circulant matrices $\mathcal{H}_{i}$ corresponding to $h_{1}(x)=h_2(x)=1+x+x^3$, where $n_1=n_2=21$ (they both must be multiples of $7$), see Example~\protect\ref{ex:882-18-12}. An arbitrary ground state of the spin model is a linear combination of these nine states. Right: a domain wall formed between two such ground states. Green squares on white background show the patterns of horizontal and vertical bonds, each involves three spins. A column of ``unhappy'' bonds forming the extended defect is shown with red.} \label{figfigB} \end{figure} \begin{example} \label{ex:882-18-12} Spin models corresponding to quantum hypergraph-product codes $[[98s^2,6s,4s]]$, $s=1,2,\ldots$. The model is constructed from $7s\times 7s$ circulant matrices $\mathcal{H}_{i}$ corresponding to $h_{i}(x)=1+x+x^3$, $i=1,2$. A ground state of such a model is a linear combination of the nine basis states with the unit cell shown in Fig.\ 2, Left. Fig.\ 2, Right, shows a boundary between two such ground states. \end{example} \subsection{Ordered state} \ In contrast to the spin glass theory of classical binary codes, where it is generally possible to apply a gauge transformation so that perfect decoding corresponds to a uniform magnetization \cite{Sourlas-1989,Nishimori-book}, this is not necessarily possible in the setting corresponding to a quantum code. For example, the models with the partition function \eqref{eq:partition0} include those with exact $S\to -S$ symmetry.
In such a case it appears natural to introduce the average spin as an order parameter. On the other hand, such a symmetry is not generic; the partition function \eqref{eq:generalized-wegner} may not even have any degeneracy if $\Theta$ is a full-row-rank matrix. Also, except for the toric and related codes local in 2D \cite{Dennis-Kitaev-Landahl-Preskill-2002}, it is not at all clear what would be the relation of such an order parameter to the decoding transition in a given code. Here, we define an \emph{ordered} phase as an analog of the region of parameters where asymptotically certain decoding is possible. We start with two definitions describing different phases: \begin{definitionX} \label{def:fixed} A \emph{fixed-defect phase} of the spin glass model \eqref{eq:partition0} corresponding to an infinite family of stabilizer codes has \begin{equation} [Z_\mathrm{max}(\mathbf{s}_\mathbf{e};\beta)/ Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)]\to1,\quad n\to\infty. \label{eq:fixed-defect} \end{equation} \end{definitionX} \begin{definitionX} \label{def:defect-free} A \emph{defect-free phase} of the spin glass model \eqref{eq:partition0} corresponding to an infinite family of stabilizer codes has \begin{equation} [Z_0(\mathbf{e};\beta)/ Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)]\to1,\quad n\to\infty. \label{eq:defect-free} \end{equation} \end{definitionX} We note that analogs of Lemma \ref{lemma:upside-down} apply for the ratios in Eqs.\ \eqref{eq:fixed-defect} and \eqref{eq:defect-free}. Thus, both in the fixed-defect and the defect-free phases, for any error $\mathbf{e}$ likely to happen, the partition function $Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)$ is going to be dominated by a single defect configuration, $\mathbf{c}_\mathrm{max}(\mathbf{e})$. In the defect-free phase, $\mathbf{c}_\mathrm{max}(\mathbf{e})=\mathbf{0}$, while in a fixed-defect phase one may have a non-trivial defect $\mathbf{c}_\mathrm{max}(\mathbf{e})\not\simeq\mathbf{0}$. \subsection{No fixed-defect phase on the Nishimori line} \ On the Nishimori line, the definition of a fixed-defect phase matches that of a region with asymptotically certain successful decoding, see Eq.\ \eqref{eq:max-phase}. The latter region terminates at the decoding transition at the single-bit error probability $p=p_c$. On the other hand, the proof of the lower bound on the decoding threshold from Ref.~\cite{Kovalev-Pryadko-FT-2013} actually establishes the existence of a defect-free phase on the Nishimori line, for small enough $p$. With both phases present, one would expect an additional transition between these phases at some $p<p_c$. Theorem \ref{th:Nishimori-line-defect-free} on p.\ \pageref{th:Nishimori-line-defect-free} shows that this does not happen because there is no fixed-defect phase along the Nishimori line. \begin{proof}[of Theorem \ref{th:Nishimori-line-defect-free}] Below the decoding transition, $p<p_c$, according to Lemma \ref{lemma:upside-down}, the probability $P_\mathrm{tot}(\mathbf{s})$ to obtain each likely syndrome is dominated by a single equivalence class of errors, represented by some $\mathbf{e}_0(\mathbf{s})$. On the Nishimori line the Boltzmann weights coincide with the error probabilities, Eq.\ \eqref{eq:nishimori-line}; thus the dominant class is also the one most likely to happen, as opposed to any other class of configurations corresponding to the same syndrome, which gives $\mathbf{c}_\mathrm{max}(\mathbf{e})\simeq\mathbf{0}$ for any likely error $\mathbf{e}$. \end{proof} In comparison, for $\beta\neq\beta_p$, the disorder probability distribution $P_0(\mathbf{e})$ is different from the partition function $Z_0(\mathbf{e};\beta)$.
In general, the dominant contribution to $Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)$ may then come from some other defect configuration $\mathbf{c}_\mathrm{max}(\mathbf{e};\beta)\not\simeq\mathbf{0}$. In practical terms, when designing a decoding algorithm, we can concentrate on the portion of the free energy corresponding to $Z_\mathbf{0}(\mathbf{e};\beta_p)$ and ignore the possibility of any non-trivial defects without affecting the decoding probability in the limit of large $n$. \subsection{Free energy of a defect} \ {\em In a fixed-defect phase:\/} Let us introduce the free energy cost of flipping the bonds corresponding to the non-zero bits of the codeword $\mathbf{c}$ on top of the flipped bond pattern in the most likely configuration $\mathbf{c}_\mathrm{max}(\mathbf{e})$ corresponding to an error $\mathbf{e}$ with the syndrome $\mathbf{s}=\widetilde G \mathbf{e}^T$, \begin{equation} \label{eq:defect-free-energy-max} \Delta F_\mathbf{c}^{\mathrm{max}}(\mathbf{\mathbf{s}};\beta)\equiv \beta^{-1}\log{ Z_\mathrm{max}(\mathbf{s})\over Z_{\mathbf{c}_\mathrm{max}(\mathbf{e})+\mathbf{c}}(\mathbf{e}) }. \end{equation} \begin{proof}[of Theorem \ref{th:divergent-defect-energy-max}] In the fixed-defect phase each syndrome $\mathbf{s}$ likely to happen must be characterized by a unique configuration of defects, with the other configurations strongly suppressed. The version of Lemma \ref{lemma:upside-down} appropriate for this phase (see Def.\ \ref{def:fixed}) implies that $\Delta F_\mathbf{c}^\mathrm{max}(\mathbf{s};\beta)\to \infty$ asymptotically at large $n$. The corresponding disorder average must also diverge at large $n$. \end{proof} If we introduce the minimum weight $d_\mathbf{c}$ of a bit string in the degeneracy class of $\mathbf{c}$, $d_\mathbf{c}\equiv \min_{\boldsymbol{\sigma}}\mathop{\rm wgt}(\mathbf{c}+\boldsymbol{\sigma} G)$, we can formulate the following bounds \begin{lemma} \label{lemma:Fmax-bounds} For any error $\mathbf{e}$ which gives the syndrome $\mathbf{s}$, any codeword $\mathbf{c}$, and any temperature $\beta^{-1}$, $0\le \Delta F_\mathbf{c}^{\mathrm{max}}(\mathbf{s};\beta)\le 2d_\mathbf{c}$. \end{lemma} \begin{proof} The lower bound follows trivially from the fact that $Z_{\rm max}(\mathbf{s})$ is the largest of $Z_\mathbf{c}(\mathbf{e})$. To prove the upper bound, use the Gibbs-Bogoliubov inequality in the form: \begin{equation} \beta^{-1}\log{Z_0(\mathbf{e}')\over Z_\mathbf{c}(\mathbf{e}')} \le \langle E_{\mathbf{c}+\mathbf{e}'}-E_{\mathbf{e}'}\rangle= \sum_{b:\,c_b\neq 0} 2\langle (-1)^{e'_b} R_b\rangle ,\label{eq:upper-bound} \end{equation} where $\mathbf{e}' \equiv \mathbf{e}+{\mathbf{c}_{\rm max}(\mathbf{e})}$ is the same-syndrome disorder configuration such that the maximum is reached at $\mathbf{c}=\mathbf{0}$, $E_{\mathbf{e}}\equiv -\sum_b (-1)^{e_b}R_b$ is the energy of a spin configuration, see Eq.\ \eqref{eq:generalized-wegner}, and the averaging is done over all spin configurations contributing to $Z_0(\mathbf{e}';\beta)$. Each term on the r.h.s.\ of Eq.\ \eqref{eq:upper-bound} is uniformly bounded from above, $2\langle (-1)^{e'_b} R_b\rangle\le 2$; this gives $\Delta F_\mathbf{c}^{\mathrm{max}}(\mathbf{s};\beta)\le 2\mathop{\rm wgt}(\mathbf{c})$. Minimizing over the vectors degenerate with $\mathbf{c}$ gives the stated result. \end{proof} Note that at zero temperature and in the absence of disorder, $\mathbf{e}=\mathbf{0}$, the upper bound in Lemma~\ref{lemma:Fmax-bounds} is saturated.
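Both bounds and the zero-temperature saturation are easy to check by brute force on a toy model. A sketch (Python with numpy; the model is an Ising ring of three spins with $\Theta$ its vertex--edge incidence matrix, no disorder, $\mathbf{e}=\mathbf{0}$, so that $\mathbf{c}_\mathrm{max}=\mathbf{0}$ and $\Delta F_\mathbf{c}^\mathrm{max}=\Delta F_\mathbf{c}^{(0)}$; the chosen defect class has $d_\mathbf{c}=1$):
\begin{verbatim}
import numpy as np
from itertools import product

Theta = np.array([[1,0,1],     # spin 1 enters bonds (12) and (31)
                  [1,1,0],     # spin 2 enters bonds (12) and (23)
                  [0,1,1]])    # spin 3 enters bonds (23) and (31)

def logZ(e, beta):             # brute-force Wegner sum; N_g = 3 - rank = 1
    tot = 0.0
    for S in product((1,-1), repeat=3):
        R = [np.prod([S[r] for r in range(3) if Theta[r,b]])
             for b in range(3)]
        tot += np.exp(beta * sum((-1)**e[b] * R[b] for b in range(3)))
    return np.log(tot / 2 / (2*np.cosh(beta))**3)

c = [0, 1, 0]                  # one flipped bond: d_c = 1, bound 2*d_c = 2
for beta in (0.5, 1.0, 2.0, 4.0, 8.0):
    dF = (logZ([0,0,0], beta) - logZ(c, beta)) / beta
    print(beta, round(dF, 4))  # stays in [0, 2]; approaches 2 as beta grows
\end{verbatim}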
We conjecture that a similar asymptotic scaling, with some finite \begin{equation} \lambda_\mathbf{c}\equiv {[\Delta F_\mathbf{c}^\mathrm{max}(\mathbf{s}_\mathbf{e};\beta)]\over d_\mathbf{c}},\label{eq:defect-tension} \end{equation} should be valid for the free energy increments averaged over disorder, with the defect \emph{tension} $\lambda_\mathbf{c}$ analogous to the domain-wall tension in the 2D Ising model. In the fixed-defect phase, where $\Delta F_\mathbf{c}^\mathrm{max}$ is expected to diverge, we thus expect the tensions~\eqref{eq:defect-tension} to be non-zero, $\lambda_\mathbf{c}>0$. {\em In the defect-free phase:\/} In such a phase, the total partition function~\eqref{eq:Z-tot} is entirely dominated by that without any extended defects, see Eq.\ \eqref{eq:partition0}. Instead of Eq.\ \eqref{eq:defect-free-energy-max}, it is convenient to consider the free energy increment for flipping the bonds corresponding to the codeword $\mathbf{c}$ starting from a given disorder configuration $\mathbf{e}$, \begin{equation} \label{eq:defect-free-energy-zero} \Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta)\equiv \beta^{-1} \log {Z_0(\mathbf{e};\beta)\over Z_\mathbf{c}(\mathbf{e};\beta)}. \end{equation} Similar to the upper bound in Lemma \ref{lemma:Fmax-bounds}, we can state \begin{equation} \label{eq:F0-bound-d} \Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta)\le 2d_\mathbf{c}; \end{equation} however, the corresponding lower bound might be violated for some disorder configurations $\mathbf{e}$ where $\mathbf{c}_\mathrm{max}(\mathbf{e})\not\simeq\mathbf{0}$. In the defect-free phase, the total probability of such configurations, $P_\mathrm{defect}$, as well as that of the configurations where $\Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta)$ remains bounded, $P_\mathrm{finite}$, should be vanishingly small at large $n$, $P_\mathrm{defect}+P_\mathrm{finite}\to0$. The corresponding bounds can be readily formulated by analogy with Lemma~\ref{lemma:upside-down}. As a result, while in general the increments in Eqs.\ \eqref{eq:defect-free-energy-max} and \eqref{eq:defect-free-energy-zero} differ in both the initial and the final states and cannot be easily compared, in the defect-free phase the corresponding averages should coincide asymptotically at $n\to\infty$. In particular, this implies $[\Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta)]\to\infty$ at large $n$ in the defect-free phase. {\em On the Nishimori line:\/} According to Theorem \ref{th:Nishimori-line-defect-free}, the only ordered phase on the Nishimori line is the defect-free phase. This immediately gives Corollary~\ref{th:corollary}. On the Nishimori line, it is convenient to consider the free energy $\Delta F_\mathbf{c}(\mathbf{s};\beta)$ of a defect $\mathbf{c}$ averaged over the errors $\mathbf{e}$ with the same syndrome, $\mathbf{s}=\widetilde G\mathbf{e}^T$, \begin{equation} \label{eq:defect-free-energy-ave-s} \Delta F_\mathbf{c}(\mathbf{s};\beta)\equiv \left[ \Delta F_\mathbf{c}^{(0)}(\mathbf{e};\beta) \right]_\mathbf{s}, \end{equation} where the average is extended over all non-equivalent codewords $\mathbf{c}$, \begin{equation} \label{eq:syndrome-averaging} \left[f(\mathbf{e})\right]_\mathbf{s}\equiv \sum_\mathbf{c} {P_0(\mathbf{e}+\mathbf{c})\over P_\mathrm{tot}(\mathbf{s})} f(\mathbf{e}+\mathbf{c}).
\end{equation} For the average \eqref{eq:defect-free-energy-ave-s}, we prove the following version of Lemma~\ref{lemma:Fmax-bounds}: \begin{lemma} \label{lemma:F-syndrome-averaged-Nishimori} On the Nishimori line, for every allowed syndrome $\mathbf{s}$ and every codeword $\mathbf{c}$, the free energy averaged over the errors with the same syndrome satisfies $0\le \Delta F_\mathbf{c}(\mathbf{s};\beta_p) \le 2d_\mathbf{c} $. \end{lemma} \begin{proof}% The upper bound is trivial since it applies for every term in the average, see Eq.\ \eqref{eq:F0-bound-d}. The lower bound follows from the Gibbs inequality. Explicitly, introduce two normalized distribution functions of codewords $\mathbf{b}$: $f_{\mathbf{b}}\equiv P_\mathbf{0}(\mathbf{e}')/P_\mathrm{tot}(\mathbf{s})$, $ g_{\mathbf{b}}\equiv P_\mathbf{c}(\mathbf{e}')/P_\mathrm{tot}(\mathbf{s})$, where $\mathbf{e}'\equiv \mathbf{e}+\mathbf{b}$; then, using the map~\eqref{eq:nishimori-line} on the Nishimori line, $$ \beta\Delta F_\mathbf{c}(\mathbf{s};\beta_p)=\sum_{\mathbf{b}} f_\mathbf{b}\log {f_\mathbf{b}\over g_\mathbf{b}}\ge \sum_\mathbf{b}f_\mathbf{b}\left(1-{g_\mathbf{b}\over f_\mathbf{b}}\right)=0, $$ where the summation is done over all non-equivalent codewords $\mathbf{b}$ and we used $\log (x)\ge 1-1/x$. \end{proof} Note that this Lemma gives an alternative proof of Theorem \ref{th:Nishimori-line-defect-free}. \subsection{Self-averaging} \ The conditions of Theorem \ref{th:divergent-defect-energy-max} guarantee that the disordered system is not in a spin glass phase. Self-averaging of the partition functions $Z_\mathbf{c}(\mathbf{e};\beta)$ would immediately imply the statement of the theorem. Note, however, that (\textbf{i}) in the presence of disorder, self-averaging is not expected for the partition function even in the case of the toric codes, as fluctuations can be exponentially large, and (\textbf{ii}) spin models corresponding to general families of quantum codes, whether LDPC or not, are expected to involve highly non-local interactions. Thus, without additional conditions, one cannot guarantee self-averaging even for the free energy. However, we did not rely on self-averaging in any of the proofs. In particular, the results in this section apply to spin models corresponding to finite-rate quantum hypergraph-product and related codes\cite{Tillich-Zemor-2009,Kovalev-Pryadko-2012} that can be obtained from random binary LDPC codes: \begin{example} \label{ex:finite-R} This is a special case of the model in Example \ref{ex:TZ-vanilla}. Consider a \emph{random} binary matrix ${\cal H}$ with $h$ non-zero entries per column and $v$ per row, with $h<v$, e.g., see Ref.~\cite{Gallager-1962}. The rate of the corresponding binary code ${\cal C}_{\cal H}^\perp$ with parameters $[n_c,k_c,d_c]$ is bounded from below, $R_\mathrm{c}\equiv k_\mathrm{c}/n_\mathrm{c}\ge 1-h/v$. With high probability at large $n_\mathrm{c}$, the classical code will have a relative distance in excess of $\delta_\mathrm{c}\equiv\delta_c(h,v)$ given in Ref.~\cite{Gallager-1962}. Such an $[n_{c},k_{c},d_{c}]$ code produces a quantum HP code \eqref{eq:Till} with ${\cal H}_1={\cal H}_2^T={\cal H}$, which is a quantum LDPC code with the asymptotic rate $k/n\ge (v-h)^{2}/(h^{2}+v^{2})$ and the distance scaling as $d/\sqrt{n}=\delta_{c}v/\sqrt{h^{2}+v^{2}}$. Such a code has a decoding transition at a finite $p$, see Ref.~\cite{Kovalev-Pryadko-FT-2013}.
Our present results indicate that each of the corresponding spin models \eqref{eq:Z0} has non-local bonds involving up to $v$ spins, an exponentially large number of mutually inequivalent extended defects, and an ordered state where such defects do not appear. In addition, as already stated in Example \ref{ex:TZ-vanilla}, the two models are self-dual modulo logical operators. \end{example} \section{Phase transitions} \subsection{Transition to a disordered phase} \ {\em Transition mechanism:\/} An ordered phase (whether fixed-defect or defect-free) of the model~\eqref{eq:Z-tot} is characterized by a unique defect pattern $\mathbf{c}_\mathrm{max}(\mathbf{e})$ for every likely configuration of flipped bonds $\mathbf{e}$. In the case of a code family where $k$ remains fixed, for the stability of such a phase it is sufficient that non-trivial defects $\mathbf{c}\not\simeq\mathbf{0}$ have divergent free energies, as in Theorem \ref{th:divergent-defect-energy-max}. On the other hand, defects can proliferate if at least one of the free energies $\Delta F_\mathbf{c}^{\rm max}$ remains bounded in the asymptotic $n\to\infty$ limit. The situation is different in the case of a code family with divergent $k$, e.g., with fixed rate $R\equiv k/n$, as in Example~\ref{ex:finite-R}. Here, the number of different defects, $2^{2k}-1$, diverges exponentially at large $n$; in an ordered phase the free energies of individual defects must be large enough to suppress this divergence. This implies, in particular, that for a typical defect the tension~\eqref{eq:defect-tension} must exceed a certain limit. The statement of Theorem \ref{th:tension-average} concerns the corresponding average tension, \begin{equation} {\overline\lambda}\equiv(2^{2k}-1)^{-1} \sum_{\mathbf{c}\not\simeq\mathbf{0}}\lambda_\mathbf{c}. \label{eq:average-tension} \end{equation} \begin{proof}{of Theorem \ref{th:tension-average}.} Let us start with a version of Lemma~\ref{lemma:upside-down} for the fixed-defect phase (Def.\ \ref{def:fixed}): for any likely disorder configuration $\mathbf{e}$, \begin{equation} \sum_{\mathbf{c}\not\simeq\mathbf{0}}{Z_{\mathbf{c}+\mathbf{c}_\mathrm{max}(\mathbf{e})} (\mathbf{e};\beta) \over Z_\mathrm{max}(\mathbf{s}_\mathbf{e};\beta)}\to0, \label{eq:large-sum} \end{equation} asymptotically at $n\to\infty$. Note that we cannot just average this expression term-by-term, since unlikely errors could potentially dominate the sum, which involves an exponentially large number of terms. Instead, we fix some $\epsilon>0$ and first consider the average of Eq.\ \eqref{eq:large-sum} only over the ``good'' errors where the sum does not exceed $\epsilon$. Using the standard inequality $\exp\langle f\rangle\le \langle \exp f\rangle $, we obtain the following bound involving the averages of the free energies \eqref{eq:defect-free-energy-max} over ``good'' errors only: \begin{equation} \label{eq:large-sum-upper-bound-good} \sum_{\mathbf{c}\not\simeq\mathbf{0}}\exp\left({-\beta [\Delta F_\mathbf{c}^{\rm max}(\mathbf{s}_\mathbf{e};\beta)]_\mathrm{good}}\right)\le \epsilon. \end{equation} Rewriting this sum in terms of an average over non-trivial defects, which we denote as $\left\langle\, \cdot\,\right\rangle_{\mathbf{c}\not\simeq\mathbf{0}}$, and using the same inequality, we get \begin{equation} \label{eq:large-sum-upper-bound} (2^{2k}-1)\exp\left(-\beta \left\langle [\Delta F_\mathbf{c}^\mathrm{max}(\mathbf{s}_\mathbf{e};\beta)]_\mathrm{good} \right\rangle_{\mathbf{c}\not\simeq\mathbf{0}}\right)\le \epsilon.
\end{equation} It is convenient to introduce an analog of the tension~\eqref{eq:defect-tension} for finite $\epsilon$, \begin{equation} \label{eq:tension-epsilon} \lambda_\mathbf{c}^{(\epsilon)}\equiv {[\Delta F_\mathbf{c}^{\rm max}]_\mathrm{good}\over d_\mathbf{c}}, \end{equation} along with the corresponding average ${\overline\lambda}^{(\epsilon)}$ over non-trivial defects $\mathbf{c}\not\simeq\mathbf{0}$, defined as in Eq.\ \eqref{eq:average-tension}. According to Lemma~\ref{lemma:Fmax-bounds}, each of the tensions satisfies $0\le \lambda_\mathbf{c}^{(\epsilon)}\le 2$, which implies the same bounds for the average over defects, $0\le {\overline\lambda}^{(\epsilon)}\le 2$. With the help of the trivial upper bound $d_\mathbf{c}\le N_\mathrm{b}=2n$, Eq.\ \eqref{eq:large-sum-upper-bound} gives \begin{equation} \label{eq:large-sum-bound} (2^{2k}-1)\exp({-2n\beta{\overline\lambda}^{(\epsilon)}})\le\epsilon, \end{equation} which, after taking logarithms and using $k=Rn$, implies for large $n$ and $k$ \begin{equation} \label{eq:tension-bound-epsilon} \beta{\overline\lambda}^{(\epsilon)}\ge {k\over n}\log 2=R\log2. \end{equation} We can now introduce the full average tension ${\overline\lambda}$, which involves both ``good'' and ``bad'' errors, by writing a Bayesian expansion similar to Eq.\ \eqref{eq:Bayesian-expansion}. The key observation leading to the statement of the Theorem is that the contribution of ``bad'' errors disappears in the large-$n$ limit, since for each error configuration the tension is bounded, while the total probability of ``bad'' errors $P_\mathrm{bad}\to0$. \end{proof} As a consequence, for any code family with a finite rate $R$, we expect one of two possibilities at the transition to a disordered phase: (\textbf{i}) a transition driven by proliferation of some (e.g., finite) subset of the defects whose tensions $\lambda_\mathbf{c}$ vanish at the transition, with the average in Theorem \ref{th:tension-average} still finite; or (\textbf{ii}) a transition driven by the entropy of some macroscopic number of the defects, in which case the tensions of all defects remain bounded away from zero at the transition, $\lambda_\mathbf{c}\ge\lambda_0>0$. In case (\textbf{i}), one gets a phase with ``limited disorder'' where only some of all possible defects $\mathbf{c}$ may happen with non-zero probability at large $n$. {\em Continuity of the transition:\/} On the Nishimori line, the average energy is known exactly\cite{Nishimori-1981,Nishimori-1980,Nishimori-book}; it is a continuous function of the parameters. This guarantees the continuity of the decoding transition. The same conclusion can be drawn from the bound \eqref{eq:specific-heat-bound} on the specific heat along the Nishimori line---the derivation is identical to that in the standard case\cite{Nishimori-1980,Morita-Horiguchi-1980,Nishimori-1981,Nishimori-book}. On the other hand, away from the Nishimori line, the transition from an ordered to a disordered phase can be (and often is) discontinuous. In particular, mean-field analysis using the TAP equations (named for Thouless, Anderson, and Palmer, see Ref.~\cite{Thouless-Anderson-Palmer-1977}) generically gives a discontinuous transition for the local magnetization whenever the bonds $R_b$ couple more than two spins.
{\em Self-duality in the absence of disorder:\/} In the absence of errors, we can use Wegner's duality~\eqref{eq:duality} to relate the partition functions of the models with the generator matrices $G$ and $G^*$, that is, Eqs.\ \eqref{eq:partition0} and \eqref{eq:Z-tot}, since the matrices $G^*$ and $\widetilde G^*$ differ by an inessential permutation of columns (bonds). Assuming the transition is unique, whether continuous or not, it must happen at the self-dual point, $\sinh (2\beta_{\rm s.d.})=1$. Here Eq.\ \eqref{eq:duality} gives $Z_{\rm tot}(\mathbf{0};\beta_{\rm s.d.})= 2^k Z_{0}(\mathbf{0};\beta_{\rm s.d.})$, or, equivalently, \begin{equation} \label{eq:clean-self-dual-point} \sum_{\mathbf{c}\not\simeq\mathbf{0}}e^{-\beta_{\rm s.d.}\Delta F^{(0)}_\mathbf{c}(\mathbf{0};\beta_{\rm s.d.})}=2^k-1. \end{equation} This equation is exact since no disorder is involved. The summation over ${\bf c}$ here includes $2^{2k}-1$ terms, and the result is independent of the distance of the code. For a finite-$R$ code family, arguments similar to those in the proof of Theorem \ref{th:tension-average} give a lower bound ${\overline\lambda}_\mathrm{s.d.}\ge (R/2)\ln 2$, which is one half of the corresponding bound deep inside an ordered phase. {\em Location of the multicritical point:\/} In many types of local spin glasses on self-dual lattices, the transition from the ordered phase on the Nishimori line happens at a multicritical point whose location has been predicted to very good accuracy by the strong-disorder self-duality conjecture\cite{Nishimori-1979,Nishimori-Nemoto-2003,Nishimori-2007,% Nishimori-Ohzeki-2006,Ohzeki-Nishimori-Berker-2008,% Ohzeki-2009,Bombin-PRX-2012,Ohzeki-Fujii-2012}. In the case of Ising spin glasses, the corresponding critical probability $p_c\approx 0.110$ satisfies Eq.~\eqref{eq:self-duality-disordered}. The derivation of this expression\cite{Nishimori-1979} explicitly uses only the probability distribution of allowed energy values for a single bond. Our limited simulations indicate that for several quasi-local models (see Example~\ref{ex:extended-cyclic}) with finite $k$ the multicritical point is indeed located at $p_c\approx 0.11$, also very close to the Gilbert-Varshamov existence bound for zero-rate codes. However, for code families with finite rates $k/n$, see Example \ref{ex:finite-R}, the threshold probability must be below the Shannon limit \eqref{eq:shannon-threshold-CSS}, which means the self-duality conjecture must be strongly violated for $R>1/2$. \subsection{Transition between defect-free and fixed-defect phases} \ Theorem~\ref{th:Nishimori-line-defect-free} states that on the Nishimori line below the decoding transition the spin model \eqref{eq:partition0} is in the defect-free phase. If a distinct fixed-defect phase exists somewhere on the phase diagram, there is a possibility of a transition between these phases. More generally, the defect-free phase is a special case of an ordered fixed-defect phase. One can imagine transitions between two such phases. However, at least in the case of a temperature-driven transition, the spin model \eqref{eq:partition0} must become disordered at the transition point. Indeed, for a transition to happen at $T=T_0(p)$, for at least some of the likely disorder configurations $\mathbf{e}$, $Z_{\mathbf{c}_1}(\mathbf{e};\beta)$ must dominate for $T<T_0(p)$, while for $T>T_0(p)$ the same errors must be dominated by $Z_{\mathbf{c}_2}(\mathbf{e};\beta)$ with $\mathbf{c}_2\not\simeq\mathbf{c}_1$.
This implies that at the actual transition point some codewords must become degenerate with non-zero probability, which would violate the condition in Def.\ \ref{def:fixed}. Once the system becomes disordered at some $p$, one would generically expect it to remain disordered at larger $p$. For this reason, we expect that non-trivial fixed-defect phases are not common. \subsection{Absence of a local order parameter} \ In Examples \ref{ex:CSS} to \ref{ex:finite-R} we considered some spin models which do not have any gauge-like symmetries. However, the same approach can also be used to construct non-local spin models which have ``local'' gauge symmetries and at the same time highly non-trivial phase diagrams. The following example is a generalization of the mutually dual three-dimensional Ising model and random-plaquette $\mathbb{Z}_2$ gauge model. \begin{example} \label{ex:gauge} Consider a CSS code \eqref{eq:CSS} with the generators: \begin{eqnarray} {\cal G}_{X}&=& \left( E_{1}\otimes G,\;\;\; R\otimes E_{2}\right), \label{eq:3D1}\\ {\cal G}_{Z}&=& \left(\begin{array}{cc} R\otimes\widetilde{E}_{2},&\widetilde{E}_{1}\otimes G \\ E_{1}\otimes \widetilde{G},&0 \end{array}\right), \label{eq:3D2} \end{eqnarray} where $R$ is a square circulant matrix corresponding to the polynomial $h(x)=1+x$ and $G\equiv (G_X,G_Z)$ is the generator matrix \eqref{eq:generator-matrix} of an arbitrary quantum code. This construction follows the hypergraph-product code construction \eqref{eq:Till}, and the unit matrices $E_{1}$, $\widetilde{E}_{1}$, $E_{2}$, $\widetilde{E}_{2}$ are chosen accordingly. The additional block involving the conjugate matrix $\widetilde{G}=(G_{Z},G_{X})$ distinguishes this construction from the plain hypergraph-product construction. This code defines two non-interacting, mutually dual spin models \eqref{eq:Z0}. In particular, when $G$ corresponds to a toric code, we recover a three-dimensional Ising model for $\mu=X$, and a three-dimensional random-plaquette $\mathbb{Z}_2$ gauge model for $\mu=Z$. \end{example} A spin model with a local gauge symmetry cannot have a local order parameter\cite{Wegner71}. Thus, one cannot hope to construct a local order parameter that would describe the transition from a defect-free phase and be applicable to all of the models \eqref{eq:partition0}. The same result can be obtained by noticing that the transition from the defect-free phase can be driven by delocalization of any of the $2^{2k}-1$ non-trivial defects. For a finite-$R$ code family this number scales exponentially with $n$; we find it unlikely that an order parameter defined locally can distinguish this many possibilities. \subsection{Spin correlation functions} \ The average of any product of spin variables which cannot be expressed as a product of the bond variables in the Hamiltonian is zero \cite{Wegner71}. Thus, we consider the two most general non-trivial spin correlation functions: \begin{eqnarray} \label{eq:correlation-function-tot} Q^\mathbf{m}_\mathrm{tot}(\mathbf{e};\beta)&\equiv& {\mathscr{Z}_{\mathbf{e},\mathbf{m}}(\widetilde G^*;\{K_b=\beta\})\over \mathscr{Z}_{\mathbf{e},\mathbf{0}}(\widetilde G^*;\{K_b=\beta\})} ,\\ Q^\mathbf{m}_\mathbf{c}(\mathbf{e};\beta)&\equiv& {\mathscr{Z}_{\mathbf{e}+\mathbf{c},\mathbf{m}}( G;\{K_b=\beta\})\over \mathscr{Z}_{\mathbf{e}+\mathbf{c},\mathbf{0}}(G;\{K_b=\beta\})}; \label{eq:correlation-function-c} \end{eqnarray} both correlation functions satisfy $-1\le Q^\mathbf{m}(\mathbf{e};\beta)\le 1$.
The thermal average in Eq.\ \eqref{eq:correlation-function-c} corresponds to summation over spin configurations in $Z_\mathbf{c}(\mathbf{e};\beta)$, while that in Eq.\ \eqref{eq:correlation-function-tot} corresponds to the same defect and spin configurations that enter $Z_\mathrm{tot}(\mathbf{s};\beta)$, cf.\ Eq.\ \eqref{eq:Z-tot}. Using the explicit form \eqref{eq:generalized-wegner}, the definitions of $Z_\mathrm{tot}$ and $Z_\mathbf{c}$, and the fact that the additional linearly-independent rows in $\widetilde G^*$ form a basis of non-equivalent codewords $\mathbf{c}$, we can write the following expansion \begin{equation} \label{eq:correlation-function-expansion} Q_\mathrm{tot}^\mathbf{m}(\mathbf{e};\beta) =\sum_\mathbf{c}(-1)^{\mathbf{c}\cdot \mathbf{m}} {Z_\mathbf{c}(\mathbf{e};\beta)\, Q_\mathbf{c}^\mathbf{m}(\mathbf{e};\beta)\over Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)}. \end{equation} The correlation functions contain the products $\prod_b R_b ^{m_b}=\prod_r (S_r )^{\sum_b G_{rb}m_b}$, i.e., the product of spin variables in the support of the syndrome vector $\mathbf{s}_{\widetilde{\mathbf{m}}}\equiv G \mathbf{m}^T=\widetilde G\widetilde{\mathbf{m}}^T$ corresponding to $\mathbf{m}$. Thus, the defined correlation functions are trivially symmetric with respect to any gauge symmetries, $S_r\to S_r (-1)^{\alpha_r}$, $\boldsymbol\alpha G=0$ (present whenever there are $N_g>0$ linearly dependent rows of $G$), as well as under the transformations of $\mathbf{m}$ leaving the syndrome invariant, $\mathbf{m}\to \mathbf{m}+{\boldsymbol\gamma} \widetilde G$. \emph{Wilson loop:} In lattice gauge theory, in the absence of a local order parameter, the deconfining transition can be characterized by the average of the Wilson loop operator\cite{Wilson-loop-1974}, with the thermal and disorder average decaying exponentially with the loop area in the high-temperature phase, and exponentially with the perimeter in the low-temperature phase. In the case of the three-dimensional ${\mathbb Z}_2$ gauge model\cite{Wegner71,Kogut79}, see Example \ref{ex:gauge}, the corresponding correlator is a product of plaquette operators covering a certain surface. The correlation function \eqref{eq:correlation-function-c} is a natural generalization to non-local Ising models, with the minimum weight $d_\mathbf{m}\equiv \min_{\boldsymbol\gamma}\mathop{\rm wgt}(\mathbf{m}+{\boldsymbol\gamma}\widetilde G)$ of $\mathbf{m}$ corresponding to the area, and the binary weight of the syndrome $\mathbf{s}_{\widetilde{\mathbf{m}}}$ corresponding to the perimeter. Indeed, taking $\mathbf{e}=\mathbf{c}=\mathbf{0}$, at high temperatures the bond variables $R_b$ fluctuate independently, and one can write $Q_\mathbf{0}^\mathbf{m}(\mathbf{0};\beta)=\langle \prod_b R_b^{m_b}\rangle\propto \beta^{d_\mathbf{m}}$, which corresponds to the area law. The same quantity at low temperatures can be evaluated in leading order by substituting the average spin, $S_r\to \langle S_r\rangle\sim M$, with the result $Q_\mathbf{0}^\mathbf{m}(\mathbf{0};\beta)\propto M^{\mathop{\rm wgt} \mathbf{s}_{\widetilde{\mathbf{m}}}}$, the perimeter law. We expect such behavior to persist in a finite range of temperatures below the transition from the ordered phase, at least in the case of LDPC codes. However, in general there is no guarantee that the spin model \eqref{eq:partition0} has a unique transition, and the functional form of the spin correlation function \eqref{eq:correlation-function-c} with generic $\mathbf{m}$ cannot be easily found at intermediate temperatures.
For this reason, it remains an open question whether the scaling of the analog of the Wilson loop can be used to distinguish between specific disordered phases. \emph{Indicator correlation functions.} Consider the correlation function \eqref{eq:correlation-function-expansion} for $\mathbf{m}$ such that the corresponding syndrome is zero, $\mathbf{s}_{\widetilde{\mathbf{m}}}=\mathbf{0}$. Then the spin products in each term of the expansion disappear, and $Q^\mathbf{m}_\mathbf{c}(\mathbf{e};\beta)=1$ for any $\mathbf{c}$. The corresponding $\mathbf{m}$ are just the dual codewords $\widetilde{\mathbf{b}}$. In general, for a pair of codewords $\mathbf{b}$, $\mathbf{c}$, the scalar product $\mathbf{c}\cdot \widetilde{\mathbf{b}}=0$ iff the corresponding logical operators commute, see the \textbf{Background} section. For each codeword $\mathbf{c}\not\simeq\mathbf{0}$ there is at least one codeword $\mathbf{c}'$ such that $\mathbf{c}\cdot{\widetilde{\mathbf{c}}}'=1$, and the $2k$ scalar products $\mathbf{c}\cdot {\widetilde{\mathbf{b}}}$ with the basis codewords $\mathbf{b}$ are sufficient to recover the equivalence class of $\mathbf{c}$. We further note that in the defect-free phase, for any likely disorder $\mathbf{e}$, $Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta)$ is dominated by the term with $\mathbf{c}=\mathbf{0}$; thus at large $n$ the average $[Q_\mathrm{tot}^{\widetilde{\mathbf{b}}}(\mathbf{e};\beta)]=1$ for any codeword $\mathbf{b}$. Similarly, in a fixed-defect phase, there is only one dominant term $Z_\mathbf{c}(\mathbf{e};\beta)$, and $[Q_\mathrm{tot}^{\widetilde{\mathbf{b}}}(\mathbf{e};\beta)]=\pm1$; the pattern of signs for different $\mathbf{b}$ can be used to find out which of the codewords $\mathbf{c}$ dominates the partition function. \subsection{Bound on the location of the defect-free phase} In order to prove Theorem \ref{th:boundary}, we first need to extend the identities of Nishimori's gauge theory of spin glasses\cite{Nishimori-1981,Horiguchi-Morita-1981,Nishimori-book} to the averages of the spin correlation functions \eqref{eq:correlation-function-tot}. We prove the following \begin{lemma} \label{lemma:Nishimori-identities} The disorder average of the spin correlation function \eqref{eq:correlation-function-tot} for any $\mathbf{m}$ satisfies $[Q_\mathrm{tot}^\mathbf{m}(\mathbf{e};\beta)]=[Q_\mathrm{tot}^\mathbf{m}(\mathbf{e};\beta)\,Q_\mathrm{tot}^\mathbf{m}(\mathbf{e};\beta_p)]$. \end{lemma} \begin{proof} The proof follows exactly that in the usual case, once we observe $$ \sum_{\boldsymbol\alpha} P_0(\mathbf{e}+{\boldsymbol\alpha}\widetilde G^*) =2^{N_r-N_g+N_g^*}Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta_p), $$ where $N_r$ is the number of rows of the matrix $G$. \end{proof} \begin{proof}{of Theorem \ref{th:boundary}.} To shorten the notation, denote the correlation function in Lemma \ref{lemma:Nishimori-identities} as $A\equiv Q_\mathrm{tot}^\mathbf{m}(\mathbf{e};\beta)$, and by $B$ the same correlation function at the Nishimori temperature, $\beta=\beta_p$. Lemma \ref{lemma:Nishimori-identities} gives \begin{equation} \label{eq:sg-identities} [A]=[AB],\quad [B]=[B^2]. \end{equation} Now, for any real-valued $t$, the inequality \begin{equation} 0\le [(A-t B)^2]=[A^2]+t^2[B^2]-2t [A B] \label{eq:uncertainty} \end{equation} must be valid. This is equivalent to $[AB]^2\le [A^2][B^2]$.
Using the identities~\eqref{eq:sg-identities}, we obtain $[A]^2\le [A^2][B]\le [B]=[B^2]$, which is equivalent to \begin{equation} \label{eq:sg-inequality} [Q^{\mathbf{m}}_\mathrm{tot}(\mathbf{e};\beta)]^2\le [Q^{\mathbf{m}}_\mathrm{tot}(\mathbf{e};\beta_p)]. \end{equation} A different derivation of this inequality can be found in Ref.~\cite{Nishimori-2002}. If we sum both sides of Eq.\ \eqref{eq:sg-inequality} over all dual codewords $\mathbf{m}=\widetilde{\mathbf{c}}$, using the expansion \eqref{eq:correlation-function-expansion}, we obtain \begin{equation} \label{eq:sg-inequality-two} \sum_{\mathbf{c}} [Q^{\mathbf{m}=\widetilde{\mathbf{c}}}_\mathrm{tot}(\mathbf{e};\beta)]^2\le 2^{2k}\left[{Z_0(\mathbf{e};\beta_p)\over Z_\mathrm{tot}(\mathbf{s}_\mathbf{e};\beta_p)}\right]. \end{equation} The r.h.s.\ equals exactly the average probability of successful decoding times $2^{2k}$; for large $n$ it equals $2^{2k}$ below the decoding transition, $p<p_c$, and it is smaller than $2^{2k}$ above the decoding transition. On the other hand, we saw that in the defect-free phase, at large $n$, all correlation functions $[Q^{\mathbf{m}=\widetilde{\mathbf{c}}}_\mathrm{tot}(\mathbf{e};\beta)]=1$. According to Eq.\ \eqref{eq:sg-inequality-two}, this is only possible for $p<p_c$. \end{proof} This implies that the phase boundary below the Nishimori line is either vertical or reentrant as a function of temperature. Recent numerical studies suggest that the second option is true for the random bond Ising model\cite{Thomas-Katzgraber-2011}. \section{Concluding Remarks} In this work we considered spin glass models related to the decoding transition in stabilizer error correcting codes. Generally, these are non-local models with multi-spin couplings, with exact Wegner-type self-duality at zero disorder, but no $S\to-S$ symmetry or other sources of ground state degeneracy. Nevertheless, we show that for models corresponding to code families with a maximum-likelihood (ML) decoding transition at a finite bit error probability $p_c$, there is a region of an ordered phase which must be limited to $p\le p_c$, and a line of non-trivial phase transitions. The models support generally non-topological extended defects which generalize the notion of domain walls in local spin models. For a quantum code that encodes $k$ qubits, there are $2^{2k}-1$ different types of extended defects. A disordered phase is associated with proliferation of at least one such defect. In an ordered phase, the free energy of each defect must diverge at large $n$. Moreover, for a code family with finite rate $k/n$, the average defect tension, an analog of the domain-wall line tension, must exceed some finite threshold (Theorem \ref{th:tension-average}). The original decoding problem corresponds to the Nishimori line on the phase diagram of the disordered spin model, with the ML decoding transition located exactly at the multicritical point of the spin model. The ML decoding threshold is the maximum possible threshold for any decoder. Thus, exploring this connection with the statistical mechanics of spin glasses, one can compare codes irrespective of the decoder efficiency, and get an absolute measure of performance for any given, presumably suboptimal, decoder. There are a number of open questions in relation to the models we studied. In particular, is there some sort of universality for transitions with non-local spin couplings? If so, what determines the universality class, and is there an analog of the hyperscaling relation?
\section*{Acknowledgments} This work was supported in part by the U.S. Army Research Office under Grant W911NF-11-1-0027, by the NSF under Grant 1018935, and by the Central Facilities of the Nebraska Center for Materials and Nanoscience, supported by the Nebraska Research Initiative. We acknowledge the hospitality of the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation. We also thank Professor Hidetoshi Nishimori for useful comments on an early version of the manuscript. \bibliographystyle{pnas}
\section{Introduction} \IEEEPARstart{B}{iofilms} can be defined as bacterial cities where communicating bacteria live together. A biofilm mainly consists of a bacterial population and extracellular polymeric substances (EPS). Biofilms have negative effects on human health, since they can cause infection and antibiotic resistance. Therefore, modeling the disruption of the biofilm is essential. During the growth phase of the biofilm, bacteria use a cell-to-cell communication mechanism called quorum sensing (QS). In QS, they send each other autoinducer molecules to determine whether their population in the medium is sufficient. Once they sense that they have reached a sufficient population, they trigger intracellular mechanisms such as aggregation, biofilm formation and the production of virulence factors \cite{perez2016mathematical}. When the biofilm is formed on a surface, it is more difficult to eradicate it with chemical and mechanical disruption methods \cite{paluch2020prevention}. Hence, methods that exploit the QS mechanism to prevent or disrupt biofilm formation are investigated in the literature. To this end, the quorum quenching (QQ) strategy, which includes methods to inhibit the communication among bacteria, is proposed \cite{paluch2020prevention}. These QQ methods include the degradation and inhibition of autoinducer molecules and the blocking of autoinducer reception by blocking the intracellular signal transduction pathways. Another strategy based on exploiting the QS mechanism is the use of QS mimickers, which can bind to the autoinducer receptors of bacteria and are employed in inter-kingdom signaling \cite{papenfort2016quorum}. For example, these QS mimickers are employed as a defense mechanism in plants to disrupt the biofilm with an early induction of QS. Rosmarinic acid, which is a QS mimicker secreted as a plant defense compound, triggers an early QS activation and can then kill the bacteria and eradicate the EPS in the biofilm \cite{corral2016rosmarinic}. As for modeling biofilm disruption by exploiting QS, several methods have been proposed in the systems biology and molecular/biological communication literature. In \cite{fozard2012inhibition}, an individual-based model is proposed to observe the effect of QS inhibition on biofilm formation. In \cite{martins2016using}, a deterministic bacterial wall model consisting of communicating bacteria is proposed to disrupt the bacteria via starvation. In \cite{martins2018molecular}, a deterministic biofilm suppression model in which QS signals are jammed is proposed. In addition, QS in a bacterial community is modeled by using a queuing model in \cite{michelusi2016queuing}. However, none of these models focuses on the disruption of the biofilm, i.e., killing bacteria and eradicating EPS, by exploiting the QS mimicking mechanism. In this paper, a stochastic model of biofilm disruption via QS mimickers is proposed. The biological phenomena for the formation and disruption of the biofilm are modeled via coupled chemical reactions, i.e., a chemical reaction network (CRN). This model is based on four biological states. The first two states represent the formation of the biofilm before (downregulation) and after (upregulation) the QS activation according to the autoinducer and QS mimicker concentrations. In these states, QS mimickers promote an earlier QS response. In the last two states, disruption is modeled via two different thresholds. Depending on the QS mimicker concentration, EPS is eradicated first, and then bacteria are killed in the last state.
Furthermore, a state-based stochastic simulation algorithm is proposed to simulate the CRN using these biological states. Our results are validated with the \textit{in vitro} experimental results for \textit{Pseudomonas aeruginosa} bacteria and rosmarinic acid as the QS mimicker. Our model is able to show the stochasticity in the transitions between the biological states and the stochastic changes in the bacterial and EPS concentrations within the biofilm due to the randomness of the chemical reactions in the medium. The main contributions of this paper are to provide a state-based CRN model for the disruption of the biofilm via QS mimickers and a state-based stochastic simulation algorithm. \vspace{-0.4cm} \section{Model} \begin{figure}[h] \centering \includegraphics[width=1.00\columnwidth]{Biofilm_Disruption.pdf} \vspace{-0.6cm} \caption{ Biological processes/states for biofilm formation and its disruption. } \label{Biofilm_block} \vspace{-0.4cm} \end{figure} \subsection{Biological Background of Biofilm Formation/Disruption} In biofilm formation, the QS mechanism can be employed to increase the EPS production rate. When the autoinducer concentration exceeds a threshold, bacteria pass from the downregulation to the upregulation state for a higher-rate EPS production \cite{frederick2011mathematical}. Furthermore, biofilms can be disrupted by emitting biofilm disrupter molecules such as rosmarinic acid, which is secreted by plants and also acts as a QS mimicker ($M$) \cite{corral2016rosmarinic}. These $M$ molecules trigger an earlier QS upregulation state. However, they also disrupt the biofilm in two different stages, as shown by the \textit{in vitro} results in \cite{corral2016rosmarinic}. First, EPS is removed when the concentration of $M$ is above a threshold. Second, bacteria begin to be killed, in addition to the EPS disruption, after the mimicker concentration exceeds a second disruption threshold. Hence, there are four states: two states are related to QS (downregulation and upregulation), and the other two states define the EPS and biofilm (bacteria and EPS) disruption. Next, the biological processes given in this section are modeled based on these four states. \vspace{-0.3cm} \subsection{Chemical Reaction Network} \label{CRN} In this paper, our first aim is to model the effect of QS mimickers on QS and biofilm disruption by using the \textit{in vitro} experimental results in \cite{corral2016rosmarinic}. Therefore, the QS-based biofilm formation and the QS mimicker-based biofilm disruption are assumed to occur in the same homogeneous volume ($V$) as in \cite{corral2016rosmarinic}. Bacteria reproduce according to the availability of nutrients in the medium. All the aforementioned processes, as summarized in Fig. \ref{Biofilm_block}, are modeled as coupled chemical reactions, i.e., a CRN, based on four states ($S_1$-$S_4$). In this CRN, given in (\ref{R1a})-(\ref{R9}) according to their states, the chemical species $ A $, $ B $, $ E $, $ S_N $, $ M $ and $ C $ represent autoinducer molecules, bacteria, EPS, nutrient substrates, QS mimickers and the nutrient-bacterium complex, respectively. In addition, $\emptyset$ denotes species that are of no interest, and $Y_{b/s}$ is the yield coefficient for nutrient consumption. Moreover, it should be noted that the variables on the arrows in (\ref{R1a})-(\ref{R9}) represent the stochastic reaction constants ($r_{a_1}$, $ r_{e_1} $, etc.), which are not always the same as the deterministic reaction rate constants \cite{gillespie1976general}.
\setlength{\belowdisplayskip}{1pt} \setlength{\belowdisplayshortskip}{0pt} \setlength{\abovedisplayskip}{1pt} \setlength{\abovedisplayshortskip}{0pt} \paragraph*{\underline{State $S_1$}} \begin{center} \vspace{-0.5cm} \begin{tabular} {p{0.4\columnwidth}p{0.45\columnwidth}} \begin{equation} \label{R1a} \ce{\emptyset{} + B ->[r_{a_1}] A + B} \end{equation} & \begin{equation} \label{R2a} \ce{\emptyset{} + B ->[r_{e_1}] B + E} \end{equation} \end{tabular} \end{center} \vspace{-0.5cm} \paragraph*{\underline{State $S_2$}} ~\\ \begin{center} \vspace{-0.8cm} \begin{tabular} {p{0.4\columnwidth}p{0.45\columnwidth}} \begin{equation}\label{R1b} \ce{\emptyset{} + B ->[r_{a_2}] A + B} \end{equation} & \begin{equation} \label{R2b} \ce{\emptyset{} + B ->[r_{e_2}] B + E} \end{equation} \end{tabular} \end{center} \vspace{-0.3cm} \paragraph*{\underline{State $S_3$}} \begin{equation} \label{R3} \ce{E ->[r_{e_d}] \emptyset{}} \end{equation} \vspace{-0.2cm} \paragraph*{\underline{State $S_4$}} \begin{equation} \label{R4} \ce{B ->[r_{d}] \emptyset{}} \end{equation} \vspace{-0.2cm} \paragraph*{\underline{States $S_1 - S_4$}} ~\\ \begin{center} \vspace{-0.8cm} \begin{tabular} {p{0.4\columnwidth}p{0.45\columnwidth}} \begin{equation} \label{R5} \ce{\emptyset{} ->[r_m] M} \end{equation}& \begin{equation} \label{R6} \ce{A ->[r_\sigma] \emptyset{}} \end{equation} \end{tabular} \end{center} \begin{center} \vspace{-0.8cm} \begin{tabular} {p{0.4\columnwidth}p{0.45\columnwidth}} \begin{equation} \label{R7} \ce{B + S_N ->[r_c] C} \end{equation} & \begin{equation} \label{R8} \ce{C ->[r_g] (1 + Y_{b/s})B} \end{equation} \end{tabular} \end{center} \vspace{-0.5cm} \begin{equation} \label{R9} \ce{M ->[r_{dm}] \emptyset{}} \end{equation} In this CRN, reactions (\ref{R5})-(\ref{R9}) occur in all states. Reactions (\ref{R5}) and (\ref{R9}) represent the production and degradation of $M$, respectively. Autoinducer molecules degrade with the rate $r_\sigma$, as given in (\ref{R6}). Reactions (\ref{R7}) and (\ref{R8}) show the bacterial growth based on Monod kinetics in a chemical reaction form \cite{alvarez2019theoretical}. In (\ref{R7}), bacteria consume nutrient substrates at a rate $r_c$ to produce $C$, i.e., the nutrient-bacterium complex. Then, these $C$ complexes produce new bacteria with the rate $r_g$, as given in (\ref{R8}). $r_g$ can be calculated as \cite{alvarez2019theoretical} \begin{equation} \label{r_g} r_g = \frac{\mu_{max}(1+Y_{b/s})}{Y_{b/s}}, \end{equation} where $\mu_{max}$ is the maximum specific growth rate (h$^{-1}$). Moreover, the stochastic production constant for $C$ can be given as $r_c = k_c C_g$, where $C_g$ is the nutrient concentration (g/l) and $k_c$ is the deterministic reaction rate constant given by \cite{alvarez2019theoretical} \begin{equation} k_c = \frac{r_g}{(1+Y_{b/s}) K_M}, \end{equation} where $K_M$ is the Monod constant (g/l). Furthermore, reactions (\ref{R1a}) and (\ref{R2a}) show the production of $A$ and $E$ at low rates ($r_{a_1}$ and $r_{e_1}$) in state $S_1$, respectively. In state $S_2$, (\ref{R1b}) and (\ref{R2b}) represent the production of $A$ and $E$ at higher rates ($r_{a_2}$ and $r_{e_2}$), respectively. In state $S_3$, $E$ is disrupted with the rate $r_{e_d}$, as shown in (\ref{R3}). In state $S_4$, bacteria are disrupted with the rate $r_d$ according to (\ref{R4}), while the EPS disruption in this state continues according to (\ref{R3}).
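As a quick sanity check of these expressions, the following minimal Python snippet (our own illustration, not part of the proposed model) reproduces the values of $r_g$, $k_c$ and $r_c$ used in the simulations from the parameter values listed later in Table \ref{Sim_parameters}:
\begin{verbatim}
# Monod-kinetics rate constants; parameter
# values as in Table 1.
mu_max = 0.29    # max specific growth rate (1/h)
Y_bs   = 0.628   # yield coefficient
K_M    = 0.0269  # Monod constant (g/l)
C_g    = 0.005   # nutrient concentration (g/l)

r_g = mu_max * (1 + Y_bs) / Y_bs  # ~0.7518 1/h
k_c = r_g / ((1 + Y_bs) * K_M)    # deterministic
r_c = k_c * C_g                   # ~0.0858 1/h
\end{verbatim}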
The state decisions are made according to the detection rules given in (\ref{S1})-(\ref{S4}) by using the concentrations of $A$ ($C_A(t)$) and $M$ ($C_M(t)$), the QS detection threshold ($\Gamma_{QS}$), the EPS disruption threshold ($\Gamma_{DE}$) and the biofilm (bacteria and EPS) disruption threshold ($\Gamma_{DB}$), with the condition $\Gamma_{QS}<\Gamma_{DE}<\Gamma_{DB}$. \paragraph*{\underline{State $S_1$ - QS Downregulation}} \begin{equation} \label{S1} C_M(t) < \Gamma_{DE} \quad \text{and} \quad C_A(t) + C_M(t) < \Gamma_{QS} \end{equation} \paragraph*{\underline{State $S_2$ - QS Upregulation}} \begin{equation} \label{S2} C_M(t) < \Gamma_{DE} \quad \text{and} \quad C_A(t) + C_M(t) \geq \Gamma_{QS} \end{equation} \paragraph*{\underline{State $S_3$ - EPS Disruption}} \begin{equation} \label{S3} \Gamma_{DE} \leq C_M(t) < \Gamma_{DB} \end{equation} \paragraph*{\underline{State $S_4$ - Biofilm Disruption}} \begin{equation} \label{S4} C_M(t) \geq \Gamma_{DB}. \end{equation} Next, these state-based decisions are employed for the stochastic simulation of the CRN. \vspace{-0.1cm} \section{State-based Stochastic Simulation Algorithm} \vspace{-0.0cm} In this section, the stochastic simulation method based on the biological states given in the previous section is elaborated. In this work, the direct Gillespie algorithm is employed \cite{gillespie1977exact}. However, since the biological states change the reaction probabilities in our case, we propose a state-based stochastic simulation algorithm (SbSSA) based on the direct Gillespie algorithm, detailed as follows. Let $\mathbf{X}(t) = \left( A(t), B(t), E(t), M(t), S_N(t), C(t) \right)$ be the vector which holds the numbers of particles of all species in the CRN. The change vector, which shows the change in the numbers of species with respect to the stoichiometric coefficients in (\ref{R1a})-(\ref{R9}), is defined as $\pmb{\nu}_j = (\nu_{j,1},...,\nu_{j,6})$ for the $j^{th}$ reaction. Furthermore, the probability for the $ j^{th} $ reaction to occur in the infinitesimal period $[t, t+dt)$ is given by $a_j(\mathbf{x}) dt$, where $a_j(\mathbf{x})$ is the propensity function when $\mathbf{X}(t) = \mathbf{x}$. Lastly, we define the propensity vector for the CRN as $\mathbf{a} = (a_1, a_2,...,a_9)$. This vector is defined for nine reactions, since reactions (\ref{R1a}) and (\ref{R1b}), represented by $a_1$, and reactions (\ref{R2a}) and (\ref{R2b}), represented by $a_2$, cannot occur simultaneously. Thus, the first four elements of the propensity vector are based on the biological states ($S_1$-$S_4$) and are updated according to Algorithm \ref{Alg1}.
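To make this update concrete, a minimal Python sketch of the state-dependent part of the propensity vector and of the direct-method draw of $(\tau,j)$ is given below (the function and variable names are ours, chosen for illustration; the state-independent propensities for reactions (\ref{R5})-(\ref{R9}) must be appended to the vector before sampling):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng()

def state_propensities(X, V, r, G_QS, G_DE, G_DB):
    # State-dependent propensities a[0:4], cf. Algorithm 1.
    # X = (A, B, E, M, S_N, C) particle numbers;
    # r = dict of stochastic reaction constants.
    C_A, C_M = X[0] / V, X[3] / V
    if C_M < G_DE and C_A + C_M < G_QS:   # S1: downregulation
        return [r['a1']*X[1], r['e1']*X[1], 0.0, 0.0]
    if C_M < G_DE:                        # S2: upregulation
        return [r['a2']*X[1], r['e2']*X[1], 0.0, 0.0]
    if C_M < G_DB:                        # S3: EPS disruption
        return [0.0, 0.0, r['ed']*X[2], 0.0]
    return [0.0, 0.0, r['ed']*X[2], r['d']*X[1]]  # S4

def gillespie_step(a):
    # Direct-method draw of waiting time tau and reaction j.
    a0 = sum(a)
    tau = np.log(1.0 / rng.uniform()) / a0
    j = np.searchsorted(np.cumsum(a), rng.uniform() * a0)
    return tau, j
\end{verbatim}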
\setlength{\textfloatsep}{1pt} \begin{algorithm}[tb] \caption{State-based Stochastic Simulation Algorithm} \label{Alg1} \begin{algorithmic}[1] \While{$ t \leq t_s $} \If{$C_M(t) < \Gamma_{DE}$ \textbf{and} $C_A(t) + C_M(t) < \Gamma_{QS}$} \State $a(1:4) \gets [r_{a_1} X(2), r_{e_1} X(2), 0, 0 ] $ \Comment{$S_1$} \ElsIf{$ C_M(t) \hspace{-0.1cm} < \hspace{-0.1cm} \Gamma_{DE} $ \textbf{and} $ C_A(t) + C_M(t) \hspace{-0.1cm} \geq \hspace{-0.1cm} \Gamma_{QS} $} \State $a(1:4) \gets [r_{a_2} X(2), r_{e_2} X(2), 0, 0 ] $ \Comment{$S_2$} \ElsIf{$ \Gamma_{DE} \leq C_M(t) < \Gamma_{DB} $} \State $a(1:4) \gets [0, 0, r_{e_d} X(3), 0] $ \Comment{$S_3$} \Else \State $a(1:4) \gets [0, 0, r_{e_d} X(3), r_d X(2)] $ \Comment{$S_4$} \EndIf \State Update state-independent elements of $\mathbf{a}$ \Comment{All states} \State Determine $j$ and $\tau$ via Gillespie algorithm \cite{gillespie1977exact} \State $\mathbf{X} \gets \mathbf{X} + \pmb{\nu}_j$ \State $t \gets t + \tau$ \State $C_A(t) \gets X(1)/V$; $C_M(t) \gets X(4)/V$ \EndWhile \end{algorithmic} \end{algorithm} \setlength{\textfloatsep}{20.0pt plus 2.0pt minus 4.0pt} For a given simulation time ($t_s$) and given thresholds, the state-dependent elements of $\mathbf{a}$ are determined according to the biological states as defined in (\ref{S1})-(\ref{S4}). Then, the elements of $\mathbf{a}$ which do not depend on the states but only on the numbers of changing particles are updated. All of the propensities are calculated according to the order of the corresponding reaction, as defined in \cite{gillespie1976general}. Next, one reaction ($j$) to occur is chosen for each step and the corresponding time step ($\tau$) is determined by the Gillespie algorithm \cite{gillespie1977exact}. In the Gillespie algorithm, the time step is calculated as $\tau = (1/a_0) \ln(1/r_1)$, where $a_0$ is the sum of all elements in $\mathbf{a}$ and $r_1$ is a random variable drawn from a uniform distribution between $0$ and $1$, i.e., $U(0,1)$. In addition, $j$ is chosen so that $\sum_{k=1}^{j-1} a_k < r_2 a_0 \leq \sum_{k=1}^{j} a_k$, where $r_2 \sim U(0,1)$. The numbers of molecules are updated by adding the $j^{th}$ change vector $\pmb{\nu}_j$ to $\mathbf{X}$, and the time ($t$) is incremented by $\tau$. In Algorithm \ref{Alg1}, the Gillespie algorithm can be replaced by other modified versions such as explicit or implicit tau-leap methods \cite{gillespie2007stochastic}. \section{Numerical Results} In this section, numerical results, which include the simulation and its validation with the \textit{in vitro} experimental results, are given. As shown in Table \ref{Sim_parameters}, the simulation parameters are mostly obtained from the experimental works in \cite{henkel2013kinetic}, \cite{frederick2011mathematical}, \cite{beyenal2003double}, and \cite{corral2016rosmarinic} for \textit{Pseudomonas aeruginosa} bacteria and rosmarinic acid as the QS mimicker. $r_c$ and $r_g$ are calculated from the Monod kinetics by using the $\mu_{max}$, $Y_{b/s}$, $K_M$ and $C_g$ values, as explained in Section \ref{CRN}. Since the stochastic reaction constants of $A$ and $M$ are much higher than those of the other species in the CRN, they result in unfeasible simulation times.
Therefore, we count these species in units consisting of $1$ nmol of particles and use these units in the simulations to scale the related stochastic reaction constants, i.e., $r_{a_1}$, $r_{a_2}$, $r_{m}$, $r_{\sigma}$, and the corresponding thresholds, i.e., $\Gamma_{QS}$, $\Gamma_{DE}$ and $\Gamma_{DB}$, which are only employed to determine the states. \begin{table}[tb] \vspace{-0.6cm} \centering \caption{Simulation parameters} \centering\setcellgapes{2pt}\makegapedcells \renewcommand\theadfont{\normalsize\bfseries} \scalebox{0.80}{ \begin{tabular}{p{55pt}|p{70pt}|p{55pt}|p{70pt}} \hline \textbf{Parameter} & \textbf{Value} & \textbf{Parameter} & \textbf{Value}\\ \hline \hline $r_{a_1}$ & $7.6$ nmol h$^{-1}$ \cite{henkel2013kinetic} & $r_{a_2}$ & $21.8$ nmol h$^{-1}$ \cite{henkel2013kinetic} \\ $r_{e_1}$ & $0.035$ h$^{-1}$ \cite{frederick2011mathematical} & $r_{e_2}$ & $0.35$ h$^{-1}$ \cite{frederick2011mathematical} \\ $r_{e_d}$ & $0.35$ h$^{-1}$ & $r_{d}$ & $7.5178$ h$^{-1}$ \\ $r_{m}$ & $872$ nmol h$^{-1}$ & $r_{\sigma}$ & $3.1$ nmol h$^{-1}$ \cite{henkel2013kinetic} \\ $\mu_{max}$ & $0.29$ h$^{-1}$ \cite{beyenal2003double} & $C_g$ & $0.005$ g l$^{-1}$ \cite{beyenal2003double} \\ $r_{c}$ & $0.0858$ h$^{-1}$ & $r_{g}$ & $0.7518$ h$^{-1}$ \\ $r_{dm}$ & $0.0031$ nmol h$^{-1}$ & $Y_{b/s}$ & $0.628$ \cite{beyenal2003double} \\ $V$ & $0.02$ l \cite{corral2016rosmarinic} & $K_M$ & $0.0269$ g l$^{-1}$ \cite{beyenal2003double} \\ $\Gamma_{QS}$ & $50$ $\mu$mol l$^{-1}$ & $\Gamma_{DE}$ & $2$ mmol l$^{-1}$ \cite{corral2016rosmarinic} \\ $\Gamma_{DB}$ & $7.8$ mmol l$^{-1}$ \cite{corral2016rosmarinic} & & \\ \hline \hline \end{tabular} } \label{Sim_parameters} \vspace{-0.4cm} \end{table} \begin{figure}[b] \vspace{-0.8cm} \centering \includegraphics[width=0.8\columnwidth]{Val_5c_exp_sim_MC_1000.pdf} \vspace{-0.3cm} \caption{Mean bacterial growth (in colony forming units (CFU) per liter).} \label{Plot_val_B} \end{figure} Our proposed model is validated by the \textit{in vitro} experimental results obtained from \cite{corral2016rosmarinic}, as shown in Figs. \ref{Plot_val_B} and \ref{Plot_val_dis}. Fig. \ref{Plot_val_B} shows the normalized mean concentration values of bacterial growth for two different parameter sets. The red dashed line in this figure is obtained via the experimental parameters in \cite{beyenal2003double}. Although this roughly agrees with the growth pattern, the experimental setup in \cite{beyenal2003double} was set for a chemostat, which includes a flow of nutrients and dilutes the nutrient concentration, leading to a lower growth rate. A better fit is obtained by increasing the growth rate and decreasing the nutrient consumption rate, as shown by the solid red line in Fig. \ref{Plot_val_B}. In Fig. \ref{Plot_val_dis}, the survival percentage of bacteria at the end of $24$ h is shown, which overlaps with the \textit{in vitro} results for the disruption of bacteria. \begin{figure}[bt] \vspace{-0.5cm} \centering \includegraphics[width=0.8\columnwidth]{Val_5b_exp_sim_MC_100.pdf} \vspace{-0.3cm} \caption{Survival percentage of bacteria at the end of 24 h according to the QS mimicker concentration.} \label{Plot_val_dis} \vspace{-0.4cm} \end{figure} In Fig. \ref{Plot_biofilm} (a), the box plot of the bacterial concentration is given, where the whiskers show the minimum and maximum values, the red line in the box represents the median value, and the lower and upper boundaries of the box depict the 25th and 75th percentiles, respectively.
Furthermore, the distribution of the biological states stemming from the stochasticity can be observed. Since the growth of bacteria within the biofilm depends only on the nutrient level, the bacterial population keeps growing in states $S_2$ and $S_3$. After nearly $12$ h, the growth stops due to the depletion of nutrients. In state $S_4$, the concentration steeply diminishes to zero, since the bacterial disruption only occurs in this state. In Fig. \ref{Plot_biofilm} (b), the box plot with the distribution of states is shown for the EPS concentration. While bacteria keep producing EPS, the EPS starts to be disrupted by QS mimickers after the system passes to the third state. Since the EPS disruption rate is much lower than the bacterial disruption rate, the EPS concentration shows a more gradual disruption profile compared to the bacteria. Moreover, a clearer picture with the mean normalized concentrations of the biofilm and its components (each of the components is normalized according to its local maximum) is depicted in Fig. \ref{Plot_biofilm} (c). Since the concentration of the biofilm mostly consists of bacteria, the effect of the EPS in the biofilm may seem negligible. However, the volume covered by the EPS can be larger than that covered by the bacteria. Moreover, it is observed that there is uncertainty in the transitions between states. Understanding this uncertainty from a bacterial behavior viewpoint is planned as future work. \begin{figure*}[b] \vspace{-0.3cm} \includegraphics[width=0.33\textwidth]{Plot_B.pdf} \includegraphics[width=0.33\textwidth]{Plot_E.pdf} \includegraphics[width=0.33\textwidth]{Plot_Biofilm.pdf} \\ \scriptsize \hspace*{0.16\textwidth} (a) \hspace{0.31\textwidth} (b) \hspace{0.31\textwidth} (c) \\ \vspace{-0.5cm} \caption{Distribution of states and box plots of (a) the bacterial concentration and (b) the EPS concentration. (c) Mean normalized concentration profile of the biofilm.} \label{Plot_biofilm} \end{figure*} \section{Conclusion} In this paper, a stochastic biofilm disruption model based on QS mimickers is proposed. In this model, a CRN is used for the biological processes including QS, the production of the biofilm and its disruption. A state-based stochastic simulation algorithm is proposed, and the results are validated by experimental data. As future work, the proposed method is planned to be employed to investigate the effect of communication on bacterial behavior during biofilm formation and disruption. \bibliographystyle{ieeetran}
\section{\uppercase{Introduction}} \label{sec:introduction} \noindent In recent years, we have seen convolutional neural networks (CNNs) dominate benchmark after benchmark in computer vision, starting with the 2012 ImageNet competition breakthrough \cite{krizhevsky2012imagenet}. These methods prosper with an abundance of labeled data, and an abundance of data is often required for acceptable results \cite{oquab2014learning}. In contrast, for most people, it is only necessary to see one picture of an Atlantic Puffin to be able to correctly identify such a bird. \\ \indent To be fair, we have a lot of prior experience. It is easy to make a mental note: ``a puffin is a small black and white bird with orange feet and a colorful beak'' because we have learned a useful representation of the salient aspects of the image. Instead of being bogged down by the details of every exact pixel value, as an untrained AI might be, we can focus our attention on the most useful features of the image. \\ \indent For this reason, investigations of the efficacy of methods to learn a concept from few samples are often done through the lens of representation learning \cite{bengio2013representation}, for example via transfer learning \cite{pan2010survey} or low-shot learning \cite{wang2018low}. \\ \indent In this work we consider a method to measure data efficiency, i.e., the performance of an algorithm as a function of the number of data points available during training, which is an important aspect of machine learning \cite{kamthe2017data}, \cite{al2015efficient}. We quantitatively examine the performance of CNNs and hierarchical information-preserving graph-based slow feature analysis (HiGSFA) \cite{escalante2016improved} networks for varying training set sizes and for varying task types. \\ \indent HiGSFA has been chosen because it is the most recent supervised extension of slow feature analysis (SFA) and has shown promise in visual processing with a notable distinction from CNNs: the computation layers are trained in a ``greedy'' layer-wise manner instead of via gradient descent \cite{escalante2016improved}. \\ \indent The methods are applied to visual tasks: a simple version of the MNIST classification task, where we vary the number of training points, and increasingly difficult tasks constructed from the Omniglot dataset. Our \textbf{contribution} in this work is a novel experimental protocol for the evaluation of transfer learning, which we apply to experimentally compare CNNs with the slowness-based HiGSFA. \section{\uppercase{Related Work}} \noindent Gathering data can be quite costly, so the question ``how much is enough'' has been considered in literature ranging from classical statistics \cite{krishnaiah1980handbook} over pattern recognition \cite{raudys1991small} to experimental design \cite{beleites2013sample}. As data plays a central role in machine learning as well, the study of its effective use has garnered attention from all branches of the field. \\ \indent In a similar vein to our work, \cite{lawrence1998size} analyze generalization when the number of sample points is varied for supervised learning tasks.
Equipped with the prior that supervised learning methods' performance obeys an inverse power law, \cite{figueroa2012predicting} trained a model to predict the classification accuracy of a model given a number of inputs.\\ \indent Transfer learning straddles the intersection between supervised and unsupervised learning, where the focus is uncovering representations that are both general and also useful for particular applications. The Omniglot data set we consider was introduced in \cite{lake2015human} and has been popular for developing transfer learning methods \cite{bertinetto2016learning}, \cite{edwards2016towards}, \cite{schwarz2018progress}.\\ \indent With its sparse rewards and problems of credit assignment, reinforcement learning (RL) has a particular need for data efficiency, motivating such early works as prioritized sweeping \cite{moore1993prioritized}. More recently, \cite{riedmiller2005neural} designed the neural-fitted Q-learner for data efficiency. This method has been successfully combined with deep auto-encoder representations for visual RL \cite{lange2010deep}. Deep Q-Networks have made still better use of data for RL by combining experience replay, target networks, reward clipping and frame skipping \cite{mnih2013playing}, \cite{mnih2015human}.\\ \indent SFA was introduced in 2002 by Wiskott and Sejnowski as an unsupervised learning method for temporally invariant features \cite{wiskott2002slow}. These features can be learned hierarchically in a bottom-up manner, reminiscent of deep CNNs: slow features are learned on spatial patches of the input and then passed to another layer for slow feature learning. The method is then called hierarchical slow feature analysis (HSFA) and has attracted attention in neuroscience for plausible modeling of grid, place, spatial-view, and head-direction cells \cite{franzius2007slowness}. \\ \indent For labeled data, the method admits a supervised extension in the form of graph-based SFA (GSFA) \cite{escalante2013solve}. Information that could contribute to a globally slower signal is often lost in early layers of hierarchical SFA, prompting the development of HiGSFA \cite{escalante2016improved}.\\ \indent Deep learning extensions of SFA are currently an active research area. The SFA problem is solved with stochastic optimization in Power-SFA \cite{schuler2018gradient}: a differentiable whitening layer is constructed, allowing a non-linear expansion of the input to be learned with backpropagation. Another recent method, SPIN \cite{DBLP:journals/corr/abs-1806-02215}, learns eigenfunctions of linear operators with deep learning methods and can be applied to the SFA problem as well. \section{\uppercase{Methods}} \noindent Below we describe the novel experimental setup as well as the methods being evaluated using the setup. For the remainder of the article we assume CNNs to be well-known and understood, but we can recommend \cite{cs231n2017convolutional} as a good pedagogical introduction to the method. \subsection{HiGSFA} HiGSFA belongs to a class of methods motivated by the slowness principle, which is based on the assumption that important aspects of a signal vary more slowly than unimportant ones \cite{sun2014dl}. The model takes as input data points $x_n$, each of which is a node $n$ in an undirected graph with node weight $v(n)$. This weight can control the relative influence each data point has during training, but we set it uniformly to 1 in our experiments below.
The edge between nodes $n$ and $n'$ carries a weight $\gamma_{n, n'}$ and signifies a relationship between the data points. This could be their spatial or temporal proximity or whether they belong to the same class. For instance, during our classification tasks below, we set: \begin{equation} \gamma_{n, n'}= \begin{cases} 1,& \text{if } n \text{ and } n' \text{ are in the same class}\\ 0, & \text{otherwise} \end{cases} \end{equation} Given a function space $\mathcal{F}$ with elements $g_j$, we learn slowly varying features $y_j(n) = g_j(x_n)$ of the data by solving the optimization problem \cite{escalante2013solve}: \begin{equation} \begin{aligned} & \underset{g_j}{\text{minimize}} & & \frac{1}{R} \sum\limits_{n, n'} \gamma_{n, n'} \left( y_j(n) - y_j(n') \right)^2 \\ & \text{subject to} & & \frac{1}{Q} \sum\limits_{n} v_n y_j(n) = 0 \\ &&& \frac{1}{Q} \sum\limits_{n} v_n \left(y_j(n) \right)^2 = 1 \\ &&& \frac{1}{Q} \sum\limits_{n} v_n y_j(n) y_{j'}(n) = 0\text{, } j' < j \\ & \text{where} && Q = \sum\limits_{n} v_n,\ R = \sum\limits_{n, n'} \gamma_{n, n'} \end{aligned} \end{equation} The first constraint enforces weighted zero mean, the second weighted unit variance, and the third weighted decorrelation and order. \\ \indent To reduce computational complexity, we extract features of the data hierarchically. Similarly to CNNs, we extract features from $F \times F$ patches of the image data in the first layer, then extract features of $F' \times F'$ patches of the output features in the next layer and so on. The layers are trained by solving the optimization problem, one layer at a time, from the input layer to the output layer. The layer-wise parameters can be shared. Because the cost function is minimized locally, dimensions may be discarded that do not minimize the function at a local level but could conceivably be better for the overall problem; an information-preserving mechanism is therefore added. For each layer (figure \ref{example}), a threshold is placed on the features with respect to their slowness. If output features would be too fast, we replace them by the most variance-preserving PCA features. Each layer thus outputs a combination of slow features and PCA features. \begin{figure}[!h] \fontsize{31}{05}\selectfont \centering {\resizebox*{0.4 \textwidth}{!}{\includegraphics {files/graphic.PNG}}} \caption{HiGSFA network layer. The feature generation is similar to that of the CNN. The layer outputs $N$ channels of slow features and $M$ channels of PCA features. The number of PCA feature channels is either fixed beforehand or determined by replacing the $M$ SFA features whose \textit{slowness} (cost function in eq. 2) exceeds a given threshold.} \label{example} \end{figure}
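For intuition, note that with a linear function space $\mathcal{F}$, the problem above reduces to a generalized eigenvalue problem. The following is a minimal, illustrative sketch of linear GSFA for the classification graph above with uniform node weights; the helper name and the small ridge term are assumptions made for the example, and this is not the HiGSFA implementation evaluated below, which adds the non-linear expansion and the hierarchy:

\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def linear_gsfa(X, labels, n_out):
    # Linear GSFA for the classification graph:
    # gamma = 1 within a class, node weights v_n = 1.
    labels = np.asarray(labels)
    N, d = X.shape
    Xc = X - X.mean(axis=0)          # zero-mean constraint
    B = Xc.T @ Xc / N                # variance/decorrelation constraint
    B += 1e-8 * np.eye(d)            # small ridge in case B is singular
    A = np.zeros((d, d))
    R = 0.0
    for c in np.unique(labels):
        Xi = Xc[labels == c]
        m = len(Xi)
        mu = Xi.mean(axis=0, keepdims=True)
        # sum over same-class pairs of (x_n - x_n')(x_n - x_n')^T
        A += 2.0 * m * ((Xi - mu).T @ (Xi - mu))
        R += m * m                   # this class's contribution to R
    # slowest directions = smallest generalized eigenvalues of (A/R) w = l B w
    _, W = eigh(A / R, B)
    return Xc @ W[:, :n_out]         # the n_out slowest features
\end{verbatim}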
\subsection{General description of protocol} The performance of two hypotheses $h_1$ and $h_2$, not necessarily from the same hypothesis set $\mathcal{H}$, is compared on a classification task. The learning curves of the two hypotheses are plotted as a function of the number of data points in the training set. This can be done simply by taking an increasing number of training points per class, as we do when evaluating on MNIST below. \\ \indent Alternatively, the number of training points per class is kept constant and the number of classes is varied. The relationship between the training and test set distributions is also altered, such that the task ranges from classical classification to transfer learning. We report a comparison of methods below using this scheme on the Omniglot data set. \subsection{Evaluation on MNIST} First, we compare classification accuracies on MNIST \cite{lecun1998gradient} as a function of the number of samples per class used during training. The images have a dimension of $28 \times 28$ pixels. For 100 iterations, we choose random samples from each class and use a thousand unused samples from each class for validation. Finally, the models are tested on the classic 10 thousand test images. \subsubsection{Architectures} We constructed a two-layer HiGSFA network with circa 13k parameters (the number is stochastic and changes from training set to training set), extracting 400 features from the data. The first layer has a filter size of $5\times5$ and a stride of 2, extracting 25 features for each spatial patch. The second layer has a filter size of $4\times4$ and a stride of 2, extracting 16 features for each spatial patch. \\ \indent The output of the first layer is concatenated with a copy of itself, where each element $x$ is replaced with $|x|^{0.8}$, doubling the number of channels and giving us nonlinearity. If the value of the objective function is larger than a threshold of 1.99, we select PCA features. This upper bound is motivated by the fact that non-predictive, white noise features take a value of 2 in the objective function \cite{creutzig2008predictive}. The parameters within each layer are shared. A single-layer softmax neural network with 4010 parameters was trained on the features of the second layer to handle classification.\\ \indent Two standard CNNs were constructed as well, one with the constraint to have a similar number of parameters as the HiGSFA network, and another with an amount closer to what is seen in practice on similar datasets. That is to say, the smaller CNN corresponds to the HiGSFA network. \\ \indent We call the smaller network CNN-1; it has 10,032 trainable parameters, excluding the number in the final layer for classification. The tasks have varying numbers of classes to be predicted, causing the classification layer to have varying numbers of parameters. CNN-1 has three convolutional layers, each one followed by ReLU and max pooling, the first two with 8 channels and the last one with 16. They are followed by a fully connected classification layer, using a softmax activation function. The first convolutional layer has a filter size of $7\times7$, and the other two have a filter size of $5\times5$. The convolutional layers have a stride of 1 and the max pooling layers have a stride of 2.\\ \indent We call the larger network CNN-2, with 116,214 parameters (not counting the classification layer). It is the same as CNN-1 except that the convolutional layers have twice the number of channels, and a dense layer with 150 units is added before the classification layer. \\ \indent Note that the parameter configurations of both HiGSFA and the CNNs have not been optimized for the best performance on the tasks below. They were designed to be lightweight according to general best practices \cite{hadji2018we}, \cite{escalante2016improved}. This allows for more trials and tighter confidence bounds while achieving fair performance on the tasks.
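For illustration, a Keras sketch of a CNN-1-style network is given below. The padding mode and the input shape are assumptions made for the example (they are not fully specified above), so the exact parameter count may differ from the 10,032 reported:

\begin{verbatim}
from tensorflow.keras import layers, models

def build_cnn1(input_shape=(28, 28, 1), n_classes=10):
    # three conv layers (8, 8, 16 channels), each followed by
    # ReLU and max pooling with stride 2, then a softmax layer
    return models.Sequential([
        layers.Conv2D(8, 7, padding="same", activation="relu",
                      input_shape=input_shape),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(8, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Conv2D(16, 5, padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=2, strides=2),
        layers.Flatten(),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn1()
# ADAM with default parameters, minimizing cross-entropy
# (integer class labels assumed)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}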
\subsection{Evaluation on Omniglot} Omniglot is a handwritten character dataset consisting of 50 alphabets with 14 to 55 characters each, each character having 20 samples \cite{lake2015human}. The alphabets vary from real alphabets, such as Greek, to fictional ones, such as Alienese (from the TV show ``Futurama''). Each sample was drawn by a different person for this dataset. It is typically split into 30 training alphabets and 20 testing alphabets. Note that the training-testing split separates the alphabets; all samples originating from all characters of a given alphabet appear in either the training set or the test set, but not both. This makes it a transfer learning task, as the training and test samples are drawn from separate distributions. In the original work using the dataset, the methods were first trained on the 30 background alphabets, and then a 20-way one-shot classification task was performed. Two samples are taken from each of 20 characters from random evaluation alphabets. One sample is placed in what we will call a probe set, and the other in a target set. The methods then try to find the corresponding sample in the target set that is the same character as any given sample in the probe set. \begin{figure}[!h] \centering {\resizebox*{0.4 \textwidth}{!}{\includegraphics {files/omniglotnice.png}}} \caption{16-way one-shot classification. Symbols on the left are presented to the algorithm, one at a time, and the task is to find the same character among the symbols on the right.} \label{fig-16way} \end{figure} In the vein of the original Omniglot task, we compare several models in three challenges. In all challenges, we do 16-way one-shot classification using 1-nearest-neighbor (1-NN) under the Euclidean distance (sketched in code after the challenge descriptions). The challenges differ in how the test set is related to the training set: \subsubsection{Challenge 0} From 16 random characters used for training, we take two samples that the models were trained on. These samples are placed in two sets, the probe and target sets, such that each set contains one sample of each character. The model under consideration extracts features from each image. We then iterate through each feature vector from images in the probe set and find the closest feature vector from the target set. If those two vectors belong to images of the same class, then we count it as a success. \subsubsection{Challenge 1} Same as above, but we take characters used during training for the probe set and perform classification on samples that were not used during the training. \subsubsection{Challenge 2} Same again, but now we do the classification on characters that do not belong to alphabets used during training.
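The scoring procedure shared by the three challenges can be summarized by the following sketch, where \texttt{probe} and \texttt{target} are hypothetical names for the extracted feature matrices and row $i$ of both corresponds to the same character:

\begin{verbatim}
import numpy as np

def one_shot_accuracy(probe, target):
    # pairwise squared Euclidean distances, shape (16, 16)
    d = ((probe[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    # a probe sample counts as a success when its nearest target
    # vector belongs to the same character (the same row index)
    return (d.argmin(axis=1) == np.arange(len(probe))).mean()
\end{verbatim}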
\subsubsection{Omniglot architectures} All model architectures are the same for MNIST and Omniglot, but the Omniglot images are resized to $35\times35$, having the effect that HiGSFA outputs 784 features. The number of model parameters does not change, as the weights are shared for the image patches. The HiGSFA features used for classification are simply the 784 output features, and we do not train a neural network classifier on them. The total number of parameters in the CNNs depends on the number of training classes, due to the classification layer. We fix the number of alphabets to 8 and vary the number of characters per alphabet over 4, 6, 8, 10, and 12, and vice versa. The number of parameters for CNN-1 ranges between 18k and 35k, and for CNN-2 between 121k and 130k. If we do not count the parameters from the final classification layer, then the number of parameters for CNN-1 for these tasks is always 10,032 and the number for CNN-2 is 116,214. After training the CNNs, we perform feature extraction by intercepting the output of the second-to-last layer. Here the assumption is that CNNs learn a representation for the classification layer \cite{DBLP:journals/corr/RazavianASC14}. We are then interested in comparing the strength of HiGSFA and CNN representations when used by a 1-NN classifier. \begin{table*}[!ht] \centering \begin{tabular}{ r | l l | l l | l l} \hline Samples\, & HiGSFA & & CNN-1 && CNN-2 &\Tstrut \\ & Acc. & Std. & Acc. & Std. & Acc. & Std. \Bstrut \\ \hline 5\, & 35.683 & $\pm$ 0.430 & $\textbf{72.361}$ &$\pm$ 0.365 & 72.320 &$\pm$ 0.094\, \,\,\ \Tstrut\\ 10\, & 75.736 & $\pm$ 0.222 & $\textbf{80.392}$ &$\pm$ 0.241 & 79.551 &$\pm$ 0.175 \\ 50\, & $\textbf{92.970}$ &$\pm$ 0.050 & 90.320 &$\pm$ 0.101 & 91.465 & $\pm$ 0.070 \\ 200\, & $\textbf{96.246}$ &$\pm$ 0.027 & 94.672 &$\pm$ 0.062 & 95.648 & $\pm$ 0.051 \\ 500\, & 97.188 &$\pm$ 0.013 & $96.579$ &$\pm$ 0.046 & \textbf{97.308} &$\pm$ 0.054 \\ 2000\, & 97.887 &$\pm$ 0.009 & 98.247 &$\pm$ 0.020 & $\textbf{98.571}$ & $\pm$ 0.023 \\ 6000\, & 98.134 & $\pm$ 0.008 & 98.687 &$\pm$ 0.014 & $\textbf{98.949}$ &$\pm$ 0.015 \\ \end{tabular} \caption{\textbf{MNIST Accuracies.} The percentage of correctly classified samples on the test set along with the standard error of the mean (SEM). } \label{tab:totalresults} \end{table*} \subsubsection{Training} The models were trained on varying numbers of samples per character. The HiGSFA network was trained to solve the optimization problem on each image patch, one layer at a time. All neural networks were trained in Keras \cite{chollet2015keras} using ADAM \cite{kingma2014adam}, with default parameters, to minimize cross-entropy.\\ \indent After each epoch, the error was calculated on the validation set. Early stopping was performed after the validation error had increased four times in total during the training. The training for Omniglot is the same, except that instead of early stopping, the CNNs were trained for 20 epochs in all cases. \begin{figure*}[!h]% \centering{{\includegraphics[width=5.7cm]{files/index3a.png} }}% \qquad{{\includegraphics[width=5.7cm]{files/index6a.png} }}% \qquad{{\includegraphics[width=5.7cm]{files/index2a.png} }}% \qquad{{\includegraphics[width=5.7cm]{files/index5a.png} }}% \qquad{{\includegraphics[width=5.7cm]{files/index1a.png} }}% \qquad {{\includegraphics[width=5.7cm]{files/index4a.png} }}% \caption{\textbf{Classification Accuracies}. Either there are 8 alphabets and we vary the number of characters per alphabet, or vice versa. The error bars indicate the standard error of the mean. These plots are best viewed in color.}% \label{fig:plots}% \end{figure*} \section{\uppercase{Results}} \subsection{MNIST Results} We trained the models using 5, 10, 50, 200, 500, 2000, or 6000 samples per digit. In table \ref{tab:totalresults}, we see the statistics from 100 runs, where the models were trained from random initializations, evaluated and tested. The convolutional networks have the highest accuracies when there are 2000 or more samples per class and when there are only 5 or 10 samples per class. \\ \indent However, HiGSFA has a higher accuracy than the CNN with a similar number of parameters for 500 samples per class. Furthermore, HiGSFA has higher accuracies than both CNNs for 50 and 200 samples per class.
For 50 or more samples per class, the CNN with the larger number of parameters has higher prediction accuracies than the one with fewer parameters. \subsection{Omniglot Results} The 1-NN classifier uses the second-to-last CNN outputs or the HiGSFA features. We fix either the number of alphabets or the number of characters per alphabet to be 8 and vary the other number from 4 to 12 in increments of 2. The number of samples per character is either 4 or 16. The largest total number ($\text{alphabets} \times \text{characters per alphabet} \times \text{samples per character}$) of samples used for training is 1536 and the lowest is 128. \\ \indent In figure \ref{fig:plots}, we see the average of all the runs over the different numbers of samples per character and numbers of classes. In all of the challenges, the CNNs have higher accuracies than HiGSFA. On average, CNN-2 has higher accuracies in challenges 0 and 2. Neither CNN achieves significantly better accuracy than the other in challenge 1. \section{\uppercase{Discussion and conclusion}} \noindent The work of this paper is intended to facilitate understanding of algorithms from the point of view of learning with particularly low numbers of samples. We present simple-to-implement challenges that allow for the evaluation of data efficiency in the context of representation learning. \\ \indent For the models experimented on, we see that the CNNs usually perform better, but HiGSFA outperforms the CNNs on 50- and 200-sample training sets from the MNIST data. One can speculate that the default CNN architectures ensure generalization through max-pooling, whereas SFA mostly learns to generalize from a moderately sized data set. \\ \indent Another explanation for the different ranges of comparative performance optima is the choice of the delta-threshold of HiGSFA. The method overestimates the slowness of the slowest features when it has too few samples. This has the effect that fewer PCA features are selected for a lower number of samples. On the other hand, with more than 200 samples, there could be too many PCA features chosen. Setting the number of slow features to be a constant for all sample sizes could be better for robustness than fixing the delta threshold. \\ \indent Notice the trend in challenge 0: the accuracy goes down as the number of samples increases. This is because the samples used for the probe and target sets are drawn from the training set, and we are training and testing on larger sets as we move from left to right. \\ \indent Overall, for the Omniglot challenges, the accuracies of the CNNs lie comfortably above the HiGSFA accuracies, but it is not always discernible whether the larger or the smaller CNN performs better. An explanation for this could be that the tasks are not difficult enough for more parameters to be necessary. The local optimality of GSFA could result in an insufficiently robust or transferable representation if there are many classes and few samples per class. \\ \indent These challenges are a more complicated set of classification tasks than the MNIST ones, with a larger number of classes overall. This gives the CNNs an opportunity to take advantage of having been trained directly for classification when they are presented with a similar task. Although HiGSFA takes advantage of class labels, it suffers in comparison for not taking into account the downstream task during training.\\ \indent For future work, a complete extension of the experiments here could include an analysis of the effect that different types of data would have on the performance.
This would yield further insight beyond varying the amount of rather homogeneous data used for training. Additionally, the performance of a wider array of popular methods could be compared. \\ \indent More types of benchmarks for comparing different models over varying training set sizes would be helpful for this kind of research. Knowledge gained from them would also allow practitioners to choose the right model for the scale and type of the problem they wish to solve. These experiments give rise to the question: how can these methods, with their different strengths and weaknesses, profit from each other? \vfill \bibliographystyle{apalike} {\small
\section{Analysis}\label{sec:ana} \input{Tables/Table_ratio} We report two reaction prediction results on the USPTO\_STEREO\_separated dataset in Figure~\ref{fig:case_2}, where we illustrate our proposed approach step by step. The textual sequences of the reactants on the source side are decomposed into reaction-aware substructures and the remaining fragments. On the target side, the reaction-aware substructures do not need to be generated during prediction, so the lengths of the target sequences to be predicted are reduced significantly. For the example in Figure~\ref{fig:case_2}, our model and Molecular Transformer both predict structures similar to the ground truth. However, Molecular Transformer makes a mistake on the chirality of the chiral center. Our model avoids such prediction mistakes on chiral centers within reaction-aware substructures. \section{Proposed Approach}\label{sec:appoach} \subsection{Reaction Retrieval}\label{sec:retri} In reaction pathway planning, chemists generally obtain insights and inspiration from existing reaction pathways learned through previous education and professional experience. Similarly, the retrieval module should efficiently obtain a list of candidates similar to a given query from a large collection of data. In the task of reaction outcome prediction, the query will be the reactants and reagents, and the retrieved candidates will be a list of molecules from the product side. For retrosynthesis analysis, the query will be the product, and the candidates will be a list of reactants. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.16]{Figures/dual_encoder.pdf} \end{center} \caption{Overview of the dual-encoder retrieval model.} \label{fig:dual} \end{figure} To learn and measure the similarity of the reactants and reagents to the product, we propose to use the dual-encoder architecture~\citep{bromley1993signature}, which has also been introduced in memory-based machine translation~\citep{cai-etal-2021-neural}. We use two independent Transformer encoders~\citep{vaswani2017attention} in the dual-encoder architecture: one to encode the reactants and reagents, and the other to encode the product, as shown in Figure~\ref{fig:dual}. Transformer~\citep{vaswani2017attention} is a prominent encoder-decoder model that has achieved great success in natural language processing, computer vision, and speech processing. It consists of an encoder and a decoder, each of which is a stack of $L$ identical blocks, where each encoder block is mainly a combination of a self-attention module and a position-wise feed-forward network. For details about the Transformer model, please refer to~\citet{vaswani2017attention}. Note that we only employ the Transformer encoder in the dual-encoder. We add the \texttt{[BOS]} token to the SMILES strings of both the reactants (plus reagents) and the products, which are then fed into the Transformer encoders. The source and target representations of the \texttt{[BOS]} token are taken as the outputs, denoted by $E_{\mathrm{src}}$ and $E_{\mathrm{tgt}}$, respectively. The reactants and reagents are the source inputs, and the product is the target output in reaction outcome prediction (retrosynthesis is reversed). The overall objective is to minimize the distance between $E_{\mathrm{src}}$ and $E_{\mathrm{tgt}}$ in high-dimensional space for a given reaction. Following the training strategy proposed by~\citet{cai-etal-2021-neural}, we use two objectives for cross-alignment.
The first objective is that the gold target should have the highest ranking score among all targets given the source. This is approximated by maximizing the ranking score within a batch of source-target pairs when the batch size is relatively large. For a batch of $B$ source-target pairs sampled from the training set at each training step, let $X$ and $Y$ be the $B\times d$ matrices of the encoded source and target vectors, respectively. We define the ranking function as the dot product of the encoded source and target representations. We have $S=X Y^{T}$, which is a $B \times B$ matrix of ranking scores, where each row corresponds to one source, and each column corresponds to one target in the batch. The pair $\left(X_{i}, Y_{j}\right)$ should be aligned when $i = j$, and otherwise not. The goal is to maximize the scores along the diagonal of the matrix and thereby reduce the values in the other entries. The loss function for the $i$-th source-target pair is as follows: \begin{equation} \mathcal{L}_{\mathrm{rank}}^{(i)}=-\log \frac{\exp \left(S_{i i}\right)}{\exp \left(S_{i i}\right)+\sum_{j \neq i} \exp \left(S_{i j}\right)} \end{equation} The second objective is mainly borrowed from machine translation and aims to predict the tokens in the target given the source representation, and vice versa. This objective introduces additional semantic alignment between source and target at the token level. For the $i$-th source-target pair, the bag-of-words loss is used for this token-level cross-alignment and is formulated as: \begin{equation} \mathcal{L}_{\text {token }}^{(i)}=-\sum_{w_{y} \in \mathcal{Y}_{i}} \log p\left(w_{y} \mid X_{i}\right)-\sum_{w_{x} \in \mathcal{X}_{i}} \log p\left(w_{x} \mid Y_{i}\right) \end{equation} where $\mathcal{X}_{i}$ and $\mathcal{Y}_{i}$ denote the sets of tokens in the $i$-th source and target, respectively. The probability $p$ is computed by a linear projection layer followed by a softmax layer. For the dual-encoder model, the overall loss is: \begin{equation} \mathcal{L} = \frac{1}{B} \sum_{i=1}^{B} \left( \mathcal{L}_{\mathrm{rank}}^{(i)}+\mathcal{L}_{\mathrm{token}}^{(i)} \right). \end{equation} Once the dual-encoder is trained, we can obtain the dense vectors for all the targets in the training data. Recall that the targets refer to all the product molecules in reaction outcome prediction, or to the reactants and reagents in the task of retrosynthesis analysis. We leverage Faiss~\citep{johnson2019billion}, an open-source toolkit\footnote{\url{https://github.com/facebookresearch/faiss}}, to perform Maximum Inner Product Search (MIPS) on large collections of dense vectors. It does so by building an index of the dense vectors that is optimized for MIPS. The Faiss index code in our work is ``IVF1024\_HNSW32,SQ8'', an inverted-file index whose coarse quantizer is built with the graph-based Hierarchical Navigable Small World (HNSW) algorithm~\citep{malkov2020efficient}, combined with 8-bit scalar quantization. In our approach, we pre-compute and index the dense vector representations of all targets in the training data with the target encoder. For an input query $x$, which is the reactants and reagents in reaction outcome prediction and the product in retrosynthesis analysis, we use the source encoder to obtain its dense vector representation $E_{\mathrm{src}}(x)$, and retrieve a ranked list of candidates by MIPS on the Faiss index.
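A minimal Faiss usage sketch of this retrieval step is shown below; the vector dimensionality and the random stand-in matrices are assumptions made for the example, not values from our system:

\begin{verbatim}
import numpy as np
import faiss

d = 256  # encoder output dimensionality (assumed for this example)
targets = np.random.rand(100000, d).astype("float32")

# IVF + HNSW coarse quantizer + 8-bit scalar quantizer,
# searched under the inner-product metric (MIPS)
index = faiss.index_factory(d, "IVF1024_HNSW32,SQ8",
                            faiss.METRIC_INNER_PRODUCT)
index.train(targets)   # IVF indexes are trained before adding vectors
index.add(targets)

query = np.random.rand(1, d).astype("float32")  # encoded source E_src(x)
scores, ids = index.search(query, 10)           # top-10 MIPS candidates
\end{verbatim}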
\subsection{Reaction-aware Substructure Extraction} \label{sec:substructure} Given the training objective of the dual-encoder model, the retrieved top candidates should be similar to the golden target. We further assume that these candidates share a common substructure with the golden target. Although this hypothesis is not always valid, we observe that the assumption is reasonable in most cases. These common substructures are reaction-aware because the retrieved candidates vary for different reactions. Our goal is to extract reaction-aware substructures given the query and the obtained top cross-aligned targets. The extraction is mainly based on molecular fingerprints, which are widely used in molecular substructure and similarity search. Molecular fingerprints are a way of encoding the structure of a molecule. The most common type of fingerprint is a series of binary bits that represent the presence or absence of particular substructures in the molecule. Comparing fingerprints can help determine the similarity between two molecules or locate the aligned atoms. Circular fingerprints are one of the methods capable of capturing the topological environment of each atom. They maintain the environment of a center atom, covering the neighboring atoms at different radii. The \textit{de facto} standard circular fingerprints are the Extended-Connectivity Fingerprints (ECFPs), based on the Morgan algorithm, which is specifically designed for structure-activity modeling. Circular fingerprints are obtained through an enumeration of sub-molecular neighborhoods. First, each atom is encoded by an integer identifier, which is a hashed encoding of its structural properties. The neighborhood information of the constituent atoms and bonds at different radii is then iteratively assigned to the atom's numerical identifiers. The radius of a circular fingerprint refers to the size of the largest neighborhood surrounding each atom that is considered during enumeration. The fingerprint consists of the combination of all unique identifiers and is subsequently folded into a binary vector of fixed length by converting the integer identifiers into indices of the vector. \begin{figure}[ht] \centering \includegraphics[scale=0.18]{Figures/exaction.pdf} \caption{Reaction-aware substructure extraction.} \label{fig:sub} \end{figure} We use the toolkit RDKit~\citep{greg_landrum_2022_6388425} to extract common substructures. The overall extraction scheme is illustrated in Figure~\ref{fig:sub}. In our approach, we calculate the circular fingerprints of the query and the top $10$ retrieved candidates with radii ranging from $2$ to $6$. For example, in Figure~\ref{fig:sub}, the fingerprint \texttt{2474739114} encodes the environment of the center atom (index \texttt{6}) and its neighbors within radius \texttt{4} in the query. For each candidate, we build the atom alignments with the query using the shared fingerprints, as highlighted in green and blue in the fingerprint table in Figure~\ref{fig:sub}. We select atoms to build the substructure if they are aligned in more than half of the retrieved candidates. We further remove atoms in the substructure that are not single-bond connected to non-substructure atoms. Otherwise, separating molecules into fragments might be counterintuitive from a chemical perspective, e.g., it may destroy the aromaticity of the original molecule. Many reactions involve more than single bonds, such as double bonds or triple bonds. We will explain why such atoms are excluded from the substructure, and how we handle these reactions, in Section~\ref{sec:transformer}. For simplicity, we also remove atoms that are connected to multiple non-substructure atoms.
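The fingerprint-based alignment can be sketched with RDKit as follows; the helper names are hypothetical, and the voting over the ten retrieved candidates is only indicated in the comments:

\begin{verbatim}
from rdkit import Chem
from rdkit.Chem import AllChem

def atom_environments(smiles, radii=range(2, 7)):
    # map each circular-fingerprint identifier to the center
    # atoms whose environment (radius 2 to 6) it encodes
    mol = Chem.MolFromSmiles(smiles)
    info = {}
    AllChem.GetMorganFingerprint(mol, max(radii), bitInfo=info)
    env = {}
    for fp_id, centers in info.items():
        for atom_idx, radius in centers:
            if radius in radii:
                env.setdefault(fp_id, set()).add(atom_idx)
    return env

def aligned_atoms(query_smiles, cand_smiles):
    # query atoms whose identifiers also occur in one candidate;
    # in our approach an atom joins the substructure when it is
    # aligned in more than half of the 10 retrieved candidates
    q = atom_environments(query_smiles)
    c = atom_environments(cand_smiles)
    return {a for fp, atoms in q.items() if fp in c for a in atoms}
\end{verbatim}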
Note that for a specific reaction, the atoms in the extracted substructure might not be fully connected. They could be different parts of one molecule or parts of different molecules. We will show an example (Figure~\ref{fig:case_retro}) in Section~\ref{sec:experiment}. Now we separate the query into substructures and other fragments. The assumption is that these substructures remain unchanged during the reaction. Note that we might have multiple fragments connected to atoms of the substructure. We introduce isotopic numbers as labels to differentiate these bonds. As shown in Figure~\ref{fig:sub}, we add the isotopic label to the bond between the atom \texttt{C} in the common substructure and the atom \texttt{N} in the fragment, resulting in SMILES snippets with the isotopic labels \texttt{[1CH3]} and \texttt{[1NH3]}, respectively. Note that we introduce additional hydrogen atoms in the substructure and the other fragments after breaking the bonds, making them look like charge-neutral molecules rather than radicals. We can easily remove these hydrogen atoms when restoring the original molecule. Atoms with the same isotopic number are connected in the original molecule; for example, \texttt{[1CH3]} is connected to \texttt{[1NH3]} and \texttt{[2CH3]} is connected to \texttt{[2NH3]}. With isotopically labeled bonds, we can easily isolate the substructure from the other molecule fragments or restore the original molecule from the substructure and the other fragments. The broken bonds between the substructure and the other fragments will not necessarily all be sites of reactivity. Only some of the broken bonds might become reactivity sites (Figure~\ref{fig:case_retro} and Figure~\ref{fig:case_pred} in Section~\ref{sec:experiment}), and in extreme cases none of them will be sites of reactivity (Figure~\ref{fig:single} in Section~\ref{sec:transformer}). We will show several cases in the following sections.
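The labeling-and-splitting step can be sketched with RDKit as follows; the function is a hypothetical illustration that assumes all broken bonds are single bonds and that they are given as atom-index pairs:

\begin{verbatim}
from rdkit import Chem

def label_and_split(smiles, bonds_to_break):
    # tag both atoms of each broken (single) bond with the same
    # isotopic label, then split; sanitization refills the freed
    # valence with hydrogens, keeping the pieces charge-neutral
    mol = Chem.RWMol(Chem.MolFromSmiles(smiles))
    for label, (i, j) in enumerate(bonds_to_break, start=1):
        mol.RemoveBond(i, j)
        mol.GetAtomWithIdx(i).SetIsotope(label)
        mol.GetAtomWithIdx(j).SetIsotope(label)
    Chem.SanitizeMol(mol)
    # dot-separated SMILES: substructure and the other fragments
    return Chem.MolToSmiles(mol).split(".")
\end{verbatim}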
\subsection{Substructure-level Sequence-to-sequence Learning}\label{sec:transformer} We can also isolate the substructure on the target side of the training data. The source and target molecules are converted into substructures and other molecular fragments for both tasks. We use SMILES strings to represent these substructures and fragments and cast both tasks as substructure-level sequence-to-sequence learning problems. Among sequence-to-sequence learning-based approaches, Molecular Transformer~\citep{schwaller2019molecular,tetko2020state} has achieved state-of-the-art performance on reaction outcome prediction and retrosynthesis analysis~\citep{duan2020retrosynthesis}. It uses textual SMILES representations of reactants, reagents, and products, and treats reaction prediction or retrosynthesis as a machine translation task. The output SMILES is decoded by a Transformer decoder token by token (atom by atom). Recall that in the dual-encoder used to retrieve reactions, we only employ the Transformer encoder. The Transformer decoder has an additional cross-attention module: it attends to the hidden representation from the encoder and to the output of its self-attention module. As a result, it combines the information of the source sequence and of the target sequence that has been generated so far. \begin{figure}[ht] \centering \includegraphics[scale=0.18]{Figures/s2s.pdf} \caption{Substructure-level sequence-to-sequence learning.} \label{fig:s2s} \end{figure} In our approach, the input sequence is the SMILES of the substructure and the fragments, separated by the token ``$|$'', as shown in Figure~\ref{fig:model} and Figure~\ref{fig:s2s}. Since we assume that the substructure is stable and remains unchanged during the reaction, on the target side we only need to predict the fragments and their binding atoms in the substructure. For the reaction outcome prediction example in Figure~\ref{fig:s2s}, the original product is converted to \texttt{[1CH3]|CNC([1NH2])=O}, which means that the atom \texttt{1N} in the fragment \texttt{CNC([1NH2])=O} shall be connected to the \texttt{1C} atom of the substructure (represented by the token \texttt{[1CH3]}). We show more examples in Table~\ref{tab:sample}. \input{Tables/Table_sample} In case we fail to extract any substructure for a given query and its retrieved targets, because no atom reaches the required number of fingerprint alignments (the minimum is $5$ out of the $10$ retrieved targets), the input and output representations fall back to the original form used in Molecular Transformer. Based on our formulation, both tasks are simplified, and the average length of the target sequences is significantly reduced, which also helps to reduce the model complexity. We further restore the target molecules based on the substructure, the binding atoms in the substructure, and the other molecular fragments; a sketch of this assembly is given at the end of this subsection. \begin{figure}[ht] \centering \includegraphics[scale=0.135]{Figures/single_bond.pdf} \caption{The removal operation on the atoms with non-single bonds connecting to other fragments.} \label{fig:single} \end{figure} The reason we can perform this bottom-up modular assembly is that the broken bonds are single bonds; otherwise, we would need to predict the bond types when attaching the predicted fragments to the extracted substructures, which would complicate the problem. Our approach is nevertheless capable of handling reactions that involve non-single bonds, as shown in Figure~\ref{fig:single}. The initial substructures are fully connected and contain one atom with a non-single bond connecting to one other fragment. After removing this atom, we obtain the final substructures and convert the original input to the substructure-level input of our model. The chemical changes among reactants and products are expected to be captured and predicted by the substructure-level sequence-to-sequence learning model. Note that for the case shown in Figure~\ref{fig:single}, the site of reactivity is in the predicted fragment.
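The assembly can be sketched with RDKit as follows; the helper is hypothetical and assumes that exactly two atoms carry each isotopic label and that all reconnected bonds are single bonds:

\begin{verbatim}
from rdkit import Chem

def reassemble(parts_smiles):
    # merge all predicted pieces into one molecule object
    mol = Chem.MolFromSmiles(parts_smiles[0])
    for s in parts_smiles[1:]:
        mol = Chem.CombineMols(mol, Chem.MolFromSmiles(s))
    rw = Chem.RWMol(mol)
    # group atom indices by their isotopic label
    pairs = {}
    for atom in rw.GetAtoms():
        if atom.GetIsotope() > 0:
            pairs.setdefault(atom.GetIsotope(), []).append(atom.GetIdx())
    # reconnect each labeled pair with a single bond, then drop
    # the extra hydrogen and the virtual isotopic label
    for i, j in pairs.values():
        rw.AddBond(i, j, Chem.BondType.SINGLE)
        for idx in (i, j):
            a = rw.GetAtomWithIdx(idx)
            a.SetIsotope(0)
            a.SetNumExplicitHs(max(0, a.GetNumExplicitHs() - 1))
    Chem.SanitizeMol(rw)
    return Chem.MolToSmiles(rw)
\end{verbatim}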
\section{Conclusion} In this paper, we introduce reaction-aware substructures to capture the subtle reactivity differences among reactants and products. The substructures are stable and remain unchanged during chemical reactions. We achieve significant improvements over existing methods on both tasks. For future work, we will introduce the substructures to graph neural network-based approaches for reaction and retrosynthesis prediction. We might also investigate how to apply the substructures to other tasks such as molecular property prediction or molecular design. \section{Experiments}\label{sec:experiment} \subsection{Data} \label{sec:exp:data} \input{Tables/Table_data} We use the publicly available reaction datasets USPTO~\citep{lowe2012extraction}, which use SMILES strings to describe the chemical reactions. The details of the datasets are shown in Table~\ref{tab:data}. For reaction outcome prediction, the USPTO\_MIT and USPTO\_STEREO datasets are each preprocessed with two methods, denoted by \textit{mixed} and \textit{separated}, respectively. Thus, we have USPTO\_MIT\_separated, USPTO\_MIT\_mixed, USPTO\_STEREO\_separated, and USPTO\_STEREO\_mixed. \textit{Separated} means that the reactants are separated from the reagents (e.g., solvents and catalysts) by a ``\textbf{$>$}'' token. Reagents participate in the reaction but do not contribute any atoms to the product. For \textit{mixed}, the reactants and the reagents are mixed. This setting is more challenging because the model needs to predict the reaction center from more molecules than in the \textit{separated} setting. Most previous work~\citep{schwaller2019molecular,tetko2020state,sacha2021molecule,tu2021permutation,irwin2022chemformer} focuses on the USPTO\_MIT dataset filtered by~\citet{jin2017predicting}, while the USPTO\_STEREO dataset retains the stereochemical information. It should be noted that stereochemical information is of critical importance. For example, the shape of a drug molecule is an important factor in determining how it interacts with the various biological molecules (enzymes, receptors, etc.) encountered in the body~\citep{book_stereochemistry}. For one-step retrosynthesis, we test our approach on the USPTO\_full dataset~\citep{dai2019retrosynthesis}, which consists of $950$K cleaned reactions from the USPTO covering $1976$--$2016$. We perform evaluations using the top-$k$ exact match accuracy, i.e., given a source, whether one of the $k$ generated targets exactly matches the ground truth. Following~\citet{schwaller2018found}, we canonicalize the molecules with the toolkit RDKit~\citep{greg_landrum_2022_6388425} and tokenize all the inputs using the regular expression ``$ (\backslash[[\wedge\backslash]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p| \backslash(|\backslash)|\backslash.|=|\#|-| \backslash+|\backslash\backslash\backslash\backslash|\backslash/|:|\sim|@|\backslash?|>|\backslash*|\backslash \$|\backslash\%[0-9]{2}|[0-9]) $''. Note that the isotopically labeled atoms are within the square brackets and will be tokenized as single tokens.
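In Python, this tokenizer can be written as follows; the snippet is a runnable rendering of the regular expression above, and the example shows how the bracketed, isotopically labeled atoms stay intact:

\begin{verbatim}
import re

SMILES_REGEX = re.compile(
    r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\."
    r"|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])")

def tokenize(smiles):
    return SMILES_REGEX.findall(smiles)

print(tokenize("[1CH3].CNC([1NH2])=O"))
# ['[1CH3]', '.', 'C', 'N', 'C', '(', '[1NH2]', ')', '=', 'O']
\end{verbatim}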
\subsection{Parameter Settings} \input{Tables/Table_para} Table~\ref{tab:para} shows the Transformer parameter settings in our approach. For the dual-encoder reaction retrieval model, we follow the parameter settings described in~\citet{cai-etal-2021-neural}. The batch size is $4096$, and the label smoothing is $0.1$. For substructure-level sequence-to-sequence learning, we follow the parameter settings of the Molecular Transformer~\citep{schwaller2019molecular} and implement the Transformer model with OpenNMT~\citep{opennmt}. Similar to Molecular Transformer, we handle at most $4096$ tokens per batch. We used the ADAM optimizer~\citep{DBLP:journals/corr/KingmaB14} (${\beta}_1 = 0.9$, ${\beta}_2 = 0.998$) and the same Noam learning rate scheduler as described in~\citep{vaswani2017attention}. We do not tune these hyper-parameters, though \citet{Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021} suggest that the performance could be significantly boosted by increasing the dropout from $0.1$ to $0.3$. \subsection{Reaction-Aware Substructures} Here, we analyze the substructures of a specific reaction type: heteroatom alkylation and arylation. Our extracted substructures are reaction-aware, which indicates that there might be common substructures across reactions of the same type. We classify the reactions of the USPTO\_MIT dataset based on the reaction fingerprints and reagent features~\citep{schneider2015development,schneider2016big}. There are ten reaction types, ranging from heteroatom alkylation and arylation to carbon-carbon bond formation. We choose the reaction type heteroatom alkylation and arylation mainly because most reactions of this type have extracted substructures. \input{Tables/Table_hetero} For heteroatom alkylation and arylation reactions, we find that the percentages of substructures with aromaticity among the top-$10$ and top-$20$ most frequent substructures are $80\%$ and $70\%$, respectively. This demonstrates that the extracted substructures are stable and remain unchanged during this type of reaction. We further filter the benzene homologs and the halogenated derivatives of benzene and its homologs from the substructures and show the top-$5$ substructures in Table~\ref{tab:hetero}. They are all heterocyclic compounds, in which the ring contains at least one heteroatom in addition to carbon atoms. The ring systems of aromatic heterocycles have a certain degree of stability and aromaticity, and the rings are not easily broken. As organic heteromonocyclic parents, pyridine, morpholine, quinoline, and piperazine have high frequencies. In addition, phthalimide is also a common substructure. The direct N-alkylation of phthalimides with alcohols under Mitsunobu conditions and the reaction of potassium phthalimide with alkyl halides (Gabriel synthesis) are popular routes to phthalimide-protected amines. As a protecting group, the substructure of phthalimide can be extracted by our method. These results show that the extracted substructures are chemically interpretable. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.18]{Figures/rxt_aware.pdf} \caption{Reaction-aware substructures; the reactants all contain the phthalimide structure.} \label{fig:rxt_aware} \end{center} \end{figure} It is important to note that the extracted substructure is reaction-aware, capturing the reaction-specific subtle chemical changes among reactants and products. We show four example reactions in Figure~\ref{fig:rxt_aware} whose reactants all contain the phthalimide structure. The reaction types are derived following~\citep{schneider2015development,schneider2016big}. The extracted substructures vary among the different reaction types. For reaction (a), the phthalimide, as the protecting group, is removed from the reactant to generate the product. For reaction (b), the phthalimide is involved in the reduction process. For reaction (c), a functional group interconversion, and reaction (d), a C--C bond formation, phthalimide does not contribute atoms to the reaction center. In our approach, phthalimide is not considered part of the substructure for reaction (a) and reaction (b). The substructures of reaction (c) and reaction (d) are also different, although they both contain phthalimide. All these reaction-specific substructures are extracted as expected. \subsection{Results on One-step Retrosynthesis} \label{sec:result:retro} \input{Tables/Table_retro} We report the results of one-step retrosynthesis on the USPTO\_full dataset in Table~\ref{tab:retro}. RetroSim~\citep{coley2017computer} treats retrosynthesis as template ranking based on molecular similarity, while MEGAN~\citep{sacha2021molecule} treats it as a sequence of molecular graph edits.
GLN~\citep{dai2019retrosynthesis} employs a conditional graph logic network to learn chemical templates for retrosynthesis analysis, while RetroPrime~\citep{wang2021retroprime} decomposes the given product molecule into synthons and then generates reactants by attaching the leaving groups. Aug. Transformer~\citep{tetko2020state} incorporates data augmentation strategies with the Transformer model. DMP~\citep{zhu2021dual} introduces dual-view pretrained representations into the Transformer or GLN models, where the dual view considers both molecular graph and SMILES sequence representations. Graph2SMILES~\citep{tu2021permutation} combines the Transformer decoder with permutation-invariant molecular graph encoders. GTA~\citep{Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021} proposes a molecular graph-aware attention mask for both self- and cross-attention in the Transformer. Our method achieves the best top-$1$ and top-$10$ accuracy among all the baselines. Note that our approach does not require any reaction templates built upon expert systems or template libraries, nor the atom mappings from reactants to the product provided by the dataset. The atom-mapping information might, to some degree, reveal information about the reactivity sites~\citep{wang2021retroprime}. We do not use data augmentation as employed in the GTA model~\citep{Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021}. Our model outperforms the Aug. Transformer~\citep{tetko2020state} and the GTA model~\citep{Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021} with $7.2\%$ and $5\%$ absolute improvements in top-$1$ accuracy, respectively. The improvement of our approach can be attributed to two main factors: 1) our approach extracts substructures for $70.1\%$ of the reactions in the USPTO\_full test data, a relatively high coverage; and 2) we only need to generate fragments for the isotopically labeled binding atoms in the substructures, which simplifies the problem. For product molecules with substructures, the average length of the prediction is reduced from $48.1$ to $25.6$. This design gives our model advantages over the previous token-by-token decoding models, e.g., Aug. Transformer~\citep{tetko2020state}, Graph2SMILES~\citep{tu2021permutation} and GTA~\citep{Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021}. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.18]{Figures/case_retro.pdf} \caption{Case study of retrosynthesis prediction compared with Aug. Transformer~\citep{tetko2020state}.} \label{fig:case_retro} \end{center} \end{figure} Here, we show one retrosynthesis case in Figure~\ref{fig:case_retro}, for which our model makes the correct top-$1$ prediction while Aug. Transformer~\citep{tetko2020state} does not. The textual sequence of the given product molecule on the source side is converted to the substructure and other fragments. Note that the extracted substructure is not a fully connected graph; its parts come from different regions of one molecule. Our model correctly generates the fragments connected to the isotopically labeled atoms. As discussed in Section~\ref{sec:substructure}, not all broken bonds will be sites of reactivity; in Figure~\ref{fig:case_retro}, the bond whose atoms are labeled with the isotopic number $3$ is of this type. The predicted \textbf{Fragment 3}$^*$ on the target side is the same as \textbf{Fragment 2} from the source. This indicates that our approach is, to some degree, robust to different substructures and can restore the golden targets even if the substructure is not ``perfect''.
A ``perfect'' substructure is one for which all of the broken bonds are sites of reactivity. \subsection{Results on Reaction Outcome Prediction} \label{sec:result:reaction} \input{Tables/Table_pred/pred_ratio} \input{Tables/Table_pred/pred_ratio_acc} For reaction prediction, we observe similar substructure coverage as in the retrosynthesis analysis. Table~\ref{tab:pred_ratio} shows the detailed results on the USPTO\_MIT and USPTO\_STEREO datasets. The average prediction length of the reactions with substructures is reduced to $21.6$ and $17.7$, respectively. We first compare our approach with Molecular Transformer, as the architecture of our substructure-level sequence-to-sequence model is exactly the same as that of Molecular Transformer. We report results for various settings (MIT or STEREO, mixed or separated, with or without data augmentation) and compare the top-$1$ accuracy of predictions (with/without reaction-aware substructures) with Molecular Transformer~\citep{schwaller2019molecular} in Table~\ref{tab:pred_ratio_acc}. Our models achieve the best overall top-$1$ accuracy among all the settings. Recall that for reactions without substructures, the input and output fall back to the original form used in Molecular Transformer, and the performance of our approach is degraded because the training data for these reactions in our approach is only around $27\%$ of that available to Molecular Transformer. We therefore combine our approach with Molecular Transformer: for the subsets without reaction-aware substructures, we take the predictions from Molecular Transformer~\citep{schwaller2019molecular}. As shown in the last column of Table~\ref{tab:pred_ratio_acc}, the overall accuracy is further improved significantly. \input{Tables/Table_pred/pred_sota_mit} \input{Tables/Table_pred/pred_sota_stereo} We now compare our model with the best results reported by the state-of-the-art models in Table~\ref{tab:pred_sota_mit} and Table~\ref{tab:pred_sota_stereo}. Our model achieves comparable results on the USPTO\_MIT dataset, and the best performance on the USPTO\_STEREO dataset. Note that Chemformer~\citep{irwin2022chemformer} pre-trains the molecular representation on a large collection of data, and its parameter count is much larger than ours. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.18]{Figures/case_pred.pdf} \caption{Case study of reaction prediction compared with Molecular Transformer~\citep{schwaller2019molecular}.} \label{fig:case_pred} \end{center} \end{figure} We also show a case from the USPTO\_STEREO (separated) dataset in Figure~\ref{fig:case_pred}. Both our approach and Molecular Transformer predict the correct sites of reaction. However, Molecular Transformer gives wrong predictions for the newly generated fragments. Moreover, Molecular Transformer predicts the incorrect chirality of the chiral center. Our model avoids such prediction errors thanks to the reaction-aware substructures: the chiral atoms can be contained within the substructures, and our model does not need to predict the chirality of these atoms. \section{Introduction}\label{sec:intro} Organic synthesis is an essential branch of synthetic chemistry that mainly involves the construction of organic molecules through various organic reactions.
Reaction outcome prediction~\citep{corey1969computer}, which aims to predict feasible products given reactants and reagents, and retrosynthesis analysis~\citep{corey1988robert}, which aims to propose possible reaction precursors given the desired product, are two substantial tasks in computer-aided organic synthesis. Accurate predictions could help find optimized reaction pathways among numerous possible reactions. In this paper, reagents are defined as chemical species that do not appear in the product (e.g., solvents and catalysts), differing from reactants (precursors to the final product). Recently, machine learning-based approaches have achieved promising results on both tasks. Many of these methods employ encoder-decoder frameworks, where the encoder encodes the molecular sequence or graph as high-dimensional vectors~\citep{schwaller2019molecular,tetko2020state,duan2020retrosynthesis,wang2021retroprime,Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021,tu2021permutation,irwin2022chemformer}, and the decoder attends to the output from the encoder and predicts the output sequence token by token in an autoregressive manner. Note that the sequences of the molecules are usually represented as SMILES (Simplified Molecular-Input Line-Entry System) strings~\citep{weininger1988smiles,weininger1989smiles}, and the graph refers to the molecular graph structure. For example, Molecular Transformer~\citep{schwaller2019molecular} uses textual SMILES representations of reactants, reagents, and products, treating reaction prediction as a machine translation task from one language (reactants-reagents) to another (product). In contrast, retrosynthesis is the reverse task\footnote{Note that reagent prediction is not included in retrosynthesis analysis; for convenience, we still consider it the reverse task and will not highlight this in the following description.}. Casting the tasks as machine translation tasks enables the use of deep neural architectures that are well developed in natural language processing. For example, the self-attention-based Transformer architecture~\citep{vaswani2017attention} is employed in recent state-of-the-art models~\citep{schwaller2019molecular,tetko2020state,duan2020retrosynthesis,Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021,wang2021retroprime,irwin2022chemformer}. In the decoding stage, the output SMILES is autoregressively generated token by token (atom by atom). It should be noted that this is not considered intuitive or explainable by chemists in synthesis design or retrosynthesis analysis. In real-world route-scouting tasks, synthetic chemists generally rely on their professional experience to obtain inspiration from existing reaction pathways. Reaction prediction or retrosynthesis analysis often starts from molecular substructures or fragments that are chemically similar to the target molecules. These substructures or fragments provide clues to an assembly puzzle involving a series of chemical reactions toward the final product. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.19]{Figures/overview.pdf} \end{center} \caption{Method overview on reaction outcome prediction. The reactants and reagents are separated by ``{\color{pink} $\pmb >$}''.
We could switch the reactants with the product to get the pipeline for retrosynthesis analysis.} \label{fig:model} \end{figure} Therefore, we propose to leverage reaction-aware substructures in organic synthesis, where the substructures capture the subtle chemical changes among reactants and products while remaining free from expert systems or template libraries. We cast retrosynthesis and reaction prediction as sequence-to-sequence learning tasks with the reaction-aware substructures. The pipeline of the overall framework is illustrated in Figure~\ref{fig:model}\footnote{This figure and the following figures in this paper mainly illustrate the \textit{separated} setting of the reaction outcome prediction task. For the \textit{mixed} setting, we do not differentiate reactants from reagents. See Section~\ref{sec:exp:data} for details.}, which consists of the following modules: \begin{enumerate}[(a)] \item \textbf{Reaction Retrieval} \\ The reaction retrieval module aims to retrieve similar reactions given the product (retrosynthesis prediction) or the reactants and reagents (reaction outcome prediction). We introduce a learnable cross-lingual memory retriever~\citep{cai-etal-2021-neural}, used in machine translation tasks, to align the reactants and the corresponding products in high-dimensional space. The retrieval model is based on the dual-encoder framework~\citep{bromley1993signature}, such that for each reaction the learned representation of the reactants and reagents is similar to that of the product. After the dual-encoder retrieval model is trained, we can obtain the dense vector representations of all the reactants (plus reagents) and products, as shown in Figure~\ref{fig:model}(a). In the task of reaction outcome prediction, the reactants and reagents form the query to retrieve molecules that are similar in the high-dimensional space, while in the task of retrosynthesis analysis, the product is the query to retrieve similar reactants. For a fair comparison with other methods, the retrieved candidates only come from the training data. \item \textbf{Reaction-aware Substructure Extraction} \\ Given the training objective of the dual-encoder retrieval model, the retrieved molecules should be similar to the golden target (the reactants in retrosynthesis analysis, the product molecule in reaction outcome prediction). Therefore, we can extract the common substructures from the query molecules and the top cross-aligned candidates based on molecular fingerprints, and assume these common structures exist in the golden targets. Note that these substructures are relatively stable and remain unchanged during the reaction. More details are provided in Section~\ref{sec:substructure}. The common substructure provides a reaction-level, fragment-to-fragment mapping from reactants to products. It should also be noted that these substructures are reaction-aware and could be considered as reaction templates learned from the dual-encoder retrieval model. We then separate the molecules into common substructures and other molecular fragments. Molecular fragments in this paper refer to atoms and bonds that are not in the common substructure. Our method also involves tracking potential reaction sites. When multiple bonds are broken to isolate the substructures, we introduce ``isotopic numbers'' to virtually tag the atoms of the broken bonds, as shown in Figure~\ref{fig:model}(b).
This method is analogous to using an isotope to label the atom of interest in a Nuclear Magnetic Resonance (NMR) study. It is important to note that our isotopic number labels do not actually denote changes in the number of neutrons in an atom's nucleus; the isotopic number is not related to an atom's chemical identity, but is solely used to track potential reaction sites of interest. \item \textbf{Substructure-level Sequence-to-sequence Learning} \\ With the reaction-aware substructure and molecular fragments, we convert the original token- (atom-) level sequence to a substructure-level sequence. The new input sequence will be the SMILES of the substructure followed by the SMILES of the other fragments with isotopic numbers. The output sequence will be isotopically labeled fragments, which consist of the isotopically labeled atoms in the common substructure and the SMILES of the fragments. In other words, the fragments are connected to common structures with bonds specified by isotopic labels. Subsequently, both reaction outcome prediction and retrosynthesis analysis are cast as substructure-level sequence-to-sequence learning tasks. Given the model-predicted isotopically labeled fragments, we perform bottom-up modular assembly of the molecular architectures to obtain the final molecular graph and its SMILES string. An example is shown in Figure~\ref{fig:model}(c): in the model output sequence, \texttt{1C} in \texttt{[1CH3]} is an isotopically labeled atom from the substructure, and it shall be attached to the atom \texttt{1N} in the \texttt{[1NH3]} fragment because they are labeled by the same isotopic number of \texttt{1}. \end{enumerate} The substructures are intrinsically related to how human researchers interpret the nature of chemical reactions, and our approach achieves significant improvements on both tasks. The top-1 accuracy of our models for retrosynthesis on the USPTO\_full dataset is $51.6\%$, and $82.7\%$ for reaction outcome prediction on the USPTO\_STEREO dataset (\textit{separated}), achieving a respective $5\%$~/~$4.1\%$ absolute improvement over state-of-the-art models. \section{Related Work}\label{sec:related} The early approaches in computer-aided synthesis prediction and retrosynthesis analysis use chemical reaction rules based on subgraph-pattern (reaction-template) matching, as in expert systems such as LHASA~\citep{corey1972computer} and SYNTHIA~\citep{szymkuc2016computer}. Template-based approaches use reaction templates or rule libraries, which contain reaction information about the atoms and chemical bonds near the sites of reactivity. Template-based methods consider all possible sites of reactivity in the molecule and enumerate possible chemical bond changes. These methods rely heavily on the templates, and considerable human effort is required to ensure that the template library covers most organic reactions. \citet{segler2017neural,coley2017prediction,baylon2019enhancing,chen2021deep} formulate reaction or retrosynthesis prediction as template classification or ranking based on molecular similarity~\citep{coley2017computer} with deep neural networks. They select the top-ranked templates, which can then be applied to transform the input molecules into outputs. The templates utilized in these methods still depend on precomputed atomic mappings (how atoms in reactants map to the corresponding atoms in products).
Obtaining a complete and reliable atomic mapping is itself a complex problem. To address these limitations, several template-free approaches have been developed recently, which can be categorized into graph edit-based and translation-based approaches. The graph edit-based approaches cast reaction or retrosynthesis prediction as graph transformations~\citep{jin2017predicting,coley2019graph,do2019graph,bradshaw2019generative,PPR:PPR109708,sacha2021molecule}. Modeling or predicting electron flow in reactions~\citep{bi2021non} can also be considered a variant of the graph-based methods. In addition, some semi-template-based methods improve prediction performance by identifying the sites of reactivity and then recovering graphs or sequences~\citep{shi2020graph,yan2020retroxpert,NEURIPS2021_4e2a6330,wang2021retroprime}. Translation-based approaches formalize the problems as SMILES-to-SMILES translation, typically with sequence models such as Recurrent Neural Networks~\citep{nam2016linking,schwaller2018found} or the Transformer~\citep{schwaller2019molecular,yang2019molecular,lin2020automatic,duan2020retrosynthesis,tetko2020state}. Variants of these approaches have been introduced, such as reranking and pre-training~\citep{zheng2019predicting,zhu2021dual,irwin2022chemformer}. Some models that fuse molecular graph information with translation-based approaches also achieve promising results~\citep{zhu2021dual,tu2021permutation,Seo_Song_Yang_Bae_Lee_Shin_Hwang_Yang_2021}. It is well accepted that substructures or functional groups are essential in chemical reactions. \citet{wang2022chemicalreactionaware} propose new chemical-reaction-aware molecule embeddings which preserve the equivalence of reactant and product molecules in the embedding space by forcing the sum of reactant embeddings and the sum of product embeddings to be similar. \citet{zhang2021motif} propose motif-based graph self-supervised learning, where graph motifs refer to important subgraph patterns in molecules. The exploration of chemical substructures or subgraphs also provides efficient solutions to build large-scale chemical libraries~\citep{doi:10.1021/ci200413e} for drug discovery~\citep{merlot2003chemical}. In our work, we explicitly introduce reaction-aware stable substructures in reaction and retrosynthesis prediction. The substructures are automatically mined with a fully data-driven approach. \section{Results} \subsection{Reaction-Aware Substructures} \input{Tables/Table_hetero} As described in Section~\ref{sec:substructure}, our reaction-aware substructures are extracted from the query and the retrieved top cross-aligned targets. The retrieved candidates vary for different reactions, but we assume that the candidates share a common substructure with the golden target. As a result, our substructures are strongly associated with reaction types. In other words, for a specific class of reaction types, we can extract certain unique substructures that occur relatively frequently in reactants and products and maintain structural invariance across the reaction. The USPTO-50k dataset is a subset of the USPTO\_full dataset, and its reactions are classified into ten reaction types. The ten reaction classes, ranging from heteroatom alkylation and arylation to carbon-carbon bond formation, represent the most common reactions in organic synthesis. To demonstrate the robustness of our substructures, we perform the evaluation and analysis on the much larger USPTO\_MIT dataset.
The reactions in the dataset are classified and labeled by methods based on reaction fingerprints and reagent features~\citep{schneider2015development,schneider2016big}. In heterocyclic compounds, the ring contains at least one heteroatom in addition to carbon atoms. Generally speaking, the ring systems of aromatic heterocycles have a certain degree of stability and aromaticity, and the rings are not easily broken in common chemical reactions. For heteroatom alkylation and arylation reactions, we count the frequency of different substructures and report the top-5 heterocyclic substructures in Table~\ref{tab:hetero}. As organic heteromonocyclic parents, pyridine, morpholine, quinoline, and piperazine appear with high frequency. In addition, phthalimide is also an extracted substructure with a high frequency. The direct N-alkylation of phthalimides with alcohols under Mitsunobu conditions and the reaction of potassium phthalimide with alkyl halides (Gabriel synthesis) are popular alternative approaches to Phth-protected amines. The fact that the phthalimide protecting-group substructure can be extracted by our method demonstrates that our approach efficiently achieves reaction awareness and chemical interpretability. \subsection{Results on One-Step Retrosynthesis} \input{Tables/Table_retro} We report the results of one-step retrosynthesis on the USPTO\_full dataset in Table~\ref{tab:retro} and compare them with all existing methods that report results on this dataset. Without using any templates, our method achieves higher top-$1$ accuracy than all of these methods. Our approach also brings the side benefit that we require neither atom mapping from reactants to product nor separation of reactants from reagents. Furthermore, we introduce neither the atom-mapping information that can inject knowledge about the reaction center~\citep{wang2021retroprime}, nor the data augmentation used in graph-truncated cross-attention~\citep{seo2021gta}. Our model outperforms the baseline Transformer model with $5.8$- and $2.2$-point increases in top-$1$ and top-$10$ accuracies, respectively. These promising results also surpass the state-of-the-art GTA model~\citep{seo2021gta} by a large margin of $3.6$ and $5.1$ points in top-$1$ and top-$10$ accuracies. As in the reaction outcome prediction task, our reaction-aware substructures cover around $70\%$ of the reaction pairs in the USPTO\_full dataset, and the ensemble strategy can further improve the final performance of the model.
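As a concrete illustration of the substructure extraction step described in Section~\ref{sec:substructure}, the following minimal Python sketch uses RDKit's maximum-common-substructure search as a simple stand-in for our fingerprint-based matching; the SMILES strings and variable names are illustrative examples, not taken from our pipeline.

\begin{verbatim}
# Hypothetical example: extract the common substructure shared by a query
# molecule and a retrieved candidate, then list the atoms outside it
# (in the full method these residual fragments receive isotopic labels).
from rdkit import Chem
from rdkit.Chem import rdFMCS

query = Chem.MolFromSmiles("c1ccc(CN2CCOCC2)cc1")      # N-benzylmorpholine
candidate = Chem.MolFromSmiles("c1ccc(CN2CCOCC2)nc1")  # pyridine analogue

# FindMCS returns a SMARTS pattern for the maximum common substructure.
mcs = rdFMCS.FindMCS([query, candidate], timeout=10)
pattern = Chem.MolFromSmarts(mcs.smartsString)

# Atoms of the query outside the matched substructure form the fragments.
matched = set(query.GetSubstructMatch(pattern))
fragments = [a.GetIdx() for a in query.GetAtoms()
             if a.GetIdx() not in matched]
print(mcs.smartsString, fragments)
\end{verbatim}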
\section{Introduction} \label{sec:intro} Graph neural networks (GNNs), and the field of geometric deep learning, have seen rapid development in recent years \cite{hamilton2020, bronstein2021G5} and have attained popularity in various fields involving graph and network structures. Prominent examples of GNN applications include molecular property prediction, physical systems simulation, combinatorial optimization, or interaction detection in images and text. Many of the current GNN designs are based on the principle of neural message-passing \cite{gilmer2017MP}, where information is iteratively passed between neighboring nodes along existing edges. However, this paradigm is known to suffer from several deficiencies, including theoretical limits of their representational capacity~\cite{xu2018gin} and observed limitations of their information propagation over graphs~\cite{alon2020bottleneck,li2018insights,min2020scattering}. Two of the most prominent deficiencies of GNNs are known as \emph{oversquashing} and \emph{oversmoothing}. Information \emph{oversquashing} refers to the exponential growth in the amount of information that has to be encoded by the network with each message-passing iteration, which rapidly grows beyond the capacity of a fixed hidden-layer representation~\cite{alon2020bottleneck}. Signal \emph{oversmoothing} refers to the tendency of node representations to converge to local averages~\cite{li2018insights}, which can also be observed in graph convolutional networks implementing low-pass filtering over the graph~\cite{min2020scattering}. A significant repercussion of these phenomena is that they limit the ability of most GNN architectures to represent long-range interactions (LRIs) in graphs. Namely, they struggle to capture dependencies between distant nodes, even when these have a potentially significant impact on output prediction or appropriate internal feature extraction towards it. Capturing LRIs typically requires the number of GNN layers (i.e., individual message-passing steps) to be proportional to the diameter of the graph, which in turn exacerbates the oversquashing of massive amounts of information and the oversmoothing that tends towards averaging over wide regions of the graph, if not the entire graph. In this paper, we study the utilization of multiscale hierarchical meta-structures to enhance message passing in GNNs and facilitate capturing of LRIs. By leveraging hierarchical message passing between nodes, our Hierarchical Graph Net (HGNet) architecture can propagate information within $O(\log |V(G)|)$ steps instead of $O(\mathrm{diam}(G))$, leading to particular improvements for sparse graphs with large diameters. We note that a few works have recently proposed related approaches using hierarchical constructions, namely g-U-Net \cite{gao2019gUnet} and GXN \cite{li2020GXN}. g-U-Net employs a similarity-based top-k pooling called gPool for hierarchical construction, over which it implements bottom-up and simple top-down message passing. GXN introduced mutual-information-based pooling (VIPool) together with a more complex cross-level message passing. Broadly related are also differentiable pooling methods such as DiffPool~\cite{ying2018diffpool}, EdgePool~\cite{diehl2019edgepool}, or GraphZoom~\cite{deng2020graphzoom}. However, these employ only one-directional hierarchical pooling for computing graph-level representations.
While LRIs are widely accepted as important both in theoretical studies and in practice, most benchmarks used to empirically validate GNN models do not clearly exhibit this property. Among these, the importance of LRIs is perhaps best justified in biochemistry datasets, where the 2D structure of proteins and molecules is used as their graph representation. However, edges of such graphs do not encode 3D forces and global properties, leaving it up to the model to learn to recognize such LRIs. Several highly specialized models have been proposed for molecular data, but these are typically not applicable to other domains, which also hinders analysis of how much of their improvement comes specifically from capturing LRIs. Therefore, in our experiments we primarily focus on quantifying the benefit of using a hierarchical structure compared to the standard practice of GNN layer stacking. We also introduce two benchmarking tasks designed to elucidate the capability of general-purpose GNNs to leverage LRIs. Here, we show hierarchical models outperform their standard GNN counterparts when their hierarchical graph construction matches well with the original graph structure and the prediction task, while uncovering related limitations of gPool in g-U-Net. \section{Hierarchical graph net} \label{sec:hgnet} To build a hierarchical message passing model, we need to construct a hierarchical graph representation and define an inter- and intra-level message passing mechanism. \subsection{Graph coarsening for hierarchical representation} Building a hierarchical representation principally involves iterative application of graph coarsening and pooling operations. Graph coarsening computes a mapping from nodes of a starting graph $G_\ell$ onto nodes of a new smaller graph $G_{\ell+1}$, while the pooling step computes node and edge features of $G_{\ell+1}$ from $G_\ell$. Here we explore two different approaches: EdgePool~\cite{diehl2019edgepool} and the Louvain method for community detection~\cite{blondel2008Louvain}. \textbf{EdgePool} \cite{diehl2019edgepool} is a method based on the principle of edge contractions. First, the raw score of an edge $e_{u,v}=(u, v)$ is obtained by a linear combination of the respective node features $x_u$ and $x_v$: $r_{u,v} = W (x_u || x_v) + b$. Raw scores of edges incident to a node $u$ are then normalized as $0.5 + \mathrm{softmax}_{v\in \mathcal{N}(u)} r_{u,v}$ to obtain the final edge scores $s_{u,v}$. Finally, a maximal set of edges is greedily selected according to their scores $s_{u,v}$ and then contracted to create a new graph $G_{\ell+1}$ from $G_\ell$, while nodes in $G_\ell$ that were not merged are carried forward to $G_{\ell+1}$. Two nodes $a, b$ in $G_{\ell+1}$ are then connected by an edge iff there exist two adjacent nodes in $G_\ell$ from which $a$ and $b$ were constructed. Contraction of an edge $(u^{(\ell)},v^{(\ell)}) \in G_\ell$ results in a new node $w^{(\ell+1)}$ with features $x^{(\ell+1)}_w := s^{(\ell)}_{u,v} (x^{(\ell)}_u + x^{(\ell)}_v)$. Multiplying the new node features by the edge score facilitates gradient-based learning of the scoring function, which would otherwise be independent of the final objective function. \textbf{Louvain method} for community detection \cite{blondel2008Louvain} is a heuristic method based on greedy maximization of the modularity score of each community. It is an $O(N \log N)$ algorithm without learnable parameters that is deterministic for a fixed random seed.
The Louvain algorithm merges clusters (communities) into a single node and iteratively performs modularity clustering on the condensed graph until the score cannot be improved. The size of the condensed graph cannot be directly controlled, but satisfactory contraction ratios are obtained in practice. To build a hierarchical meta-graph over a starting graph $G_0$, we use average node and edge feature pooling according to the modular communities identified in $G_\ell$ by the Louvain method to construct the next level $G_{\ell+1}$. \begin{figure}[tb] \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.7cm]{HGNet-edgepool-v2}} \end{minipage} \caption{\textbf{HGNet} with two hierarchical levels over an original graph $G_0$ of 12 vertices (in black) and 14 edges. The dashed lines represent inter-level edges. (left) Two levels of EdgePool coarsening, highlighted by red arrows, create the hierarchical structure. A GCN layer is applied before each EdgePool coarsening and at the final coarsest level $G_2$. (right) Message passing down the hierarchy is implemented by an RGCN layer at $G_1$ and then $G_0$ levels, highlighted by green arrows, where inter-level edges are treated as a distinct edge type.} \label{fig:hgnet} \end{figure} \subsection{Hierarchical message passing in HGNet} Both EdgePool and the Louvain method provide a recipe for the construction of a hierarchical graph representation. We propose the Hierarchical Graph Net (HGNet) based on either of these approaches (see Figure \ref{fig:hgnet}), sharing the same hierarchical message passing approach that we describe next. Our message passing both within and between levels is principally similar to that of g-U-Net. Consider a hierarchical meta-graph with $L$ levels over some $G_0$. The forward propagation in HGNet consists of a computational pass going up the hierarchy and a pass going down the hierarchy, resulting in the final embedding of each node in $G_0$. In the upward pass we first apply a GCN layer \cite{kipf2016GCN} to $G_\ell$, starting with $\ell=0$, followed by node and edge pooling according to either EdgePool or the Louvain method to instantiate the next hierarchical level $G_{\ell+1}$. This process iterates until the final level $L$, at which point no more pooling is done and the downward pass starts. In this downward pass we utilize RGCN \cite{schlichtkrull2018RGCN} layers at each level $G_\ell$, $\ell \in \{L-1, \dots, 0\}$, where we add special edges that connect merged nodes in $G_\ell$ with their respective representatives in $G_{\ell+1}$ by an edge of a unique type. \textbf{Complexity.} We now analyze the asymptotic complexity of our hierarchical meta-graph based on the EdgePool variant. Let us assume that in each round of edge contractions the size of the greedy maximal matching is at least $\frac{N}{m}$ for a constant $m \geq 2$, where $N$ is the number of remaining nodes. Note that $m=2$ when the selected set of edges is a perfect matching. That means after the first round there will be $N\frac{m-1}{m}$ nodes in the next $G_1$ level. Thus, the total number of nodes in the entire hierarchical structure over a $G_0$ with $N$ nodes is $\sum_{\ell=0}^\infty N \left(\frac{m-1}{m}\right)^\ell = mN = O(N)$, while the number of possible levels is $\log_{\frac{m}{m-1}} N = O(\log N)$. This construction therefore guarantees that, if $G_0$ is connected, the shortest path length between any two nodes is upper-bounded by $O(\log N)$.
We can also expect the number of edges in our hierarchical graph to remain asymptotically equal to the number of edges in the input graph $G_0$. Assume there are $E = \Omega(N)$ edges in $G_0$ out of $O(N^2)$ possible and that they are uniformly distributed. Then after one round of EdgePool, the number of edges in $G_1$ is expected to be $O(E\left(\frac{m-1}{m}\right)^2)$, because the number of possible edges in $G_1$ compared to $G_0$ has decreased from $O(N^2)$ to $O(\left[N\frac{m-1}{m}\right]^2)$, i.e., we can expect a contraction factor of $\left(\frac{m-1}{m}\right)^2$ for the number of edges. Therefore, we can expect $\sum_{\ell=0}^\infty E \left(\frac{m-1}{m}\right)^{2\ell} = \frac{m^2}{2m-1}E = O(E)$ intra-level edges in total. From the construction of the hierarchy it is also clear that the number of inter-level edges (connecting nodes between adjacent hierarchical levels) is $O(N)$, as the total number of nodes is $O(N)$. Therefore, the total number of edges is expected to remain $O(E)$. Given a deep enough hierarchy and large enough node representation capacity, the final node embeddings can incorporate LRIs from the entire graph $G_0$, as well as local information. In the case of EdgePool, the asymptotic complexity of our HGNet remains that of GCN, as, despite our hierarchical graph having up to $O(\log N)$ hierarchical levels, its size remains asymptotically unchanged under reasonable assumptions. For a standard message passing GNN to theoretically achieve this capability, it is necessary to stack $O(\mathrm{diam}(G))$ layers, which may be prohibitively expensive. \section{Results} \label{sec:results} \begin{table*}[htb] \caption{\textbf{Legacy graph benchmarks.} CiteSeer, Cora and PubMed provide only one standard data split, and therefore we show test accuracy averaged over three runs with different random seeds for these datasets. For graph classification tasks (right side of the table) we used 10-fold stratified cross-validation. The shown heatmaps are normalized per dataset (column).} \vspace{2pt} \centering \includegraphics[width=0.95\linewidth]{HGNet-stdds} \label{tab:res-std} \end{table*} In order to evaluate the performance of HGNet, we consider a wide variety of graph data, including transductive node classification and inductive graph-level classification. Our benchmarks include two settings of HGNet (namely, with EdgePool and Louvain hierarchical structures) and six competitive baseline models: GCN~\cite{kipf2016GCN}, GCN+VN (GCN extended with a Virtual Node connected to all other nodes), GAT~\cite{velickovic2017GAT}, ChebNet~\cite{tang2019chebnet}, GIN~\cite{xu2018gin}, and g-U-Net~\cite{gao2019gUnet}. The experimental setup is identical for all tested methods. Each method is trained for 200 epochs, followed by a selection of the best model based on the validation performance, and finally performance on the test split is reported. In the case of GCN, GCN+VN, GAT, ChebNet and GIN, we always used a stack of 2 layers unless explicitly stated otherwise. In the case of g-U-Net, we reproduced published hyperparameters~\cite{gao2019gUnet} as closely as possible. For each method we default to 32-dimensional hidden node representations; other hyperparameters specific to certain tasks or datasets are described in the respective sections. We note that our reproduced g-U-Net results differ from the original publication~\cite{gao2019gUnet}, as in that work only the best validation-set results were reported rather than performance on independent test sets.
This erroneous practice has occurred on several occasions in the relatively nascent field of graph deep learning~\cite{errica2019fair}. \subsection{Node classification in citation networks} For our first benchmark, we consider semi-supervised node classification on the CiteSeer, Cora and PubMed citation networks~\cite{yang2016planetoid}. Our HGNet variants are configured with one hierarchical level and g-U-Net with four levels as per published hyperparameters. Citation networks are known to exhibit high homophily~\cite{zhu2020BeyondHomophily}, i.e., nodes tend to have the same class label as most of their first-degree neighbors. First-order message passing GNNs are known to perform well in high-homophily settings \cite{zhu2020BeyondHomophily}, which is validated by our experiments presented in Table \ref{tab:res-std}, with the exception of GCN+VN and GIN. All three hierarchical methods (i.e., g-U-Net, HGNet-EdgePool, and HGNet-Louvain) attain very similar results, slightly behind the best performing GAT, GCN, and ChebNet. The low performance of GCN+VN, a model geared towards capturing global information, and the middle-of-the-pack performances of the hierarchical methods can be explained by the high homophily present in the data, and they support prior findings \cite{huang2020lp} showcasing that global graph information is not vital in these datasets. Hence, given similar model capacity and experimental settings, methods favoring local information, such as GAT and GCN, outperform the more sophisticated ones. We conclude that CiteSeer, Cora and PubMed are not directly suitable to test the ability of GNN models to capture global information or LRIs, despite their extensive use and popularity in such benchmarks \cite{gao2019gUnet, li2020GXN}. \begin{table}[htb] \caption{\textbf{Citation networks with $k$-hop sanitized dataset splits.} The reported metric is the average test accuracy over three training runs with different random seeds, while keeping the same resampled splits. Heatmaps are normalized per block given by a dataset and neighborhood size $k$ combination.} \vspace{2pt} \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=8.5cm]{HGNet-sanitmod}} \end{minipage} \label{tab:res-sanitmod} \end{table} \subsubsection*{Resampled citation networks} In an effort to make the prediction tasks of the CiteSeer, Cora and PubMed citation networks more suitable for testing the models' ability to utilize information from farther nodes, we experimented with a specific resampling of their training, validation and test splits. The standard semi-supervised splits \cite{yang2016planetoid} follow the same key for each dataset: 20 examples from each class are randomly selected for training, while 500 and 1000 examples are drawn uniformly at random for the validation and test splits. We used principally the same key, but a different random sampling strategy. Once a node is drawn, we enforce that none of its $k$-th degree neighbors is selected for any split. This approach guarantees that a $k$-hop neighborhood of each labeled node is ``sanitized'' of labels. As such, we prevent potential correct-class label imprinting in the representation of these $k$-th degree neighbors during the semi-supervised transductive training. For a model to leverage such an imprinting benefit of homophily, it has to be able to reach beyond this $k$-hop neighborhood, assuming that the class homophily spans that far in the underlying data.
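The sanitized sampling itself is straightforward to implement; the sketch below is a minimal version assuming a NetworkX graph \texttt{G} and a dictionary \texttt{labels} mapping nodes to classes (all names are ours, not the original code), showing the training-split selection, with validation and test nodes drawn analogously from the remaining unblocked nodes.

\begin{verbatim}
import random
import networkx as nx

def sanitized_train_split(G, labels, k=1, per_class=20, seed=0):
    rng = random.Random(seed)
    by_class = {}
    for node, c in labels.items():
        by_class.setdefault(c, []).append(node)
    blocked, train = set(), []
    for c, nodes in by_class.items():
        rng.shuffle(nodes)
        picked = 0
        for v in nodes:
            if picked == per_class:
                break
            if v in blocked:
                continue
            train.append(v)
            picked += 1
            # Block the entire k-hop neighborhood of v (including v itself)
            # so that no split may select a node within k hops of a label.
            hood = nx.single_source_shortest_path_length(G, v, cutoff=k)
            blocked.update(hood)
    return train, blocked
\end{verbatim}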
We experimented with $k\in\{1,2\}$ for all three citation networks and kept the same hyperparameters from the prior experiments, but varied the number of stacked layers or hierarchy levels, as applicable, for each GNN method. Results averaged over runs with 3 random seeds are shown in Table \ref{tab:res-sanitmod}. For $k=1$ we see consistent degradation of performance for single-layer GNNs, while even one level of hierarchy provides a significant advantage for the hierarchical models. GAT and GCN recover competitive performance given two layers, which allows the models to reach the second-order neighborhood containing some nodes that are labeled during training. Hierarchical models, however, do not benefit from using two levels, as even with just one level their receptive field is already large enough to reach beyond the first-order neighborhood of a node. In the case of $k=2$ we observe similar behavior, but now hierarchical models typically benefit from employing two or three levels. This is particularly true for PubMed, the largest tested dataset. In this scenario we believe we have reached the limit of these datasets in the sense that we do not expect third-degree or further nodes to be consistently of significant relevance. We can see that for most methods the performance is relatively similar between two and three layers. Our resampling approach is fundamentally limited by the strong local homophily present in these citation networks, and beyond $k=2$ it cannot be used to test the capability of the models to leverage LRIs. \subsection{Graph-level prediction} \begin{table*}[htb] \caption{\textbf{OGB molecular benchmarks.} HGNet results are obtained and presented as per OGB standards; shown are the mean and standard deviation from 10 runs with different random seeds. HGNet models have 1, 2, or 3 levels and otherwise mirror hyperparameters of the OGB baselines that each have 5 layers. The metrics for baselines are from the OGB online leaderboard.} \vspace{5pt} \centering \includegraphics[width=0.92\linewidth]{HGNet-ogbgmol} \label{tab:res-ogbgmol} \end{table*} \begin{table*}[htb] \caption{\textbf{Color-connectivity datasets.} The average test accuracy in 10-fold stratified CV for various depths of the models.} \vspace{5pt} \begin{minipage}[]{1.0\linewidth} \centering \centerline{\includegraphics[width=18cm]{HGNet-islandsH}} \end{minipage} \label{tab:res-islands} \end{table*} We now turn our focus to graph-level classification. We start by benchmarking all methods using a set of commonly used datasets: COLLAB, IMDB-BINARY, IMDB-MULTI, D\&D, NCI1, ENZYMES, and PROTEINS \cite{morris2020tudataset}. In the second part we present a new set of datasets we designed to challenge the GNN methods in learning to recognize a complex set of features. In this section, we use global mean pooling for each method to obtain the graph-level representation from the individual nodes of a graph. Using this representation, a graph is finally classified by a 2-layer MLP classifier with a 128-dimensional hidden layer. Our experimental results on common graph-classification datasets are presented in Table \ref{tab:res-std} (right side). One of our HGNet variants is the best-performing method in 4 out of the 7 datasets. GCN+VN performs well on molecular datasets where global information is important, as does HGNet. However, g-U-Net falls behind in this setting, likely due to the nature of top-k pooling in its gPool, which destroys local information and appears to have difficulty extracting complex global features.
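For concreteness, the graph-level readout used throughout this section -- global mean pooling followed by a 2-layer MLP with a 128-dimensional hidden layer -- can be sketched as follows (a minimal PyTorch Geometric illustration; module and variable names are ours).

\begin{verbatim}
import torch
from torch import nn
from torch_geometric.nn import global_mean_pool

class GraphReadout(nn.Module):
    def __init__(self, node_dim=32, hidden=128, num_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes))

    def forward(self, x, batch):
        # x: [num_nodes, node_dim] node embeddings produced by the GNN;
        # batch: [num_nodes] graph index of each node in the mini-batch.
        return self.mlp(global_mean_pool(x, batch))
\end{verbatim}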
\subsubsection*{OGB molecular benchmarks} We tested HGNet on two Open Graph Benchmark (OGB) \cite{hu2020OGB} molecular property prediction datasets: \textit{ogbg-molpcba} and \textit{ogbg-molhiv}. For our HGNet we used the same experimental setup and GCN layer implementation as provided by OGB. Both EdgePool and Louvain versions of HGNet with 2 hierarchical levels (2L), composed of 3 GCN and 2 RGCN-like layers, outperform GCN with 5 layers (see Table~\ref{tab:res-ogbgmol}). Employing a hierarchical meta-graph is more powerful than stacking the same number of layers. We note that adding global readouts via a Virtual Node is remarkably beneficial in \textit{ogbg-molpcba}, albeit at the cost of many additional parameters. \vspace{-5pt} \subsubsection*{Color-connectivity task} The Open Graph Benchmark and other recent initiatives are raising the bar for GNN benchmarking, as many established benchmarking datasets are too small or too simple to adequately test the expressive power of new GNN methods. However, the motivation to include a new dataset in a suite is typically based on the interest in a particular application domain and the scale of the dataset. Unfortunately, none of the existing benchmarks provably require the capture of LRIs for significant performance gain. This issue was not recognized in the benchmarking of prior hierarchical methods \cite{gao2019gUnet, li2020GXN}, with the exception of \cite{stachenfeld2020SMP}, which proposed a shortest-path prediction task in random graphs. Here we propose to employ a task not used for GNN benchmarking before -- classifying the connectivity of same-colored nodes in graphs of varying topology. Our color-connectivity datasets are created by taking a graph and randomly coloring half of its nodes one color, e.g., red, and the other nodes blue, such that the red nodes either create a single connected island or two disjoint islands. The binary classification task is then distinguishing between these two cases. The node colorings were sampled by running two red-coloring random walks starting from two random nodes. We used $16{\times}16$ and $32{\times}32$ 2D grids, as well as the Euroroad and Minnesota road networks \cite{rossi2015NR} for the underlying graph topology. For each, we sampled a balanced set of 15,000 examples, except for the Minnesota network, for which we generated 6,000 examples due to memory constraints. Solving this task requires a combination of local and long-range information, while a global readout, e.g., via a Virtual Node, is expected to be unsatisfactory. HGNet-EdgePool is the single best method in this suite of benchmarks (Table \ref{tab:res-islands}). Given the nature of the data, we observe a large difference in how suitable the hierarchical graphs created by different approaches are. In particular, the gPool of g-U-Net fails to facilitate the learning process on large graphs. Next, the global readout via a Virtual Node in the GCN+VN model does not provide any improvement over the standard GCN, as evidently it is not able to capture complex features. On the other hand, we see that the ChebNet and GIN models perform well. ChebNet can learn filters that have a large receptive field in graph space, which is important in this case. We suspect that GIN is powerful enough to learn local heuristics that GCN and GAT fail to, which warrants further investigation.
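For reference, the color-connectivity sampling described above can be sketched as follows (a minimal NetworkX version; the handling of walk restarts and class balancing is our simplification of the actual generation procedure).

\begin{verbatim}
import random
import networkx as nx

def sample_example(G, seed=None):
    rng = random.Random(seed)
    target = G.number_of_nodes() // 2
    walkers = [rng.choice(list(G.nodes)) for _ in range(2)]
    red = set()
    while len(red) < target:
        for i, v in enumerate(walkers):
            red.add(v)
            walkers[i] = rng.choice(list(G.neighbors(v)))
            if len(red) >= target:
                break
    # Each walk traces a connected set, so the red nodes form 1 or 2 islands.
    islands = nx.number_connected_components(G.subgraph(red))
    return red, int(islands == 1)   # label 1: single island, 0: two islands

G = nx.grid_2d_graph(16, 16)        # one of the tested grid topologies
red_nodes, label = sample_example(G, seed=0)
\end{verbatim}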
\vspace{-5pt} \section{Conclusion} \vspace{-5pt} Across many datasets, we saw hierarchical models outperform their standard GNN counterparts when the construction of the hierarchical graph (its inductive bias) matches well with the graph structure and prediction task. We have not compared to methods highly specialized for particular tasks, e.g., molecular property prediction, but rather focused on elucidating the effect of using a hierarchical structure compared to the standard approach of stacking GNN layers. Further research remains to be done in exploring combinations of various pooling approaches, hierarchical message passing algorithms, and the utilization of, e.g., GIN layers instead of GCN. Our proposed color-connectivity task requires complex graph processing to which most existing message-passing GNNs do not scale. These datasets can serve as a common-sense validation for new and more powerful methods. Our testbed datasets can still be improved, as the node features are minimal and recognition of particular topological patterns (e.g., rings or other subgraphs) is not needed to solve the current task. Nevertheless, they represent a significant step forward in terms of understanding and benchmarking more complex graph neural networks. \bibliographystyle{IEEEbib} \vspace{-5pt}
\section*{INTRODUCTION} Two of the simplest quantum systems are the qubit and the harmonic oscillator (oscillator for brevity). Their controlled interaction is realizable in a multitude of settings. These include cavity-quantum electrodynamics (QED), where atomic qubits interact with electromagnetic fields \cite{Haroche1}, circuit-QED, where superconducting qubits interact with microwave resonators \cite{Blais}, or quantum nanomechanics, where mechanical oscillators interact with superconducting qubits \cite{LaHaye} and atomic qubits \cite{Treutlein}. It is generally perceived that the oscillators will first have to be cooled to a known pure state and isolated sufficiently from their environments before quantum effects in them can be seen. However, in order to lower the demands for controlling quantum systems, one can ask the following question: how far can cooling and isolation be relaxed? We answer this question in the context of entangling two oscillators by a mediating qubit. This is important for those qubits which do not interact with each other. The entanglement of the oscillators could then be a resource for entangling qubits and their teleportation \cite{Davidovich,Browne,Bose-Kim,Palma-Kim}. Additionally, entangling mesoscopic oscillators, as available in quantum nanomechanics, fundamentally probes the boundary of the quantum world. The most natural initial state of an oscillator is a {\em thermal state}. Moreover, an oscillator {\em continuously interacts with its thermal environment} even while interacting with a qubit. Can one use the available qubit-oscillator interactions to entangle two oscillators in the above situation in a verifiable way? Surprisingly, though work on entangling oscillators initially in ``pure states'' is plentiful \cite{Palma-Kim,Davidovich,Browne,Agarwal-Bose,Agarwal-Solano}, only very recently has the first major step in answering the above question been taken. Refs.~\cite{Ralph,Zheng} have shown how to entangle two thermal-state oscillators by a mediator in a pure state with some heuristic/approximate considerations of decoherence induced by zero-temperature baths (as opposed to a thermal environment). However, wherever thermal states are relevant, the thermal environment is also {\em unavoidable}. An exact analytic treatment of this problem by solving the appropriate master equation for a thermal environment is required to go beyond heuristics and to ascertain how the openness of the oscillators and their bath temperature constrain an entanglement generation protocol and its outcome. In this paper, we show that the above analytic treatment can be achieved by choosing a {\em specific form} of the qubit-oscillator interaction which is naturally available in quantum nanomechanics and can be implemented with minimal effort in cavity/circuit-QED. Together with the inclusion of a thermal environment for the oscillator, our treatment also enables the analytic inclusion of qubit dephasing, which may be important in a quantum nanomechanical setting. The scheme we are considering for the entanglement preparation, relying on the strength of resonant interactions, can be faster than that with cross-Kerr-like interactions \cite{Ralph}. We also find an allowed time window for the entanglement to appear, which becomes broad when decoherence and temperature are low. This can make the scheme more robust to errors in interaction times than most other schemes \cite{Davidovich,Browne}.
In addition, we show the possibility of reciprocation of the entanglement of the oscillators to two qubits. Thus the generated mixed state of the oscillators can serve as a resource for qubit-based quantum information processing (QIP). Our results should be particularly useful for quantum nanomechanics, where oscillators are yet to be cooled to the ground state. In technologies where preparation of a ground-state oscillator is available \cite{Haroche1,Blais}, using thermal states will surely lower the demands on cooling and isolation. Our scheme will show that oscillators in almost classical states can indeed be entangled to provide a QIP resource even when they are continuously (but weakly) coupled to a thermal environment. In addition to this, we believe the solution method presented here to be relevant on its own, as it could be applied to a variety of problems involving the same interaction Hamiltonian and the same decoherence model. \section*{PROBLEM DESCRIPTION} We start by presenting the schematics of our entanglement generation protocol without specifying the Hamiltonian. At first, we let two identical oscillators interact with the same qubit, which we call the \textit{entangling qubit}, either sequentially (such as for an atomic qubit flying through a pair of cavities) or simultaneously (such as for a superconducting qubit). In the former case we will assume the two interaction times to be the same. The states of the qubit are $\{\ket g,\ket e\}$, while the oscillators are described by the annihilation operators $a,b$. Let the oscillators be initially in a thermal state and the qubit in $\ket g$, so that the initial state of the complete system is: \begin{align} &\rho(0)=\kebra{g}{g}\otimes\rho_\textrm{th}=\kebra{g}{g}\otimes\frac{e^{-\beta a^\dagger a}e^{-\beta b^\dagger b}}{Z^2}, \end{align} where $Z=\textrm{Tr}\{e^{-\beta a^\dagger a}\}=\textrm{Tr}\{e^{-\beta b^\dagger b}\}$ and $\beta=\omega/(k_B T)$. Throughout the paper we set $\hbar=1$. After an interaction between the qubit and the oscillators for a time $t$ the system will evolve to $\rho(t)$. At this point, we measure the internal state of the qubit to obtain two possible outcomes $f=g,e$. As a result the oscillators are projected onto the mixed state $\rho_f(t)$. Our aim is to show that for an appropriate interaction, such a state may be entangled by an amount which depends on the temperature $T$ and the strength of decoherence. One could try to detect the entanglement contained in the state $\rho_f$, for example by verifying a Bell inequality violation by local measurements on the oscillators (we will discuss how to do this for our specific model). Alternatively, one could extract part of the entanglement onto an additional pair of qubits $A$ and $B$ (entanglement ``reciprocation''), initially in the separable state $\ket{g,g}=\ket g_A\otimes\ket g_B$, by making mode $a$ interact with qubit $A$ and mode $b$ with qubit $B$, respectively. The resulting operation alone may not suffice to entangle $A$ and $B$, as they will, in general, remain entangled with modes $a$ and $b$. So we perform additional measurements of the momenta $\hat p_a=-i(a-a^\dagger);\quad \hat p_b=-i(b-b^\dagger)$ or the parities $\hat\Pi_a=(-1)^{a^\dagger a};\quad \hat\Pi_b=(-1)^{b^\dagger b}$ of the two oscillators. Note that no entanglement is created during such reciprocation since the two parts of the system ($A,a$ and $B,b$) are not directly interacting and the measurement is performed locally.
Thus measuring $A$ and $B$ to be entangled is a \textit{sufficient condition} for the state $\rho_f(t)$ to be entangled. It also demonstrates that the oscillators can store entanglement to be later extracted by qubits. \section*{SOLUTION METHODS} We present the form of the qubit-oscillator interaction we use, first showing the case of a single oscillator and then generalizing to the case of two oscillators. The qubit and the oscillator are resonant at frequency $\omega$. The Hamiltonian in the interaction picture is: \begin{equation} H=\sigma_1(a+a^\dagger),\label{hamiltonian} \end{equation} where we have introduced the Pauli operators $\sigma_1=\kebra{e}{g}+\kebra{g}{e};\sigma_2=i(\kebra{e}{g}-\kebra{g}{e});\sigma_3=\kebra{g}{g}-\kebra{e}{e}$. This Hamiltonian can arise in a Jaynes-Cummings system by driving it with an external laser \cite{Agarwal-Solano} (this {\em revives the counter-rotating terms} while preserving the strength of the coupling \cite{Agarwal-Solano}). It also arises naturally in nanomechanical oscillators coupled to charge qubits (although in the latter case, a rotation of the qubit basis such that $\sigma_3\rightarrow\sigma_1$ is needed). The time evolution operator is \begin{align} &U(t)=e^{-iHt}=D(-i\sigma_1t) \end{align} where $D(\alpha)=e^{\alpha a^\dagger-\alpha^*a}$ is the displacement operator \cite{walls-milburn} (we can use $\sigma_1$ inside this operator as $[\sigma_1a,\sigma_1a^\dagger]=[a,a^\dagger]=1$). If the qubit is interacting with two oscillator modes $a,b$, the previous expression is modified to: \begin{equation} U(t)=D_a(-i\sigma_1t)D_b(-i\sigma_1t), \end{equation} where $D_i$ is the displacement operator for mode $i$ \cite{walls-milburn}. If the entangling qubit is measured in $\ket g$ or $\ket e$, the resulting operators acting on the oscillators (combining evolution and measurement) are \begin{align} &\bra g U(t)\ket g=\tfrac{1}{2}(D_a(it)D_b(it)+D_a(-it)D_b(-it)),\nonumber\\ &\bra e U(t)\ket g=-\tfrac{1}{2}(D_a(it)D_b(it)-D_a(-it)D_b(-it)).\label{effective} \end{align} The above operators have increasing entangling capabilities as $t$ gets larger, as can be seen for example by applying one of them to a product of two coherent states.\\ At this point we include the effect of a thermal environment in our treatment, for which we first return to the interaction of a single oscillator with a single qubit. Including decoherence, the dynamics can be described by the master equation: \begin{equation} \frac{\partial \rho}{\partial t}=-i[\sigma_1(a+a^\dagger),\rho]+L(T)\rho+L_\phi\rho, \end{equation} where $L(T)$ is the Lindblad operator for the oscillator in a thermal bath at temperature $T$, which will, for the cases we consider, be of the amplitude damping form \cite{walls-milburn}: \begin{align} &L(T)\rho=\frac{\kappa}{2}(n(T)+1)(2a\rho a^\dagger-a^\dagger a\rho-\rho a^\dagger a)+\nonumber\\ &\phantom{L(T)\rho=}+\frac{\kappa}{2}n(T)(2a^\dagger\rho a-a a^\dagger\rho-\rho a a^\dagger), \end{align} where $n(T)=(e^{\beta}-1)^{-1}$ and $\kappa$ is the damping rate. The operator $L_\phi\rho=\frac{\gamma}{2}(\sigma_1\rho\sigma_1-\rho)$ will be present for the case of charge qubits, whose dephasing can be significant (here $\sigma_1$ instead of $\sigma_3$ has been used because for a charge qubit the basis is rotated, as mentioned earlier). The model is analytically solvable, and we think it is worth presenting the solution method here in full detail, as it may be applied to other problems involving the same form of interaction Hamiltonian.
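As a complement to the analytic treatment that follows, the same single-oscillator master equation can be integrated numerically as an independent check; the sketch below assumes the QuTiP library, with an illustrative Fock-space truncation and parameter values.

\begin{verbatim}
import numpy as np
import qutip as qt

N, kappa, gamma, nth = 15, 0.01, 0.0, 0.5   # cutoff, rates, thermal occupation
a = qt.tensor(qt.qeye(2), qt.destroy(N))    # oscillator mode
sx = qt.tensor(qt.sigmax(), qt.qeye(N))     # qubit sigma_1

H = sx * (a + a.dag())                      # interaction-picture Hamiltonian
c_ops = [np.sqrt(kappa * (nth + 1)) * a,    # thermal amplitude damping
         np.sqrt(kappa * nth) * a.dag()]
if gamma > 0:
    c_ops.append(np.sqrt(gamma / 2) * sx)   # qubit dephasing term L_phi

# Qubit in its ground state, oscillator thermal (basis ordering is ours).
rho0 = qt.tensor(qt.fock_dm(2, 0), qt.thermal_dm(N, nth))
result = qt.mesolve(H, rho0, np.linspace(0.0, 4.0, 101), c_ops)
rho_t = result.states[-1]                   # joint state at the final time
\end{verbatim}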
We switch to the Wigner representation of the oscillator, defined as \begin{equation} W(\alpha,t)=\frac{1}{\pi^{2}}\int d^2\eta\phantom{0} e^{\eta^*\alpha-\eta\alpha^*}\textrm{Tr}\{D(\eta)\rho(t)\}. \end{equation} This representation is widely used in quantum optics, where the density matrix of a single- (or multi-) mode field is encoded into the real-valued Wigner function \cite{walls-milburn}. However, since in this paper we are dealing with a composite system (oscillator + qubit), the state of the system cannot be represented by a single function; it is instead encoded in a {\em $2\times2$ matrix of functions}, which we may call the {\em Wigner matrix}. The Wigner matrix component $W_{i,j}$ is defined as the Wigner function for the operator $\langle i|\rho(t)\ket j$ where $i,j=e,g$. Note that $W_{i,j}$ is not necessarily real-valued for $i\neq j$; nevertheless, if $\rho$ is a genuine density matrix then the condition $W_{i,j}=W_{j,i}^*$ holds, i.e. the Wigner matrix is Hermitian for any $(\alpha,t)$. The master equation for $\rho$ can be converted into a differential equation for $W$ by using the following correspondences (for a rigorous derivation see for example \cite{walls-milburn}): \begin{align} &a\rho\rightarrow \left(\alpha+\frac{1}{2}\partial_{\alpha^*}\right)W;\qquad\rho a\rightarrow \left(\alpha-\frac{1}{2}\partial_{\alpha^*}\right)W;\\ &a^\dagger\rho\rightarrow \left(\alpha^*-\frac{1}{2}\partial_{\alpha}\right)W;\qquad\rho a^\dagger\rightarrow \left(\alpha^*+\frac{1}{2}\partial_{\alpha}\right)W. \end{align} It follows that the Wigner matrix obeys the differential equation \begin{align} &\partial_t W=-i(\alpha+\alpha^*)[\sigma_1,W]-\frac{i}{2}(\partial_{\alpha^*}-\partial_\alpha)\{\sigma_1,W\}+\nonumber\\ &\phantom{\partial_t W}+L(T)W+L_\phi W,\label{diffeq} \end{align} where we introduced the anticommutator $\{o_1,o_2\}=o_1o_2+o_2o_1$. The Lindblad operator in terms of the complex variable $\alpha$ is \begin{equation} L(T)W=\frac{\kappa}{2}\left(\partial_\alpha \alpha+\partial_{\alpha^*}\alpha^*+2\Delta(T)\partial_\alpha\partial_{\alpha^*}\right)W, \end{equation} where $\Delta(T)=n(T)+1/2$. Note that (\ref{diffeq}) is indeed a system of coupled differential equations for the four components of the matrix $W$, which can be separated by choosing an appropriate decomposition of $W$ in the Pauli basis $\{1,\sigma_1,\sigma_2,\sigma_3\}$. In fact, we can see that the commutator $[\sigma_1,\cdot]$ in (\ref{diffeq}) is non-vanishing only if it is applied to the Pauli operators $\sigma_2,\sigma_3$, while the anticommutator $\{\sigma_1,\cdot\}$ is non-vanishing only when it is applied to the operators $1,\sigma_1$. It is then a simple step further to verify that \begin{align} &[\sigma_1,\sigma_2\pm i\sigma_3]=\pm2(\sigma_2\pm i\sigma_3),\\ &\{\sigma_1,1\pm\sigma_1\}=\pm2(1\pm\sigma_1).
\end{align} We have at this point all the tools we need to write down a decomposition of the Wigner matrix in ``normal modes'' \begin{align} &W=\tfrac{1}{4}(v_1(1+\sigma_1)+v_2(1-\sigma_1)+\nonumber\\ &\phantom{W}+v_3(\sigma_2-i\sigma_3)+v_4(\sigma_2+i\sigma_3)), \end{align} which then yields four decoupled differential equations: \begin{align} &\partial_t v_1=-i(\partial_{\alpha^*}-\partial_\alpha)v_1+L(T)v_1,\label{v1}\\ &\partial_t v_2=i(\partial_{\alpha^*}-\partial_\alpha)v_2+L(T)v_2,\\ &\partial_t v_3=2i(\alpha+\alpha^*)v_3+L(T)v_3-\gamma v_3,\label{w1}\\ &\partial_t v_4=-2i(\alpha+\alpha^*)v_4+L(T)v_4-\gamma v_4.\label{diffeqs} \end{align} We suppose that initially the oscillator is at thermal equilibrium with its environment at temperature $T$, while the qubit is in the ground state. In terms of initial conditions this translates into \begin{align} &v_1(0)=v_2(0)=-iv_3(0)=iv_4(0)=W_T(\alpha),\label{initials}\\ &W_T(\alpha)=\frac{1}{\pi \Delta}e^{-\frac{|\alpha|^2}{\Delta}}. \end{align} These initial conditions allow for a compact solution which we are going to present in full, while the solution for generic initial conditions can be expressed in terms of convolution integrals as discussed in the Appendix. Equation (\ref{v1}) can be solved by choosing an ansatz of the form \begin{align} &v_1(\alpha,t)=W_T(\alpha+\lambda(t)).\label{ansv1} \end{align} We know that $W_T(\alpha)$ is such that $L(T)W_T(\alpha)=0$, and clearly $\partial_tW_T(\alpha)=0$. Thus, if (\ref{ansv1}) holds, it follows that \begin{align} &L(T)v_1=-\frac{\kappa}{2}\left(\lambda\partial_{\alpha}+\lambda^*\partial_{\alpha^*}\right)v_1,\\ &\partial_tv_1=\left(\dot\lambda\partial_\alpha +\dot\lambda^*\partial_{\alpha^*}\right)v_1.\label{dtv1} \end{align} By substituting these into Eq. (\ref{v1}) and collecting terms proportional to $\partial_\alpha v_1$, we obtain the differential equation \begin{align} &\dot\lambda=i-\frac{\kappa}{2}\lambda,\label{lambda} \end{align} with initial condition $\lambda(0)=0$ as a consequence of (\ref{initials}). This equation is easily solved, yielding \begin{equation} \lambda(t)=\frac{2i}{\kappa}\left(1-e^{-\frac{\kappa}{2}t}\right). \end{equation} Following the same procedure we obtain \begin{equation} v_2(\alpha,t)=W_T(\alpha-\lambda(t)). \end{equation} For $v_3$ we consider the following ansatz: \begin{equation} v_3(\alpha,t)=iW_T(\alpha)e^{\mu(t)(\alpha+\alpha^*)+\nu(t)}. \end{equation} Similar considerations as before allow us to express \begin{align} &L(T)v_3=\frac{\kappa}{2}\left(-(\alpha+\alpha^*)\mu+2\Delta \mu^2\right)v_3,\\ &\partial_t v_3=\left((\alpha+\alpha^*)\dot\mu+\dot\nu\right)v_3. \end{align} This time, after substituting these expressions in Eq. (\ref{w1}), we can collect separately the terms proportional to $\alpha v_3$ and those proportional to $v_3$, to obtain the pair of coupled differential equations \begin{align} &\dot \mu=2i-\frac{\kappa}{2}\mu,\label{mu}\\ &\dot \nu=\kappa\Delta \mu^2-\gamma.\label{nu} \end{align} From (\ref{initials}) we get the initial conditions $\mu(0)=\nu(0)=0$, and the solutions of Eqs. (\ref{mu}) and (\ref{nu}) are \begin{align} &\mu(t)=2\lambda(t),\\ &\nu(t)=-\gamma t+\kappa\Delta\int_0^t\mu(\tau)^2d\tau. \end{align} Again, $v_4$ can be solved similarly, yielding \begin{equation} v_4(\alpha,t)=-iW_T(\alpha)e^{-\mu(t)(\alpha+\alpha^*)+\nu(t)}. \end{equation} At this point we can go back to our original problem and generalize the procedure to a qubit interacting with two oscillators.
We have in this case a master equation for the two-mode Wigner matrix $W(\alpha,\beta)$. If we suppose that the two oscillators have the same temperature and the same value of $\kappa$, the solution is obtained from the single-mode case by replicating each function, i.e. $v_i(\alpha,t)\rightarrow v_i(\alpha,t)v_i(\beta,t)$. Note that this last operation implies that the qubit is interacting simultaneously with the two oscillators. If the interaction is sequential we should consider the delay between the two interactions, which makes the first oscillator decay for longer (dephasing can usually be neglected for flying qubits). It is not hard to include this effect analytically if needed; however, our calculations show that it can be taken into account to a good approximation by doubling the damping rate of the oscillators, so that we can present a common treatment for the two settings. The Wigner function of the two oscillators after the qubit is measured in $f=e,g$ is \begin{align} &W_f(\alpha,\beta,t,T)=(\mathcal N(t,T))^{-1}\bra f W(\alpha,\beta,t,T)\ket f,\label{oscillstate} \end{align} where $\mathcal N(t,T)=\int d^2\alpha d^2\beta\phantom{o}\bra fW(\alpha,\beta,t,T)\ket f$. In the reciprocation procedure the two qubits plus two oscillators are described by a $4\times4$ Wigner matrix, initially given by $\kebra{gg}{gg}\times W_f(\alpha,\beta,t,T)$, and then evolved for a further time $t$ (for simplicity the same as before). Techniques similar to those above allow us to keep the treatment analytical. The state of the qubits after reciprocation is obtained from the Wigner matrix in both cases of our interest. For momentum measurements we use the property \begin{equation} \bra{p_a,p_b}\rho\ket{p_a,p_b}=\int dx_a dx_b W(x_a+ip_a,x_b+ip_b), \end{equation} which gives the overlap of the density matrix with the momentum eigenstates. In this case we decompose the complex variables into their respective real and imaginary parts, i.e. $\alpha=x_a+ip_a$, $\beta=x_b+ip_b$. When measuring parities we use \begin{equation} W(\alpha,\beta)=\frac{4}{\pi^2}\textrm{Tr}\{D_a(\alpha)^\dagger D_b(\beta)^\dagger\rho D_a(\alpha)D_b(\beta)\hat\Pi_a\hat\Pi_b\}, \end{equation} which is a well-known alternative definition \cite{walls-milburn}. For example, the two-qubit density matrix corresponding to the parity eigenvalues $\Pi_a=\Pi_b=+1$ is \begin{align} &\rho_{++}(t)\propto\int d^2\alpha d^2\beta W(\alpha,\beta)+\frac{\pi}{2}\int d^2\alpha W(\alpha,0)+\nonumber\\ &\phantom{\rho_{++}(t)\propto}+\frac{\pi}{2}\int d^2\beta W(0,\beta)+\frac{\pi^2}{4}W(0,0). \end{align} \section*{ENTANGLEMENT ANALYSIS} We proceed to evaluate the entanglement contained in the states prepared at each stage of the protocol. Let us start with the state $\rho_f$ of the two oscillators after the entangling qubit is measured. We investigate its nonlocal properties by looking for a Bell inequality violation. We consider the measurable quantity \begin{equation} \Delta P(\alpha,\beta)=\frac{\pi^2}{4}W_f(\alpha,\beta)= P_{\textrm{same}}(\alpha,\beta)-P_{\textrm{diff}}(\alpha,\beta), \end{equation} where $P_\textrm{same/diff}(\alpha,\beta)$ is the probability of measuring the two oscillators with the same/opposite parities after having applied the displacements $D_a(\alpha)^\dagger, D_b(\beta)^\dagger$.
This quantity takes values between $-1$ and $1$ and can therefore be used to form a Bell inequality: \begin{align} &\mathcal B(\alpha,\beta,\alpha',\beta')=\Delta P(\alpha,\beta)+\Delta P(\alpha',\beta)+\nonumber\\ &\phantom{\mathcal B(\alpha,\beta,\alpha',\beta')=}+\Delta P(\alpha,\beta')-\Delta P(\alpha',\beta')\leq2, \end{align} where $\alpha,\beta,\alpha',\beta'$ are arbitrary complex numbers. A value of $\mathcal B$ greater than 2 indicates the presence of quantum nonlocal correlations. The maximum violation for a bipartite system is $\mathcal B=2\sqrt2$, corresponding to strong nonlocality. In the limit of large $t$ and $\kappa,\gamma\rightarrow0$, the maximum value of $\mathcal B$ is given by \begin{equation} \mathcal B_{\textrm{max}}\simeq\frac{1}{(1+2n(T))^2}2\sqrt2, \end{equation} corresponding to a set of solutions $\{\bar \alpha,\bar \beta,\bar \alpha',\bar \beta'\}$. \begin{figure}[t!] \includegraphics[width=0.24\textwidth]{bell0.eps}\includegraphics[width=0.24\textwidth]{bell11.eps} \caption{Lower bound for the Bell inequality violation $\textrm{Max}(\mathcal B,2)$, relative to the state $\rho_g$, as a function of temperature and interaction time. The case of $\rho_e$ is qualitatively similar. Plot (a) corresponds to the ideal case $\gamma,\kappa\sim0$, where the maximum violation compatible with the system's temperature is reached for long enough times. In plot (b) $(\kappa,\gamma)=(5,3)\times10^{-3}$ as in \cite{Blais}, which limits the time window to circa 2 Rabi periods and makes the maximum violation drop to approximately 2.4.\label{firstfig}} \end{figure} We can see that if $T$ is lower than a critical temperature $T_c\simeq0.408\omega$, then a violation of the Bell inequality holds. In the case of finite times and $\kappa,\gamma>0$, we get a lower bound for the Bell inequality violation by considering \begin{equation} \mathcal B_\textrm{max}\geq\mathcal B(\bar \alpha,\bar \beta,\bar \alpha',\bar \beta'), \end{equation} where $t$ has to be substituted by $\frac{2}{\kappa}(1-e^{-\kappa t/2})$. By introducing decoherence, the time window in which it is possible to verify a Bell inequality violation becomes limited and the maximum value of $\mathcal B$ drops from the ideal value of $2\sqrt2$. Nevertheless, if the decoherence and dephasing rates are reasonably small, we are still left with a range of times and temperatures which give a value of $\mathcal B$ significantly larger than 2, as we can see in Fig.~\ref{firstfig}. Let us discuss our results for the reciprocation procedure. \begin{figure}[t!] \includegraphics[width=0.16\textwidth]{prob2a.eps}\includegraphics[width=0.16\textwidth]{fid2b.eps}\includegraphics[width=0.16\textwidth]{maxfid2c.eps} \caption{Entanglement reciprocation via momentum measurements, for the entangling qubit measured in $e$. The $g$ case is qualitatively similar. In plots (a) and (b) time and temperature have been fixed ($t=2$ and $T=\omega$). Plot (a) shows the probability distribution for the outcomes $p_a,p_b$. Plot (b) shows the fidelity of the two-qubit density matrix to the Bell state $\ket{\psi^+}=\tfrac{1}{\sqrt2}(\ket{ge}+\ket{eg})$. A fidelity above 0.5 indicates an entangled state. Entanglement is conditioned on measuring $p_a,p_b$ inside the central peak of the probability distribution, which has a volume of $\sim$25\%. The remaining peaks yield separable qubit states. In plot (c) the maximum fidelity (corresponding to $p_a,p_b\simeq0$) is shown as a function of temperature and interaction time.
We set $\kappa=\gamma=0.01$ in all plots.\label{secondfig}} \end{figure} In the case of momentum measurements we can find, conditionally, the qubits in an entangled state for values of $T$ well above $T_c$, as shown in Fig.~\ref{secondfig}. Remarkably, this procedure sustains higher and higher temperatures as the parameters $\kappa$ and $\gamma$ approach zero. For $\kappa=\gamma=0$ we found no temperature limit at all; one only has to choose interaction times long enough for the entanglement to appear. Fig.~\ref{thirdfig} shows the entanglement (negativity) of the two-qubit density matrix in the case of reciprocation with parity measurements, averaged over the four possible outcomes $\Pi_a,\Pi_b=\pm1$. In this case we again found a limit on the sustainable temperature, which does not increase as $\kappa,\gamma\rightarrow0$. The presence of a maximum allowed temperature may be a peculiarity of parity measurements. It is important to point out that the interaction time does not need to be finely tuned at any step of our protocol, provided that we remain within the time window that allows the entanglement to appear. In all the situations presented, this time window increases indefinitely as we increase our ability to build systems with smaller $\kappa$ and $\gamma$, so that technological advances in building higher-quality qubits and oscillators will eventually remove any need for fine tuning of the interaction times. \section*{POSSIBLE IMPLEMENTATIONS AND CONCLUSIONS} At present, cavity-QED setups with flying atoms seem to be a promising ground for the implementation of our scheme; e.g., in \cite{Haroche1} Rauschenbeutel {\em et al.} achieved $(\kappa,\gamma)\simeq(0.007,0)$. Circuit-QED setups with superconducting qubits are also very appealing, since values of $(\kappa,\gamma)\simeq(5,3)\times10^{-3}$ or even better can be achieved by present technology \cite{Blais}. Values of $(\kappa,\gamma)\simeq(0.23,0.005)$, which do not allow for a Bell inequality violation but still give decent results for reciprocation (e.g., a maximum Bell-state fidelity above 0.85), seem feasible at present for a spin coupled to a nanomechanical resonator, as shown in \cite{Treutlein}. In the context of quantum nanomechanical systems coupled to superconducting qubits, $\kappa\sim0.01$ is within reach, but the rapid qubit dephasing $\gamma\sim1$ is still an issue for present technology, although some progress seems possible in the near future \cite{LaHaye}. Note that parity measurements have already been performed successfully in a cavity-QED setting \cite{measure1}, while momentum measurements should be possible with capacitive transducers for quantum nanomechanics. We are convinced that the robustness of the protocol relies on the features of the Hamiltonian (\ref{hamiltonian}). In fact, as we see in (\ref{effective}), it allows us to apply to the oscillators a coherent superposition of displacements in opposite directions, with amplitude proportional to $t$. We expect to see entanglement between the two oscillators if this displacement amplitude is larger than the phase-space extension of the initial state, even if that state is mixed. However, large $t$ also means more decoherence, so the competition between these two effects determines the existence and extension of a time window in which entanglement can be established. We thank the Engineering and Physical Sciences Research Council United Kingdom, the Quantum Information Processing Interdisciplinary Research Collaboration, the Royal Society and the Wolfson Foundation.
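The critical temperature quoted earlier follows from setting $\mathcal B_{\textrm{max}}=2$; as an illustrative numerical cross-check (Python with numpy/scipy assumed, and natural units $\hbar=k_B=1$ so that $T$ is measured in units of $\omega$):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

n = lambda T: 1.0 / np.expm1(1.0 / T)   # thermal occupation at frequency omega
# B_max(T) = 2*sqrt(2)/(1 + 2 n(T))^2; the violation disappears when B_max = 2
Tc = brentq(lambda T: 2*np.sqrt(2)/(1 + 2*n(T))**2 - 2.0, 0.05, 5.0)
print(Tc)                               # ~0.408, i.e. T_c ~ 0.408 omega
\end{verbatim}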
\begin{figure}[H] \includegraphics[width=0.24\textwidth]{parity1.eps}\includegraphics[width=0.24\textwidth]{parity2.eps} \caption{Average negativity of the two-qubit density matrix in the case of parity measurements. Plot (a) refers to the entangling qubit being measured in $g$, while plot (b) refers to the outcome $e$. The maxima are close to the value $\sim0.3$. $(\kappa,\gamma)=(0.007,0)$ as in \cite{Haroche1}.\label{thirdfig}} \end{figure}
\section{Introduction} Quantum entanglement \cite{Einstein} plays a key role in many potential applications in quantum information and computation \cite{Bennett1,Bennett2,Bostrom,Gisin,Zukowski}. Optimal success of a quantum communication protocol can be ensured by using maximally entangled states as resources for information transfer. In general, however, the use of nonmaximally entangled resources leads to probabilistic protocols, and the fidelity of information transfer is then less than unity. For example, quantum teleportation of a single qubit using three- and four-qubit W states is always probabilistic, and the teleportation fidelity depends on the unknown parameters of the teleported state. On the other hand, Agrawal and Pati \cite{Pati1} proposed a new class of three-qubit W-type states for deterministic teleportation of a single qubit by performing three-qubit joint measurements. The efficiency of these W-type states, however, decreases if one performs only standard two-qubit and single-qubit measurements \cite{Adhikari3} instead of a joint three-qubit measurement. We address the question of the usefulness of such non-maximally entangled resources for sending maximum information from a sender to a receiver. \par We propose a new class of non-maximally entangled four-qubit W-type states for quantum information processing and demonstrate the possibility of deterministic teleportation of a single qubit with unit fidelity. For practical purposes, we focus on a protocol to share optimal bipartite entanglement. For this, we use partially entangled four-qubit W-type states as a starting resource between the two users and achieve the optimal bipartite entanglement by performing standard two-qubit measurements only. Our results show that the shared two-qubit entanglement can lead to a maximally entangled resource for certain state parameters. We further demonstrate the need to analyze four-qubit W-type states by comparing the efficacy of three- and four-qubit W-type states as resources in terms of the concurrence \cite{Wootters} of the finally shared entangled state between the two users. Interestingly, our results show that for certain ranges of parameters, four-qubit W-type states are more efficient resources than three-qubit W-type states for achieving optimal concurrence. \par For dense coding, we find that in principle a sender can transmit a 2-bit classical message to a receiver by locally manipulating his/her single qubit. The teleportation and dense coding protocols are also generalized for $N$-qubit W-type states. \section{Teleportation Using a 4-particle W-type State} Teleportation is a process to transmit quantum information over arbitrary distances using a shared entangled resource. Although non-maximally entangled four-qubit W states can be used as resources for probabilistic teleportation of a single qubit \cite{Shi}, one cannot achieve teleportation of a single qubit using W states with certainty. We propose a new class of four-qubit W states, namely \begin{eqnarray} \left|\Psi_{k}\right\rangle_{1234} &=& \frac{1}{2\sqrt{k+1}}\left[\left|1000\right\rangle+\sqrt{k}e^{i\gamma}\left|0100\right\rangle\right. \nonumber \\ &+&\left. \sqrt{k+1}e^{i\delta}\left|0010\right\rangle \right. + \left. \sqrt{2k+2}e^{i\zeta}\left|0001\right\rangle\right]_{1234} \end{eqnarray} that can be used for deterministic quantum teleportation.
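The deterministic character of the protocol is easy to verify numerically. The sketch below (an illustrative Python/numpy check, not part of the protocol itself; the helper names are ours) builds $\left|\Psi_{k}\right\rangle$, confirms its normalization for arbitrary $k$ and, for $k=1$ with all phases set to zero, verifies that projecting Alice's four qubits onto the measurement basis of Eq. (3) below leaves Bob's qubit in $\sigma\left|\phi\right\rangle$, with each outcome occurring with probability $1/4$:
\begin{verbatim}
import numpy as np

def ket(bits):
    """Computational-basis ket for a bit string such as '0100'."""
    v = np.zeros(2 ** len(bits)); v[int(bits, 2)] = 1.0
    return v

def w_state(k):
    """Four-qubit W-type state of Eq. (1), all phases set to zero."""
    psi = (ket('1000') + np.sqrt(k) * ket('0100')
           + np.sqrt(k + 1) * ket('0010') + np.sqrt(2*k + 2) * ket('0001'))
    return psi / (2 * np.sqrt(k + 1))

for k in (0.5, 1.0, 7.0):
    assert np.isclose(w_state(k) @ w_state(k), 1.0)  # Eq. (1) is normalized

alpha, beta = 0.6, 0.8                   # a (real) input qubit on 'a'
phi = alpha * ket('0') + beta * ket('1')
Phi = np.kron(phi, w_state(1.0))         # joint state, qubit order a,1,2,3,4

# Measurement basis of Eq. (3) for k = 1 (the sign flips on the last term)
eta = lambda s: (ket('0100') + ket('0010') + np.sqrt(2)*ket('0001')
                 + s*2*ket('1000')) / (2*np.sqrt(2))
xi = lambda s: (ket('1100') + ket('1010') + np.sqrt(2)*ket('1001')
                + s*2*ket('0000')) / (2*np.sqrt(2))

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); iY = X @ Z
for proj, U in [(eta(+1), I2), (eta(-1), Z), (xi(+1), X), (xi(-1), iY)]:
    bob = proj @ Phi.reshape(16, 2)      # contract Alice's qubits a,1,2,3
    p = bob @ bob                        # probability of this outcome
    assert np.isclose(p, 0.25)           # deterministic: 4 outcomes, 1/4 each
    assert np.isclose(abs((U @ phi) @ bob) / np.sqrt(p), 1.0)  # Bob: U|phi>
\end{verbatim}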
For example, if Alice wants to teleport an unknown state $\left|\phi\right\rangle_{a}=\left[\alpha\left|0\right\rangle+\beta\left|1\right\rangle\right]_{a}~,~\alpha^{2}+\beta^{2}=1$ to Bob, then Alice and Bob need to share the four-qubit state $\left|\Psi_{k}\right\rangle_{1234}$ such that Alice has qubits $1$, $2$ and $3$ and Bob has qubit $4$. In Eq. (1), $k$ is a real number and $\gamma,\delta,\zeta$ represent phases. \par The joint state of five qubits can be represented as \begin{eqnarray} \left|\Phi\right\rangle_{a1234} &=& \left|\phi\right\rangle_{a} \otimes \left|\Psi_{k}\right\rangle_{1234} \end{eqnarray} In order to teleport the unknown state to Bob, Alice projects her four qubits onto the states \begin{eqnarray} \left|\eta_{k}\right\rangle^{\pm}_{a123} &=& \frac{1}{2\sqrt{k+1}} \left[\left|0100\right\rangle+\sqrt{k}e^{i\gamma}\left|0010\right\rangle\right. \nonumber \\ &+& \left.\sqrt{k+1}e^{i\delta}\left|0001\right\rangle \right. \pm \left. \sqrt{2k+2}e^{i\zeta}\left|1000\right\rangle \right]_{a123}\nonumber \\ \left|\xi_{k}\right\rangle^{\pm}_{a123} &=& \frac{1}{2\sqrt{k+1}} \left[\left|1100\right\rangle+\sqrt{k}e^{i\gamma}\left|1010\right\rangle \right. \nonumber \\ &+& \left.\sqrt{k+1}e^{i\delta}\left|1001\right\rangle\right. \pm \left.\sqrt{2k+2}e^{i\zeta}\left|0000\right\rangle\right]_{a123} \end{eqnarray} Note that the $\pm$ sign applies only to the last term of each state; this is what makes the four states mutually orthonormal. Although the teleportation protocol works for all $k$, $\gamma$, $\delta$ and $\zeta$, for simplicity we assume $k=1$ and $\gamma=\delta=\zeta=0$. Thus, the joint state of five qubits can be re-expressed using Alice's measurement basis as \begin{eqnarray} \left|\Phi\right\rangle_{a1234} &=& \frac{1}{2}\left[\left|\eta_{1}\right\rangle^{+}_{a123}\left|\phi\right\rangle_{4} + \left|\eta_{1}\right\rangle^{-}_{a123}\sigma_{z}\left|\phi\right\rangle_{4} \right. \nonumber \\ &+& \left.\left|\xi_{1}\right\rangle^{+}_{a123}\sigma_{x}\left|\phi\right\rangle_{4} \right. + \left. \left|\xi_{1}\right\rangle^{-}_{a123}i\sigma_{y}\left|\phi\right\rangle_{4}\right] \end{eqnarray} where $\left|\phi\right\rangle_{4}=[\alpha\left|0\right\rangle +\beta\left|1\right\rangle]_{4}~,~\alpha^{2}+\beta^{2}=1 $.\\ A four-qubit joint measurement on qubits $a, 1, 2$ and $3$ projects the state of Bob's qubit onto one of the four possible states shown in Eq. (4), each with probability $1/4$. \par Hence, teleportation of a single qubit using the non-maximally entangled four-qubit W state is always successful. The use of the proposed states as quantum channels also adds flexibility to experimental setups by relaxing the requirement of a maximally entangled shared resource for faithful teleportation. Since the teleportation is deterministic, the total probability and fidelity of teleporting a single qubit using a partially entangled four-qubit W state are also unity. \section{Teleportation Using a W-type State of an $n$-Particle System} In the previous section, we demonstrated efficient quantum teleportation of a single-qubit state using a new class of four-qubit W-type states. We now extend our idea to $n$-particle W-type states. \par In order to teleport the single-qubit state $\left|\phi\right\rangle_{a}$ to Bob, Alice needs to share an $n$-particle state \begin{eqnarray} \lefteqn{\left|\Psi_{k}\right\rangle_{12\ldots n} =} && \nonumber \\ && \frac{1}{\sqrt{(n-2)(2k+n-3)+2}} \left[\left|100\ldots0\right\rangle_{12\ldots n}\right.
\nonumber \\ &+& \left.\sqrt{k}e^{i\gamma}\left|010\ldots0\right\rangle_{12\ldots n}+\sqrt{k+1}e^{i\delta}\left|001\ldots0\right\rangle_{12\ldots n}\right. \nonumber \\ &+& \left.\cdots+\sqrt{k+(n-3)}e^{i\zeta}\left|000\ldots10\right\rangle_{12\ldots n}\right. \nonumber \\ &+& \left. \sqrt{(n-2)k+\frac{(n-2)(n-3)}{2}+1}e^{i\beta}\right. \nonumber \\ & & \left.\left|000\ldots1\right\rangle_{12\ldots n}\right] \end{eqnarray} with Bob, such that particles $1$ to $n-1$ are with Alice and particle $n$ is with Bob. In this case, the projection bases used by Alice are \begin{eqnarray} \lefteqn{\left|\eta_{k}\right\rangle^{\pm}_{a,1,2\ldots,n-1} =} &&\nonumber \\ && \frac{1}{\sqrt{(n-2)(2k+n-3)+2}} \left[\left|010\ldots0\right\rangle\right. \nonumber \\ &+& \left. \sqrt{k}e^{i\gamma}\left|001\ldots0\right\rangle + \sqrt{k+1}e^{i\delta}\left|0001\ldots0\right\rangle \right. \nonumber \\ &+& \left. \cdots+\sqrt{k+(n-3)}e^{i\zeta}\left|000\ldots1\right\rangle\right.\nonumber \\ &\pm & \left.\sqrt{(n-2)k+\frac{(n-2)(n-3)}{2}+1}e^{i\beta}\right. \nonumber \\ & & \left.\left|100\ldots0\right\rangle\right]_{a,1,2\ldots,n-1} \end{eqnarray} \begin{eqnarray} \lefteqn{\left|\xi_{k}\right\rangle^{\pm}_{a,1,2\ldots,n-1}=} && \nonumber \\ && \frac{1}{\sqrt{(n-2)(2k+n-3)+2}}\left[\left|110\ldots0\right\rangle\right. \nonumber \\ &+& \left.\sqrt{k}e^{i\gamma}\left|101\ldots0\right\rangle + \sqrt{k+1}e^{i\delta}\left|1001\ldots0\right\rangle \right. \nonumber \\ &+& \left. \cdots+\sqrt{k+(n-3)}e^{i\zeta}\left|100\ldots1\right\rangle\right. \nonumber \\ &\pm & \left. \sqrt{(n-2)k+\frac{(n-2)(n-3)}{2}+1}e^{i\beta}\right. \nonumber \\ & & \left.\left|000\ldots0\right\rangle\right]_{a,1,2\ldots,n-1} \end{eqnarray} Similar to the teleportation protocol discussed in the previous section, we can express the joint state of $n+1$ particles in terms of Alice's projection bases as \begin{eqnarray} \left|\Phi\right\rangle_{a12\ldots n}&=&\left|\phi\right\rangle_{a}\otimes\left|\Psi_{k}\right\rangle_{123\ldots n} \nonumber \\ &=&\frac{1}{2}\left[\left|\eta_{k}\right\rangle^{+}_{a12\ldots n-1}\left|\phi\right\rangle_{n}\right. \nonumber \\ &+ & \left. \left|\eta_{k}\right\rangle^{-}_{a12\ldots n-1}\sigma_{z}\left|\phi\right\rangle_{n}\right. \nonumber \\ &+& \left. \left|\xi_{k}\right\rangle^{+}_{a12\ldots n-1}\sigma_{x}\left|\phi\right\rangle_{n} \right. \nonumber \\ &+ & \left.\left|\xi_{k}\right\rangle^{-}_{a12\ldots n-1}i\sigma_{y}\left|\phi\right\rangle_{n}\right] \end{eqnarray} where $\left|\phi\right\rangle_{n}=[\alpha\left|0\right\rangle +\beta\left|1\right\rangle]_{n}~,~\alpha^{2}+\beta^{2}=1 $.\\ Eq. (8) shows that the teleportation protocol is always successful, with probability $1/4$ for each of Alice's four measurement outcomes. Therefore, Bob can always recover the original state by performing single-qubit unitary transformations on his qubit, once he receives the two-bit classical message from Alice regarding her measurement outcome. \section{Analysis of the Efficiency of W-type States in the Teleportation Process} We have shown that the $n$-particle W-type state can be successfully used as an optimal resource for efficient teleportation. The successful completion of the teleportation protocol depends on the availability of an experimental setup able to perform and distinguish multiqubit measurements. It is evident that, with present experimental techniques, one can perform and distinguish different Bell measurements.
Therefore, we analyze the efficacy of our states for a protocol where two users want to create an efficient bipartite entangled channel between them using the partially entangled four-qubit W state $\left|\Psi_{k}\right\rangle_{1234}$. For this, we assume that Alice initially has a two-qubit entangled state $\left|\phi\right\rangle_{ab}=\left[\alpha\left|00\right\rangle+\beta\left|11\right\rangle\right]_{ab}~,~\alpha^{2}+\beta^{2}=1$ in addition to the shared W-type entangled state \begin{eqnarray} \left|\Psi_{k}\right\rangle_{1234} &=& \frac{1}{2\sqrt{k+1}}\left[\left|1000\right\rangle+\sqrt{k}\left|0100\right\rangle\right. \nonumber \\ &+&\left. \sqrt{k+1}\left|0010\right\rangle\right. + \left.\sqrt{2k+2}\left|0001\right\rangle\right]_{1234} \end{eqnarray} with Bob, such that qubits $1, 2$ and $3$ are with Alice and qubit $4$ is with Bob. In order to share bipartite entanglement with Bob, Alice needs to perform Bell measurements \begin{eqnarray} \label{Bell} \left| \phi\right\rangle^{\pm} &= &\frac{1}{\sqrt{2}} \left[\left| 00 \right\rangle \pm \left| 11 \right\rangle \right], \nonumber \\ \left| \psi\right\rangle^{\pm} &=& \frac{1}{\sqrt{2}} \left[\left| 01 \right\rangle \pm \left| 10 \right\rangle \right] \end{eqnarray} on her qubits. There are different combinations in which Alice can perform these Bell measurements to achieve the required two-qubit entanglement. We have examined all possible combinations and measurement outcomes; here we discuss only the optimal cases, in which the concurrence of the finally shared two-qubit entangled state is largest. We now proceed to analyze the efficacy of the protocol in terms of the concurrence of the final entangled state. \begin{description} \item[$\bullet$ Case:1] In the first case, Alice's measurement outcomes are $\left|\phi^{+}\right\rangle_{b1}$ and $\left|\phi^{+}\right\rangle_{23}$. Therefore, the joint state of the two qubits shared between Alice and Bob can be represented as \begin{eqnarray} \left|\psi\right\rangle_{a4} &=& \frac{1}{\sqrt{(2k+2)\alpha^{2}+\beta^{2}}}\left[\sqrt{2k+2}\alpha \right. \nonumber \\ && \left.\left|01\right\rangle_{a4} + \beta\left|10\right\rangle_{a4}\right] \end{eqnarray} The concurrence of $\left|\psi\right\rangle_{a4}$ is \begin{equation} C^{(1)}_{4} = \frac{2\alpha\sqrt{1-\alpha^{2}}\sqrt{2k+2}}{(2k+1)\alpha^{2}+1} \end{equation} where the subscript of $C$ denotes the number of qubits and the superscript labels the case.\\ Eq. (12) shows that for any given positive real number $k$, if $\alpha^{2}$ is varied from $0$ to $1$, the concurrence first increases and then decreases. Interestingly, for $\alpha^{2}=\frac{1}{(2k+3)}$ the concurrence of the shared entangled state is unity, i.e., Alice and Bob share a maximally entangled state. This is a striking result: Alice and Bob started with a partially entangled state, but by performing Bell-state measurements they created maximal bipartite entanglement between them. The finally shared state can thus be used for various information processing protocols. This is particularly useful in scenarios where the users in a communication protocol only have access to partially entangled multiqubit states. Further, the analysis presented here not only allows the users to create maximal entanglement but also relaxes the constraints on the experimental setup to perform and distinguish multiqubit measurements. The price one pays to achieve the maximal entanglement is two standard Bell measurements.
Nevertheless, once the users achieve maximal entanglement, the state can be used for various efficient and optimal applications in quantum information and computation. \\ \item[$\bullet$ Case:2] In the second case, Alice's measurement outcomes are $\left|\phi^{+}\right\rangle_{b2}$ and $\left|\phi^{+}\right\rangle_{13}$. Hence the shared bipartite state and its concurrence are given by \begin{eqnarray} \left|\psi\right\rangle_{a4} &=& \frac{1}{\sqrt{(2k+2)\alpha^{2}+k\beta^{2}}}\left[\sqrt{2k+2}\alpha \right. \nonumber \\ & & \left. \left|01\right\rangle_{a4} +\sqrt{k} \beta\left|10\right\rangle_{a4}\right] \end{eqnarray} and \begin{equation} C^{(2)}_{4} = \frac{2\alpha\sqrt{1-\alpha^{2}}\sqrt{2k+2}\sqrt{k}}{(k+2)\alpha^{2}+k}, \end{equation} respectively. Similar to the first case discussed above, the concurrence of the shared state first increases, attains its maximum value, and then decreases to $0$ for any $k$ and $0<\alpha\leq1$. Further, for $\alpha^{2}=\frac{k}{(3k+2)}$ the concurrence of the shared state is unity. \\ \item[$\bullet$ Case:3] The third case provides another interesting observation: when Alice's measurement outcomes are $\left|\phi^{+}\right\rangle_{b3}$ and $\left|\phi^{+}\right\rangle_{12}$, the concurrence of the shared bipartite state is independent of the parameter $k$. In this scenario, the shared bipartite state and its concurrence are \begin{eqnarray} \left|\psi\right\rangle_{a4} &=& \frac{1}{\sqrt{(2k+2)\alpha^{2}+(k+1)\beta^{2}}}\left[\right. \nonumber \\ & & \left.\sqrt{2k+2}\alpha \left|01\right\rangle_{a4} \right. + \left.\sqrt{k+1} \beta\left|10\right\rangle_{a4}\right] \end{eqnarray} and \begin{equation} C^{(3)}_{4} = \frac{2\sqrt{2}\alpha\sqrt{1-\alpha^{2}}}{\alpha^{2}+1}, \end{equation} respectively. The concurrence given in Eq. (16) attains its maximum value, i.e., unity, for $\alpha^{2}=\frac{1}{3}$. \\ \item[$\bullet$ Case:4] The fourth case, i.e., when Alice's measurement outcomes are $\left|\phi^{+}\right\rangle_{a1}$ and $\left|\phi^{+}\right\rangle_{b2}$, provides yet another interesting observation: the concurrence of the shared bipartite state is independent of both $k$ and $\alpha$. In this scenario, the shared bipartite state and its concurrence are \begin{eqnarray} \left|\psi\right\rangle_{34} &=& \frac{1}{\sqrt{3k+3}}\left[\sqrt{2k+2} \left|01\right\rangle_{34} \right. \nonumber \\ &+& \left.\sqrt{k+1} \left|10\right\rangle_{34}\right] \end{eqnarray} and \begin{equation} C^{(4)}_{4} = \frac{2\sqrt{2}}{3} \end{equation} respectively. The concurrence given in Eq. (18) does not depend on the input state. \\ \end{description} Fig. (1) compares the first three cases above, analyzing the efficacy of the shared bipartite state in terms of concurrence. For $k=1$, the concurrences for cases 1 and 2 coincide; for large $k$, cases 2 and 3 lead to identical results. Moreover, Fig. (1) also shows the relation between $\alpha$ and the combination of Bell measurements to be performed to achieve the optimal concurrence.
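The unit-concurrence points quoted in the first three cases above can be confirmed numerically; a short illustrative sketch (Python/numpy assumed) is:
\begin{verbatim}
import numpy as np

C1 = lambda a2, k: 2*np.sqrt(a2*(1-a2))*np.sqrt(2*k+2) / ((2*k+1)*a2 + 1)
C2 = lambda a2, k: 2*np.sqrt(a2*(1-a2))*np.sqrt(k*(2*k+2)) / ((k+2)*a2 + k)
C3 = lambda a2, k: 2*np.sqrt(2*a2*(1-a2)) / (a2 + 1)   # Eq. (16), k-free

for k in (1.0, 2.0, 10.0):
    assert np.isclose(C1(1/(2*k+3), k), 1.0)  # case 1: maximal at a^2=1/(2k+3)
    assert np.isclose(C2(k/(3*k+2), k), 1.0)  # case 2: maximal at a^2=k/(3k+2)
    assert np.isclose(C3(1/3, k), 1.0)        # case 3: maximal at a^2=1/3
\end{verbatim}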
\par \begin{figure*}[t] \centering \setlength\fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=\textwidth]{Plot1.pdf}} \caption{Comparison of the efficacy of the shared bipartite states in the three optimal cases} \end{figure*} A similar calculation for a shared $N$-qubit partially entangled state shows that the concurrences of the final states that depend on the input parameters are \begin{equation} C = \frac{2\alpha\beta\sqrt{k+r}\sqrt{(N-2)k +\frac{(N-2)(N-3)}{2}+1}}{((N-2)k+\frac{(N-2)(N-3)}{2}+1)\alpha^{2}+ (k+r)\beta^{2}} \end{equation} where $r$ ranges from $0$ to $(N-3)$. Eq. (19) shows that for $r=(1-k)$ the entanglement of the final state shared between Alice and Bob depends on the input-state parameters $\alpha$ and $k$. For $k \to \infty$, the concurrence is given by \begin{equation} C = \frac{2\alpha\sqrt{1-\alpha^{2}}\sqrt{N-2}}{(N-3)\alpha^{2}+1} \end{equation} Hence, for a given range of $\alpha$, if $k$ is very large then the W-type state with a smaller number of particles is a better resource. \\ Similarly, the concurrences of the final states that are independent of the input parameters are \begin{equation} C = \frac{2\sqrt{k+r}\sqrt{(N-2)k+\frac{(N-2)(N-3)}{2}+1}}{((N-1)k+\frac{(N-2)(N-3)}{2}+1+r)}, \end{equation} where $r$ again ranges from $0$ to $(N-3)$. It is evident from Eq. (21) that for $r=(1-k)$ the entanglement of the final state shared between Alice and Bob depends only on $k$. For $k \to \infty$, the concurrence is given by \begin{equation} C = \frac{2\sqrt{N-2}}{(N-1)} \end{equation} Hence, if $k$ is very large then the W-type state with a smaller number of particles is a better resource. \\ In order to analyze the usefulness of four-qubit W-type states for such a protocol, we further compare the efficacy of three- and four-qubit W-type states as resources in terms of the concurrence of the finally shared entangled state. We find that for certain ranges of $\alpha$, four-qubit W-type states are more efficient resources than three-qubit W-type states for achieving optimal concurrence between the two users. For this, let us first give the form of the three-qubit W-type states as \begin{eqnarray} \left|\Psi_{k}\right\rangle_{123} &=& \frac{1}{\sqrt{2k+2}}\left[\left|100\right\rangle+\sqrt{k}\left|010\right\rangle\right. \nonumber \\ &+ & \left. \sqrt{k+1}\left|001\right\rangle\right]_{123} \end{eqnarray} Similar to the four-qubit case, there are optimal cases for which the concurrences of the finally shared states are \begin{equation} C^{(1)}_{3} = \frac{2\alpha\sqrt{(1-\alpha^{2})}\sqrt{k+1}}{(k)\alpha^{2}+1} \end{equation} and \begin{equation} C^{(2)}_{3} = \frac{2\alpha\sqrt{k(k+1)(1-\alpha^{2})}}{\alpha^{2}+k}. \end{equation} In the above two cases the optimal concurrence of the finally shared entangled state depends on the input state. Similar to the four-qubit case, there is also one optimal case in which the concurrence of the finally shared state is independent of the input state: \begin{equation} C^{(3)}_{3} = \frac{2\sqrt{k+1}}{(k+2)} \end{equation} Fig. (2) compares the efficiencies of the three- and four-qubit W states in terms of the concurrence of the shared bipartite state.
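The trend with particle number in the large-$k$ limit can be read off Eq. (22) directly; as a quick illustrative check (Python/numpy assumed):
\begin{verbatim}
import numpy as np
C_inf = lambda N: 2*np.sqrt(N - 2)/(N - 1)         # Eq. (22), k -> infinity
print([round(C_inf(N), 3) for N in (3, 4, 5, 6)])  # [1.0, 0.943, 0.866, 0.8]
\end{verbatim}
\noindent so at large $k$ the achievable input-independent concurrence decreases monotonically with the number of particles, consistent with the statement above.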
\begin{figure*}[t] \centering \setlength\fboxsep{0pt} \setlength\fboxrule{0.25pt} \fbox{\includegraphics[width=\textwidth]{Plot2.pdf}} \caption{Comparison of the efficiency of three- and four-qubit W-type states as resources} \end{figure*} Depending on the value of the parameter $k$, we identify four different cases:\\ \textbf{Case 1:} For $k=1$, if $0<\alpha^{2}\leq \frac{k(\sqrt{2}-1)}{((k+2)-\sqrt{2})}$ then the 4-particle W-type state is a better resource than the 3-particle W-type state, and vice versa otherwise.\\ \textbf{Case 2:} For $k=2$ \begin{description} \item[$\bullet$ Range:1] If $0<\alpha^{2}\leq \frac{\sqrt{2}-1}{(2-\sqrt{2})k+1} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:2] If $\frac{\sqrt{2}-1}{(2-\sqrt{2})k+1}<\alpha^{2}\leq \frac{\sqrt{k+1}-\sqrt{2}}{k\sqrt{2}-\sqrt{k+1}} $ then the 3-particle W-type state is a better resource than the 4-particle W-type state. \item[$\bullet$ Range:3] If $\frac{\sqrt{k+1}-\sqrt{2}}{k\sqrt{2}-\sqrt{k+1}}<\alpha^{2}\leq \frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:4] If $\frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}}< \alpha^{2}<1 $ then the 3-particle W-type state is a better resource than the 4-particle W-type state. \end{description} \textbf{Case 3:} For $k>2$ \begin{description} \item[$\bullet$ Range:1] If $0<\alpha^{2}\leq \frac{\sqrt{2}-1}{(2-\sqrt{2})k+1} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:2] If $\frac{\sqrt{2}-1}{(2-\sqrt{2})k+1}<\alpha^{2}\leq \frac{k-\sqrt{2k}}{\sqrt{2k}-(k+2)} $ then the 3-particle W-type state is a better resource than the 4-particle W-type state. \item[$\bullet$ Range:3] If $\frac{k-\sqrt{2k}}{\sqrt{2k}-(k+2)}<\alpha^{2}\leq \frac{k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-(k+2)} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:4] If $\frac{k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-(k+2)}<\alpha^{2}\leq \frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:5] If $\frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}}< \alpha^{2}<1 $ then the 3-particle W-type state is a better resource than the 4-particle W-type state. \end{description} \textbf{Case 4:} When $k$ is very large \begin{description} \item[$\bullet$ Range:1] If $0<\alpha^{2}\leq \frac{\sqrt{2}-1}{(2-\sqrt{2})k+1} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:2] If $\frac{\sqrt{2}-1}{(2-\sqrt{2})k+1}<\alpha^{2}\leq \frac{k-\sqrt{2k}}{\sqrt{2k}-(k+2)} $ then the 3-particle W-type state is a better resource than the 4-particle W-type state. \item[$\bullet$ Range:3] If $\frac{k-\sqrt{2k}}{\sqrt{2k}-(k+2)}<\alpha^{2}\leq \frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}} $ then the 4-particle W-type state is a better resource than the 3-particle W-type state. \item[$\bullet$ Range:4] If $\frac{\sqrt{2}k-\sqrt{k}\sqrt{k+1}}{\sqrt{k}\sqrt{k+1}-\sqrt{2}}< \alpha^{2}<1 $ then the 3-particle W-type state is a better resource than the 4-particle W-type state.
\end{description} Hence, for the practical implementation of an efficient bipartite-state-sharing protocol, one can choose W-type states as resources according to the ranges of the parameters $\alpha$ and $k$. \section{Superdense Coding Using W-type States of an $n$-Particle System} Superdense coding deals with efficient information transfer between the users of a communication protocol by means of a shared entangled resource. We use \begin{eqnarray} \left|\eta_{1}\right\rangle_{1234}^{+} &=& \frac{1}{2\sqrt{2}}\left[\left|0100\right\rangle+\left|0010\right\rangle\right. \nonumber \\ &+& \left.\sqrt{2}\left|0001\right\rangle + 2\left|1000\right\rangle\right]_{1234} \end{eqnarray} as a shared resource for a superdense coding protocol between Alice and Bob, such that the first qubit is with Alice and the rest of the qubits are with Bob. In order to communicate a classical message to Bob, Alice first encodes her message using one of the four single-qubit operations $\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}$ on her qubit 1. The four operations map the originally shared state between Alice and Bob to four orthogonal states \begin{eqnarray} \left(\sigma_{x} \otimes I \otimes I \otimes I \right) \left|\eta_{1}\right\rangle_{1234}^{+} &=& \left|\xi_{1}\right\rangle_{1234}^{+} \nonumber \\ \left(\sigma_{z} \otimes I \otimes I \otimes I \right)\left|\eta_{1}\right\rangle_{1234}^{+} &=& \left|\eta_{1}\right\rangle_{1234}^{-} \nonumber \\ \left(i\sigma_{y} \otimes I \otimes I \otimes I\right)\left|\eta_{1}\right\rangle_{1234}^{+} &=& \left|\xi_{1}\right\rangle_{1234}^{-} \nonumber \\ \left(I \otimes I \otimes I \otimes I\right)\left|\eta_{1}\right\rangle_{1234}^{+} &=& \left|\eta_{1}\right\rangle_{1234}^{+} \end{eqnarray} Thus, in principle, Alice can prepare four distinct messages for Bob by locally manipulating her qubit. Once Alice encodes the message, she sends her qubit to Bob. In order to distinguish between the messages sent by Alice, Bob performs an appropriate joint measurement on the four qubits; since the four states in Eq. (28) are mutually orthogonal, he can always distinguish the four messages prepared by Alice. The protocol is optimal since, by locally manipulating her single qubit, Alice transmits two bits of classical information to Bob. \par We now proceed to demonstrate an optimal dense coding protocol using our $N$-particle W-type state \begin{eqnarray} \lefteqn{\left|\eta_{k}\right\rangle^{+}_{12\ldots N} =} && \nonumber \\ && \frac{1}{\sqrt{(N-2)(2k+N-3)+2}}\left[\left|010\ldots0\right\rangle\right. \nonumber \\ &+&\left.\sqrt{k}\left|001\ldots0\right\rangle + \sqrt{k+1}\left|0001\ldots0\right\rangle \right. \nonumber \\ &+&\left. \cdots+\sqrt{k+(N-3)}\left|000\ldots1\right\rangle \right. \nonumber \\ &+& \left.\sqrt{(N-2)k+\frac{(N-2)(N-3)}{2}+1}\right. \nonumber \\ & &\left.\left|100\ldots0\right\rangle\right]_{12\ldots N} \nonumber \\ \end{eqnarray} where qubit 1 is with Alice and the rest of the qubits are with Bob.
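Before turning to the $N$-particle case, we note that the orthogonality of the four encoded messages in the four-qubit case of Eq. (28) can be verified directly; the following illustrative Python/numpy sketch (helper names are ours) does so for $k=1$:
\begin{verbatim}
import numpy as np

def ket(bits):
    v = np.zeros(2 ** len(bits)); v[int(bits, 2)] = 1.0
    return v

eta = lambda s: (ket('0100') + ket('0010') + np.sqrt(2)*ket('0001')
                 + s*2*ket('1000')) / (2*np.sqrt(2))
xi = lambda s: (ket('1100') + ket('1010') + np.sqrt(2)*ket('1001')
                + s*2*ket('0000')) / (2*np.sqrt(2))

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); iY = X @ Z
op1 = lambda s: np.kron(s, np.eye(8))    # Alice's operation acts on qubit 1

encoded = [op1(U) @ eta(+1) for U in (I2, X, Z, iY)]   # the four messages
targets = [eta(+1), xi(+1), eta(-1), xi(-1)]           # states of Eq. (28)
G = np.array([[abs(u @ v) for v in targets] for u in encoded])
print(np.allclose(G, np.eye(4)))         # -> True: orthogonal, hence
                                         #    perfectly distinguishable by Bob
\end{verbatim}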
Similar to the four-particle case, Alice can produce four distinct messages for Bob using the single-qubit unitary transformations $\{I,\sigma_{x},\sigma_{y},\sigma_{z}\}$, such that \begin{eqnarray} \left(I \otimes I \otimes I \otimes I\right)\left|\eta_{k}\right\rangle^{+}_{12\ldots N} &=& \left|\eta_{k}\right\rangle^{+}_{12\ldots N} \nonumber \\ \left(\sigma_{x} \otimes I \otimes I \otimes I \right)\left|\eta_{k}\right\rangle^{+}_{12\ldots N} &=& \left|\xi_{k}\right\rangle^{+}_{12\ldots N} \nonumber \\ \left(\sigma_{z} \otimes I \otimes I \otimes I\right)\left|\eta_{k}\right\rangle^{+}_{12\ldots N} &=& \left|\eta_{k}\right\rangle^{-}_{12\ldots N} \nonumber \\ \left(i\sigma_{y} \otimes I \otimes I \otimes I\right)\left|\eta_{k}\right\rangle^{+}_{12\ldots N} &=& \left|\xi_{k}\right\rangle^{-}_{12\ldots N} \end{eqnarray} Therefore our $N$-particle W-type state can also be used for an optimal superdense coding protocol. \section{Conclusion} We have analyzed a class of partially entangled four-qubit W-type states for efficient quantum information processing tasks. Although performing and distinguishing multiqubit measurements is experimentally demanding, our states can be used for deterministic teleportation with unit fidelity. In order to demonstrate the practical utility of such states, we have discussed and compared the efficiency of three- and four-qubit W-type states for sharing optimal bipartite entanglement between two users. Our results will be of importance in situations where users only have access to partially entangled states and would like to establish optimal bipartite entanglement for efficient and deterministic information processing. \par We have also obtained analytical relations between the ranges of the state parameters and the optimal concurrence of the finally shared state, allowing one to decide whether to use a three- or four-qubit W-type state for a particular protocol. We have shown that our states can be used for optimal dense coding as well. The protocols have also been generalized to the case of $N$ qubits.
\section{Introduction} Ever since Wheeler discussed ``it from bit''~\cite{whee90}, there has been great interest in what constraints on the properties of the universe can be derived using some appropriate mathematical formulation of information. Some of this work relies on Shannon information theory~\cite{caticha2011entropic,goyal2010information,goyal2012information,hartle2005physics}, and some of it on Fisher information theory~\cite{frieden2004science}. There has also been work on this topic that focuses on the \emph{processing} of information, i.e., that views the universe through the lens of Turing machine (TM) theory~\cite{lloyd1990valuable,chaitin2004algorithmic, zure89a,zure89b,zurek1990complexity,zenil2012computable,lloyd2006programming,schmidhuber2000algorithmic,zuse1969rechnender}. Here I adopt a different approach. I focus on the fact that information concerning the state of the universe typically \emph{is held by some agent embedded in that universe}. For example, we cannot speak of Shannon information without specifying probability distributions --- which reflect the uncertainty of some specific agent concerning the state of the universe that contains them (e.g., the uncertainty of a scientist making a prediction). This leads us to analyze how an agent can acquire information concerning the state of the world. That is the topic of this paper, as described in the remainder of this introduction. \subsection{Inference devices} There are (at least) four ways an agent can acquire some information concerning the universe in which it is embedded: via an observation device, via a control device, via a prediction device, and via a memory device, i.e., a ``retrodiction'' device. It turns out that there is some mathematical structure shared by all such information-acquiring devices. Devices with that structure are called ``Inference Devices'' (IDs)~\cite{wolp01,wolp08b,wolp10,bind08}. In the first section of this paper I present two examples of how an agent can acquire information about the universe that contains them, illustrating that in both examples the agent has the mathematical structure of an ID. I then present some of the more elementary impossibility results concerning IDs. These results place strong constraints on what information about a universe can be jointly held by different IDs embedded in that universe. Importantly, these constraints arise only from the definition of IDs, without any assumptions about the laws of the universe containing the IDs; they hold in \emph{any} universe that allows agents that have information about that universe. In particular, they would hold even in a classical, finite universe, with no chaotic processes. They would also hold even in a universe with agents who have super-Turing computation abilities, can transmit information at super-luminal rates, etc. It is worth noting as well that these impossibility theorems hold even though there is no sense in which IDs have any capacity for self-reference. After these preliminaries I present some of the connections between the theory of IDs and the theory of Turing Machines. In particular I analyze some of the properties of an ID version of universal Turing machines and of an ID version of Kolmogorov complexity~\cite{lloyd1990valuable,chaitin2004algorithmic, zure89a,zure89b,zurek1990complexity,zenil2012computable,lloyd2006programming,schmidhuber2000algorithmic,zuse1969rechnender}.
I show that the ID versions of those quantities obey many of the familiar results of Turing machine theory (e.g., the invariance theorem of TM theory). I then consider one way to extend the theory of IDs to the case where there is a probability distribution over the states of the universe, so that no information is ever 100\% guaranteed to be true. In particular, I present a result concerning the products of probabilities of error of two separate IDs, a result which is formally similar to the Heisenberg uncertainty principle. These results all concern subsets of an entire universe, e.g., one or two IDs embedded in a larger universe. However we can expand the scope to an entire universe. The idea is to \emph{define} a ``universe'', with whatever associated laws of physics, to be a set of physical systems and IDs (e.g., a set of scientists), where the IDs can have information concerning those physical systems and / or one another. Adopting this approach, I use the theory of IDs to derive impossibility results concerning the nature of the entire universe. \subsection{Inference devices and epistemic logic} Most of the results presented to this point in the paper have appeared before, albeit in a more complicated, less transparent formalism than the one used here~\cite{wolp01,wolp08b,wolp10,bind08}. In the last sections of the paper I present new results. These all involve an extension of IDs, one that includes some of the features that are shared by the four ways for an agent to acquire information (observation, control, prediction and memory) but that are not in the original definition of IDs. I show that this strengthened version of IDs has a close relation to the various ways of formalizing ``knowledge'' that are considered in epistemic logic~\cite{fagin2004reasoning,aaronson2013philosophers,parikh1987knowledge,zalta2003stanford}. However the ID-based theory of knowledge is not subject to what is perhaps the major problem of these earlier ways of formalizing knowledge, the problem of \emph{logical omniscience}. To explain that problem, it is easiest to work with the \emph{event-based} formulation of epistemic logic pioneered by Aumann~\cite{aumann1976agreeing,auma99,aubr95,futi91,bibr88}. In this formulation we start with a space $U$ of possible states of the entire universe across all time. (In the literature this is typically called a set of ``possible worlds''.) An \emph{event} is defined as a subset of $U$. For example, let $U$ be a set of all possible histories of the universe across all time and space. Then the event \{there are no clouds in the sky in London during January 1, 2000\} is the set of all $u \in U$ in which there are no clouds in the sky in London during January 1, 2000. Belief by an agent concerning the universe is formalized in terms of a partition $\{R_i\}$ of $U$, which is called an \emph{Aumann structure}, and an associated \emph{belief operator} $B : U \rightarrow \{R_i\}$. This is supposed to represent the intuition that in any universe $u$, the agent \emph{believes} that an event $E \subset U$ holds iff $B(u) \subseteq E$. \emph{Knowledge} is then defined as true belief, i.e., a belief operator $K$ with the property that $u \in K(u)$ for all $u \in U$~\cite{fagin2004reasoning}. Similarly, in the event-based framework we say that ``an agent \textit{knows event $E$} in world $u \in U$'' if $E$ holds for all worlds that the agent believes are possible when the actual world is $u$. So an agent knows $E$ at $u$ if, for their knowledge operator, $K(u) \subseteq E$.
Now suppose that some event $E$ implies some event $E'$, i.e., $E' \supseteq E$. This means that under the event-based definition of knowledge, if agent $i$ knows event $E$ in world $u$, $E'$ is also true in world $u$ --- and agent $i$ knows event $E'$ in world $u$. Generalizing, the agent cannot know a set of facts without knowing all logical implications of those facts. This is known as the property of \textbf{logical omniscience}. As an example of this property, suppose that someone multiplies two huge prime numbers and then (honestly) tells that product to the agent --- so that the agent knows that product. Then since that product logically implies its unique prime factorization, under the event-based framework the agent must ``know'' the two prime numbers, independent of any considerations of whether they have a computer to help them do calculations. This of course is absurd. This problem of logical omniscience plagues possible-worlds models of epistemic logic like those based on Aumann structures. Some extensions to possible-worlds models have been proposed to address this problem, e.g., bounds on the computational powers of the agent~\cite{aaronson2013philosophers}, assuming that the agent reasons illogically~\cite{fagin2004reasoning}, and a set of related ``impossible possible worlds'' restrictions on the nature of the agent~\cite{fagin2004reasoning,aaronson2013philosophers,parikh1987knowledge,zalta2003stanford}. However none of these has proven broadly convincing. There are other difficulties with the event-based formalization of knowledge. In that framework, by simply defining the ``knowledge operator'' of a rock on the moon appropriately, we would say that the rock ``knows'' whether it is in sunlight or not (for example by having its knowledge operator pick sets of states of the universe based on the temperature of that rock). This pathology arises because the definition of knowledge operators in the event-based framework does not reflect the fact that knowledge is \emph{held by a sentient agent}. Specifically, any sentient agent that knows something about the universe is able to correctly answer arbitrary questions about what they know, either implicitly or explicitly. (Note that a lunar rock cannot answer such questions.) However there is nothing in the formal structure of Aumann structures, Kripke structures, or the like that involves the ability of agents to correctly answer questions. The ID framework is concerned precisely with such ability of an agent to answer questions about the information they have. As a result, in the extension of the ID framework into a full-fledged theory of knowledge, we cannot say that a rock on the moon ``knows'' whether it is in sunlight. Moreover, the ID-based theory of knowledge avoids the problem of logical omniscience. Specifically, in the ID-based formalization of knowledge, if an agent knows some fact $A$, and knows that $A$ implies $B$, then $B$ is true --- but the agent may not know that. None of the results below are difficult to prove; some of the proofs, especially those of the ``Laplace demon theorems'', are almost trivial. (The interest is in the implications of the inference device axioms for metaphysics and epistemology, not the math needed to derive those implications.) Nonetheless, the interested reader can find all proofs that are not given below in~\cite{wolp08b}.
\section{Inference Devices} \label{sec:review} In this section I review the elementary properties of inference devices, mathematical structures that are shared by the processes of observation, prediction, recall and control~\cite{wolp01,wolp08b,wolp10,bind08}. These results are proven by extending Epimenides' paradox to apply to novel scenarios. Results relying on more sophisticated mathematics, some of them new, are presented in the following section. \subsection{Observation, prediction, recall and control of the physical world} \label{sec:physical} I begin with two examples that motivate the formal definition of inference devices. The first is an example of an agent making a correct observation about the current state of some physical variable. \begin{example} Consider an agent who claims to be able to observe $S(t_2)$, the value of some physical variable at time $t_2$. If the agent's claim is correct, then for any question of the form ``Does $S(t_2) = L$?'', the agent is able to consider that question at some $t_1 < t_2$, observe $S(t_2)$, and then at some $t_3 > t_2$ provide the answer ``yes'' if $S(t_2) = L$, and the answer ``no'' otherwise. In other words, she can correctly pose any such binary question to herself at $t_1$, and correctly say what the answer is at $t_3$.{\footnote{It may be that the agent has to use some appropriate observation apparatus to do this; in that case we can just expand the definition of the ``agent'' to include that apparatus. Similarly, it may be that the agent has to configure that apparatus appropriately at $t_1$. In this case, just expand our definition of the agent's ``considering the appropriate question'' to mean configuring the apparatus appropriately, in addition to the cognitive event of her considering that question.}} To formalize this, let $U$ refer to a set of possible histories of an entire universe across all time, where each $u \in U$ has the following properties: \begin{enumerate}[i)] \item $u$ is consistent with the laws of physics, \item In $u$, the agent is alive and of sound mind throughout the time interval $[t_1, t_3]$, and the system $S$ exists at the time $t_2$, \item In $u$, at time $t_1$ the agent considers some $L$-indexed question $q$ of the form ``Does $S(t_2) = L$?'', \item In $u$, the agent observes $S(t_2)$, \item In $u$, at time $t_3$ the agent uses that observation to provide her (binary) answer to $q$, and believes that answer to be correct.{\footnote{This means in particular that the agent does not lie and does not believe she was distracted from the question during $[t_1, t_3]$.}} \end{enumerate} The agent's claim is that for any question $q$ of the form ``Does $S(t_2) = L$?'', the laws of physics imply that for all $u$ in the subset of $U$ where at $t_1$ the agent considers $q$, it must be that the agent provides the correct answer to $q$ at $t_3$. Any prior knowledge concerning the history that the agent relies on to make this claim is embodied in the set $U$. The value $S(t_2)$ is a function of the actual history of the entire universe, $u \in U$. Write that function as $\Gamma(u)$, with image $\Gamma(U)$. Similarly, the question the agent has in her brain at $t_1$, together with the time-$t_1$ state of any observation apparatus she will use, is a function of $u$. Write that function as $X(u)$. Finally, the binary answer the agent provides at $t_3$ is a function of the state of her brain at $t_3$, and therefore it too is a function of $u$. Write that binary-valued function giving her answer as $Y(u)$.
Note that since $U$ embodies the laws of physics, in particular it embodies all neurological processes in the agent (e.g., her asking and answering questions), all physical characteristics of $S$, etc. So as far as this observation is concerned, the agent is just a pair of functions $(X, Y)$, both with the domain $U$ defined above, where $Y$ has the range $ \{-1, 1\}$. A necessary condition for us to say that the agent can ``observe $S(t_2)$'' is that for any $\gamma \in \Gamma(U)$, there is some associated $X$ value $x$ such that for all $u \in U$, so long as $X(u) = x$, it follows that $Y(u) = 1$ iff $\Gamma(u) = \gamma$. \label{ex:beg_1} \end{example} I now present an example of an agent making a correct prediction about the future state of some physical variable. \begin{example} Now consider an agent who claims to be able to predict $S(t_3)$, the value of some physical variable at time $t_3$. If the agent's claim is correct, then for any question of the form ``Does $S(t_3) = L$?'', the agent is able to consider that question at some time $t_1 < t_3$, and produce an answer at some time $t_2 \in (t_1, t_3)$, where the answer is ``yes'' if $S(t_3) = L$ and ``no'' otherwise. So loosely speaking, if the agent's claim is correct, then for any $L$, by their considering the appropriate question at $t_1$, they can generate the correct answer to any question of the form ``Does $S(t_3) = L$?'' at $t_2 < t_3$.{\footnote{\label{foot:4} It may be that the agent has to use some appropriate prediction computer to do this; in that case we can just expand the definition of the ``agent'' to include that computer. Similarly, it may be that the agent has to program that computer appropriately at $t_1$. In this case, just expand our definition of the agent's ``considering the appropriate question'' to mean programming the computer appropriately, in addition to the cognitive event of his considering that question.}} To formalize this, let $U$ refer to a set of possible histories of an entire universe across all time, where each $u \in U$ has the following properties: \begin{enumerate}[i)] \item $u$ is consistent with the laws of physics, \item In $u$, the agent exists throughout the interval $[t_1, t_2]$, and the system $S$ exists at $t_3$, \item In $u$, at $t_1$ the agent considers some question $q$ of the form ``Does $S(t_3) = L$?'', \item In $u$, at $t_2$ the agent provides his (binary) answer to $q$ and believes that answer to be correct.{\footnote{This means in particular that the agent does not believe he was distracted from the question during $[t_1, t_2]$.}} \end{enumerate} The agent's claim is that for any question $q$ of the form ``Does $S(t_3) = L$?'', the laws of physics imply that for all $u$ in the restricted set $U$ such that at $t_1$ the agent considers $q$, it must be that the agent provides the correct answer to $q$ at $t_2$. The value $S(t_3)$ is a function of the actual history of the entire universe, $u \in U$. Write that function as $\Gamma(u)$, with image $\Gamma(U)$. Similarly, the question the agent considers at $t_1$ is a function of the state of his brain at $t_1$, and therefore is also a function of $u$. Write that function as $X(u)$. Finally, the binary answer the agent provides at $t_2$ is a function of the state of his brain at $t_2$, and therefore it too is a function of $u$. Write that function as $Y(u)$. So as far as this prediction is concerned, the agent is just a pair of functions $(X, Y)$, both with the domain $U$ defined above, where $Y$ has the range $ \{-1, 1\}$.
The agent can indeed predict $S(t_3)$ if, for the space $U$ defined above, for any $\gamma \in \Gamma(U)$, there is some associated $X$ value $x$ such that, no matter what precise history $u \in U$ we are in, due to the laws of physics, if $X(u) = x$ then the associated $Y(u)$ equals $1$ iff $\Gamma(u) = \gamma$. \label{ex:beg_2} \end{example} Evidently, agents who perform observation and those who perform prediction are described in part by a shared mathematical structure, involving functions $X$ and $Y$ defined over the same space $U$ of all possible histories of the universe across all time. As formalized below, I refer to any such pair $(X, Y)$ as an ``inference device''. Say that for some function $\Gamma$ defined over $U$, for any $\gamma \in \Gamma(U)$, there is some associated $X$ value $x$ such that, no matter what precise history $u \in U$ we are in, due to the laws of physics, if $X(u) = x$ then the associated $Y(u)$ equals $1$ iff $\Gamma(u) = \gamma$. Then I will say that the device $(X, Y)$ ``infers'' $\Gamma$. See~\cite{wolp08b} for a more detailed elaboration of the examples given above of observation and prediction in terms of inference devices. Arguably, to fully formalize each of these phenomena there should be additional structure beyond that defining inference devices. (See App. B of~\cite{wolp08b}.) Most such additional structure is left for future research. However, one particular part of such additional structure is investigated below, in the discussion of ``physical knowledge''. In addition to considering observation and prediction, it is also shown in \cite{wolp08b} that a system that remembers the past is an inference device that infers an appropriate function $\Gamma(u)$.\footnote{Loosely speaking, memory is just retrodiction, i.e., it is using current data to predict the state of non-current data. However, rather than have the non-current data concern the future, in memory it concerns the past.} \cite{wolp08b} also shows that a device that controls a physical variable is an inference device that infers an appropriate function $\Gamma(u)$. All of this analysis holds even if what is observed / predicted / remembered / controlled is not the answer to a question of the form, ``Does $S(t) = L$?'', but instead an answer to a question of the form, ``is $S(t)$ more property $A$ than it is property $B$?'' or of the form, ``is $S(t)$ more property $A$ than $S'(t)$ is?'' In the sequel I will sometimes consider situations involving multiple inference devices, $(X_1, Y_1), (X_2, Y_2), \ldots$, with associated domains $U_1, U_2, \ldots$. For example, I will consider scenarios where agents try to observe one another. In such situations, when referring to ``$U$'', I implicitly mean $\cap_i U_i$, implicitly restrict the domain of all $X_i, Y_i$ to $U$, and implicitly assume that the codomain of each such restricted $Y_i$ is binary. \subsection{Notation and terminology} To formalize the preceding considerations, I first fix some notation. I will take the set of binary numbers $\mathbb{B}$ to equal $\{-1, 1\}$. In the canonical case where $U$ is the set of all possible histories of the entire universe across all space and time, the value of any specific physical variable is specified by a subset of the components of a full vector $u \in U$. (For example, the variable of the speed of a particular particle in a particular inertial frame at a particular time is given by a subset of the components of $u$.) So any such variable is just a function over $U$.
Bearing this in mind, for any function $\Gamma$ with domain $U$, I will write the image of $U$ under $\Gamma$ as $\Gamma(U)$, i.e., the set of possible values of some physical variable. I will also sometimes abuse this notation with a sort of ``set-valued function'' shorthand, and so for example write $\Gamma(V) = 1$ for some $V \subset U$ iff $\Gamma(u) = 1 \; \forall u \in V$. On the other hand, for the special case where the function over $U$ is a measure, I use conventional shorthand from measure theory. For example, if $P$ is a probability distribution over $U$ and $V \subset U$, I write $P(V)$ as shorthand for $\sum_{u \in V} P(u)$. For any function $\Gamma$ with domain $U$ that I will consider, I implicitly assume that the entire set $\Gamma(U)$ contains at least two distinct elements. For any (potentially infinite) set $R$, $|R|$ is the cardinality of $R$. Given a function $\Gamma$ with domain $U$, I write the partition of $U$ given by $\Gamma^{-1}$ as $\overline{\Gamma}$, i.e., \begin{eqnarray} {\overline{\Gamma}} &\equiv& \{ \{u : \Gamma(u) = \gamma\} : \gamma \in \Gamma(U)\} \end{eqnarray} I say that two functions $\Gamma_1$ and $\Gamma_2$ with the same domain $U$ are \textbf{(functionally) equivalent} iff the inverse functions $\Gamma_1^{-1}$ and $\Gamma_2^{-1}$ induce the same partitions of $U$, i.e., iff ${\overline{\Gamma_1}} = {\overline{\Gamma_2}}$. Recall that a partition $A$ over a space $U$ is a {\emph{refinement}} of a partition $B$ over $U$ iff every $a \in A$ is a subset of some $b \in B$. If $A$ is a refinement of $B$, then for every $b \in B$ there is an $a \in A$ that is a subset of $b$. Some of the elementary properties of refinement will be used below, and so I now review them. First, two partitions $A$ and $B$ are refinements of each other iff $A = B$. Say a partition $A$ is finite and a refinement of a partition $B$. Then $|A| = |B|$ iff $A = B$. For any two functions $A$ and $B$ with domain $U$, I will say that ``$A$ refines $B$'' if ${\overline{A}}$ is a refinement of $\overline{B}$. Similarly, for any $R \subset U$ and function $A$, I will say that ``$R$ refines $A$'' (or ``$A$ is refined by $R$'') if $R$ is a subset of some element of $\overline{A}$. I write the characteristic function of any set $R \subseteq U$ as the binary-valued function \begin{eqnarray} \mathcal{X}_R(u) = 1 \Leftrightarrow u \in R \label{eq:shorthand1} \end{eqnarray} As shorthand I will sometimes treat functions as equivalent to one of the values in their image. So for example expressions like ``$\Gamma_1 = \Gamma_2 \Rightarrow \Gamma_3 = 1$'' mean ``$\forall u \in U$ such that $\Gamma_1(u) = \Gamma_2(u)$, $\Gamma_3(u) = 1$''. To simplify terminology, rather than referring to Kronecker delta functions (and / or Dirac delta functions) throughout, I will refer to a {\bf{probe}} of a variable $V$, by which I mean any function over $U$ parametrized by a $v \in V$ of the form \begin{equation} \delta_v(v') = \begin{cases} 1 & \text{ if $v = v'$} \\ -1 & \text{ otherwise} \end{cases} \end{equation} for all $v' \in V$. Given a function $\Gamma$ with domain $U$, I sometimes write $\delta_\gamma(\Gamma)$ as shorthand for the function $u \in U \rightarrow \delta_\gamma(\Gamma(u))$. When I don't want to specify the subscript $\gamma$ of a probe, I sometimes generically write $\delta$. I write ${\mathcal{P}}(\Gamma)$ to indicate the set of all probes over $\Gamma(U)$.
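These definitions are concrete enough to state in a few lines of code. The following sketch (illustrative Python; functions over a finite $U$ are represented as dictionaries, and the helper names are ours) mirrors the notions of functional equivalence and refinement introduced above:
\begin{verbatim}
def partition(f, U):
    """The partition of U induced by the inverse of f."""
    return {frozenset(u for u in U if f[u] == v) for v in set(f.values())}

def refines(A, B):
    """True iff partition A is a refinement of partition B."""
    return all(any(a <= b for b in B) for a in A)

U = ['a', 'b', 'c']
G1 = {'a': 1, 'b': 1, 'c': 2}
G2 = {'a': 5, 'b': 5, 'c': 7}
print(partition(G1, U) == partition(G2, U))      # True: equivalent functions
print(refines(partition(G1, U), {frozenset(U)})) # every partition refines {U}
\end{verbatim}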
\subsection{Weak inference} \label{sec:weak} I now review some results that place severe restrictions on what a physical agent can predict / observe / control / remember and be guaranteed to be correct (in that prediction / observation / control / memory). To begin, I formalize the concept of an ``inference device'' introduced in the previous subsection. \begin{definition} \label{def:id} An {\bf{(inference) device}} over a set $U$ is a pair of functions $(X, Y)$, both with domain $U$. $Y$ is called the {\bf{conclusion}} function of the device, and is surjective onto $\mathbb{B}$. $X$ is called the {\bf{setup}} function of the device. \end{definition} Given some function $\Gamma$ with domain $U$ and some $\gamma \in \Gamma(U)$, we are interested in setting up a device so that it is assured of correctly answering whether $\Gamma(u) = \gamma$ for the actual universe $u$. Motivated by the examples above, I will formalize this with the condition that $Y(u) = 1$ iff $\Gamma(u) = \gamma$ for all $u$ that are consistent with some associated setup value $x$ of the device, i.e., such that $X(u) = x$ for some $x$. If this condition holds, then setting up the device to have setup value $x$ guarantees that the device will make the correct conclusion concerning whether $\Gamma(u) = \gamma$. (Hence the terms ``setup function'' and ``conclusion function'' in Def. 1.) We can formalize this as follows: \begin{definition} \label{def:weak_inf} Let $\Gamma$ be a function over $U$ such that $|\Gamma(U)| \ge 2$. A device ${\mathcal{D}}$ {\bf{(weakly) infers}} $\Gamma$ iff $\forall \gamma \in \Gamma(U)$, $\exists x \in X(U)$ such that $\forall u \in U, X(u) = x \Rightarrow Y(u) = \delta_\gamma({\Gamma(u)})$. \end{definition} \noindent If ${\mathcal{D}}$ infers $\Gamma$, I write ${\mathcal{D}} > \Gamma$. I say that a device ${\mathcal{D}}$ infers a set of functions if it infers every function in that set. The following semi-formal example illustrates a scenario in which weak inference holds, and a related scenario in which it does not hold. \begin{example} A scenario in which weak inference holds is illustrated in Fig.~\ref{fig:correct_example}. In this example, for simplicity determinism is assumed. The full rectangle, including both colored rectangles, indicates the set of all possible histories of the universe across all time, $U$ (i.e., the set of all ``states of the world'', in the language of epistemology). In this example the function $\Gamma$ is whether the sky will (not) be cloudy at noon (at Greenwich, say). Since the ID is embedded in the universe, the precise question concerning the future state of the universe that it is instructed to answer picks out different subsets of the set of all possible histories of the universe across all time. There are two such sets indicated, corresponding to the ID being asked the question, ``will the sky be cloudy at noon?'' or being asked the question, ``will the sky be clear at noon?''. (Histories falling outside of both of those sets correspond to questions different from those two.) Again, since the ID is embedded in the universe, and since its answer can take two possible values, the answer it gives (say at 11am) induces a partition of $U$. The separatrix between the two elements of that partition is indicated by the bold line. Finally, in all elements of $U$, the sky will be either clear or cloudy at noon. The two possibilities are indicated by the two colored rectangles.
The ID weakly infers $\Gamma$, i.e., correctly predicts the state of the sky at noon, since whichever of the two possible questions it considers, it is guaranteed that its answer is correct. A related scenario where weak inference does not hold is illustrated in Fig.~\ref{fig:incorrect_example}. The only difference from the scenario depicted in Fig.~\ref{fig:correct_example} is that if the ID is asked the question, ``will the sky be cloudy at noon?'', and the sky in fact will be cloudy at noon, the ID will answer 'no' --- which is incorrect. \begin{figure} \hglue-5mm \includegraphics[width= 1.2\linewidth]{correct_prediction_example.pdf} \noindent \caption{An example of correct prediction as weak inference, where for simplicity determinism is assumed. The set $U$ of all possible histories of the universe is the full rectangle, including both the yellow and blue subsets, which correspond to the two possible states of the sky at noon. Two of the possible questions of the ID are indicated: one of them is asked by the ID in all universes within the union of the two red ellipses, and the other question is asked in all universes within the union of the two blue ellipses. The ID weakly infers $\Gamma$, i.e., correctly predicts the state of the sky at noon, since whichever of the two possible questions it considers, it is guaranteed that its answer is correct. } \label{fig:correct_example} \end{figure} \begin{figure} \hglue-10mm \includegraphics[width= 1.2\linewidth]{incorrect_prediction_example.pdf} \noindent \caption{An example where the prediction of an ID of the state of the sky at noon cannot be guaranteed to be correct, i.e., the ID does not weakly infer the function of $u$ giving the state of the sky at noon. The scenario is identical to the one depicted in Fig.~\ref{fig:correct_example}, except that if the ID is asked the question, ``will the sky be cloudy at noon?'', and the sky in fact will be cloudy at noon, the ID will answer 'no', which is incorrect. } \label{fig:incorrect_example} \end{figure} \label{ex:3} \end{example} \begin{example} While it is clearly grounded in a real-world scenario, Ex.~\ref{ex:3} obscures the mathematical essence of weak inference. A fully abstract, stripped-down example of weak inference is given in the following table, which provides functions $X(u), Y(u)$ and $\Gamma(u)$ for all $u$ in a space $U$. In this minimal example, $U$ has only three elements: \begin{center} \begin{tabular} {p{1cm} || p{1cm} | p{1cm} | p{1cm} } $u$ & $X(u)$ & $Y(u)$ & $\Gamma(u)$ \\ \hline \hline a & 1 & 1 & 1 \\ \hline b & 2 & -1 & 1 \\ \hline c & 1 & -1 & 2 \\ \end{tabular} \end{center} \noindent In this example, $\Gamma(U) = \{1, 2\}$, so we are concerned with two probes, $\delta_1$ and $\delta_2$. Setting $X(u) = 2$ means that $u = b$, which in turn means that $\Gamma(u) = 1$ and $Y(u) = -1$. So setting $X(u) = 2$ guarantees that $\Gamma(u) \ne 2$, and so $\delta_2(\Gamma(u)) = Y(u)$ (which in this case equals -1, the answer 'no'). So the setup value $x = 2$ ensures that the ID correctly answers the binary question, ``does $\Gamma(u) = 2$?'', in the negative. Similarly, setting $X(u) = 1$ guarantees that $\delta_1(\Gamma(u)) = Y(u)$, so that it ensures that the ID correctly answers the binary question, ``does $\Gamma(u) = 1$?'', in the positive. \label{ex:4} \end{example} Ex.~\ref{ex:4} shows that weak inference can hold even if $X(u) = x$ doesn't always fix a unique value for $Y(u)$.
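Small examples like this can also be checked mechanically. The following Python sketch is my own illustrative code, not part of the formal development (the name \texttt{weakly\_infers} is ad hoc): it encodes the table of Ex.~\ref{ex:4} as dictionaries and tests the condition of Def.~\ref{def:weak_inf} directly.

\begin{verbatim}
def weakly_infers(X, Y, Gamma, U):
    # Weak inference: for every gamma in Gamma(U) there is a setup
    # value x such that X(u) = x forces Y(u) = delta_gamma(Gamma(u)).
    for gamma in set(Gamma.values()):
        if not any(all(Y[u] == (1 if Gamma[u] == gamma else -1)
                       for u in U if X[u] == x)
                   for x in set(X.values())):
            return False
    return True

U = ['a', 'b', 'c']
X     = {'a': 1, 'b': 2, 'c': 1}
Y     = {'a': 1, 'b': -1, 'c': -1}
Gamma = {'a': 1, 'b': 1, 'c': 2}
assert weakly_infers(X, Y, Gamma, U)  # the device of Ex. 4 infers Gamma
\end{verbatim}

The check passes. Note that the setup value $x = 1$ is consistent with both $u = a$ and $u = c$, and that those two universes have different conclusion values.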
Such non-uniqueness is typical when the device is being used for observation. Setting up a device to observe a variable outside of that device restricts the set of possible universes; the only $u$ allowed are those consistent with the observation device being set up to make the desired observation. But typically just setting up an observation device to observe what value a variable has doesn't uniquely fix the value of that variable. As discussed in App. B of~\cite{wolp08b}, the definition of weak inference is very unrestrictive. For example, a device ${\mathcal{D}}$ is `given credit' for correctly answering probe $\delta(\Gamma(u))$ if there is \emph{any} $x \in X(U)$ such that $ X(u) = x \Rightarrow Y(u) = \delta({\Gamma(u)})$. In particular, ${\mathcal{D}}$ is given credit even if the binary question we would associate with $x$ (under some particular physical interpretation of what $X$ means, as in Ex.~\ref{ex:beg_1} and Ex.~\ref{ex:beg_2}) is not whether $\Gamma(u) = \gamma$, but some other question. In essence, the device receives credit even if it gets the right answer by accident. Unless specified otherwise, a device written as ``${{{\mathcal{D}}}}_i$'' for any integer $i$ is implicitly presumed to have domain $U$, with setup function $X_i$ and conclusion function $Y_i$ (and similarly for no subscript). Similarly, unless specified otherwise, expressions like ``min$_{x_i}$'' mean min$_{x_i \in X_i(U)}$. \subsection{The two Laplace's Demon theorems} \label{subsec:id_major} \noindent \emph{``An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough ... nothing would be uncertain and the future just like the past would be present before its eyes.''} $ $ \noindent --- Pierre Simon Laplace, ``A Philosophical Essay on Probabilities'' $ $ There are limitations on the ability of any device to weakly infer functions. Perhaps the most trivial is the following: \begin{proposition} \label{prop:prop1} For any device ${\mathcal{D}}$, there is a function that ${\mathcal{D}}$ does not infer. \end{proposition} \begin{proof} Choose $\Gamma$ to be the function $Y$, so that the device is trying to infer its own conclusion. Then consider the probe $\delta_{-1}$, which is just the negation map $y \in {\mathbb{B}} \rightarrow -y$. Weak inference of $\Gamma = Y$ would require some $x \in X(U)$ such that $X(u) = x \Rightarrow Y(u) = -Y(u)$, which is impossible. (Also see~\cite{wolp08b}.) \end{proof} \noindent \begin{figure} \hglue-10mm \includegraphics[width= 1.2\linewidth]{ID_impossibility_laplace.pdf} \vglue-35mm \noindent \caption{The time $t_1$ is less than $t_2$, which in turn is less than noon. $V$ is the set of all time-$t_2$ universes where Laplace is thinking the answer ``yes'' in response to the $t_1$ question Laplace heard --- whatever that question was. $V'$ is $V$ evolved forward to noon. At $t_1$, we ask Laplace, ``will the universe be outside $V'$ at noon?'' It is impossible for Laplace to answer correctly, no matter what his computational capabilities are, what the laws of the universe are, etc. } \label{fig:first_imp} \end{figure} It is interesting to consider the implications of Prop.~\ref{prop:prop1} for the case where the inference is prediction, as in Ex.~\ref{ex:beg_2}.
Depending on how precisely one interprets Laplace, Prop.~\ref{prop:prop1} means that he was wrong in his claim about the ability of an ``intellect'' to make accurate predictions: even if the universe were a giant clock, it could not contain an intellect that could reliably predict the universe's future state before it occurred.{\footnote{Similar conclusions have been reached previously~\cite{mack60,popp88}. However in addition to being limited to the inference process of prediction, that earlier work is quite informal. It is no surprise, then, that some claims in that earlier work are refuted by well-established results in engineering. For example, the claim in~\cite{mack60} that ``a prediction concerning the narrator's future ... cannot ... account for the effect of the narrator's learning that prediction'' is just not true; it is refuted by adaptive control theory in general and by Bellman's equations in particular. Similarly, it is straightforward to see that statements (A3), (A4), and the notion of ``structurally identical predictors'' in~\cite{popp88} have no formal meaning.}} More precisely, for any particular $\Gamma$ as in Prop.~\ref{prop:prop1}, there could be an intellect ${\mathcal{D}}$ that can infer $\Gamma$. However Prop.~\ref{prop:prop1} tells us that for any fixed intellect, there must exist a $\Gamma$ that the intellect cannot infer. (See Fig.~\ref{fig:first_imp}.) The ``intellect'' Laplace refers to is commonly called Laplace's ``demon'', so I sometimes refer to Prop.~\ref{prop:prop1} as the ``first (Laplace's) demon theorem''. One might think that Laplace could circumvent the first demon theorem by simply constructing a second demon, specifically designed to infer the $\Gamma$ that thwarts his first demon. Continuing in this way, one might think that Laplace could construct a set of demons that, among them, could infer any function $\Gamma$. Then he could construct an ``overseer demon'' that would choose among those demons, based on the function $\Gamma$ that needs to be inferred. However this is not possible. To see this, simply redefine the device ${\mathcal{D}}$ in Prop.~\ref{prop:prop1} to be the combination of Laplace with all of his demons. These limitations on prediction hold even if the number of possible states of the universe is countable (or even finite), or if the inference device has super-Turing capabilities. They hold even if the current formulation of physics is wrong; they do not rely on chaotic dynamics, physical limitations like the speed of light, or quantum mechanical limitations. Note as well that in Ex.~\ref{ex:beg_2}'s model of a prediction system the actual values of the times of the various events are not specified. So in particular the impossibility result of Prop.~\ref{prop:prop1} still applies to that example even if $t_3 < t_2$ --- in which case the time when the agent provides the prediction is \emph{after} the event they are predicting. Moreover, consider the variant of Ex.~\ref{ex:beg_2} where the agent programs a computer to do the prediction, as discussed in Footnote~\ref{foot:4} in that example. In this variant, the program that is input to the prediction computer could even contain the future value that the agent wants to predict. Prop.~\ref{prop:prop1} would still mean that the conclusion that the agent using the computer comes to after reading the computer's output cannot be guaranteed to be correct. Prop.~\ref{prop:prop1} tells us that any inference device ${\mathcal{D}}$ can be ``thwarted'' by an associated function.
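Indeed, the thwarting function constructed in the proof of Prop.~\ref{prop:prop1} is just the device's own conclusion function, and for a finite universe this is easy to confirm by brute force. The following Python sketch is my own illustration (it repeats the weak-inference check so as to be self-contained): it enumerates every setup function with values in $\{0, 1, 2\}$, which covers all setup partitions of a three-element $U$, together with every surjective conclusion function, and confirms that no such device weakly infers its own conclusion function.

\begin{verbatim}
from itertools import product

U = ['a', 'b', 'c']

def weakly_infers(X, Y, Gamma):
    # Self-contained copy of the weak-inference check.
    for gamma in set(Gamma.values()):
        if not any(all(Y[u] == (1 if Gamma[u] == gamma else -1)
                       for u in U if X[u] == x)
                   for x in set(X.values())):
            return False
    return True

count = 0
for xs in product(range(3), repeat=3):      # every possible setup function
    for ys in product((-1, 1), repeat=3):   # every possible conclusion function
        if len(set(ys)) < 2:
            continue                        # Y must be surjective onto B
        X, Y = dict(zip(U, xs)), dict(zip(U, ys))
        assert not weakly_infers(X, Y, Y)   # no device infers its own conclusion
        count += 1
print(count, "devices checked")             # 162 devices, none self-inferring
\end{verbatim}

So Prop.~\ref{prop:prop1} is easily confirmed in the finite case.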
However it does not forbid the possibility of some second device that can infer the function that thwarts ${\mathcal{D}}$. To analyze issues of this sort, and more generally to analyze the inference relationships within sets of multiple functions and multiple devices, we start with the following definition: \begin{definition} \label{def:setup_dist} Two devices $(X_1, Y_1)$ and $(X_2, Y_2)$ are {\bf{(setup) distinguishable}} iff $\forall x_1, x_2, \exists u \in U$ such that $X_1(u) = x_1, X_2(u) = x_2$. \end{definition} \noindent No device is distinguishable from itself. Distinguishability is symmetric, but non-transitive in general (and obviously not reflexive). Having two devices be distinguishable means that no matter how the first device is set up, it is always possible to set up the second one in an arbitrary fashion; the setting up of the first device does not preclude any options for setting up the second one. Intuitively, if two devices are not distinguishable, then the setup function of one of the devices is partially ``controlled" by the setup function of the other one. In such a situation, they are not two fully separate, independent devices. I will say that one ID $(X, Y)$ can weakly infer a second one, $(X', Y')$, if it can weakly infer the conclusion of the second ID, $Y'$. (See~\cite{wolp08b} for an example.) \begin{proposition} \label{prop:dist_not_infer} No two distinguishable devices $(X, Y)$ and $(X', Y')$ can weakly infer each other.{\footnote{In fact we can strengthen this result: If $(X', Y')$ can weakly infer the distinguishable device $(X, Y)$, then $(X, Y)$ can infer neither of the two binary-valued functions equivalent to $Y'$.}} \label{prop:2} \end{proposition} \noindent I will call Prop.~\ref{prop:2} the ``second (Laplace's) demon theorem''. See Fig.~\ref{fig:second_imp} for an illustration of Prop.~\ref{prop:2}, for two IDs called ``Bob'' and ``Alice'', in which they do not directly infer one another's conclusion, but rather infer functions of those conclusions. \begin{figure} \hglue-10mm \includegraphics[width= 1.2\linewidth]{ID_impossibility_monotheism.pdf} \vglue-35mm \noindent \caption{The time $t_1$ is less than $t_2$, which in turn is less than noon. $V$ is the set of all time-$t_2$ universes where Bob is thinking the answer ``yes'' in response to the $t_1$ question Bob heard --- whatever that question was. $W$ is the set of all time-$t_2$ universes where Alice is thinking the answer ``yes'' in response to the $t_1$ question Alice heard --- whatever that question was. $V'$ is $V$ evolved forward to noon, and $W'$ is $W$ evolved forward to noon. At $t_1$, we ask Bob, ``will the universe be in $W'$ at noon?'' (i.e., ``is Alice thinking `yes' at $t_2$?''). At that time we also ask Alice, ``will the universe be outside of $V'$ at noon?'' (i.e., ``is Bob \emph{not} thinking `yes' at $t_2$?''). It is impossible for both Bob and Alice to answer correctly, no matter what their computational capabilities are, what the laws of the universe are, etc. } \label{fig:second_imp} \end{figure} This second Laplace's demon theorem establishes that a whole class of functions cannot be inferred by ${\mathcal{D}}$ (namely the conclusion functions of devices that are distinguishable from ${\mathcal{D}}$ and also can infer ${\mathcal{D}}$). More generally, let $\mathcal{S}$ be a set of devices, all of which are distinguishable from one another.
Then the second demon theorem says that there can be at most one device in $\mathcal{S}$ that can infer all other devices in $\mathcal{S}$. It is important to note that the distinguishability condition is crucial to the second demon theorem; mutual weak inference can occur between non-distinguishable devices. In~\cite{barrow2011godel} Barrow speculated about whether ``only computable patterns are instantiated in physical reality''. There ``computable'' is defined in the sense of Turing machine theory. However we can also consider the term as meaning ``can be evaluated by a real world computer''. If so, then his question is answered --- in the negative --- by the Laplace demon theorems. By combining the two demon theorems it is possible to establish the following: \begin{corollary} Consider a pair of devices ${\mathcal{D}} = (X, Y)$ and ${{{\mathcal{D}}}'} = (X', Y')$ that are distinguishable from one another and whose conclusion functions are inequivalent. Say that ${{{\mathcal{D}}}'}$ weakly infers ${\mathcal{D}}$. Then there are at least three inequivalent surjective binary functions $\Gamma$ that ${\mathcal{D}}$ does not infer. \label{coroll:1} \end{corollary} \noindent In particular, Coroll.~\ref{coroll:1} means that if any device in a set of distinguishable devices with inequivalent conclusion functions is sufficiently powerful to infer all the others, then each of those others must fail to infer at least three inequivalent functions. \subsection{Strong inference --- inference of entire functions} \label{sec:UTM} As considered in computer science theory, a computer is an entire map taking an arbitrary ``input'' given by the value of a physical variable, $\Gamma_1(u)$, to an ``output'' also given by the value of a physical variable, $\Gamma_2(u)$~\cite{hopcroft2000jd}. It is concerned with saying how the value of $\Gamma_2(u)$ would change if the value of $\Gamma_1(u)$ changed. So it is concerned with two separate physical variables. In contrast, weak inference is only concerned with inferring the value of a single physical variable, $\Gamma(u)$, not the relationship between two variables. So we cannot really say that a device ``infers a computer'' if we only use the weak inference concept analyzed above. In this subsection we extend the theory of inference devices to include inference of entire functions. In addition to allowing us to analyze inference of computers, this lays the groundwork for the analysis in the next section of the relation between inference and algorithmic information theory. To begin, suppose we have a function $f$ that relates two physical variables. Since those two variables are themselves functions defined over $U$, in general $f$ is not. To be more precise, suppose that there are two functions $S$ and $T$ defined over $U$, where $S$ refines $T$, and that for all $s \in S(U)$, $f(s) = T(S^{-1}(s))$ is single-valued. We want to define what it means for a device to be able to ``emulate'' the entire mapping taking any $s \in S(U)$ to the associated value $f(s) = T(S^{-1}(s))$. One way to do this is to strengthen the concept of weak inference, so that for any desired input value $s \in S(U)$, the ID in question can simultaneously infer the output value $f(s)$ \emph{while also forcing the input to have the value $s$}. In other words, for any pair $(s \in S(U), t \in T(U))$, by appropriate choice of $x \in X(U)$ the ID $(X, Y)$ simultaneously answers the probe $\delta_t$ correctly (as in the concept of weak inference) \emph{and} forces $S(u) = s$.
In this way, when the ID ``answers $\delta_t$ correctly'', it is answering whether $f(s) = t$ correctly, for the precise $s$ that it is setting. By being able to do this for all $s \in S(U)$, the ID can emulate the function $f$. Extending this concept from single-valued functions $f$ to include multivalued functions results in the following definition: \begin{definition} \label{def:defi_5a} Let $S$ and $T$ be functions both defined over $U$. A device $(X, Y)$ {\bf{strongly infers}} $(S, T)$ iff $\forall \; \delta \in {\mathcal{P}}(T)$ and all $s \in S(U)$, $\exists \; x$ such that $X(u) = x \Rightarrow \{S(u) = s, Y(u) = \delta(T(u))\}$. \end{definition} \noindent If $(X, Y)$ strongly infers $(S, T)$ we write $(X, Y) \gg (S, T)$. By considering the special case where $T(U) = {\mathbb{B}}$, we can use strong inference to formalize what it means for one device to emulate another device: \begin{definition} \label{def:defi_5} A device $(X_1, Y_1)$ {\bf{strongly infers}} a device $(X_2, Y_2)$ iff $\forall \; \delta \in {\mathcal{P}}(Y_2)$ and all $x_2$, $\exists \; x_1$ such that $X_1 = x_1 \Rightarrow X_2 = x_2, Y_1 = \delta(Y_2)$. \end{definition} \noindent See App. B in~\cite{wolp08b} for a discussion of how unrestrictive Def.~\ref{def:defi_5} is. Def.~\ref{def:defi_5a} might seem peculiar, since $(X, Y) \gg (S, T)$ means that in a certain sense the function $X$ controls what the input to the function $s \rightarrow T(S^{-1}(s))$ is. However, by a simple change in perspective of what device is doing the strong inference, we can see that Def.~\ref{def:defi_5} applies even to scenarios that (before the change in perspective) do not involve such control. This is illustrated in the following example: \begin{example} Suppose ${{{\mathcal{D}}}}_2$ is a device that (for example) can be used to make predictions about the future state of the weather. Let $\Gamma(U)$ be the set of future weather states that the device can predict, and let $X_2(U)$ be the set of possible current meteorological conditions. So if this device can in fact infer the future state of the weather, then for any question $\delta_\gamma$ of whether the future weather will have value $\gamma$, there is some current condition $x_2$ such that if ${{{\mathcal{D}}}}_2$ is set up with that $x_2$, it correctly answers whether the associated future state of the weather will be $\gamma$. On the other hand, if ${{{\mathcal{D}}}}_2 \not > \Gamma$, then there is some such question of the form, ``will the future weather be $\gamma$?'', such that \emph{no} input to the device of the current meteorological conditions guarantees that the device produces a correct answer $y_2$ to that question. One way for us to be able to conclude that some device ${{{\mathcal{D}}}}' = (X', Y')$ can ``emulate'' this behavior of ${{{\mathcal{D}}}}_2$ is to set up ${{{\mathcal{D}}}}_2$ with an arbitrary value $x_2$, and confirm that ${{{\mathcal{D}}}}'$ can infer the associated value of $Y_2$. So we require that for all $x_2$, and all $\delta \in \mathcal{P}(Y_2)$, $\exists x'$ such that if $X_2 = x_2$ and $X' = x'$, then $Y' = \delta(Y_2)$. Now define a new device ${{{\mathcal{D}}}}_1$, with its setup function defined by $X_1(u) = (X'(u), X_2(u))$ and its conclusion function equal to $Y'$. Then our condition for confirming that ${{{\mathcal{D}}}}'$ can emulate ${{{\mathcal{D}}}}_2$ gets replaced by the condition that for all $x_2$, and all $\delta \in \mathcal{P}(Y_2)$, $\exists x_1$ such that if $X_1 = x_1$, then $X_2 = x_2$ and $Y_1 = \delta(Y_2)$.
This is precisely the definition of strong inference. \end{example} Say we have a Turing machine (TM) $T_1$ that can emulate another TM, $T_2$ (e.g., $T_1$ could be a universal Turing machine (UTM), able to emulate any other TM). Such ``emulation'' means that $T_1$ can perform any particular calculation that $T_2$ can. The analogous relationship holds for IDs, if we translate ``emulate'' to ``strongly infer'', and translate ``perform a particular calculation'' to ``weakly infer''. In addition, like UTM-style emulation (but unlike weak inference), strong inference is transitive. These results are formalized as follows: \begin{proposition} \label{thm:thm_2} Let ${{{\mathcal{D}}}}_1$, ${{{\mathcal{D}}}}_2$ and ${{{\mathcal{D}}}}_3$ be inference devices over $U$ and $\Gamma$ a function over $U$. Then: {\bf{i)}} ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_2$ and ${{{\mathcal{D}}}}_2 > \Gamma$ $\Rightarrow$ ${{{\mathcal{D}}}}_1 > \Gamma$. {\bf{ii)}} ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_2$ and ${{{\mathcal{D}}}}_2 \gg {{{\mathcal{D}}}}_3$ $\Rightarrow$ ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_3$. \end{proposition} \noindent In addition, strong inference implies weak inference, i.e., ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_2 \Rightarrow {{{\mathcal{D}}}}_1 > {{{\mathcal{D}}}}_2$. Most of the properties of weak inference have analogs for strong inference: \begin{proposition} \label{prop:prop2} Let ${{{\mathcal{D}}}}_1$ be a device over $U$. {\bf{i)}} There is a device ${{{\mathcal{D}}}}_2$ such that ${{{\mathcal{D}}}}_1 \not \gg {{{\mathcal{D}}}}_2$. {\bf{ii)}} Say that $\forall \; x_1$, $|X_1^{-1}(x_1)| > 2$. Then there is a device ${{{\mathcal{D}}}}_2$ such that ${{{\mathcal{D}}}}_2 \gg {{{\mathcal{D}}}}_1$. \end{proposition} \noindent Strong inference also obeys a restriction that is analogous to Prop.~\ref{prop:dist_not_infer}, except that there is no requirement of setup-distinguishability: \begin{proposition} \label{thm:thm_3} No two devices can strongly infer each other. \end{proposition} Recall that there are entire functions that are not computable by any TM, in the sense that no TM can correctly compute the value of that function for every input to that function. On the other hand, trivially, any single output value of a function \emph{can} be computed by some TM (just choose the TM that prints that value and then halts). The analogous distinction holds for inference devices: \begin{proposition} Let $U$ be any countable space with at least two elements. \begin{enumerate} \item For any function $\Gamma$ over $U$ such that $|\Gamma(U)| \ge 3$ there is a device ${\mathcal{D}}$ that weakly infers $\Gamma$; \item There is a (vector-valued) function $(S, T)$ over $U$ that is not strongly inferred by any device. \end{enumerate} \label{prop:whats_inferrable} \end{proposition} \begin{proof} The proof is by construction. Let $X(u)$ be the identity function (so that each $u \in U$ has its own, unique value $x$). Choose $Y(u)$ to equal $1$ for exactly one $u$, $\bar{u}$. Then whatever the value $\gamma := \Gamma(\bar{u}) \in \Gamma(U)$ happens to be, for the probe $\delta_{\gamma}$ we can choose $x = X(\bar{u})$, so that the device correctly answers `yes' to the question of whether $\Gamma(u) = \Gamma(\bar{u})$. For any other probe $\delta_{\gamma'}$, note that since $|\Gamma(U)| \ge 3$, there must be a $u' \in U$ such that $\Gamma(u')$ differs from both $\gamma'$ and $\gamma$. In particular $u' \ne \bar{u}$, and so by construction $Y(u') = -1$.
So if we choose $x$ to be $X(u')$, then the device correctly answers `no' to the question of whether $\Gamma(u) = \gamma'$. Since this is true for any $\gamma' \ne \Gamma(\bar{u})$, this completes the proof of the first claim. We also prove the second claim by construction. Choose both $S$ and $T$ to be the identity function, i.e., $S(u) = u$ and $T(u) = u$ for all $u$, so that $|S(U)| = |T(U)| = |U|$. So by the first requirement for some device $(X, Y)$ to strongly infer $(S, T)$, it must be that for any $s$, there is a value of $X$, $x(s)$, such that $X(u) = x(s) \Rightarrow S(u) = s$. Since $S$ is a bijection, this means that $x(s)$ must be a single-valued function, for each $s$ choosing a unique $x$ (which in turn chooses a unique $u$). Since $T$ is also a bijection, this means that $Y(X^{-1}(x(s)))$ must equal $1$, in order for the device to correctly answer `yes' to the probe $\delta_{T(S^{-1}(s))}$ of whether $T(u) = T(S^{-1}(s))$. However since this is true for all $s \in S(U)$, it is true for all $u \in U$. So $Y(U)$ is a singleton, contradicting the requirement that the conclusion function of any device be surjective onto ${\mathbb{B}}$. \end{proof} \section{Inference in stochastic universes} \subsection{Stochastic inference} There are several ways to extend the analysis above to incorporate a probability measure $P$ over $U$, so that inference is not exact, but only holds under some probability. In this subsection we present some of the elementary properties of one such measure of stochastic inference. Once there is a distribution over $U$, all functions like $X$, $Y$ and $\Gamma$ become random variables. Now recall that $\delta_\gamma(\Gamma)$ is shorthand for the function $u \in U \rightarrow \delta_\gamma(\Gamma(u))$ --- and so now it is a random variable. Bearing this in mind, the measure of stochastic inference we will consider here is defined as follows: \begin{definition} \label{def:def9} Let $P(u \in U)$ be a probability measure and $\Gamma$ a function with domain $U$ and finite range. Then we say that a device $(X, Y)$ (weakly) infers $\Gamma$ {\bf{with (covariance) accuracy}} \begin{eqnarray*} cov({\mathcal{D}}, \Gamma) &:=& \frac{\sum_{\delta \in {\mathcal{P}}(\Gamma)}\max_{x} \big[{\mathbb{E}}_P(Y \delta(\Gamma) \mid x)\big]}{|\Gamma(U)|} \end{eqnarray*} \end{definition} \noindent Writing it out explicitly, for countable $U$, the numerator in Def.~\ref{def:def9} is \begin{eqnarray} \sum_{\gamma \in \Gamma(U)} \max_{x \in X(U)} \bigg[ \sum_u Y(u) \delta_\gamma(\Gamma(u)) P( u \mid x) \bigg] \end{eqnarray} Intuitively, this is a probe-averaged, best-case (over $x \in X(U)$) expectation of agreement between the conclusion $Y$ and the probe's value. Covariance accuracy is a way to quantify the degree to which ${\mathcal{D}} > \Gamma$ when the inference is subject to uncertainty. Clearly, $cov({\mathcal{D}},\Gamma) \le 1.0$, and if $P$ is nowhere 0, then $cov({\mathcal{D}},\Gamma) = 1.0$ iff ${\mathcal{D}} > \Gamma$.{\footnote{A subtlety with the definition of an inference device arises in this stochastic setting: we can either require that $Y$ be surjective, as in Def.~\ref{def:id}, or instead require that $Y$ be ``{stochastically surjective}'' in the sense that $\forall y \in {\mathbb{B}}, \; \exists u$ with non-zero probability such that $Y(u) = y$.
The distinction between requiring surjectivity and stochastic surjectivity of $Y$ will not arise here.}} Covariance accuracy obeys the following bound: \begin{proposition} \label{prop:cov_lb} Let $P$ be a probability measure over $U$, ${\mathcal{D}} = (X, Y)$ a device, and $\Gamma$ a function over $U$ with finite $|\Gamma(U)|$. Then \begin{equation*} cov({\mathcal{D}},\Gamma) \geq \frac{(2 - |\Gamma(U)|) \max_x \big[ {\mathbb{E}}_P(Y \mid x ) \big]}{ |\Gamma(U)|} \end{equation*} \end{proposition} \begin{proof} For any probe $\delta_\gamma$ of $\gamma \in \Gamma(U)$, let $M_\gamma = \max_x \big[ {\mathbb{E}}_P(Y \delta_\gamma(\Gamma) \mid x ) \big]$. Define $x_m := {\mathrm{argmax}}_x \big[ {\mathbb{E}}_P(Y \mid x ) \big]$. Then $M_\gamma \geq {\mathbb{E}}_P(Y \delta_\gamma(\Gamma) \mid x_m )$ and \begin{equation*} \begin{split} cov({\mathcal{D}},\Gamma)& = \frac{\sum_{\gamma \in \Gamma(U)} M_\gamma}{|\Gamma(U)|} \geq \frac{\sum_{\gamma \in \Gamma(U)} {\mathbb{E}}_P(Y \delta_\gamma(\Gamma) \mid x_m )}{|\Gamma(U)|} \\ & = \frac{ \sum_u P(u \mid x_m)\sum_\gamma Y(u) \delta_\gamma(\Gamma(u))}{|\Gamma(U)|} \\ & = \frac{\sum_u P(u \mid x_m) (2 - |\Gamma(U)|) Y(u)}{|\Gamma(U)|} \\ & = \frac{(2 - |\Gamma(U)|) {\mathbb{E}}_P(Y \mid x_m )}{ |\Gamma(U)|}\\ & = \frac{ (2 - |\Gamma(U)|) \max_x \big[ {\mathbb{E}}_P(Y\mid x )\big]}{ |\Gamma(U)|}. \end{split} \end{equation*} \end{proof} \noindent This bound is sharp, as can be seen from the following example. \begin{example} Fix some device ${\mathcal{D}}$ and a value $|\Gamma(U)| < \infty$. Next divide each cell of the joint partition induced by $X$ and $Y$ into $|\Gamma(U)|$ parts and assign those parts equal probability. Define $\Gamma$ to map those parts to 1, $\dotso, |\Gamma(U)|$, so that $\Gamma(U) = \{1, \dotso, |\Gamma(U)|\}.$ For any given $x \in X(U)$, let $a_x = P(Y = 1 \mid x), b_x = P(Y = -1 \mid x)$. For any $x \in X(U), \gamma \in \Gamma(U)$ and associated probe $\delta_\gamma $, \begin{equation*} \begin{split} \mathbb{E}_P(Y \delta_\gamma(\Gamma) \mid x)& = \frac{a_x + (|\Gamma(U)| - 1) b_x - (|\Gamma(U)| - 1) a_x - b_x}{|\Gamma(U)|}\\ & = \frac{(2 - |\Gamma(U)|) (a_x - b_x)}{|\Gamma(U)|} = \frac{(2 - |\Gamma(U)|){\mathbb{E}}_P(Y \mid x)}{|\Gamma(U)|}. \end{split} \end{equation*} We can use this to evaluate \begin{equation*} \begin{split} M_\gamma& := \max_x \big[{\mathbb{E}}_P(Y \delta_\gamma(\Gamma) \mid x) \big] \\ & = \frac{(2 - |\Gamma(U)|) \; \max_x \big[ {\mathbb{E}}_P(Y \mid x)\big]}{|\Gamma(U)|} \end{split} \end{equation*} Since this is the same for all probe parameter values $\gamma$, \begin{equation*} cov({\mathcal{D}},\Gamma) = \frac{(2 - |\Gamma(U)|) \; \max_x \big[ {\mathbb{E}}_P(Y \mid x)\big]}{ |\Gamma(U)|} \end{equation*} which establishes the claim. \label{ex:cov_lb} \end{example} The term $\frac{2 - |\Gamma(U)|}{ |\Gamma(U)|}$ in Prop.~\ref{prop:cov_lb} depends only on the size of the space $\Gamma(U)$.{\footnote{Note that this term $[2 - |\Gamma(U)|] \;/\; |\Gamma(U)|$ can be negative for $|\Gamma(U)| > 2$. This reflects our use of expected values and the convention that ${\mathbb{B}} = \{-1, 1\}$.}} The other term, max$_x \big( {\mathbb{E}}_P(Y \mid x) \big)$, can be viewed as a measure of the ``inference power'' of the device, by analogy with the power of a statistical test. It quantifies the device's ability to say `yes'. In the previous section some \emph{a priori} restrictions on the capabilities of IDs were presented. These restrictions involved whether certain properties of IDs can(not) be guaranteed with complete certainty.
When we have a probability distribution over $U$ it is appropriate to replace consideration of ``guaranteed'' properties with consideration of properties that are likely but not necessarily guaranteed, e.g., as quantified with covariance accuracy. When we do that the restrictions of the previous section get modified, sometimes quite substantially. This is illustrated in the next two propositions. First, by Prop.~\ref{thm:thm_2}(i), if for devices ${\mathcal{D}}_1$, ${\mathcal{D}}_2$ and function $\Gamma$, ${\mathcal{D}}_1 \gg {\mathcal{D}}_2$ and ${\mathcal{D}}_2 > \Gamma$, then ${\mathcal{D}}_1 > \Gamma$. In covariance terms, this says that if ${\mathcal{D}}_1 \gg {\mathcal{D}}_2$ and $cov({\mathcal{D}}_2, \Gamma) = 1.0$, then $cov({\mathcal{D}}_1, \Gamma) = 1.0$. What happens to $cov({\mathcal{D}}_1, \Gamma)$ if $cov({\mathcal{D}}_2, \Gamma) < 1.0$? A partial answer is given by the following result: \begin{proposition} \label{prop:cov-sinf} There are devices ${\mathcal{D}}$, ${\mathcal{D}}'$, probability distribution $P$ defined over $U$, and function $\Gamma$, such that ${\mathcal{D}}' \gg {\mathcal{D}}$ and $cov({\mathcal{D}}, \Gamma)$ is arbitrarily close to 1.0 while $cov({\mathcal{D}}', \Gamma) = 0$. \end{proposition} \begin{proof} The proof is by example. Let $U$ have ten states, labeled A, $\ldots$, J and suppose that the functions $P, \Gamma, {\mathcal{D}} = (X, Y)$ and ${\mathcal{D}}' = (X', Y')$ are as in Fig.~\ref{fig:table-ws1}, with $0 \le p \le 1$. \begin{figure}[tbp] \hglue-1.2cm \includegraphics[width=1.5\columnwidth]{table-ws1.pdf} \vglue-12cm \caption{Specification of a scenario in which the stochastic version of Prop.~\ref{thm:thm_2}(i), concerning ``transitivity'' of weak inference through strong inference, fails drastically.} \label{fig:table-ws1} \end{figure} \begin{enumerate} \item To verify that ${\mathcal{D}}' \gg {\mathcal{D}}$, for the $1$-probe, for $x = 1,2$, choose $x' = 1,3$, respectively. For the $-1$-probe, for $x = 1,2$, choose $x' = 2,4$, respectively. \item $cov({\mathcal{D}}, \Gamma) = p$. To see this, for the $1$-probe, evaluate $\max_x {\mathbb{E}}_P(Y \delta_1(\Gamma) \mid x) = p$, the maximum occurring for $x = 1$. Similarly, for the $-1$-probe, evaluate $\max_x {\mathbb{E}}_P(Y \delta_{-1}(\Gamma) \mid x) = p$, the maximum occurring for $x = 2$. \item $cov({\mathcal{D}}', \Gamma) = 0$. To see this for both probes, note that ${\mathbb{E}}_P(Y' \delta(\Gamma) \mid x') = 0$ for each $x'$. \end{enumerate} The proof is completed by taking $p \rightarrow 1$. \end{proof} To understand Prop.~\ref{prop:cov-sinf}, recall that the definition of ${\mathcal{D}}' \gg {\mathcal{D}}$ requires that for any $x \in X(U)$ and for any probe $\delta_\gamma \in {\mathcal{P}}(\Gamma)$, there be \emph{some} $x'$ and associated $X'^{-1}(x') \subseteq U$ for which ${\mathcal{D}}'$ successfully emulates ${\mathcal{D}}$'s behavior at inferring $\delta_\gamma$. If the inference ${\mathcal{D}} > \Gamma$ is perfect, then ${\mathcal{D}}'$ also infers $\Gamma$. However, if the inference ${\mathcal{D}} > \Gamma$ is only partially correct, then the value $x'$ and associated subset $X'^{-1}(x') \subseteq U$ underpinning ${\mathcal{D}}' \gg {\mathcal{D}}$ may pick out precisely those $u$ for which ${\mathcal{D}}$ performs badly at inferring $\delta_\gamma$. Thus, ${\mathcal{D}}$ may do an excellent, though imperfect, job overall of inferring $\Gamma$ while ${\mathcal{D}}'$ fails completely.
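Covariance accuracy is also straightforward to evaluate mechanically on small examples. As a hedged illustration of Def.~\ref{def:def9} (my own code; the uniform distribution is an assumption made purely for the example, and the scenario is the toy universe of Ex.~\ref{ex:4}, not the one specified in Fig.~\ref{fig:table-ws1}):

\begin{verbatim}
def cov(X, Y, Gamma, P, U):
    # Probe-averaged, best-case (over x) expectation of Y * delta_gamma(Gamma)
    # under P(u | x).  Assumes every setup value has positive probability.
    gammas = set(Gamma.values())
    total = 0.0
    for gamma in gammas:
        total += max(
            sum(P[u] * Y[u] * (1 if Gamma[u] == gamma else -1)
                for u in U if X[u] == x)
            / sum(P[u] for u in U if X[u] == x)   # normalize to P(u | x)
            for x in set(X.values()))
    return total / len(gammas)

U = ['a', 'b', 'c']
P = {u: 1.0 / 3 for u in U}          # uniform distribution (an assumption)
X     = {'a': 1, 'b': 2, 'c': 1}
Y     = {'a': 1, 'b': -1, 'c': -1}
Gamma = {'a': 1, 'b': 1, 'c': 2}
print(cov(X, Y, Gamma, P, U))        # 1.0
\end{verbatim}

Consistent with the remark below Def.~\ref{def:def9}, the result is $1.0$ precisely because this device weakly infers $\Gamma$ and $P$ is nowhere $0$.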
The second example of how the restrictions of the previous section get modified by introducing a probability distribution is that this makes the second Laplace's demon theorem become ``barely true'': \begin{proposition} \label{prop:non-dist-cov} There are devices ${\mathcal{D}}$ and ${\mathcal{D}}'$ with $X$ and $X'$ setup-distinguishable and a distribution $P$ where both $cov({\mathcal{D}}, {\mathcal{D}}')$ and $cov({\mathcal{D}}', {\mathcal{D}})$ are arbitrarily close to 1. \end{proposition} \begin{proof} The proof is by example. Let $U$ have sixteen states, labeled A, $\ldots$, P and suppose that the functions $P$, ${\mathcal{D}} = (X, Y)$ and ${\mathcal{D}}' = (X', Y')$ are as in Fig.~\ref{fig:table-ww1}, with arbitrary $0 < b < 1/6$, and $a = (1 - 6b)/2$. By inspection, $X$ and $X'$ are setup distinguishable. Next, plugging in yields $cov({\mathcal{D}},{\mathcal{D}}') = cov({\mathcal{D}}, Y') = a/(a + b)$. Moreover $cov({\mathcal{D}}',{\mathcal{D}}) = cov({\mathcal{D}},{\mathcal{D}}') $ by symmetry of the columns in Fig.~\ref{fig:table-ww1}. (${\mathbb{E}}_P(Y Y' \mid X = 1) = (a - b)/(a + b)$ and ${\mathbb{E}}_P(Y Y' \mid X = -1) = -1$.) \begin{figure}[tbp] \hglue-1.2cm \includegraphics[width=1.7\columnwidth]{table-ww1.pdf} \vglue-11cm \caption{Specification of a scenario in which the stochastic version of Prop.~\ref{prop:dist_not_infer}, concerning simultaneous inference of two setup-distinguishable IDs, fails drastically.} \label{fig:table-ww1} \end{figure} So by taking $b$ arbitrarily close to 0, both of the covariances can be made arbitrarily close to 1. \end{proof} Prop.~\ref{prop:non-dist-cov} shows that in a certain sense, as soon as any stochasticity is introduced into the universe, having two devices be setup-distinguishable no longer restricts their ability to simultaneously infer each other. However if we replace setup-distinguishability with the property that the setup functions of the two devices are statistically independent, then we recover strong restrictions on simultaneous inference. To illustrate this, let $M$ be the four-dimensional hypercube $\{0, 1\}^4$. Define the following three functions over $\vec{z} \in M$: \begin{enumerate} \item $k({\vec{z}}) = z_1 + z_4 - z_2 - z_3$; \item $m({\vec{z}}) = (z_2 - z_4)$; \item $n({\vec{z}})=(z_3 - z_4)$. \end{enumerate} \begin{proposition} \label{prop:prop6} Let $P$ be a probability measure over $U$, and ${{{\mathcal{D}}}}_1$ and ${{{\mathcal{D}}}}_2$ two devices where $X_1(U) = X_2(U) = {\mathbb{B}}$, and those variables are statistically independent under $P$. Define $P(X_1 = -1) \equiv \alpha$ and $P(X_2 = -1) \equiv \beta$. Say that ${{{\mathcal{D}}}}_1$ infers ${{{\mathcal{D}}}}_2$ with accuracy $\epsilon_1$, while ${{{\mathcal{D}}}}_2$ infers ${{{\mathcal{D}}}}_1$ with accuracy $\epsilon_2$. Then \begin{eqnarray*} \epsilon_1 \epsilon_2 \;&\le&\; {\mbox{max}}_{{\vec{z}} \in M} \big| \alpha \beta [k({\vec{z}})]^2 + \alpha k({\vec{z}})m({\vec{z}}) + \beta k({\vec{z}})n({\vec{z}}) + m({\vec{z}})n({\vec{z}})\big| . \end{eqnarray*} In particular, if $\alpha = \beta = 1/2$, then \begin{eqnarray*} \epsilon_1 \epsilon_2 \;&\le&\; \frac{{\mbox{max}}_{{\vec{z}} \in M} \; |\; (z_1 - z_4)^2 - (z_2 - z_3)^2\; |}{4} \nonumber \\ &=&\; 1/4. \end{eqnarray*} \end{proposition} The maximum for $\alpha = \beta = 1/2$ can occur in several ways. One is when $z_1 = 1$, and $z_2, z_3, z_4$ all equal $0$. At these values, both devices have an inference accuracy of 1/2 at inferring each other.
Each device achieves that accuracy by perfectly inferring one probe of the other device, while performing randomly for the remaining probe. The ID framework as developed to date has no function measuring distance, nor one measuring time. So at present, one cannot even formulate an ID-analog of Heisenberg's uncertainty principle, never mind try to derive it. It is intriguing that despite this, Prop.~\ref{prop:prop6} is a bound on the product of uncertainties, exactly like Heisenberg's uncertainty principle. This suggests it may be worth exploring extensions of the ID framework that do involve distance and time, to see what \emph{a priori} constraints there might be on the product of uncertainties of two IDs that are measuring different aspects of the same system. (This idea is returned to in the last section below.) Finally, it should be noted that there are other ways to quantify the degree of weak inference when there is intrinsic uncertainty, in addition to covariance accuracy. For example, we could change Def.~\ref{def:def9} by replacing the sum over all probes $\delta$ and associated division by $|\Gamma(U)|$ with a minimum over all probes $\delta$. (This amounts to replacing an average-best-case expression with a worst-case expression.) \subsection{The complexity of inference} Constraints on what can be computed by a physical device can be derived from the laws of physics~\cite{lloyd2000ultimate}. There have also been attempts to go the other way, and derive constraints on the laws of physics from computation theory, in particular from algorithmic information theory (AIT)~\cite{livi08,chaitin2004algorithmic, zure89a,zure89b,zurek1990complexity,zenil2012computable}. These often implicitly involve uncertainty about the state of the universe. For example, the use of Kolmogorov complexity to model physical reality is often intimately related to the use of algorithmic probability~\cite{livi08,schmidhuber2000algorithmic,zuse1969rechnender}. (Indeed, the very first line in~\cite{schmidhuber2000algorithmic} is ``The probability distribution $P$ from which the history of our universe is sampled represents a theory of everything''.) One way to justify consideration of such a probability distribution in the first place is to identify it with the uncertainty of some agent (e.g., a scientist) concerning the state of the universe. The importance of an agent in attempts to analyze physics using AIT suggests we extend the inference device framework to include structures similar to those considered in AIT. There are several ways to extend the ID framework this way. In this subsection I sketch the starting point for one of them. \label{sec:inf__compl} Given a TM $T$, the \emph{Kolmogorov complexity} of an output string $s$ is defined as the length of the shortest input string $s'$ that, when input to $T$, produces $s$ as output. To construct our inference device analog of this, we need to define the ``size'' of an input region of an inference device ${\mathcal{D}}$. To do this, we assume we are given a measure $d\mu$ over $U$, and for simplicity restrict attention to functions $\Gamma$ over $U$ with countable range.
Then we define the {\bf{size}} of $\gamma \in \Gamma(U)$ as -ln$\big[\int_{\Gamma^{-1}(\gamma)} d\mu(u) \; 1\big]$, i.e., the negative logarithm of the measure of all $u \in U$ such that $\Gamma(u) = \gamma$.{\footnote{As usual, if $U$ is countable, $\mu$ is a point measure, and the integral is a sum.}} We write this size as ${\mathcal{M}}_{\mu, \Gamma}(\gamma)$, or just ${\mathcal{M}}(\gamma)$ for short.{\footnote{If $\int d\mu(u) \; 1 = \infty$, then we instead work with differences in logarithms of volumes, evaluated under an appropriate limit of $d\mu$ that takes $\int d\mu(u) \; 1 \rightarrow \infty$. For example, we might work with such differences when $U$ is taken to be a box whose size goes to infinity. \label{foot:vol}}} We define inference complexity in terms of such a size function using the shorthand introduced just below Eq.~\eqref{eq:shorthand1}: \begin{definition} \label{def:def6} Let ${\mathcal{D}}$ be a device and $\Gamma$ a function over $U$ where $X(U)$ and $\Gamma(U)$ are countable and ${\mathcal{D}} > \Gamma$. The {\bf{inference complexity}} of $\Gamma$ with respect to ${\mathcal{D}}$ and measure $\mu$ is defined as \begin{eqnarray*} {\mathcal{C}}_\mu(\Gamma ; {\mathcal{D}}) \;\;&\triangleq& \;\; \sum_{\delta \in {\mathcal{P}}(\Gamma)} {\mbox{min}}_{x : X = x \Rightarrow Y = \delta(\Gamma)} [{\mathcal{M}}_{\mu,X} (x)]. \end{eqnarray*} \end{definition} \noindent In the sequel I will often have the measure implicit, and (for example) simply write ${\mathcal{C}}$ rather than ${\mathcal{C}}_\mu$. I will also mostly restrict attention to the case where $\mu$ is either a distribution or a semi-measure.\footnote{A natural alternative measure of ``inference complexity'' is given by replacing the sum over all probes in Def.~\ref{def:def6} with a max over all probes, so that we are analyzing the hardest possible question to ask about $\Gamma$. In the interests of space, we leave this for future work.} As an example, for the case where inference models the process of prediction, $\Gamma$ corresponds to a potential future state of some system $S$ external to ${\mathcal{D}}$. In this case ${\mathcal{C}}(\Gamma; {\mathcal{D}})$ is a measure of how difficult it currently is for ${\mathcal{D}}$ to predict that future state of $S$. Loosely speaking, the more sensitively that future state depends on current conditions, the greater the inference complexity of predicting that future state. The inference complexity of any function $\Gamma$ with respect to a device $(X, Y)$ is bounded by a multiple of the Shannon entropy of $\mu(X)$: \begin{proposition} For any ID ${\mathcal{D}}$, probability distribution $\mu$, and function $\Gamma$ with a countable image such that ${\mathcal{D}} > \Gamma$, \begin{eqnarray} {\mathcal{C}}_\mu(\Gamma ; {\mathcal{D}}) \le |\Gamma(U)| \times H_\mu(X) \nonumber \end{eqnarray} where $H_\mu(X)$ is the Shannon entropy of $\mu(X)$. \label{prop:inf_comp_entropy} \end{proposition} \begin{proof} Expand \begin{eqnarray*} \sum_{\delta \in \mathcal{P}(\Gamma)} \min_{x : X = x \Rightarrow Y = \delta(\Gamma)} [{\mathcal{M}}_{\mu,X} (x)] &\le& \sum_{x\in X(U)} {\mathcal{M}}_{\mu,X} (x) \\ &\le& -|\Gamma(U)| \sum_{x\in X(U)} \frac{ {\mbox{ln}} \; \mu(x)}{|\Gamma(U)|} \\ &\le& |\Gamma(U)| H_\mu(X) \end{eqnarray*} \end{proof}
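As a concrete illustration of Def.~\ref{def:def6}, the following Python sketch (my own ad hoc code; the uniform measure is an assumption made purely for the example) computes the inference complexity of the toy device and function of Ex.~\ref{ex:4}:

\begin{verbatim}
import math

def inference_complexity(X, Y, Gamma, mu, U):
    # For each probe delta_gamma, the size M(x) = -ln mu(X^{-1}(x)) of
    # the cheapest setup value x answering that probe, summed over probes.
    total = 0.0
    for gamma in set(Gamma.values()):
        sizes = [-math.log(sum(mu[u] for u in U if X[u] == x))
                 for x in set(X.values())
                 if all(Y[u] == (1 if Gamma[u] == gamma else -1)
                        for u in U if X[u] == x)]
        total += min(sizes)  # empty sizes would mean D does not infer Gamma
    return total

U = ['a', 'b', 'c']
mu = {u: 1.0 / 3 for u in U}   # uniform measure (an assumption for the example)
X     = {'a': 1, 'b': 2, 'c': 1}
Y     = {'a': 1, 'b': -1, 'c': -1}
Gamma = {'a': 1, 'b': 1, 'c': 2}
print(inference_complexity(X, Y, Gamma, mu, U))  # -ln(2/3) - ln(1/3) = 1.504...
\end{verbatim}

Here the probe $\delta_1$ can be answered with the `cheap' setup value $x = 1$, whose cell has measure $2/3$, while $\delta_2$ requires the `expensive' value $x = 2$, whose cell has measure $1/3$.

Kolmogorov complexity concerns TMs computing a single output, rather than TMs emulating an entire function from inputs to outputs.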
The field of algorithmic information theory then analyzes the relation between Kolmogorov complexity and UTMs, i.e., TMs that emulate entire functions from inputs to outputs. Analogously, inference complexity concerns inferring a single value of a variable, i.e., it is defined in terms of \emph{weak} inference. So to investigate the inference device analog of algorithmic information theory means investigating the relation between inference complexity and IDs that emulate entire functions --- which involves strong inference instead of weak inference. To begin, recall perhaps the most fundamental result in AIT, the \emph{invariance theorem}. This theorem gives an upper bound on the difference between the Kolmogorov complexity of a string using a particular UTM $T_1$ and its complexity if using a different UTM, $T_2$. This bound is independent of the computation to be performed, and can be viewed as the Kolmogorov complexity of $T_1$ emulating $T_2$. Similarly, we can bound how much greater the inference complexity of a function can be for a device ${\mathcal{D}}_1$ than it is for a different device ${\mathcal{D}}_2$ if ${\mathcal{D}}_1$ can strongly infer ${\mathcal{D}}_2$: \begin{proposition} \label{thm:thm4} Let ${\mathcal{D}}_1$ and ${\mathcal{D}}_2$ be two devices and $\Gamma$ a function over $U$ where $\Gamma(U)$ is finite, ${\mathcal{D}}_1 \gg {\mathcal{D}}_2$, and ${\mathcal{D}}_2 > \Gamma$. Then for any distribution $\mu$, \begin{eqnarray*} &&{\mathcal{C}}_\mu(\Gamma ; {\mathcal{D}}_1) - {\mathcal{C}}_\mu(\Gamma ; {\mathcal{D}}_2) \;\;\le \;\; |\Gamma(U)| \; \times \nonumber \\ && \qquad \qquad {\max}_{x_2} \bigg( \min_{x_1 : \{X_1 = x_1 \Rightarrow X_2 = x_2, Y_1 = Y_2\}} [{\mathcal{M}}_{\mu,X_1}(x_1) - {\mathcal{M}}_{\mu,X_2}(x_2)] \bigg). \end{eqnarray*} \noindent \end{proposition} \noindent Note that since ${\mathcal{M}}_{\mu,X_1}(x_1) - {\mathcal{M}}_{\mu,X_2}(x_2) = {\mbox{ln}}\bigg[\frac{\mu (X_2^{-1}(x_2))} {\mu( X_1^{-1}(x_1))} \bigg]$, the bound in Prop.~\ref{thm:thm4} is independent of the units with which one measures volume in $U$. (Cf. footnote~\ref{foot:vol}.) Furthermore, it is always true that $X_1 = x_1 \Rightarrow X_2 = x_2, Y_1 = Y_2$ iff $X_1^{-1}(x_1) \subseteq X_2^{-1}(x_2) \; \cap \; (Y_1Y_2)^{-1}(1)$. Accordingly, for all $(x_1, x_2)$ pairs arising in the bound in Prop.~\ref{thm:thm4}, $\frac{\mu (X_2^{-1}(x_2))} {\mu( X_1^{-1}(x_1))} \ge 1$. So the upper bound in Prop.~\ref{thm:thm4} is always non-negative. The max-min expression on the RHS of Prop.~\ref{thm:thm4} is independent of $\Gamma$. So the bound in Prop.~\ref{thm:thm4} is independent of all aspects of $\Gamma$ except the cardinality of $\Gamma(U)$. Intuitively, the bound is $|\Gamma(U)|$ times the worst-case amount of ``computational work'' that ${\mathcal{D}}_1$ has to do to ``emulate'' ${\mathcal{D}}_2$'s behavior for some particular value of $x_2$. Suppose that it takes a lot of computational work for ${\mathcal{D}}_2$ to infer $\Gamma$, and so it also takes a lot of computational work for ${\mathcal{D}}_1$ to infer $\Gamma$ by emulating ${\mathcal{D}}_2$. However, it might take very little work for ${\mathcal{D}}_1$ to infer $\Gamma$ directly.
In fact, it may even be that ${\mathcal{C}}(\Gamma ; {\mathcal{D}}_1) < {\mathcal{C}}(\Gamma ; {\mathcal{D}}_2)$: \begin{proposition} \label{prop:sinf-sic} There are devices ${\mathcal{D}}$, ${\mathcal{D}}'$, probability distribution $P$ defined over $U$, and function $\Gamma$, such that ${\mathcal{D}} > \Gamma$, ${\mathcal{D}}' \gg {\mathcal{D}}$, and ${\mathcal{C}}_P(\Gamma; {\mathcal{D}})$ is arbitrarily large, while ${\mathcal{C}}_P(\Gamma; {\mathcal{D}}')$ is arbitrarily close to the minimum value of $|\Gamma(U)| \times \ln(|\Gamma(U)|)$. \end{proposition} \begin{proof} The proof is by example. Let $U$ have twelve states, labeled A$, \ldots, $ L and suppose that the functions $P, \Gamma, {\mathcal{D}}' = (X', Y')$ and ${\mathcal{D}} = (X, Y)$ are as in Fig.~\ref{fig:table-sic-s}, with $1/4 < p < 1$. \begin{figure}[tbp] \hglue-3cm \includegraphics[width=1.7\columnwidth]{table-sic-s-fixed.pdf} \vglue-12.5cm \caption{Scenario illustrating discrepancies of complexities of two IDs where one strongly infers the other.} \label{fig:table-sic-s} \end{figure} \begin{enumerate} \item To verify that ${\mathcal{D}} > \Gamma$, for the $1$-probe, choose $x = 1$. For the $-1$ probe, choose $x = 2$. \item To verify that ${\mathcal{D}}' \gg {\mathcal{D}}$, first, for the $1$-probe, for $x = 1,2,3$, choose $x' = 1,3,5$, respectively. Then for the $-1$-probe, for $x = 1,2,3$, choose $x' = 2,4,6$, respectively. \item To verify that ${\mathcal{C}}(\Gamma ; {\mathcal{D}})$ can be arbitrarily large, first expand it as $-2\ln((1 - p)/2) = 2\ln(2) - 2\ln(1 - p)$. (For the 1-probe, $x = 1$ and ${\mathcal{M}}_{P, X}(x) = -\ln((1 - p)/2)$ and similarly for the -1-probe and $x = 2$.) \item To verify that ${\mathcal{C}}(\Gamma ; {\mathcal{D}}')$ can be arbitrarily close to its minimal value, write it as $-2\ln(p/2) = 2\ln(2) - 2\ln(p)$. (For the 1-probe, $x' = 5$ and ${\mathcal{M}}_{P,X'}(x') = -\ln(p/2)$ and similarly for the -1-probe and $x' = 6$.) \end{enumerate} \noindent Finally, by taking $p$ arbitrarily close to 1, ${\mathcal{C}}(\Gamma ; {\mathcal{D}})$ becomes arbitrarily large while ${\mathcal{C}}(\Gamma ; {\mathcal{D}}')$ becomes arbitrarily close to the minimum of $2\ln(2)$. \end{proof} Although there is not space to analyze them here, it is worth noting that there are several ways to translate some of the mathematical structures of algorithmic information theory into the inference device framework. For example, just as a given Turing machine may fail to produce an output for some specific input, so an inference device may fail to reach a conclusion for some specific setup. This motivates the following definition: \begin{definition} A device $(X, Y)$ \textbf{halts} for setup value $x$ iff $X = x \Rightarrow Y = y$ for some single value $y$. \end{definition} \noindent We say that $x$ is a ``halting setup'' if $(X, Y)$ halts for $x$. Paralleling the usual definitions in TM theory, we say that an ID is \textbf{total}, or \textbf{recursive}, iff it halts for all $x \in X(U)$. So an ID $(X, Y)$ is recursive iff $X$ refines $Y$.
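In executable form (again an illustrative sketch only, with ad hoc names), these definitions become:

\begin{verbatim}
def halts(X, Y, x, U):
    # A device halts on setup value x iff X = x forces a single conclusion.
    return len({Y[u] for u in U if X[u] == x}) == 1

def recursive(X, Y, U):
    # An ID is total / recursive iff it halts on every setup value,
    # i.e. iff X refines Y.
    return all(halts(X, Y, x, U) for x in set(X.values()))

U = ['a', 'b', 'c']
X = {'a': 1, 'b': 2, 'c': 1}
Y = {'a': 1, 'b': -1, 'c': -1}
print(halts(X, Y, 2, U))   # True:  X = 2 forces Y = -1
print(halts(X, Y, 1, U))   # False: X = 1 leaves both conclusions possible
print(recursive(X, Y, U))  # False: this ID is not total
\end{verbatim}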
Given this definition of what it means for a device to halt on a given input, we can define the inference analog of a prefix-free Turing machine~\cite{livi08}: \begin{definition} Given a semi-measure $\mu$, a device $(X, Y)$ is \textbf{prefix(-free)} iff \begin{eqnarray} \sum_{x : {\mathcal{D}} \; halts \; on \; x} 2^{-{\mathcal{M}}_{\mu,X} (x)} &\le& 1 \nonumber \end{eqnarray} \end{definition} \noindent By Kraft's inequality, if ${\mathcal{D}}$ is prefix-free for a semi-measure $\mu$, then there is a prefix-free code for the set of all halting $x \in X(U)$. Therefore we can identify that set of $x$'s with semi-infinite bit strings, or equivalently with the natural numbers~\cite{livi08}. As a final example, note that the min over $x$'s in Def.~\ref{def:def6} is a direct analog of the min in the definition of Kolmogorov complexity (there the min is over those strings that, when input to a particular UTM, result in the desired output string). A natural modification to Def.~\ref{def:def6} is to remove the min by considering all $x$'s that cause $Y = \delta(\Gamma)$, not just one of them: \begin{eqnarray*} {\hat{{\mathcal{C}}}}(\Gamma ; {\mathcal{D}}) \;\;&\triangleq& \;\; \sum_{\delta \in {\mathcal{P}}(\Gamma)} -{\mbox{ln}} \left[\; \mu \left(\cup_{x : X = x \Rightarrow Y = \delta(\Gamma)} X^{-1}(x) \right) \;\right] \nonumber \\ &=& \sum_{\delta \in {\mathcal{P}}(\Gamma)} -{\mbox{ln}} \left[\sum_{x : X = x \Rightarrow Y = \delta(\Gamma)} e^{-{\mathcal{M}}(x)}\right], \end{eqnarray*} where the equality follows from the fact that for any $x, x' \ne x$, $X^{-1}(x) \cap X^{-1}(x') = \varnothing$. The argument of the $\ln(.)$ in this modified version of inference complexity has a direct analog in TM theory: The sum, over all input strings $s$ to a UTM that generate a desired output string $s'$, of $2^{-n(s)}$, where $n(s)$ is the bit length of $s$. This is sometimes known as the ``algorithmic'' or ``Solomonoff'' probability of $s'$~\cite{livi08} in the theory of TMs. \section{Modeling the physical universe in terms of inference devices} I now expand the scope of the discussion to allow sets of many inference devices and / or many functions to be inferred. Some of the philosophical implications of the ensuing results are then discussed in the next subsection. \subsection{Formalization of physical reality involving Inference Devices} \label{sec:realities} Define a {\bf{reality}} as a pair $(U; \{F_\phi\})$ where the space $U$ is the {\bf{domain}} of the reality, and $\{F_\phi\}$ is a (perhaps uncountable) non-empty set of functions all having domain $U$. We are particularly interested in {\bf{device realities}} in which some of the functions are binary-valued, and we wish to pair each of those functions uniquely with some of the other functions. In general, not all of the functions in $\{F_\phi\}$ need to be members of such a pair. Accordingly, the most general form of such realities is triples of the form $(U; \{(X_\alpha, Y_\alpha)\}; \{\Gamma_\beta\})$, or just $(U; \{{{{\mathcal{D}}}}_\alpha\}; \{\Gamma_\beta\})$ for short, where $\{{{{\mathcal{D}}}}_\alpha\}$ is a set of devices over $U$ and $\{\Gamma_\beta\}$ a set of functions over $U$. Define a {\bf{universal device}} as any device in a reality that can strongly infer all other devices and weakly infer all functions in that reality. Prop.~\ref{thm:thm_3} means that no reality can contain more than one universal device.
So in particular, if a reality contains a universal device and there is a given distribution over $U$, then the reality has a unique natural choice for an inference complexity measure, namely the inference complexity with respect to its (unique) universal device. (This contrasts with Kolmogorov complexity, which depends on the arbitrary choice of what UTM to use.) For simplicity, assume the set of indices $\phi$ is countable, with elements $\phi_1, \phi_2, \ldots$. It is interesting to consider the {\bf{reduced form}} of a reality $(U; \{F_\phi\})$, which is defined as the image of the function $u \rightarrow (F_{\phi_1}(u), F_{\phi_2}(u), \ldots)$. In particular, the reduced form of a device reality is the set of all tuples $([x_1, y_1], [x_2, y_2], \ldots; \gamma_1, \gamma_2, \ldots)$ for which $\exists \; u \in U$ such that simultaneously $X_1(u) = x_1, Y_1(u) = y_1, X_2(u) = x_2, Y_2(u) = y_2, \ldots ; \Gamma_1(u) = \gamma_1, \Gamma_2(u) = \gamma_2, \ldots$. By working with reduced forms of realities, we dispense with the need to explicitly discuss $U$ entirely.{\footnote{Note the implication that if we work with reduced realities, all of the non-stochastic analysis of the previous sections can be reduced to satisfiability statements concerning sets of categorical variables. For example, the fact that a device cannot weakly infer itself is equivalent to the statement that there is no countable space $X$ with at least two elements and associated set of pairs $\mathcal{V} = \{(x_i, y_i)\}$ where all $y_i \in {\mathbb{B}}$, such that for both probes $\delta$ over ${\mathbb{B}}$, there is some value $x' \in X$ such that in all pairs $(x', y) \in \mathcal{V}$, $y = \delta(y)$.}} \begin{example} Take $U$ to be the set of all possible histories of a universe across all time that are consistent with the laws of physics. So each $u$ is a specification of a trajectory of the state of the entire universe through all time. The laws of physics are then embodied in restrictions on $U$. For example, if one wants to consider a universe in which the laws of physics are time-reversible and deterministic, then we require that no two distinct members of $U$ can intersect. Similarly, properties like time-translation invariance can be imposed on $U$, as can more elaborate laws involving physical constants. Next, have \{$\Gamma_\beta$\} be a set of physical characteristics of the universe, each characteristic perhaps defined in terms of the values of one or more physical variables at multiple locations and/or multiple times. Finally, have \{${{{\mathcal{D}}}}_\alpha$\} be all prediction / observation systems concerning the universe that all scientists might ever be involved in. In this example the laws of physics are embodied in $U$. The implications of those laws for the relationships among the agent devices \{${{{\mathcal{D}}}}_\alpha$\} and the other characteristics of the universe \{$\Gamma_\beta$\} are embodied in the reduced form of the reality. Viewing the universe this way, it is the $u \in U$, specifying the universe's state for all time, that has ``physical meaning''. The reduced form instead is a logical implication of the laws of the universe. In particular, our universe's $u$ picks out the tuple given by the Cartesian product $[\varprod_\alpha {{{\mathcal{D}}}}_\alpha (u)] \times [\varprod_\beta \Gamma_\beta(u)]$ from all tuples in the reduced form of the reality. As an alternative we can view the reduced form of the reality itself as encapsulating the ``physical meaning'' of the universe.
In this alternative $u$ does not have any physical meaning. It is only the relationships among the inferences about $u$ that one might want to make and the devices with which to try to make those inferences that have physical meaning. One could completely change the space $U$ and the functions defined over it, but if the associated reduced form of the reality does not change, then there is no way that the devices in that reality, when considering the functions in that reality, can tell that they are now defined over a different $U$. In this view, the laws of physics, i.e., a choice for the set $U$, are simply a calculational shortcut for encapsulating patterns in the reduced form of the reality. It is a particular instantiation of those patterns that has physical meaning, not some particular element $u \in U$. See~\cite{tegmark2008mathematical} for another perspective on the relationship between physical reality and mathematical structures. \label{ex:reality} \end{example} Given a reality $(U; \{(X_1, Y_1), (X_2, Y_2), \ldots \})$, we say that a pair of devices in it are {\bf{pairwise (setup) distinguishable}} if they are distinguishable. We say that the reality as a whole is {\bf{mutually (setup) distinguishable}} iff $\forall \; x_1 \in X_1(U), x_2 \in X_2(U), \ldots \; \exists \; u \in U$ s.t. $X_1(u) = x_1, X_2(u) = x_2, \ldots$. \begin{proposition} \label{prop:prop3} {\bf{i)}} There exist realities $(U; {{{\mathcal{D}}}}_1, {{{\mathcal{D}}}}_2, {{{\mathcal{D}}}}_3)$ where each pair of devices is pairwise setup distinguishable and ${{{\mathcal{D}}}}_1 > {{{\mathcal{D}}}}_2 > {{{\mathcal{D}}}}_3 > {{{\mathcal{D}}}}_1$. {\bf{ii)}} There exists no reality $(U; \{{{{\mathcal{D}}}}_i : i \in {\mathscr{N}} \subseteq {\mathbb{N}}\})$ where the devices are mutually distinguishable and for some integer $n$, ${{{\mathcal{D}}}}_1 > {{{\mathcal{D}}}}_2 > \ldots > {{{\mathcal{D}}}}_n > {{{\mathcal{D}}}}_1$. {\bf{iii)}} There exists no reality $(U; \{{{{\mathcal{D}}}}_i : i \in {\mathscr{N}} \subseteq {\mathbb{N}}\})$ where for some integer $n$, ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_2 \gg \ldots \gg {{{\mathcal{D}}}}_n \gg {{{\mathcal{D}}}}_1$. \end{proposition} There are many ways to view a reality that contains a countable set of devices $\{{{{\mathcal{D}}}}_i\}$ as a graph, for example by having each node be a device while the edges between the nodes concern distinguishability of the associated devices, or concern whether one weakly infers the other, etc. In particular, given a countable reality, define an associated directed graph by identifying each device with a separate node in the graph, and by identifying each relationship of the form ${{{\mathcal{D}}}}_i \gg {{{\mathcal{D}}}}_j$ with a directed edge going from node $i$ to node $j$. We call this the {\bf{strong inference graph}} of the reality. Prop.~\ref{prop:whats_inferrable}(ii) means that no reality with $|U| > 3$ can have a universal device if the reality contains all functions defined over $U$. Suppose instead that the reality does not contain all functions defined over $U$, so that it may contain a universal device. Prop.~\ref{thm:thm_3} means that such a universal device must be a root node of the strong inference graph of the reality and that there cannot be any other root node. In addition, by Prop.~\ref{thm:thm_2}(ii), we know that every node in a reality's strong inference graph with successor nodes has edges that lead directly to every one of those successor nodes (whether or not there is a universal device in the reality).
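To make these graph-theoretic statements concrete, the following is a minimal Python sketch (my own toy illustration; the node labels and edge set are hypothetical, not drawn from any result above) that represents the strong inference graph of a finite reality as a set of directed edges, checks the direct-edge property just described, and lists the candidate root nodes:

\begin{verbatim}
# Toy strong inference graph: nodes are abstract device labels,
# and an edge (i, j) asserts that device i strongly infers j.
from itertools import product

def directly_closed(edges):
    # If i >> j and j >> k (with i != k), then there must also be
    # a direct edge from i to k.
    return all((i, k) in edges
               for (i, j), (j2, k) in product(edges, edges)
               if j == j2 and i != k)

def candidate_roots(nodes, edges):
    # Nodes with a direct edge to every other node; at most one
    # such universal device can exist in a reality.
    return [i for i in nodes
            if all((i, j) in edges for j in nodes if j != i)]

nodes = {"D1", "D2", "D3"}
edges = {("D1", "D2"), ("D1", "D3"), ("D2", "D3")}
print(directly_closed(edges))         # True
print(candidate_roots(nodes, edges))  # ['D1']
\end{verbatim}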
By Prop.~\ref{prop:prop3}(iii) we also know that a reality's strong inference graph is acyclic. Note that even if a device ${{{\mathcal{D}}}}_1$ can strongly infer all other devices ${{{\mathcal{D}}}}_{i > 1}$ in a reality, it may not be able to infer them \emph{simultaneously} (strongly or weakly). For example, define a ``composite'' function $\Gamma : u \rightarrow (Y_2(u), Y_3(u), \ldots)$. Then the fact that ${{{\mathcal{D}}}}_1$ is a universal device does not mean that $\forall \delta \in {\mathcal{P}}(\Gamma) \;\exists \; x_1 : X_1 = x_1 \Rightarrow Y_1 = \delta(\Gamma)$. See the discussion in~\cite{wolp01} on ``omniscient devices'' for more on this point. We now define what it means for two devices to operate in an identical manner: \begin{definition} \label{def:def7} Let $U$ and $\hat{U}$ be two (perhaps identical) sets. Let ${{{\mathcal{D}}}}_1$ be a device in a reality with domain $U$. Let $R_1$ be the relation between $X_1$ and $Y_1$ specified by the reduced form of that reality, i.e., $x_1 R_1 y_1$ iff the pair $(x_1, y_1)$ occurs in some tuple in the reduced form of the reality. Similarly let $R_2$ be the relation between $X_2$ and $Y_2$ for some separate device ${{{\mathcal{D}}}}_2$ in the reduced form of a reality having domain ${\hat{U}}$. Then we say that ${{{\mathcal{D}}}}_1$ {\bf{mimics}} ${{{\mathcal{D}}}}_2$ iff there is an injection $\rho_X : X_2({\hat{U}}) \rightarrow X_1(U)$ and a bijection $\rho_Y : Y_2({\hat{U}}) \leftrightarrow Y_1(U)$, such that $\forall \; x_2, y_2$, $x_2 R_2 y_2 \Leftrightarrow \rho_X(x_2) R_1 \rho_Y(y_2)$. If both ${{{\mathcal{D}}}}_1$ mimics ${{{\mathcal{D}}}}_2$ and vice-versa, we say that ${{{\mathcal{D}}}}_1$ and ${{{\mathcal{D}}}}_2$ are {\bf{copies}} of each other. \label{def:mimic} \end{definition} Intuitively, when expressed as devices, two physical systems are copies if they follow the same inference algorithm, with $\rho_X$ and $\rho_Y$ translating between those systems. As an example, consider the case where $U = \hat{U}$, and we have a reality over that space that contains two separate physical computers that are inference devices, both being used for prediction. If those devices are copies of each other, then they form the same conclusion for the same value of their setup function, i.e., they perform the same computation for the same input. The requirement in Def.~\ref{def:mimic} that $\rho_Y$ be surjective simply reflects the fact that since we're considering devices, $Y_1(U) = Y_2(U) = {\mathbb{B}}$. Note that because $\rho_X$ in Def.~\ref{def:mimic} need not be surjective, there can be a device in $U$ that mimics multiple devices in $\hat{U}$. The relation of one device mimicking another is reflexive and transitive. The relation of two devices being copies is an equivalence relation. Say that an inference device ${\mathcal{D}}_2$ is being used for observation and ${\mathcal{D}}_1$ mimics ${\mathcal{D}}_2$. The fact that ${\mathcal{D}}_1$ mimics ${\mathcal{D}}_2$ does not imply that ${\mathcal{D}}_1$ can emulate the observation that ${\mathcal{D}}_2$ makes of some function $\Gamma$. The mimicry property only relates ${\mathcal{D}}_1$ and ${\mathcal{D}}_2$, with no concern for relationships with any third function. This is why, above, we formalized what it means for one device to ``emulate'' another in terms of strong inference rather than in terms of mimicry.
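As a concrete illustration of Def.~\ref{def:mimic}, here is a minimal brute-force Python sketch (a toy illustration with data of my own choosing, not a construction from the text) that searches for an injection $\rho_X$ and a bijection $\rho_Y$ carrying the relation of one device into that of another:

\begin{verbatim}
# Toy check of mimicry: R1 and R2 are the relations between setup
# and conclusion values specified by the reduced forms of the two
# realities, encoded as sets of (x, y) pairs.
from itertools import permutations

def mimics(R1, R2):
    X1 = sorted({x for x, _ in R1}); Y1 = sorted({y for _, y in R1})
    X2 = sorted({x for x, _ in R2}); Y2 = sorted({y for _, y in R2})
    if len(Y1) != len(Y2):                 # rho_Y must be a bijection
        return False
    for img_x in permutations(X1, len(X2)):    # injections X2 -> X1
        rho_X = dict(zip(X2, img_x))
        for img_y in permutations(Y1):         # bijections Y2 <-> Y1
            rho_Y = dict(zip(Y2, img_y))
            if all(((x, y) in R2) == ((rho_X[x], rho_Y[y]) in R1)
                   for x in X2 for y in Y2):
                return True
    return False

R2 = {("a", -1), ("b", 1)}
R1 = {(0, 1), (1, -1), (2, 1)}  # device 1 has an extra setup value
print(mimics(R1, R2))           # True
\end{verbatim}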
Indeed, there are some interesting relationships between what it means for devices to be copies and what it means for one to strongly infer the other: \begin{proposition} \label{prop:prop5} Let ${{{\mathcal{D}}}}_1$ be a copy of ${{{\mathcal{D}}}}_2$ where both exist in the same reality. {\bf{i)}} It is possible that ${{{\mathcal{D}}}}_1$ and ${{{\mathcal{D}}}}_2$ are distinguishable and ${{{\mathcal{D}}}}_1 > {{{\mathcal{D}}}}_2$, even for finite $X_1(U), X_2(U)$. {\bf{ii)}} It is possible that ${{{\mathcal{D}}}}_1 \gg {{{\mathcal{D}}}}_2$, but only if $X_1(U)$ and $X_2(U)$ are both infinite. \end{proposition} \subsection{Philosophical implications} \label{sec:philo} Return now to the case where $U$ is a set of laws of physics (i.e., the set of all histories consistent with a set of such laws). The results above provide general restrictions that must relate any devices in such a universe, regardless of the detailed nature of the laws of that universe. In particular, these results would have to be obeyed by all universes in a multiverse~\cite{smol02,agte05,carr07}. Accordingly, it is interesting to consider these results from an informal philosophical perspective. Say we have a device ${\mathcal{D}}$ in a reality that is distinguishable from the set of all the other devices in the reality. Such a device can be viewed as having ``free will'', in the limited sense that the way the other devices are set up does not restrict how ${\mathcal{D}}$ can be set up. Under this interpretation, Prop.~\ref{prop:dist_not_infer} means that if two devices both have free will, then they cannot predict / recall / observe each other with guaranteed complete accuracy. At most one device in a reality can both have free will and predict / recall / observe / control the other devices in that reality with guaranteed complete accuracy.{\footnote{There are other ways to interpret the vague term ``free will''. For example, Lloyd has argued that humans have ``free will'' in the sense that, under the assumption that they are computationally universal, the Halting theorem implies that they cannot predict their own future conclusions ahead of time~\cite{lloyd2012turing}. The fact that an ID cannot even weakly infer itself has analogous implications that hold under a broader range of assumptions concerning human computational capability. For example, these implications hold under the assumption that humans are \textit{not} computationally universal, or, at the opposite extreme, under the assumption that they have super-Turing reasoning capability.}} Prop.~\ref{thm:thm_3} then goes further and considers devices that can emulate each other. It shows that, independent of concerns of free will, no two devices can unerringly emulate each other. (In other words, no reality can have more than one universal device.) Somewhat tongue in cheek, taken together, these results could be called a ``monotheism theorem''. Prop.~\ref{prop:prop5} tells us that if there is a universal device in some reality, then it must be infinite (have infinite $X(U)$) if there are other devices in the reality that are copies of it. Now the time-translation of a physical device is a copy of that device.{\footnote{Formally, say that the states of some physical system $S$ at a particular time $t$ and shortly thereafter at $t + \delta$ are identified as the setup and conclusion values of a device ${\mathcal{D}}$. In other words, ${\mathcal{D}}$ is given by the functions $(X(u), Y(u)) \triangleq (S(u_t), S(u_{t+\delta}))$.
In addition, let $R_S$ be the relation between $X$ and $Y$ specified by the reduced form of the reality containing the system. Say that the time-translation of ${\mathcal{D}}$, given by the two functions $S(u_{t'})$ and $S(u_{t' + \delta})$, also obeys the relation $R_S$. Then the pair of functions $(X_2(u), Y_2(u)) \triangleq (S(u_{t'}), S(u_{t' + \delta}))$ is another device that is a copy of ${\mathcal{D}}$. So for example, the same physical computer at two separate pairs of moments is two separate devices, devices that are copies of each other, assuming they have the same set of allowed computations.}} Therefore any physical device that is \emph{ever} universal must be infinite. In addition, the impossibility of multiple universal devices in a reality means that if any physical device is universal, it can only be so at one moment in time. (Its time-translation cannot be universal.) Again somewhat tongue in cheek, taken together this second set of results could be called an ``intelligent design theorem''. In addition to the questions addressed by the monotheism and intelligent design theorems, there are many other semi-philosophical questions one can ask of the form ``Can there be a reality with the following properties?''. By formulating such questions in terms of reduced realities, they can often be reduced to constraint satisfaction problems, potentially involving infinite-dimensional spaces. In this sense, many of the questions that have long animated philosophy can be formulated as constraint satisfaction problems. \section{Physical knowledge} \label{sec:phys_know} Say that, colloquially speaking, you ``know'' the sky's color is currently blue, so long as $u$ is in some subset $W$ of all histories. (The reason we consider subsets $W$ is that you cannot know that the sky's color is blue in \textit{all} histories, since in some histories it will \textit{not} be blue.) How can we formalize this colloquial notion? Well, one thing it means if you ``know the sky's color is blue'' for any $u \in W$ is that for such $u$'s you can ask yourself ``Is the sky green?'' and answer `no', ask yourself ``Is the sky red?'' and answer `no', ask yourself ``Is the sky blue?'' and answer `yes', etc., and always be correct in your answer. So to ``know'' something implies that you can weakly infer it. Intuitively speaking, weak inference formalizes an aspect of the semantic content of ``knowledge''. To properly formalize knowledge of the sky's color however, we need to use more structure than is just provided by weak inference of the sky's color. The problem is that it is possible that $(X, Y) > \Gamma$ even if for each $\gamma \in \Gamma(U)$, the associated $x$ that causes $Y(u) = \delta_\gamma(\Gamma(u))$ always results in $Y(u) = -1$.{\footnote{Note that there must be \emph{some} $x$ that allows $Y(u) = 1$, since $|Y(U)| = 2$. However it may be that none of those specific $x$'s that are involved in the ID's inferring $\Gamma$ have that property.}} Loosely speaking, $(X, Y)$ can infer the sky's color by always setting itself up so that it (correctly) answers that the sky does not have a given color $c$, so long as it can do that for any given color $c$.{\footnote{This characteristic of weak inference is an example of how flexible and unrestrictive the definition of weak inference is, mentioned above.
This particular flexibility is most reasonable for the inference process of control, where typically $x$ directly influences the value of $\Gamma$, and to a somewhat lesser degree for the inference process of observation.}} So to say that $(X, Y)$ knows $\Gamma = \gamma$ over (all $u$ in) $W$, it makes sense not just to require that $(X, Y) > \Gamma$, but also that for all $\gamma \in \Gamma(U)$, there exists some $u \in W$ such that both $X(u)$ is a setup value that arises for the question, ``Does $\Gamma(u) = \gamma$?'', and $Y(u) = 1$, i.e., the device answers `yes'. Similarly, it would be problematic to say that the device $(X, Y)$ ``knows'' the sky's color if $(X, Y)$ can infer the sky's color by always setting itself up so that it (correctly) answers that the sky \emph{does} have a given color $c$, so long as it can do that for any given color $c$. This suggests we want to also add the requirement that for all $\gamma \in \Gamma(U)$, there exists some $u \in W$ such that $X(u)$ is a setup value that arises for the question, ``Does $\Gamma(u) \ne \gamma$?'', and that $Y(u) = -1$, i.e., that the device answers `no'. To model knowledge in this sense, not just inference, we need to guarantee that there is some color $c$ such that whenever the history $u$ is in some set $W$, for the question, ``Is the sky's color \emph{c}?'', the inference device will answer `yes', and be correct. In other words, you don't ``know'' the sky's color whenever $u \in W$ if you can only ever say what color it is \emph{not} whenever $u \in W$. For it to be the case that whenever $u \in W$ you know that the sky's color is \emph{c}, at a minimum, it must be that you can correctly answer ``yes, the sky's color is \emph{c}'', for some such $u \in W$. Nonetheless, we also want to guarantee that there is at least one $u \in W$ at which we correctly answer ``no, $c'$ is not the sky's color'' for \textit{some} color $c'$, which may be the same as $c$ or different. \subsection{Formal definition of physical knowledge} We can formalize this strengthened version of inference as follows: \begin{definition} Consider an inference device $(X, Y)$ defined over $U$, a function $\Gamma$ defined over $U$, a $\gamma \in \Gamma(U)$, and a subset $W \subseteq U$. We say that ``\textbf{$(X, Y)$ \textbf{(physically) knows} $\Gamma = \gamma$} over $W$'' iff $\exists \; \xi : \Gamma(U) \rightarrow \overline{X}$ such that \begin{enumerate}[i)] \item $\forall \gamma' \in \Gamma(U), u \in \xi(\gamma') \Rightarrow \delta_{\gamma'}(\Gamma(u)) = Y(u)$; \item $\varnothing \;\ne\; \xi(\gamma) \cap W \;\subseteq\; Y^{-1}(1)$; \item For all $\gamma' \ne \gamma$, $\varnothing \;\ne\; \xi(\gamma') \cap W \;\subseteq\; Y^{-1}(-1)$. \end{enumerate} \label{def:knows} \end{definition} \noindent (Recall that $\overline{X}$ is the partition of $U$ induced by $X$.) When I want to specify the precise function $\xi$ used in Def.~\ref{def:knows}, I will say that ``by using $\xi$, $(X, Y)$ knows that $\Gamma = \gamma$ over $W$''. By Def.~\ref{def:knows}(i), if $(X, Y)$ physically knows $\Gamma = \gamma$ over $W$, then $(X, Y)$ weakly infers $\Gamma$. So $(X, Y)$ is always correct in its inference --- even if $u \not \in W$. We impose this requirement for all of $U$, not just $W$, because the agent using the device does not have any \emph{a priori} reason to expect that $u \in W$.
So it does them no good to be able to set up a device that will correctly say whether some function has a certain value --- but only if the condition $u \in W \subset U$ holds, a condition they cannot detect. Def.~\ref{def:knows}(ii) and Def.~\ref{def:knows}(iii) are the extra conditions beyond just weak inference, forcing the ID to answer `yes' at least once, and to answer `no' at least once. Neither of those conditions depends on the precise form of the function $\Gamma(u)$, only its image, $\Gamma(U)$ (which specifies the domain of $\xi$). It's also worth noting that most of the analysis below does not invoke Def.~\ref{def:knows}(iii). The motivation for including that condition anyway will arise below, when we demonstrate that physical knowledge need not imply logical omniscience; this demonstration is more consequential if it applies even when Def.~\ref{def:knows}(iii) holds. The following properties are immediate: \begin{lemma} Let $(X, Y)$ be a device defined over $U$, $\Gamma$ a function over $U$, and $W$ a subset of $U$. Say that by using $\xi$, $(X, Y)$ knows that $\Gamma = \gamma$ over $W$. It follows that: \begin{enumerate}[i)] \item $\Gamma(u) = \gamma \; \forall u \in \xi(\gamma) \cap W$; \item If $W$ refines $\Gamma$, then $\Gamma(W) = \gamma$. \end{enumerate} \label{lemma:elem} \end{lemma} \begin{proof} To prove the first claim, note from Def.~\ref{def:knows}(ii) that for all $u \in \xi(\gamma) \cap W$, $Y(u) = 1$. By Def.~\ref{def:knows}(i), this means that at all such $u$, $\Gamma(u) = \gamma$, establishing the first claim. Given this, if in addition $W$ refines $\Gamma$ (so that $\Gamma(u)$ has the same value across all of $W$), then it must be that $\Gamma(u) = \gamma$ for all $u \in W$. (Similar arguments for $Y(u) = -1$ follow by using Def.~\ref{def:knows}(iii).) This establishes the second claim. \end{proof} Note that the definition of physical knowledge does not require that $\xi(\gamma) \subseteq Y^{-1}(1)$, but only that $\xi(\gamma) \cap W \subseteq Y^{-1}(1)$ (and similarly for $Y^{-1}(-1)$). The simple fact that $u \in \xi(\gamma)$ \emph{and nothing more} does not imply that the device must answer `yes'. Furthermore, there may be more than one $\xi(.)$ with which the ID can ``know $\Gamma = \gamma$ over $W$''. There may even be some other $\xi(.)$ that can be used to instead know $\Gamma = \gamma' \ne \gamma$ over $W$. This illustrates that physical knowledge does not require that $\Gamma$ have the same value over all of $\xi(\gamma)$. This flexibility means that physical knowledge includes knowledge that occurs by observation of the value of $\Gamma$, just like inference does. A related point is that we do not require that $W$ refine $\Gamma$ to have a device know that $\Gamma = \gamma$. This freedom allows the device to know that $\Gamma = \gamma$ over $W$ even if the value of $\Gamma$ depends on the value of $X$, the question the device is asking. In other words, it is possible that the device both knows that $\Gamma = \gamma$ over $W$ and knows that $\Gamma = \gamma'$ over $W$ for some $\gamma' \ne \gamma$. In this sense, the definition of physical knowledge is extremely non-restrictive. This lack of restriction means that physical knowledge allows for ``quantum-mechanical-style'' coupling of an observation device and the system being observed. More generally, it allows $(X, Y)$ to be a device that controls the property $\Gamma$ of the system being observed.
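To see the three conditions of Def.~\ref{def:knows} in action, here is a minimal Python sketch over a toy finite $U$ (the particular maps below are my own hypothetical choices, loosely patterned on the sky-color discussion above, not data taken from the text):

\begin{verbatim}
# Toy check of the definition of physical knowledge. Y and Gamma
# are maps u -> value; xi maps each value of Gamma to a cell of
# the partition over U induced by the setup function X.
def delta(gp, g):
    # The probe for value gp, evaluated at Gamma(u) = g.
    return 1 if g == gp else -1

def knows(Y, Gamma, xi, gamma, W):
    GU = set(Gamma.values())
    # (i) weak inference: on xi(g'), Y matches the probe for g'
    if any(delta(gp, Gamma[u]) != Y[u] for gp in GU for u in xi[gp]):
        return False
    # (ii) xi(gamma) meets W, and there the device answers `yes'
    hits = [u for u in xi[gamma] if u in W]
    if not hits or any(Y[u] != 1 for u in hits):
        return False
    # (iii) each other xi(g') meets W, and there the answer is `no'
    for gp in GU - {gamma}:
        hits = [u for u in xi[gp] if u in W]
        if not hits or any(Y[u] != -1 for u in hits):
            return False
    return True

# U = {0, 1, 2, 3}; in histories 0 and 2 the device asks "blue?",
# and in histories 1 and 3 it asks "red?".
Gamma = {0: "blue", 1: "blue", 2: "red", 3: "red"}
Y     = {0: 1, 1: -1, 2: -1, 3: 1}
xi    = {"blue": [0, 2], "red": [1, 3]}
W     = {0, 1}   # histories in which the color is in fact blue
print(knows(Y, Gamma, xi, gamma="blue", W=W))   # True
\end{verbatim}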
Typically though, when we are interested in knowledge in the sense of accurate prediction or observation that does not affect the system being predicted / observed, $W$ will refine $\Gamma$. The following example illustrates Def.~\ref{def:knows} in more detail: \begin{example} Say that the sky above Greenwich, UK at time $t$ is \{blue, cloud-free, with the sun less than 15 degrees above the horizon\}. Furthermore, say that at some time $t'$, Bob knows that the sky above Greenwich, UK at time $t$ is blue. (It does not matter whether $t' = t$.) To formalize this knowledge in terms of Def.~\ref{def:knows}, let $U$ be the set of all histories in which both Bob and Greenwich, UK exist, and where in addition the following conditions hold: \begin{enumerate}[i)] \item There is a partition ${\mathcal{C}}$ of all possible distributions of the intensity of light in optical wavelengths. For example, one element of that partition is `green', one is `red', and one is `blue'; \item Bob asks himself at $t'$, ``Is $c$ the color of the sky above Greenwich at $t$?'', for some color $c \in {\mathcal{C}}$; \item Bob answers that question at $t'$ to the best of his abilities, with either a `yes' or a `no'. \end{enumerate} Define $\Gamma(u)$ as the map taking each $u \in U$ to the associated element of ${\mathcal{C}}$ that characterizes the color of the sky above Greenwich at $t$. Define $X(u)$ as the map taking each $u \in U$ to the associated color $c$ where at $t'$ Bob is asking himself the question, ``Does the sky's color at Greenwich at $t$ equal $c$?''. Assume that the image of $X$ is all of ${\mathcal{C}}$. Next, let $Y(u)$ specify the binary answer in Bob's mind at $t'$. Finally, let $W \subseteq U$ be the set of all histories $u$ such that the sky above Greenwich at time $t$ is \{blue, cloud-free, with the sun less than 15 degrees above the horizon\}, and assume $u \in W$. Given all this, ``at $t'$, Bob knows the sky is blue above Greenwich at time $t$, when the sky above Greenwich at time $t$ is \{blue, cloud-free, with the sun less than 15 degrees above the horizon\}'', in the sense of Def.~\ref{def:knows}, if three conditions are met: \begin{enumerate}[i)] \item There is a $u$ for which ``the sky above Greenwich at time $t$ is \{blue, cloud-free, with the sun less than 15 degrees above the horizon\}'' and for which $X(u)$ specifies the question, ``Is the sky blue above Greenwich at $t$?'', i.e., for which Bob is asking himself that question at $t'$; \item For the $u$ in (i), $Y(u)$ specifies that Bob answers `yes' at $t'$; \item If the sky above Greenwich at time $t$ were still \{blue, cloud-free, with the sun less than 15 degrees above the horizon\}, but instead Bob were able to ask himself at $t'$ any question of the form ``Does the sky's color at Greenwich at $t$ equal $c'$?'' where $c' \ne$ blue (i.e., if the history were some different $u' \in W$ where $X(u') \ne X(u)$) and did so, Bob would answer `no' at $t'$ (i.e., $Y(u')$ would equal $-1$). \end{enumerate} This is a very simple example. In particular, in some situations for Bob to know at $t'$ that the sky is blue at Greenwich at $t$, Bob will need to configure an apparatus to have a particular state at a particular time (e.g., he may need to configure an automatic camera to photograph the sky above Greenwich at $t$). In such situations, $X$ not only specifies the question that Bob asks himself at $t'$ but also specifies how Bob configures the apparatus. See~\cite{wolp08b}.
Note that the set $U$ in this example might be a proper subset of the set of all histories that are consistent with the laws of physics. This is just an example of the fact that the definition of weak inference in general implicitly specifies a set $U$ that is a subset of the set of all physically possible histories. Indeed, there might very well be histories that are consistent with the laws of physics in which Bob does not exist, or perhaps even Greenwich does not exist. Clearly we cannot speak of whether Bob does or does not ``know the sky is blue'' in any such histories.{\footnote{This need to restrict the universe of discourse to a subset of all physically possible histories holds no matter how we formalize ``knowledge''. It has nothing to do with formalizing knowledge using inference devices.}} Finally, note that Bob could conceivably also know at $t'$ that the sky above Greenwich, UK at time $t$ is cloud-free, or that the sun in that sky is less than 15 degrees above the horizon. Any such alternative knowledge would require posing a question to Bob about a different subject. Therefore it would require a different value of $X(u)$, and therefore a different $u$. However $W$, the set of histories in which the sky above Greenwich at $t$ has the given state, doesn't vary with the question that Bob asks concerning that sky. This means that $W$ contains each of those different $u$'s that result in different values of $X(u)$. Ultimately, it is to allow this possibility of multiple questions all concerning the same state of the sky that the definition of physical knowledge involves sets $W$. \label{ex:1} \end{example} \subsection{Epistemic logic based on physical knowledge} \label{sec:boole} Physical knowledge obeys many of the usual properties considered in theories of logic. Here I illustrate this by presenting some of those properties. For the rest of this subsection assume that any space $U$ we consider is countable. I will consider Boolean-valued functions, i.e., functions $\Gamma$ such that $\Gamma(U) = \{-1, 1\}$. It will also be convenient to identify each such binary-valued function $\Gamma$ over $U$ with the associated set $\Gamma^{-1}(1)$. Using this identification, any so-called \emph{concrete} Boolean algebra over subsets of $U$ defines a Boolean algebra over an associated set of binary-valued functions, which we call the \textbf{function Boolean algebra}. (Equivalently, a function Boolean algebra over $U$ is a Boolean algebra specified by a set of bit strings indexed by elements of $U$.) Moreover, we can always express an arbitrary Boolean algebra as a concrete Boolean algebra~\cite{wiki_boolean_algebra}. Accordingly, given any Boolean algebra with propositions $\Phi$, we can identify any specific proposition $\phi \in \Phi$ with a subset of $U$ in the concrete Boolean algebra, and then identify that element of the concrete Boolean algebra with an element of the function Boolean algebra. So we can identify any proposition $\phi$ with a specific binary function, which we write as $\Gamma_\phi$ (with the set $\Phi$ usually implicit). We now use the function Boolean algebra to define the standard shorthands of propositional logic for binary-valued functions.
For example, for any two binary-valued functions $\Gamma_1$ and $\Gamma_2$, their logical AND is \begin{eqnarray} \Gamma_1 \wedge \Gamma_2 &=& \{u \in U : \Gamma_1(u) = \Gamma_2(u) = 1\} \label{eq:AND} \end{eqnarray} and the logical NOT is given by \begin{eqnarray} \neg \Gamma_1 &=& \{u \in U : \Gamma_1(u) = -1\} \end{eqnarray} This allows us to apply the usual axioms of Boolean algebra to binary-valued functions. We can also adopt the usual abbreviations from Boolean algebra, e.g., \begin{eqnarray} \Gamma_1 \vee {\Gamma_2} &=& \neg (\neg \Gamma_1 \wedge \neg {\Gamma_2}); \\ \Gamma_1 \equiv \Gamma_2 &=& (\Gamma_1 \wedge \Gamma_2) \vee (\neg \Gamma_1 \wedge \neg {\Gamma_2}); \\ \label{eq:6} \Gamma_1 \Rightarrow {\Gamma_2} &=& \neg\Gamma_1 \vee {\Gamma_2}; \\ \label{eq:7} \Gamma_1 \Leftrightarrow {\Gamma_2} &=& (\Gamma_1 \Rightarrow {\Gamma_2}) \wedge ({\Gamma_2} \Rightarrow \Gamma_1) \end{eqnarray} I extend this terminology to cases where we are considering subsets $W \subseteq U$ in the obvious way, e.g., $\Gamma_1 \Rightarrow {\Gamma_2}$ has the value `true' over $W$ iff $(\neg\Gamma_1 \vee {\Gamma_2})(u)$ has the value `true' for all $u \in W$. Similarly, though it is not used in this paper, as is conventional we can take the function TRUE$(u)$ to be an abbreviation for some fixed propositional tautology such as $(\Gamma_1 \vee \neg \Gamma_1)(u)$ (i.e., the function that equals 1 for all $u \in U$) and FALSE to be an abbreviation for the function $\neg$ TRUE.{\footnote{ Note that these two functions each violate the requirement of the usual formulation of inference devices that every function take on at least two values. So in particular, in order to consider whether a device can weakly infer such a function, we would need to weaken the definition of weak inference of a function to allow a single value in the image of that function.}} In keeping with this, I will sometimes use the term `true' to mean the value $1$, and use the term `false' to mean the value $-1$. In many conventional types of epistemic logic, in particular Kripke structures, knowledge is defined in such a way that it is impossible for an agent to know two contradictory things. However as discussed above, to allow physical knowledge to capture the case where the agent ``knows a function has a specific value'' due to the (physical) fact that they control the value of that function, we are careful to define terms so that an agent can physically know contradictory things. Specifically, this occurs if the function that they physically know, $\Gamma$, takes on more than one value across the set under consideration, $W$, and by appropriate choice of ($\xi$ and therefore) $\bar{x}$, the agent can cause different probes of $\Gamma(u)$ to have the value 1. (Note that in this case $\Gamma$ is \emph{not} refined by $W$.) Because of this, certain epistemic properties that are automatically satisfied in conventional types of epistemic logic must be carefully derived in the analysis of physical knowledge. The following proposition presents one of these properties. For pedagogical clarity, in the remainder of this subsection, ``is true'' is shorthand for ``is true over $W$'' and similarly ``is false'' is shorthand for ``is false over $W$''. \begin{proposition} Let $\Gamma$ be any binary-valued function over $U$, ${\mathcal{D}} = (X, Y)$ any device over $U$, and $W$ some (implicit) subset of $U$. Then ${\mathcal{D}}$ knows that $\Gamma$ is false iff ${\mathcal{D}}$ knows that $\neg \Gamma$ is true.
\label{prop:first_impl} \end{proposition} \begin{proof} I prove the forward direction; the converse follows in the same way. Let $\xi_{\Gamma}$ be the function establishing that ``${\mathcal{D}}$ knows that ${\Gamma}$ is false''. So $\xi_{\Gamma}(-1) \cap W \subseteq Y^{-1}(1)$. Define $\xi_{\neg {\Gamma}}(\gamma) = \xi_{{\Gamma}}(-\gamma)$ for all $\gamma \in {\mathbb{B}}$ (i.e., for all $\gamma$ in the codomains of both ${\Gamma}$ and $\neg {\Gamma}$). It follows that $\xi_{\neg {\Gamma}}(1) \cap W \subseteq Y^{-1}(1)$. This establishes that if the condition Def.~\ref{def:knows}(ii) holds for ``${\mathcal{D}}$ knows that ${\Gamma}$ is false (over $W$)'', then it must also hold for ``${\mathcal{D}}$ knows that $\neg {\Gamma}$ is true''. Property (iii) is established using an analogous argument. Finally, if property (i) in Def.~\ref{def:knows} holds for ``${\mathcal{D}}$ knows that ${\Gamma}$ is false'' using $\xi_{\Gamma}$, it must also hold for ``${\mathcal{D}}$ knows that $\neg {\Gamma}$ is true'' using $\xi_{\neg {\Gamma}}$. This establishes the (forward direction of the) claim in full. \end{proof} Note that applying Prop.~\ref{prop:first_impl} to the function $\Gamma' := \neg \Gamma$ establishes that a device ${\mathcal{D}}$ knows that a function $\Gamma'$ is true iff ${\mathcal{D}}$ knows that $\neg \Gamma'$ is false. In contrast to these results, if the function under consideration is refined by $W$, then the agent cannot know contradictory things concerning the value of that function. We start our discussion of this kind of situation with an immediate corollary of Lemma~\ref{lemma:elem}(ii), which intuitively says that a device cannot physically know something if that thing is false: \begin{corollary} Suppose that $W$ refines $\Gamma$. Then if ${\mathcal{D}}$ knows that $\Gamma$ is true (over $W$), $\Gamma$ is true. \label{corr:trivial} \end{corollary} (Similarly, if $W$ refines $\Gamma$ and ${\mathcal{D}}$ knows that $\Gamma$ is false, $\Gamma$ is false.) Note that Coroll.~\ref{corr:trivial} tells us that if $W$ refines ${\Gamma}$, then it is possible that ${\mathcal{D}}$ knows that ${\Gamma}$ is true, or that ${\mathcal{D}}$ knows that ${\Gamma}$ is false --- but not both. So if $W$ refines $\Gamma$, we have the usual property that ${\mathcal{D}}$ cannot know two contradictory things. Since binary-valued functions obey the rules of propositional logic, Coroll.~\ref{corr:trivial} means that if $W$ refines $\Gamma$ as well as $\Gamma'$, and ${\mathcal{D}}$ both knows that $\Gamma$ is true and that $\Gamma'$ is true, it follows that $\Gamma \wedge \Gamma'$ is true. This immediately establishes many properties of physical knowledge --- in particular if $\Gamma'$ involves the logical $\Rightarrow$ operator defined in Eq.~\eqref{eq:6} --- including the following: \begin{corollary} Let $\Gamma_1$, ${\Gamma_2}$ and ${\Gamma_3}$ be any binary-valued functions over $U$, and ${\mathcal{D}} = (X, Y)$ any device over $U$. \begin{enumerate}[i)] \item Say that $W$ refines $\Gamma_1$. Then if ${\mathcal{D}}$ knows that $\Gamma_1$ is true, and $\Gamma_1 \Rightarrow {\Gamma_2}$ is true, it follows that ${\Gamma_2}$ is true. \item Say that $W$ refines $\Gamma_1$ as well as $\Gamma_1 \Rightarrow {\Gamma_2}$. Then if ${\mathcal{D}}$ knows that $\Gamma_1$ is true, and knows that $\Gamma_1 \Rightarrow {\Gamma_2}$ is true, it follows that ${\Gamma_2}$ is true. \item Say that $W$ refines $\Gamma_1 \Rightarrow {\Gamma_2}$ as well as ${\Gamma_2} \Rightarrow {\Gamma_3}$.
Then if ${\mathcal{D}}$ both knows that $\Gamma_1 \Rightarrow {\Gamma_2}$ is true and knows that ${\Gamma_2} \Rightarrow {\Gamma_3}$ is true, it follows that $\Gamma_1 \Rightarrow {\Gamma_3}$ is true. \item Say that we have a set of binary-valued functions $\{{\Gamma}_i : i = 1, \ldots, N\}$ and that $W$ refines $\Gamma_1$ as well as $\Gamma_i \Rightarrow \Gamma_{i+1}$ for all $i \in \{1, \ldots, N-1\}$. Then if ${\mathcal{D}}$ knows that $\Gamma_1$ is true and knows that $\Gamma_i \Rightarrow \Gamma_{i+1}$ is true for all $i \in \{1, \ldots, N-1\}$, it follows that $\Gamma_i$ is true for all $i \in \{1, \ldots, N\}$. \end{enumerate} \label{prop:impl} \end{corollary} \noindent (In interpreting these results, the reader should remember to insert ``over $W$'' after every statement about whether a function is true or false.) In general, the properties described in Coroll.~\ref{prop:impl} do not hold without the conditions that certain functions are refined by $W$. So for the most part, they need not hold for the case where an agent knows the value of a function because they control its value. Note though that Coroll.~\ref{prop:impl}(ii) does not require that $W$ refine ${\Gamma_2}$. Similarly Coroll.~\ref{prop:impl}(iii) does not require that $W$ refine either $\Gamma_1$, $\Gamma_2$, $\Gamma_3$, or ${\Gamma_1} \Rightarrow {\Gamma_3}$. We can weaken the last two claims in Coroll.~\ref{prop:impl}: \begin{corollary} Let ${\Gamma_1}$, ${\Gamma_2}$ and ${\Gamma_3}$ be any binary-valued functions over $U$, ${\mathcal{D}}$ any device over $U$, and $W$ any (implicit) subset of $U$. \begin{enumerate}[i)] \item Say that $W$ refines ${\Gamma_1}$ and refines ${\Gamma_1} \Rightarrow {\Gamma_2}$. Then if either ${\mathcal{D}}$ knows that ${\Gamma_1}$ is true and ${\Gamma_1} \Rightarrow {\Gamma_2}$ is true, or ${\Gamma_1}$ is true and ${\mathcal{D}}$ knows that ${\Gamma_1} \Rightarrow {\Gamma_2}$ is true, it follows that ${\Gamma_2}$ is true. \item Say that $W$ refines ${\Gamma_1} \Rightarrow {\Gamma_2}$ and refines ${\Gamma_2} \Rightarrow {\Gamma_3}$. Then if either ${\mathcal{D}}$ knows that ${\Gamma_1} \Rightarrow {\Gamma_2}$ is true and ${\Gamma_2} \Rightarrow {\Gamma_3}$ is true, or ${\Gamma_1} \Rightarrow {\Gamma_2}$ is true and ${\mathcal{D}}$ knows that ${\Gamma_2} \Rightarrow {\Gamma_3}$ is true, it follows that ${\Gamma_1} \Rightarrow {\Gamma_3}$ is true. \end{enumerate} \end{corollary} \subsection{Impossibility results concerning physical knowledge} There are major restrictions on physical knowledge. The first such restriction follows from the first demon theorem. \begin{corollary} For any device ${\mathcal{D}}$, there exists a function $\Gamma$ over $U$ such that for no $W \subseteq U, \gamma \in \Gamma(U)$ does ${\mathcal{D}}$ know $\Gamma = \gamma$ over $W$. \label{coroll:5} \end{corollary} The second major restriction follows from the second demon theorem. \begin{corollary} Let $(X_1, Y_1)$ and $(X_{-1}, Y_{-1})$ be two distinguishable devices. Then for at least one of the two devices $i \in \{-1, 1\}$, there is no pair $(W \subseteq U, y_{-i} \in Y_{-i}(U))$ such that $(X_i, Y_i)$ knows that $Y_{-i} = y_{-i}$ over $W$. \label{coroll:6} \end{corollary} Similarly, Coroll.~\ref{coroll:1} provides another restriction on physical knowledge: \begin{corollary} Consider a pair of devices ${\mathcal{D}} = (X, Y)$ and ${\mathcal{D}}' = (X', Y')$ that are distinguishable from one another and whose conclusion functions are inequivalent.
Say that there is a $W \subseteq U$ that refines $Y$ such that ${\mathcal{D}}'$ knows that $Y = Y(W)$ over $W$. Then there are at least three inequivalent surjective binary functions $\Gamma$ such that there is no $W'$ with the following two properties: $W'$ refines $\Gamma$, and ${\mathcal{D}}$ knows that $\Gamma = \Gamma(W')$ over $W'$. \end{corollary} In other words, if ${\mathcal{D}}'$ knows the value of ${\mathcal{D}}$'s conclusion function over $W$, then there are at least three separate functions that ${\mathcal{D}}$ never knows, no matter what subset of $U$ we are in. \subsection{Physical knowledge and the first three rules of \emph{S5}} \emph{S5} is a set of five rules obeyed by many epistemic logics, including Kripke structures. The ``knowledge axiom'' of \emph{S5} says that if an agent knows a Boolean proposition $\phi$, then $\phi$ must be true. We can use the map from propositions to binary functions (discussed just before Eq.~\eqref{eq:AND}) to formulate a physical knowledge version of this axiom. Coroll.~\ref{corr:trivial} confirms that the physical knowledge analog of the knowledge axiom holds (assuming we only consider sets $W$ that refine $\Gamma_\phi$). In addition to the knowledge axiom, \emph{S5} also includes the ``knowledge generalization rule'', which says that if proposition $\phi$ is true in all possible states of the world (i.e., if $\phi$ is necessarily true rather than contingently true), then the agent knows $\phi$. An analogous rule in terms of physical knowledge might be that if $W$ refines ${\Gamma_1}$, and ${\Gamma_1}$ is true over $W$, then the agent physically knows ${\Gamma_1}$ is true over $W$. However this rule need not hold; an agent $(X, Y)$ may not be able to weakly infer ${\Gamma_1}$, whether or not $\Gamma_1$ is true over $W$. (And even if they can infer ${\Gamma_1}$, it may be that $Y^{-1}(1) \cap W = \varnothing$.) The ``distribution axiom'' of \emph{S5} says that if an agent both knows proposition $\phi_1$ and knows $\phi_1 \Rightarrow \phi_2$, then $\phi_2$ is true \emph{and they know this}. In contrast, Coroll.~\ref{prop:impl}(ii) only establishes that if an agent both physically knows that ${\Gamma_1}$ is true over $W$ and knows that ${\Gamma_1} \Rightarrow {\Gamma_2}$ is true over $W$, then ${\Gamma_2}$ is also true over $W$. However it is easy to construct examples where the conditions for Coroll.~\ref{prop:impl}(ii) hold but the agent does not know $\Gamma_2$ is true. Ultimately, the reason for this difference between physical knowledge and Kripke structures is due to the requirement of physical knowledge of $\Gamma$ that the device weakly infer $\Gamma$ --- a requirement that involves considering counterfactual scenarios, something not done in conventional epistemic logics. For what are ultimately the same reasons, the conditions for Coroll.~\ref{prop:impl}(i) may hold even if the agent does not physically know that $\Gamma_2$ is true. This is illustrated in the following example: \begin{example} For purposes of this example, fix some particular location and time. Suppose that the (binary-valued) function $\Gamma_1(u)$ is defined by whether the temperature in $u$ is / isn't ten degrees Celsius at that location and time. Have $\Gamma_2(u)$ be whether the temperature in $u$ is / isn't above freezing. Presume $U$ is large enough that there are $u \in U$ that satisfy each of the three possible values of $(\Gamma_1(u), \Gamma_2(u))$ (namely $(1, 1)$, $(-1, 1)$ and $(-1, -1)$; the combination $(1, -1)$ is excluded since ten degrees is above freezing). Finally, have $W$ be the set of all $u$ at which the temperature is ten degrees.
Note that $(\Gamma_1 \Rightarrow \Gamma_2)(u)$ is always true. (This means it is a ``valid'' statement, in the language of epistemic logic.) So in particular it is true for all $u \in W$. To ground thinking, suppose that the ID is a thermometer that outputs a $1$ or $-1$, depending on the temperature, and that $x$ is the value of the temperature that the thermometer is checking. By Def.~\ref{def:knows}(i), if the agent physically knows $\Gamma_1 = 1$ over $W$, then there is some setup function $\xi$ they can use to configure their thermometer to correctly give a $1$ if the temperature is ten degrees, and to correctly give a $-1$ otherwise. In other words, such physical knowledge requires that when answering the question, ``Is the temperature ten degrees?'' (i.e., does $\Gamma_1(u) = 1$), the agent will set $X$ to be in the state $\xi(1)$, and they will be guaranteed that the associated conclusion $Y(u)$ will equal $\delta_1(\Gamma_1(u))$ for any $u \in \xi(1)$ \underline{whether or not $u \in W$}. In general, depending on whether that $\xi$ obeys $\xi(1) \subseteq W$, this phenomenon may mean that the ID gives the correct answer for some $u$ in which the temperature is not ten degrees. This is key: the ID is set up with the same $x$ value regardless of whether $u \in W$. \underline{After} the thermometer is set up this way, \underline{then} the agent learns whether the temperature equals ten degrees. If (after having been set up with the $x$ value corresponding to ten degrees) the ID tells the agent that the temperature is indeed ten degrees (since $u \in W$), at that point the agent has physical knowledge that the temperature is ten degrees. But not before. In particular, consider the situation where $\xi(1)$ is big enough to contain both a $u$ where the temperature is five degrees, and one where it is negative five degrees (in addition to containing one where it is ten degrees). Since we require that $Y(u) = \delta_1(\Gamma_1(u))$ for all $u \in \xi(1)$, this means that $Y(u) = -1$ for both of those $u$'s. Note though that $\Gamma_2(u)$ has different values for those two temperatures. This means that the agent cannot use this same $\xi$ that allows them to weakly infer $\Gamma_1$ to also weakly infer $\Gamma_2$. (This is how the intuitive notion of regular implication would work too; if all I know is that the temperature is not ten degrees, then I don't know whether it is $-5$ or $5$.) Moreover, in general, there may not be any alternative to this $\xi$ that the agent can use with that thermometer to weakly infer $\Gamma_2$, i.e., weakly infer whether the temperature is above freezing or not. In this situation, the agent cannot weakly infer $\Gamma_2$. (Recall again the key point that in the definition of physical knowledge we require that weak inference holds for all $u \in U$, not just the $u \in W$.) So in this situation, the agent does not physically know that $\Gamma_2 = 1$ for $u \in W$. Loosely speaking, there are thermometers that can be used to \underline{always} tell us correctly whether the temperature is (not) ten degrees --- both when it is and when it isn't ten degrees. However some such thermometers cannot be used to \underline{always} tell us correctly whether the temperature is (not) above freezing (both when it is and when it isn't above freezing).
Given such a thermometer, for the particular situation where the thermometer is set up to detect whether the temperature is ten degrees, and in addition it answers `yes', you and I can use our reasoning ability to realize that the temperature must be above zero. But the ability to use such reasoning to come to a conclusion is not the same thing as physical knowledge of that conclusion. In the rest of this example I establish this argument in a fully formal manner, making the simplifying assumption that $W$ refines $\Gamma_1$, as in Coroll.~\ref{prop:impl}(i). First, note that by Lemma~\ref{lemma:elem}(ii), since the ID physically knows that $\Gamma_1 = 1$ over $W$, $\Gamma_1(u)$ is true throughout $W$. This in turn means that $\Gamma_2$ is true throughout $W$ (since $\Gamma_1 \Rightarrow \Gamma_2$). Let $\xi$ be the function that establishes that condition Def.~\ref{def:knows}(ii) holds for $\Gamma_1$. We can use that same function to establish that condition Def.~\ref{def:knows}(ii) holds for $\Gamma_1 \Rightarrow \Gamma_2$ and for $\Gamma_2$. Similar arguments hold for condition Def.~\ref{def:knows}(iii). So ${\mathcal{D}}$ meets conditions (ii) and (iii) for having physical knowledge of both $\Gamma_1 \Rightarrow \Gamma_2$ and $\Gamma_2$. So to complete our analysis of whether ${\mathcal{D}}$ has physical knowledge that $\Gamma_2 = 1$, we must consider whether condition Def.~\ref{def:knows}(i) holds for $\Gamma_2$. We do this by considering two cases, one in which ${\mathcal{D}}$ does physically know $\Gamma_2 = 1$ over $W$, and one where it does not: \begin{enumerate} \item First, assume $\xi(\gamma) \subseteq W$ both for $\gamma = 1$ and $\gamma = -1$. Since Def.~\ref{def:knows}(i) holds for $\Gamma_1$ for that $\xi$, and since $\Gamma_1(u) = \Gamma_2(u)$ throughout $W$, it is immediate that Def.~\ref{def:knows}(i) also holds for $\Gamma_2$ with that $\xi$. So ${\mathcal{D}}$ knows that $\Gamma_2$ is true over $W$, under our assumption. Next, plug the fact that $\Gamma_1(u) = \Gamma_2(u) = 1$ for all $u \in W$ into the definition of $\Gamma_1 \Rightarrow \Gamma_2$ to see that $\delta_1((\Gamma_1 \Rightarrow \Gamma_2)(u)) = 1$ for all $u \in \xi(1) \subseteq W$. So $\delta_1((\Gamma_1 \Rightarrow \Gamma_2)(u)) = Y(u)$ for all $u \in \xi(1)$. This establishes that Def.~\ref{def:knows}(i) holds for the function $\Gamma_1 \Rightarrow \Gamma_2$ for the case of $\gamma' = 1$. For the remaining case of $\gamma' = -1$, note that for all $u \in \xi(-1)$, again $\Gamma_1(u) = \Gamma_2(u) = 1$. So $\delta_{-1}((\Gamma_1 \Rightarrow \Gamma_2)(u)) = -1$. Since $Y(u) = -1$ throughout $\xi(-1)$, this establishes that Def.~\ref{def:knows}(i) also holds for the function $\Gamma_1 \Rightarrow \Gamma_2$ for the case of $\gamma' = -1$. Accordingly, under our assumption that the support of both $\xi(1)$ and of $\xi(-1)$ is restricted to $W$, ${\mathcal{D}}$ knows that $\Gamma_1 \Rightarrow \Gamma_2$ over $W$. \item In many situations however, even though $W$ refines $\Gamma_1$, it will \emph{not} be the case that the associated function $\xi$ that establishes that Def.~\ref{def:knows}(i) and Def.~\ref{def:knows}(ii) both hold for $\Gamma_1$ always produces sets that are confined to $W$. Very often either $\xi(1) \not \subseteq W$ and/or $\xi(-1) \not \subseteq W$. An example of this is given just above, in the discussion involving thermometers. As mentioned in that discussion, for such a $\xi$, it may be that (for example) $\xi(1)$ contains points $u' \not \in W$ such that $\Gamma_1(u') = -1$ but $\Gamma_2(u') = 1$.
Now for any such $u'$, it must be that $Y(u') = -1$ (since ${\mathcal{D}}$ weakly infers $\Gamma_1$). On the other hand, for any $u \in W \cap \xi(1)$, $Y(u) = 1$. Since the value of $\Gamma_2(u)$ does not change across $\xi(1)$, this means that $Y$ and $\Gamma_2$ cannot have the same value across all of $\xi(1)$. This means that this function $\xi$ could not be used to establish that ${\mathcal{D}}$ weakly infers $\Gamma_2$ --- and therefore all bets are off concerning whether ${\mathcal{D}}$ can physically know $\Gamma_2$ over $W$. This reflects the fact that while under the conditions of Coroll.~\ref{prop:impl}(i) $\Gamma_1$ and $\Gamma_2$ must be identical for all $u \in W$, they will in general differ outside of $W$, and so a device that can say whether $\Gamma_1$ is true or not may not be able to tell us whether $\Gamma_2$ is true or not. \end{enumerate} \label{ex:no_log_omni} \end{example} Recall from the introduction that in many epistemic logics, if an agent knows a proposition $\phi$ is true (more generally, that a set of propositions are true), and $\phi \Rightarrow \phi'$, then not only is $\phi'$ true --- but the agent knows that it is. (In particular, as discussed above, this is true of Kripke structures.) This property of ``(full) logical omniscience'' is a major problem with these logics, since logical omniscience implies for example that if someone knows the axioms of number theory, then they know all the theorems of number theory that are implied by those axioms. However as illustrated in Ex.~\ref{ex:no_log_omni}, physical knowledge need not obey logical omniscience.{\footnote{It is known that so long as both the distribution axiom and the knowledge generalization rule hold --- which is the case in all so-called ``normal modal logics'' --- so does (full) logical omniscience. However neither of those need hold with physical knowledge.}} A closely related point is that the definition of physical knowledge does not fully agree with the colloquial meaning of the term ``knowledge''. It should really be viewed more as a strengthened form of inference, capturing more of the common structure of real-world prediction, observation, memory and control, rather than as an attempt to provide an accurate entry in an English language dictionary. For example, it is possible that a device knows that $\Gamma_1 \wedge \Gamma_2 = 1$ over $W$, but does not know that $\Gamma_1 = 1$ over $W$. In particular, physical knowledge by a device that $\Gamma_1 \wedge \Gamma_2 = 1$ over $W$ provides no guarantees that the device weakly infers $\Gamma_1$; loosely speaking, the ID may not be able to correctly answer questions concerning the value of $\Gamma_1(u)$ for $u \not \in W$, even though it can answer questions concerning the value of $(\Gamma_1 \wedge \Gamma_2)(u)$ for such $u$.{\footnote{As an example, suppose that some of the subsets $\bar{x}$ that are images of $\xi$ extend beyond $W$, and in particular include points $u$ at which $(\Gamma_1 \wedge \Gamma_2)(u) = -1$ while $\Gamma_1(u) = 1$. $Y(u)$ must equal $-1$ for such a $u$, since the device uses $\xi$ to weakly infer $\Gamma_1 \wedge \Gamma_2$.
But this means that the device cannot use $\xi$ to weakly infer $\Gamma_1$, and so cannot use $\xi$ to know that $\Gamma_1 = 1$ over $W$.}$^,${\footnote{This should not be surprising; if logical omniscience held, then knowledge that $\Gamma_1 \wedge \Gamma_2 = 1$ over $W$ \emph{would} imply knowledge that $\Gamma_1 = 1$ over $W$.}} Nonetheless, it is worth noting that the definition of physical knowledge could be weakened to agree with this aspect of the colloquial meaning of ``knowledge''. One way to do that would be to drop the requirement that the ID infer $\Gamma$ in full, including for $u \not \in W$. Under this modified definition of what it means for the ID to know that $\Gamma = \gamma$ over $W$, we would still require that for all $u \in \xi(\gamma)$, if $Y(u) = 1$, then $\Gamma(u) = \gamma$ (whether or not $u \in W$). So no matter what $u$ is, we would require that if the device is answering the question, ``does $\Gamma(u) = \gamma$?'' and it answers `yes', then it is correct. However for all $\gamma' \ne \gamma$, we only require that for all $u \in W \cap \xi(\gamma')$, if $Y(u) = -1$, then $\delta_{\gamma'}(\Gamma(u)) = Y(u)$. Under this modification, we would allow there to be $u$ outside of $W$, and $\gamma' \ne \gamma$, where the device is answering the question, ``does $\Gamma(u) = \gamma'$?'' and incorrectly answers `no'. \subsection{Physical knowledge that you have physical knowledge} \label{sec:common_knoweldge} The final two rules of \emph{S5} are known as the \emph{positive introspection rule} and the \emph{negative introspection rule}. Intuitively, they stipulate that when an agent knows something, they know that they know it, and when they don't know something, they know that they don't know it (resp.). Perhaps the simplest formalization of these rules occurs in the event-based framework based on Aumann structures~\cite{fagin2004reasoning,zalta2003stanford,aumann1976agreeing,auma99,aubr95,futi91,bibr88}. As discussed in the introduction, in this framework events are defined as subsets of $U$. We say ``Alice knows event $E$'' if $A(u) \subseteq E$, where $A$ is Alice's knowledge operator. So the event ``Alice knows $E$'' is just the set of all $u$ such that Alice knows $E$ at $u$, i.e., the set of all $u \in U$ such that $A(u) \subseteq E$. (Note that even if $u \in E$, $A(u)$ may include points $u' \not \in E$ --- no elements of such a set $A(u)$ are contained in the event ``Alice knows $E$''.) It is immediate that if Alice knows event $E$, then Alice knows \{Alice knows $E$\}. This is the (event-based approach version of the) positive introspection rule. Physical knowledge is formulated in terms of functions and subsets $W$, not in terms of events, so we need to extend it to consider the introspection rules. We say that ``${\mathcal{D}}$ \textbf{(physically) knows} event $E \subseteq U$'' if ${\mathcal{D}}$ knows $\mathcal{X}_{_E} = 1$ over $E$ for some $\xi$ (where $\mathcal{X}_{_E}$ is the characteristic function of $E$, equaling $1$ on $E$ and $-1$ off it). Next, we must define what subset of $U$ is represented by ``the event that \{${\mathcal{D}}$ knows event $E$\}'', i.e., by ``the event that \{${\mathcal{D}}$ knows $\mathcal{X}_{_E} = 1$ over $E$ for some $\xi$\}''. We adopt the interpretation that this set is the union of all sets $\bar{x}$ that might arise in the image of some $\xi$ such that ${\mathcal{D}}$ knows $\mathcal{X}_{_E} = 1$ over $E$ for $\xi$.
We write this set as \begin{eqnarray} K({\mathcal{D}} \; knows \; E) \;\;\; &:=& \bigcup_{\xi \; : \; {\mathcal{D}} \; knows \; \mathcal{X}_{_E} = 1 \; over \; E \; for \; \xi} \xi(1) \cup \xi(-1) \nonumber \end{eqnarray} (Note that in general, $K({\mathcal{D}} \; knows \; E)$ can include points $u$ that lie outside of $E$.) This allows us to translate from the event-based framework to the physical knowledge framework: we say that ``${\mathcal{D}}$ obeys positive introspection'' if for every event $E$ that ${\mathcal{D}}$ knows, ${\mathcal{D}}$ also knows the event $K({\mathcal{D}} \; knows \; E)$. \begin{corollary} For every event $E$ that a device ${\mathcal{D}}$ knows, ${\mathcal{D}}$ also knows the event $K({\mathcal{D}} \; knows \; E)$. \end{corollary} \begin{proof} Plugging in, ``${\mathcal{D}}$ knows the event $K({\mathcal{D}} \; knows \; E)$'' will be established if we can show that \begin{eqnarray} {\mathcal{D}} \; knows \; \mathcal{X}_{_{K({\mathcal{D}} \; knows \; E)}} = 1 \; over \; K({\mathcal{D}} \; knows \; E) \nonumber \end{eqnarray} for some $\xi$. Now by hypothesis ${\mathcal{D}}$ knows event $E$. So there is at least one function $\xi$ such that ${\mathcal{D}}$ knows $\mathcal{X}_{_E} = 1$ over $E$ for $\xi$. By Def.~\ref{def:knows}(i), this means that ${\mathcal{D}}$ weakly infers $\mathcal{X}_E$ using $\xi$. Therefore for both $\gamma \in {\mathbb{B}}$, $u \in \xi(\gamma) \Rightarrow \delta_\gamma(\mathcal{X}_E(u)) = Y(u)$. Moreover for both $\gamma \in {\mathbb{B}}$, $\mathcal{X}_E(u) = \mathcal{X}_{K({\mathcal{D}} \; knows \; E)}(u)$ for all $u \in \xi(\gamma)$. Therefore ${\mathcal{D}}$ weakly infers $\mathcal{X}_{_{K({\mathcal{D}} \; knows \; E)}}$ using that same function $\xi$. Next, note that $\xi(1) \cap K({\mathcal{D}} \; knows \; E)$ equals $\xi(1) \cap E$. Moreover, since ${\mathcal{D}}$ knows $\mathcal{X}_{_E} = 1$ over $E$ for $\xi$, by Def.~\ref{def:knows}(ii), $\varnothing \;\ne\; \xi(1) \cap E \;\subseteq\; Y^{-1}(1)$. So $\varnothing \;\ne\; \xi(1) \cap K({\mathcal{D}} \; knows \; E) \;\subseteq\; Y^{-1}(1)$. Therefore the condition in Def.~\ref{def:knows}(ii) holds for knowledge that ${\mathcal{X}}_{K({\mathcal{D}} \; knows \; E)} = 1$ over $K({\mathcal{D}} \; knows \; E)$ by using $\xi$. Similarly, $\varnothing \;\ne\; \xi(-1) \cap K({\mathcal{D}} \; knows \; E) \;\subseteq\; Y^{-1}(-1)$. So all three criteria in Def.~\ref{def:knows} are met for physical knowledge that $\mathcal{X}_{_{K({\mathcal{D}} \; knows \; E)}} = 1$ over $K({\mathcal{D}} \; knows \; E)$ by using $\xi$. \end{proof} In this sense, the positive introspection rule of \emph{S5} holds for physical knowledge. The negative introspection rule cannot hold for the event-based framework. This is because the event ``Alice does not know $E$'' cannot contain any $u$ obeying $u \in A(u)$, and so Alice can never know that she does not know $E$. (As a result, investigations in the event-based approach focus on negative introspection of belief rather than negative introspection of knowledge.) Not surprisingly then, it is not clear how to formalize a physical knowledge version of the negative introspection rule, since that requires defining a function over $U$ that captures the case that the device does \emph{not} know that $u \in E$. (N.b., that is not the same as having the device know that $u \not \in E$.) \section{Future work} Much more work remains to complete our understanding of inference.
Perhaps most obviously, a lot remains to be investigated concerning the relationship between structures like inference complexity (the ID version of Kolmogorov complexity) and all the results in algorithmic information theory, from Chaitin's ``incompleteness theorem'' to the Halting theorem to computational complexity theory. There is also a lot of future work to be done concerning physical knowledge. To begin, it might be useful to extend the analysis of physical knowledge to include all the concepts introduced in the analysis of inference, e.g., strong inference, covariance accuracy, and inference complexity. Other future work on physical knowledge would be to develop Prop.~\ref{prop:first_impl}, Coroll.~\ref{corr:trivial} and Coroll.~\ref{prop:impl} into a complete axiomatization of physical knowledge, i.e., a set of axioms that are logically equivalent to the definition of physical knowledge. The goal would be to parallel the kind of axiomatization which has been done for Kripke structures (see Chap.~3 of \cite{fagin2004reasoning}). As a final example, it might be illuminating to construct and then investigate physical knowledge versions of common knowledge and of distributed knowledge, along with associated results in conventional epistemic logic, e.g., Aumann's famous proof that ``no two Bayesians can disagree''~\cite{aumann1976agreeing}. There are also many ways to extend the concept of physical knowledge to capture attributes of our physical world, like space and time. As an example, suppose we are given a \textbf{distance function} $D(\gamma, \gamma') : \Gamma(U) \times \Gamma(U) \rightarrow {\mathbb{R}}^+$. We want to use that $D(\cdot, \cdot)$ to define the distance between what a given ID ``says'' that $\Gamma(u)$ is, and what $\Gamma(u)$ really is. One way to do this builds on the physical knowledge formalism. For simplicity, assume $W$ refines $\Gamma$. We say that ${\mathcal{D}}$ \textbf{claims} $\gamma$ over $W$ if $\exists\; \xi : \Gamma(U) \rightarrow \bar{X}$ such that for all $u \in \xi(\gamma) \cap W$, $Y(u) = 1$, and for all $\gamma' \ne \gamma, u \in \xi(\gamma') \cap W$, $Y(u) = -1$. (Note that there may be more than one $\gamma$ that ${\mathcal{D}}$ claims over $W$.) We define the \textbf{error} of ${\mathcal{D}}$ over $W$ as the smallest $\epsilon \in {\mathbb{R}}$ such that $D(\Gamma(W), \gamma) = \epsilon$ for some $\gamma$ that ${\mathcal{D}}$ claims over $W$. By supposing a probability distribution over $U$ as well as a distance function, we can analyze concepts like the expected error of the claim of a device, the variance of what a device claims, etc. In particular, it may be possible to use an error function to extend the analysis that led to Prop.~\ref{prop:prop6}, to investigate the possible relationship between inference and Heisenberg's uncertainty principle. (As part of such an investigation, it may be helpful to focus on the specific case where $U$ is a Hilbert space.) Other future work is to investigate the use of inference devices in general, and physical knowledge in particular, as a formalization of ``semantic information'', a concept that has been extensively debated by people ranging from the founders of information theory and cybernetics~\cite{shannon1948mathematical} to philosophers~\cite{stanford_encyl_semantic_info,mac1969information} to people working in statistical physics~\cite{wolpert_banff_fqxi_semantic_info_2016,rovelli2016meaning,crutchfield1992semantics,crutchfield1992knowledge}.
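To make the ``claims'' and ``error'' constructions above concrete, here is a minimal Python sketch on a toy finite universe. Everything in it (the universe, the device $(X, Y)$, the function $\Gamma$, and the distance $D$) is invented for illustration, and $\xi(\gamma)$ is modeled as the set of $u$ with $X(u) = \xi(\gamma)$:
\begin{verbatim}
# Toy illustration of "claims" and "error" for an inference device (X, Y).
U = [0, 1, 2, 3, 4, 5]                  # toy universe of "histories"
W = [2, 3]                              # Gamma is constant over W
Gamma = {0: 10, 1: 10, 2: 20, 3: 20, 4: 30, 5: 30}
X = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'c', 5: 'c'}  # device setup
Y = {0: -1, 1: -1, 2: 1, 3: 1, 4: -1, 5: -1}          # device answer

def claims(gamma, xi):
    """Does D claim gamma over W, under xi: Gamma(U) -> X(U)?"""
    yes_ok = all(Y[u] == 1 for u in W if X[u] == xi[gamma])
    no_ok = all(Y[u] == -1 for g in xi if g != gamma
                for u in W if X[u] == xi[g])
    return yes_ok and no_ok

D_dist = lambda g1, g2: abs(g1 - g2)    # a distance on Gamma(U)
xi = {10: 'a', 20: 'b', 30: 'c'}        # one candidate question map
gamma_W = Gamma[W[0]]                   # Gamma's constant value on W

claimed = [g for g in sorted(set(Gamma.values())) if claims(g, xi)]
error = min(D_dist(gamma_W, g) for g in claimed)
print(claimed, error)                   # -> [20] 0
\end{verbatim}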
Finally, despite the relation between physical knowledge and epistemic logic, physical knowledge is designed only to capture properties of knowledge concerning physical reality. It is not designed to capture properties of knowledge concerning mathematical systems, e.g., predicate logic. However it may be worth investigating its application to such systems. For example, one could take each ``history'' in a reality to be a (perhaps infinite) string over some fixed alphabet. $U$ might then be defined as the set of all strings that are ``true'' under some encoding that translates a string into axioms and associated logical implications. Then an inference device would be a (perhaps fallible) theorem-proving algorithm, embodied within $U$ itself. The results presented above would then concern the relations among such theorem-proving algorithms. \section*{Acknowledgements} I would like to thank Philippe Binder, Artemy Kolchinsky, Brendan Tracey, Alexi Parizeau and especially Walt Read for many useful discussions. In particular, Prop.~\ref{prop:cov_lb}, Ex.~\ref{ex:cov_lb}, and Prop.~\ref{prop:inf_comp_entropy} are due to Walt. This work was supported by the Santa Fe Institute, Grant No. FQXi-RFP-1622 from the FQXi foundation, Grant No. FQXi-RFP3-1349 from the FQXi foundation, and the Silicon Valley Foundation.
\section{Introduction} Since neutrinos from a supernova were observed [1], neutrino astronomy has begun. Astronomical observations using neutrinos could supply new information that is not obtainable with light. However, neutrinos interact with matter so weakly that event rates are extremely small. High luminosity and an enhancement of the event rate are required to make a real observation possible. The most luminous source of neutrinos around the earth is the sun, and it is practical to use solar neutrinos for astronomical purposes. In particular, solar neutrinos of discrete energy from the $Be^7$ process can be a useful source for astronomical observations. Coherent scattering of neutrinos by neutral current interactions can enhance the event rate. We study the coherent effect of the moon on solar neutrinos in this paper. Coherent interaction of neutrinos with massive objects is a unique macroscopic quantum phenomenon which appears despite the extreme weakness of the interaction with matter. The flux of solar neutrinos is $10^{9} /{cm^2sec}$ for $Be^7$ neutrinos, and for charged current events, which occur incoherently, the rate is about one hundred per year in Super-Kamiokande. The mean free path of a 1 MeV neutrino in the earth is about $4 \times 10^{10}$ Km and is slightly smaller in the moon. The earth and the moon are quite transparent as far as the charged current interaction is concerned. The strength of the neutral current interaction is smaller than that of the charged current interaction, and the elastic scattering process $\nu_e+e^{-} \rightarrow e^{-}+\nu_e$ is similar to the neutral current interaction but has a larger strength. In these processes the neutrino and the other matter can stay in almost the same quantum states, and by the coherent sum of amplitudes the total amplitude is enhanced. This effect becomes important when a neutrino interacts with a massive object like a star. The moon is a massive body orbiting the earth and can be located between the earth and the sun, so the coherent interaction of solar neutrinos with the moon can take place. We study the coherent scattering of solar neutrinos by the moon in this paper. The MSW coherent matter effect, Ref.[2-3], is actually one of the important ingredients in understanding current neutrino oscillation experiments. Because the expected day-night effect has not been observed, the MSW effect would not be important for the earth or for the moon. The double-slit-like effect we study in this paper is a spatial interference effect of the lightest neutrino. This effect is sensitive to the absolute magnitude of the lightest neutrino mass. The solar neutrino has an energy in the MeV region and the corresponding de Broglie wavelength is $10^{-13}m$, whereas the radius of the moon is of order $10^6$ m. Because the wavelength is much smaller than the total size, this regime may be thought of as the geometrical optics regime, or the classical regime for the neutrino, where the wave character is normally washed out. However, the interaction strength of the neutrino with matter is extremely weak and the neutrino propagates almost freely even in matter, keeping its original phase. Consequently the propagation of the neutrino in the moon should be treated by the quantum mechanical wave equation even in the macroscopic regime. Interference is a key phenomenon of wave dynamics. We show that to generate interference for solar neutrinos, the wave packet character, which has been studied for neutrino oscillations in Ref.[4-8], is important.
Using reasonable parameter values, we estimate the magnitude of the effect from the overlap between an initial wave packet and a final wave packet. We show that an observation of the effect would be possible if the absolute value of the neutrino mass is $10^{-4} eV$ or less. Observation of the interference should verify quantum mechanics in the Gm region and give new information on the neutrino mass and on the interior of the moon. \\ ------------------------------------------------------\\ ishikawa@particle.sci.hokudai.ac.jp (K. Ishikawa) \newpage By replacing the quark pair operators or the electron pair operators in the four-Fermi interaction with the density of electrons and quarks, we obtain a potential term in the Dirac equation. We solve the equation and find that the effect of the potential arises in a momentum dependent phase factor. Although the effect disappears from the magnitude of a plane wave, the potential gives an interference effect for a wave packet, since the wave packet is a linear combination of plane waves. Interference among the plane waves is generated and an observable effect is obtained. It is predicted that the solar neutrino flux changes with time when the neutrinos are blocked by the moon during eclipses. {\bf potential} Neutrinos interact with quarks $q(x)$ or with the electron $e(x)$ in matter by the charged current interaction \begin{equation} {G_{F} \over \sqrt 2} \bar\nu(x)\gamma_{\mu}(1 -\gamma_5) e(x) \{\bar d(x) \gamma^{\mu}(1 -\gamma_5) u(x) +\bar e(x) \gamma^{\mu}(1 -\gamma_5) \nu(x) \}, \label{eq:chargei} \end{equation} and by the neutral current interaction, \begin{eqnarray} & &{G_{F} \over \sqrt 2} \bar\nu(x)(1 -\gamma_5)\gamma_{\mu} \nu(x) \{ g_V(u) \bar u(x) \gamma^{\mu}u(x) + g_V(d)\bar d(x) \gamma^{\mu}d(x) \\ & &+ g_V(e) \bar e(x) \gamma^{\mu}e(x) + \text{axial vectors} \}. \nonumber \\ \nonumber \label{eq:neutrali} \end{eqnarray} Throughout this paper we assume that the electron neutrino is the lightest and the mixing angle is negligible. The second term in the charged current interaction has the same final and initial states as the electron neutrino neutral current interaction, and by a Fierz transformation the interaction is written as \begin{equation} {G_{F} \over \sqrt 2} \bar\nu(x) (1 -\gamma_5)\gamma_{\mu} \nu(x)\{ \bar e(x) \gamma^{\mu}e(x) + \text{axial vector} \}. \end{equation} The coefficients $g_V(u),g_V(d),g_V(e)$ are given by the Weinberg angle $\theta_W$ as \begin{eqnarray} & &g_V(u)= {1 \over 2}-2 \times{2 \over 3}\times{\sin}^2{\theta_W} \\ & &g_V(d)= -{1 \over 2}-2 \times(-{1 \over 3})\times{\sin}^2{\theta_W} \\ & &g_V(e)= -{1 \over 2}+2 \times{\sin}^2{\theta_W} \label{eq:neu} \end{eqnarray} Using the current value $\sin^2 {\theta_W}=0.23108$ [9], we have \begin{equation} g_V(u)=0.19, g_V(d)= -0.35,g_V(e)=-0.034 \end{equation} The charged current interaction, Eq.(\ref{eq:chargei}), transforms the neutrino into an electron. Since the final state in the matter is different from the initial state, the process occurs incoherently. The mean free path of the neutrino in the earth from the incoherent scattering $\nu+n \rightarrow e+p$ is computed from the cross-section, $\sigma$, and the neutron density $\rho_n$.
By using the cross-section and the density, \begin{eqnarray} & &\sigma= {{G_F}^2 \over \pi} 2M_n E_{\nu} \\ & & \rho_n= 0.556{r \over 2} N_{Avo}/{cm}^3, \end{eqnarray} where $r$ is the specific gravity and $N_{Avo}$ is Avogadro's number, we have the mean free path, \begin{eqnarray} l&=&{1 \over \sigma \rho} \nonumber \\ &=&10^{11} Km \end{eqnarray} for $r=5$ and $E_{\nu}=1$ MeV. Hence the mean free path is much larger than the size of the earth, $10^5 Km$, or the size of the moon, $2\times 10^3$ Km. Neutrinos propagate in the earth or in the moon almost freely as far as the charged current interactions are concerned. The strengths of the neutral current interactions, Eq.(\ref{eq:neutrali}), are about the same as those of the charged current interactions. So the probability of an incoherent scattering event from the neutral current interaction is also about the same as for the charged current interaction and is very small. Coherent scattering is possible in the neutral current interactions, since the final state in the matter is the same as the initial state. In coherent scattering, the amplitudes from each atom are added coherently and the total amplitude can become much larger than the original amplitude. In this case, an enhancement is possible. We will see that the charge of the vector current can have a coherent contribution. To study the weak matrix elements, the expectation value of the current in the matter \begin{eqnarray} & &\langle Matter|\bar f(0)\gamma_{\mu}\Gamma f(0)|Matter\rangle\\ & &=\sum_i\bar f_i\gamma_{\mu}\Gamma f_i(0)\nonumber \end{eqnarray} is needed, where the summation over atoms is taken. The expectation values of the currents in a one-particle state with momentum ${\vec p}$ and spin ${\vec s}$ are \begin{eqnarray} & &\bar u(p,s) \gamma_0 u(p,s)={p_0 \over m},\\ & & \bar u(p,s) \gamma_i u(p,s)={p_i \over m},\nonumber \\ & &\bar u(p,s) \gamma_0 \gamma _5 u(p,s)= 0, \\ & & \bar u(p,s) \gamma_i \gamma_5 u(p,s)=s_i.\nonumber \end{eqnarray} Only the zeroth component of the vector charge is positive definite and nearly 1 in the forward scattering region of the neutrinos. The other components are not positive definite and are small. Hence after the summation over all atoms, only the vector charge takes a non-zero value, $ \sum _i \bar u(p,s) \gamma_0 u(p,s)=\rho $, and the others vanish. Consequently the coherent neutrino interaction with matter is obtained from the vector charge and is reduced to an effective two-body Hamiltonian, \begin{eqnarray} & &\bar\nu(x){1-\gamma_5 \over 2}\gamma_{\mu} \nu(x) V_0(x),\\ & &V_0(x)= \sqrt 2 G_F (\rho_e(x) -0.5 \rho_{neutron}(x)), \label{eq:potential} \end{eqnarray} where $\rho_e(x)$ is the density of electrons and $\rho_{neutron}(x)$ is the density of neutrons. {\bf plane wave} We solve a Dirac equation with the potential term that is produced by the interaction with matter. Since the potential is caused by the weak interaction, its magnitude is very small and is proportional to the Fermi coupling constant. The range of the potential we study, on the other hand, is very large. So it is interesting to see if the effect of the potential is observable. Let us solve a Dirac equation with a left-handed potential term, \begin{equation} i\hbar {\partial \over \partial t} \psi(x)=(i {\vec \alpha}\cdot{\vec p} +m\beta) \psi(x) +V(x)({1-\gamma_5 \over 2}) \psi(x), \end{equation} \begin{eqnarray} & &V(x)=V_0,r \le R \\ & & V(x)= 0,r \ge R, \nonumber \end{eqnarray} where $R$ is a large macroscopic value and $V_0$ is a small value. We will see that the product $V_0 R$ is of order 1.
We obtain a stationary solution of energy $E$ \begin{equation} \psi(x)=\exp{( E t / i\hbar)}\psi({\vec x}) \end{equation} with the boundary condition at $z \rightarrow -\infty$ \begin{equation} \phi({\vec x})=e^{i {\vec k}\cdot {\vec x} }u({\vec k}), \end{equation} where $u({\vec k})$ is a free Dirac spinor of momentum ${\vec k}$. $\psi({\vec x})$ satisfies an integral equation, \begin{equation} \psi({\vec x})=\phi({\vec x})+\int d{\vec x}'D({\vec x}-{\vec x}') V({\vec x}')({1-\gamma_5 \over 2}) \psi({\vec x}'), \end{equation} with the Green function of the Dirac operator, \begin{equation} D({\vec x}-{\vec x}') =\int {d{\vec p} \over (2\pi)^3} e^{i {\vec p} \cdot ({\vec x}-{\vec x}')} {{E+{\vec \alpha} \cdot {\vec p}+m\beta } \over {E^2- {\vec p}^2 -m^2+i\epsilon }}. \end{equation} Applying the Born approximation, we have the solution \begin{eqnarray} \psi({\vec x})&=&\phi({\vec x})+\psi({\vec x})^{(1)}+\psi({\vec x})^{(2)}+ \psi({\vec x})^{(3)}+\cdots, \\ \psi({\vec x})^{(l+1)}&=&\int d{\vec x}'D({\vec x}-{\vec x}') V({\vec x}')({1-\gamma_5 \over 2}) \psi({\vec x}')^{(l)},\nonumber \\ \psi({\vec x})^{(1)} &=&e^{i{\vec k}\cdot {\vec x}}\int {d{\vec p} \over (2\pi)^3}e^{i({\vec p}-{\vec k})\cdot {\vec x}}{1 \over {\vec k}^2-{\vec p}^2+i\epsilon }\times \\ & &({1-\gamma_5 \over 2}( 2E({\vec k})+{\vec \alpha}\cdot({\vec p}-{\vec k}))+\gamma_5\beta m)u({\vec k}) \tilde V({\vec k}-{\vec p})\nonumber \label{eq:persol} , \end{eqnarray} where \begin{eqnarray} \tilde V({\vec k}-{\vec p})&=&\int d{\vec x}'e^{i({\vec k}-{\vec p})\cdot{\vec x}'}V({\vec x}') \\ &=&4\pi V_0 R^3 v_0(q), \nonumber \\ v_0(q)&=&{1 \over q}({\sin q \over q^2}-{\cos q \over q} ),\\ q&=&R|({\vec p}-{\vec k})| \nonumber \end{eqnarray} The identity \begin{eqnarray} & &(E({\vec k})+{\vec \alpha} \cdot {\vec p}+m\beta){1-\gamma_5 \over 2} u({\vec k}) \nonumber \\ & &={1-\gamma_5 \over 2} 2E({\vec k})u({\vec k})+\gamma_5m\beta u({\vec k}) +{1-\gamma_5 \over 2} {\vec \alpha}\cdot{({\vec p}-{\vec k})} u({\vec k}) \end{eqnarray} was used in Eq.(\ref{eq:persol}). Finally we have \begin{eqnarray} \label{eq:solution} & &\psi({\vec x})^{(1)}=e^{i{\vec k}\cdot {\vec x}}\{ {1-\gamma_5 \over 2}2E({\vec k}) {V_0 R \over 2\pi^2}u({\vec k})F({\vec x},{\vec k}) \\ & &+\gamma_5\beta m {V_0 R \over 2\pi^2}u({\vec k})F({\vec x},{\vec k})+{1-\gamma_5 \over 2 }{V_0 \over 2\pi^2}{ \alpha}_i F_i({\vec x},{\vec k})u({\vec k})\}, \nonumber \\ \label{eq:function1} & &F({\vec x},{\vec k})=\int d{\vec q}{1 \over -2{\vec k}\cdot {\vec q}+i\epsilon}v_0(q)e^{i{\vec q}\cdot{\vec x} \over R}\\ \label{eq:function2} & &F_i({\vec x},{\vec k})=\int d{\vec q}q_i{1 \over -2{\vec k}\cdot {\vec q}+i\epsilon} v_0(q)e^{i{\vec q}\cdot{\vec x} \over R}. \end{eqnarray} Eqs. (\ref{eq:function1}) and (\ref{eq:function2}) become \begin{eqnarray} & &F({\vec x},{\vec k})= i{4 {\pi}^2 \over 2k}\sqrt{1-{\vec \xi}_T^2},\\ & &F_i({\vec x},{\vec k})={\partial \over \partial k_i} F({\vec x},{\vec k}) \end{eqnarray} in the region ${\vec k}\cdot{\vec x} > R$, $1>{\vec \xi}_T^2$, where ${\vec \xi}={\vec x}/R$, ${\vec \xi}_{T}={\vec \xi}-{\vec k}({\vec k}\cdot{\vec \xi})/{k^2}$. In the region where $R$ and ${E \over m}$ are large, the first term is dominant over the second and third terms in Eq.(\ref{eq:solution}), and we have \begin{equation} \psi({\vec x})^{(1)}= {1-\gamma_5 \over 2}2E({\vec k}) {V_0 R \over 2\pi^2}F({\vec x},{\vec k}) \phi({\vec x}). \end{equation} Thus the first order term changes with position over the range $R$ even though the wavelength is of microscopic size.
It should be noted, however, that this term is pure imaginary. This suggests that the correction terms modify only the phase factor of the wave function. We show in the following that this is in fact the case. The second order term is computed in a similar manner. The dominant term is given as \begin{equation} \psi({\vec x})^{(2)}= ({1-\gamma_5 \over 2}2E({\vec k}) {V_0 R \over 2\pi^2})^2 F^{(2)}({\vec x},{\vec k}) \phi({\vec x}), \end{equation} where the coefficient is given as \begin{equation} F^{(2)}({\vec x},{\vec k})=\int d{\vec q}_{1}d{\vec q}_{2} {1 \over -2{\vec k}\cdot ({\vec q}_{1}+{\vec q}_{2})+i\epsilon}v_0(q_1) {1 \over -2{\vec k}\cdot {\vec q}_{2}+i\epsilon}v_0(q_2)e^{i( {\vec q}_{1}+ {\vec q}_{2})\cdot{\vec x} \over R}. \end{equation} By writing the integral in a symmetric manner, we have \begin{eqnarray} & &F^{(2)}({\vec x},{\vec k}) \\ & &={1 \over 2!}\int d{\vec q}_{1}d{\vec q}_{2} {1 \over -2{\vec k}\cdot ({\vec q}_{1}+{\vec q}_{2})+i\epsilon} ({1 \over -2{\vec k}\cdot {\vec q}_{2}+i\epsilon}+ {1 \over -2{\vec k}\cdot {\vec q}_{1}+i\epsilon})\nonumber\\ & &\times v_0(q_1) v_0(q_2)e^{i( {\vec q}_{1}+ {\vec q}_{2})\cdot{\vec x} \over R}\nonumber \\ & &={1 \over 2!}\int d{\vec q}_{1}d{\vec q}_{2} {1 \over -2{\vec k}\cdot{\vec q}_{1}+i\epsilon} {1 \over -2{\vec k}\cdot {\vec q}_{2}+i\epsilon} v_0(q_1) v_0(q_2)e^{i( {\vec q}_{1}+ {\vec q}_{2})\cdot{\vec x} \over R}\nonumber \\ & &={1 \over 2!}{F({\vec x},{\vec k})}^2\nonumber. \end{eqnarray} Thus the second order term becomes \begin{equation} \psi({\vec x})^{(2)}={1 \over 2!} ({1-\gamma_5 \over 2}2E({\vec k}) {V_0 R \over 2\pi^2}F({\vec x},{\vec k}))^2 \phi({\vec x}) \end{equation} and the higher order terms are written in the same manner, \begin{equation} \psi({\vec x})^{(l)}={1 \over l!} ({1-\gamma_5 \over 2}2E({\vec k}) {V_0 R \over 2\pi^2}F({\vec x},{\vec k}))^l \phi({\vec x}). \end{equation} Adding all higher order terms, we have the wave function \begin{equation} \label{eq:fsolution} \psi({\vec x})= \exp{(i{\vec k}\cdot{\vec x} + i {1-\gamma_5 \over 2}2 V_0{1 \over k} \sqrt { {\vec k}^2(R^2-{\vec x}^2)+{({\vec k}\cdot{\vec x})^2}})}u({\vec k}) . \end{equation} Substituting the Fermi coupling constant, the radius, and the density of the moon, assuming specific gravity $r$, \begin{eqnarray} & &G_F=1.16 \times 10^{-5} (GeV)^{-2} \\ & & R= 1.74 \times 10^3 Km \\ & &\rho_e=\rho_{neutron}=0.556 {r \over 2} N_{Avo}/{cm}^3 \end{eqnarray} into Eq.(\ref{eq:potential}), the numerical constant in Eq.(\ref{eq:fsolution}) becomes, for $r=5$, \begin{equation} 2 V_0 R= 0.95. \end{equation} This is an interesting value for observing an interference effect. However, the correction is in the phase factor and disappears in $|\psi({\vec x})|^2$. So the neutrino flux is the same as for a free wave if the neutrino is a plane wave. It is impossible to observe the interference effect using any plane wave in the present situation.
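As a quick numerical cross-check, the dimensionless combination $2V_0R$ can be evaluated directly in natural units. The following Python sketch is ours (standard unit conversions, with the inputs quoted above), not code from the original analysis:
\begin{verbatim}
# Check of 2*V0*R for the moon in natural units (hbar = c = 1).
from math import sqrt

GF = 1.16e-5            # Fermi constant [GeV^-2]
hc = 1.973e-14          # hbar*c [GeV*cm]
N_avo = 6.022e23
r = 5.0                 # specific gravity assumed in the text

rho = 0.556 * (r / 2.0) * N_avo * hc**3   # density in GeV^3
V0 = sqrt(2.0) * GF * 0.5 * rho           # rho_e - 0.5*rho_n = 0.5*rho [GeV]
R = 1.74e8 / hc                           # 1.74e3 km in GeV^-1

print(2.0 * V0 * R)     # ~0.93, consistent with the 0.95 quoted above
\end{verbatim}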
{\bf wave packet} A realistic solar neutrino is not a plane wave but a linear combination of plane waves, i.e., a finite wave packet, Ref.[4-9,11-12]. The scattering amplitude is the overlap between the initial wave packet and the final wave packet. The former is determined by the beam and the latter by the detector. From Eq.(\ref{eq:fsolution}), the phase of the initial wave function passing through the moon depends on several parameters, such as momentum and position, and the wave packet is a certain linear combination of these waves. We will show that the overlap between these wave packets and the final wave packets exhibits an interference effect if the mass of the lightest neutrino is about $10^{-4} eV/c^2$ or less. A wave packet expands during propagation [12,13]. The speed of expansion is determined by the velocity variance and is dominant in the transverse direction for a relativistic particle. Using the initial size $\delta x$, the maximum velocity in the transverse direction is given as $ v_T={\delta P_T \over E} = {\hbar \over {\delta x} E}$. The size in the transverse direction is the product of the velocity and the propagation time \begin{equation} \Delta x_T=v_T \delta t. \end{equation} The size for a neutrino of $E=0.6MeV,{\delta x}=10^{-10}m$ becomes $3\times10^5 Km$ for $\delta t=500s$ and $10^3 Km$ for $\delta t=1s$. The former is the packet size after propagating from the sun to the earth, and the latter after propagating from the earth to the moon. The radius of the moon is $1.7 \times 10^3 Km$, which is about the same as the packet size at the moon of a neutrino that is observed at the earth with a microscopic size. So wave packet effects are relevant for solar neutrinos detected at the earth. The Gaussian wave packet of variance $\sigma \hbar$ at $t=0,{\vec x}={\vec X_0}$ expands at a much later time $t \gg t_0$ or at an earlier time $t \ll -t_0$ and behaves at a distant position ${\vec x}$ as \begin{eqnarray} & &\psi({\vec x},t)= N e^{-i{ {mc \over \hbar}\sqrt {(ct)^2-({\vec x}-{\vec X_0})^2} } -{1 \over 2 \sigma \hbar}({\vec P}_X-{\vec p}_0)^2},\\ & &{\vec P}_X=mc{1 \over \sqrt{(ct)^2-({\vec x}-{\vec X}_0)^2}} ({\vec x}-{\vec X}_0), \nonumber \end{eqnarray} where $N$ is a normalization factor and ${\vec p}_0$ is the central value of the momentum. The wave packet size is determined from the Gaussian term. The phase factor is written by the use of the momentum as \begin{equation} \phi={ (mc^2)^2 \over \hbar |{\vec P}_{X} c |} t. \end{equation} The phase factor $F({\vec k},{\vec x})$ is added for a neutrino which penetrates the moon. We study the overlap of two wave packets at a certain time around the moon. The initial wave packet, which is emitted at a time $T_1$ [14] in the sun and propagates toward the earth through the moon, and the final wave packet, which is detected at a time $T_2$ at the detector, are located around the moon at a time $t=T_2-\delta t$, where $\delta t$ is around one second. The function $F({\vec k},{\vec x})$ appears in one of the wave functions, is a smooth function of ${\vec x}$ in this region, and the computation is straightforward. The initial wave packet which is emitted from the sun at $({\vec X}_1,T_1)$ behaves at $({\vec x},t)$ as \begin{eqnarray} \label{eq:inwave} & &\psi_{in}({\vec x},t)= N e^{-i{ {mc \over \hbar}\sqrt {(c(t-T_1))^2-({\vec x}-{\vec X_1})^2} } -{1 \over 2 \sigma \hbar}({\vec P}_{X_1}-{\vec p^{in}}_0)^2},\\ & &{\vec P}_{X_1}=mc{1 \over \sqrt{(c(t-T_1))^2-({\vec x}-{\vec X}_1)^2}} ({\vec x}-{\vec X}_1). \nonumber \end{eqnarray} The final wave packet detected at $({\vec X_2},T_2)$ behaves at the same $({\vec x},t)$ as \begin{eqnarray} \label{eq:outwave} & &\psi_{out}({\vec x},t)= N e^{-i{ {mc \over \hbar}\sqrt {(c(T_2-t))^2-({\vec x}-{\vec X_2})^2} } -{1 \over 2 \sigma \hbar}({\vec P}_{X_2}-{\vec p}_0^{out})^2+F({\vec x})},\\ & &{\vec P}_{X_2}=mc{1 \over \sqrt{(c(T_2-t))^2-({\vec x}-{\vec X}_2)^2}} ({\vec x}-{\vec X}_2). \nonumber \end{eqnarray}
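The transverse spreading sizes quoted above are easy to check numerically; the following sketch (our own, using standard constants) reproduces them at the order-of-magnitude level:
\begin{verbatim}
# Transverse wave-packet spreading: v_T = hbar/(dx*E), size = v_T*dt.
hc = 1.973e-7           # hbar*c [eV*m]
c = 2.998e8             # speed of light [m/s]
E = 0.6e6               # neutrino energy [eV]
dx = 1e-10              # initial packet size [m]

vT_over_c = (hc / dx) / E               # transverse velocity / c

for dt in (500.0, 1.0):                 # sun->earth, earth->moon [s]
    print(dt, vT_over_c * c * dt / 1e3) # spread size in km
# -> ~5e5 km for dt = 500 s and ~1e3 km for dt = 1 s
\end{verbatim}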
We have chosen the situation where the wave front of the initial wave packet does not reach the moon but the wave front of the final wave packet does. The phases in Eqs.(\ref{eq:inwave}) and (\ref{eq:outwave}) depend on the absolute value of the neutrino mass and the momentum. For a neutrino of mass $10^{-4}eV/c^2$ and momentum $1MeV/c$, the phase factor $\phi$ is written as \begin{eqnarray} & &\phi=5 {|{\vec x}| \over x_0} \times 10^{-3}, \\ & & x_0=1000Km, \nonumber \end{eqnarray} and becomes negligibly small on the scale of the moon. So the phase factor of ${\psi_{in}({\vec x})}$ is a constant $\phi_0$ and the phase factor of ${\psi_{out}({\vec x})}$ is negligibly small on the same scale. The Gaussian factors are almost unity in the same region. Then the overlap of the two wave functions is given as \begin{equation} (\psi_{out}({\vec x},t), \psi_{in}({\vec x},t))=e^{i\phi_0} \int d {\vec r} N e^{iF( {\vec r}) -{1 \over 2 \sigma \hbar}({\vec P}_{X_1}-{\vec p^{in}}_0)^2-{1 \over 2 \sigma \hbar}({\vec P}_{X_2}-{\vec p^{out}}_0)^2}. \end{equation} Since the Gaussian terms are positive definite, the integral is reduced by the effect of the phase factor $F({\vec x})$. The reduction rate depends on parameters such as the wave packet sizes, the energy, and others. Using the moon radius for the size of the detecting wave packet around the moon, we find a neutrino flux of about $0.85$ of the normal flux value for ${\vec p}_0^{in}={\vec p}_0^{out}={\vec P}_X$ at a total solar eclipse. The reduction is almost the same for other momenta. We have studied the coherent scattering of solar neutrinos by the moon and found that the moon modifies the phase of the solar neutrino wave function. The result is surprising for two reasons. The first is the extreme weakness of the potential, and the second is the fact that the wave character is seen on an extremely large scale. These became possible because, despite the extreme weakness of the potential, the volume of the finite potential region is very large, and the product of the two values is of order one. The other feature is seen from the fact that the ratio between the distance and the de Broglie wavelength is around $10^{20}$. Normally in this regime the wave character is washed away due to the rapid oscillation of the phase, the geometrical optics regime is realized, and no interference would be produced. However, in the present case interference is in fact produced. This is because the time dependent phase and the space dependent phase of the relativistic waves cancel, and consequently the phase difference at large distance is not washed away and wave phenomena occur. As an implication, we suggest that a time dependent modulation of the solar neutrino flux occurs during an eclipse if a suitable detector is used. If a macroscopic wave phenomenon of the neutrino is verified, this can be used for testing quantum mechanics on the Gm (gigameter) scale. New information on the absolute value of the lightest neutrino mass and on the interior of the moon could also be supplied. {\bf Acknowledgements} This work was partially supported by the special Grant-in-Aid for Promotion of Education and Science in Hokkaido University provided by the Ministry of Education, Science, Sports and Culture, a Grant-in-Aid for Scientific Research on Priority Areas (Dynamics of Superstrings and Field Theories, Grant No. 13135201), and a Grant-in-Aid for Scientific Research on Priority Areas (Progress in Elementary Particle Physics of the 21st Century through Discoveries of Higgs Boson and Supersymmetry, Grant No.
16081201) provided by the Ministry of Education, Science, Sports and Culture, Japan. K. I. thanks G. Takeda for informing him of Ref. [15]. \textbf{References} \bigskip [1] K. Hirata et al., Phys. Rev. Lett. \textbf{58}, (1987)1490; Phys. Rev. \textbf{D38}, (1988)448. [2] L. Wolfenstein, Phys. Rev. \textbf{D17}, (1978)2369. [3] S. P. Mikheyev and A. Yu. Smirnov, Yad. Fiz. \textbf{42}, (1985)1441. [4] B. Kayser, Phys. Rev. \textbf{D24}, (1981)110; Nucl. Phys. \textbf{B19} (Proc. Suppl.), (1991)177. [5] C. Giunti, C. W. Kim, and U. W. Lee, Phys. Rev. \textbf{D44}, (1991)3635. [6] S. Nussinov, Phys. Lett. \textbf{B63}, (1976)201. [7] K. Kiers, S. Nussinov and N. Weiss, Phys. Rev. \textbf{D53}, (1996)537. [8] C. Y. Cardall, Phys. Rev. \textbf{D61}, (2000)073006. [9] M. Beuthe, Phys. Rev. \textbf{D66}, (2002)013003. [10] Particle Data Group, S. Eidelman et al., Physics Letters \textbf{B592}, (2004)1. [11] A. Asahara, K. Ishikawa, T. Shimomura, and T. Yabuki, Prog. Theor. Phys. \textbf{113}, (2005)385; T. Yabuki and K. Ishikawa, Prog. Theor. Phys. \textbf{108}, (2002)347. [12] K. Ishikawa and T. Shimomura, ``Generalized S-matrix in Mixed Representation'', Hokkaido University preprint hep-ph/0508303. [13] M. L. Goldberger and Kenneth M. Watson, ``Collision Theory'' (John Wiley \& Sons, Inc., New York, 1965). [14] The neutrino production time is neither known nor defined with a definite value because the K-electron capture rate of $Be^7$ is slow. The mean free time of the $Be^7$ core or $Li^7$ core may be large. Also, no observation is made of these objects and the time width becomes large. See [12] on this point. Consequently we simply take a linear combination of the different times. [15] Coherent interactions of neutrinos with matter were studied in a different context by P. Langacker, J. P. Leveille, and J. Sheiman, Phys. Rev. \textbf{D27}, (1983)1228. [16] After we had completed our paper we found the following paper, which discusses the eclipse effect in MSW solar neutrino oscillations: Solar Neutrinos and the Eclipse Effect, M. Narayan, G. Rajasekaran, R. Sinha, and C. P. Burgess, Phys. Rev. \textbf{D60}, (1999)073006. \end{document}
\section*{Introduction} For the study of chaotic dynamics and the dimension of attractors, the concept of the Lyapunov exponents \cite{Lyapunov-1892} was found useful and became widely used \cite{GrassbergerP-1983,EckmannR-1985,ConstantinFT-1985,AbarbanelBST-1993}. Such characteristics of chaotic behavior as the Lyapunov dimension \cite{KaplanY-1979} and the entropy rate \cite{Kolmogorov-1959,Sinai-1959,AdlerKA-1965} can be estimated via the Lyapunov exponents \cite{Millionschikov-1976,Pesin-1977,KaplanY-1979,Young-2013}. In this work an analytical approach to the study of the Lyapunov dimension, convergence, and entropy for a dynamical model of the Chua memristor circuit is demonstrated. \section{A dynamical model of the Chua memristor circuit} Consider one of the Chua memristor models \cite[eq.25]{CorintoF-2017} \begin{equation}\label{chua-memristor} \begin{aligned} & \dot x = \alpha(m_0-1)x+\alpha y-\alpha m_1x^3+\alpha x_0, \\ & \dot y = x-y+z, \ \dot z = \beta y-\gamma z \end{aligned} \end{equation} with real parameters $\alpha,\beta,m_0,m_1,\gamma$, $x_0$, and suppose that \( \alpha m_1 > 0. \) For a survey on memristor circuits see, e.g. \cite{Tetzlaff-2014-book}. System \eqref{chua-memristor} with $x_0=0$ describes the dynamics of the Chua oscillator with cubic nonlinearity \cite{Chua-1993,HuangPWF-1996}. If $\gamma=0$ and $x_0=0$, then system \eqref{chua-memristor} has equilibria $u^{eq}_0=(0,0,0)$ and $u^{eq}_\pm = (\pm \sqrt{\frac{m_0 - 1}{m_1}},0,\mp \sqrt{\frac{m_0 - 1}{m_1}})$ for $m_0>1$. Represent system \eqref{chua-memristor} as an autonomous differential equation of general form: \begin{equation}\label{sys:ode} \dot{u} = f({u}), \end{equation} where $u=(x,y,z) \in U=\mathbb{R}^3$ and the continuously differentiable vector-function $f: \mathbb{R}^3 \to \mathbb{R}^3$ is the right-hand side of system \eqref{chua-memristor}. Denote by $u(t,u_0)$ a solution of \eqref{sys:ode} such that $u(0,u_0)=u_0$, and consider the evolutionary operator $\varphi^t(u_0) = u(t,u_0)$. We assume the uniqueness and existence of solutions of \eqref{sys:ode} for $t \in [0,+\infty)$. Then system \eqref{sys:ode} generates a dynamical system $\{\varphi^t\}_{t\geq0}$. Let a nonempty set $K \subset U$ be invariant with respect to $\{\varphi^t\}_{t\geq0}$, i.e. $\varphi^t(K) = K$ for all $t \geq 0$. For example, as the set $K$ one can consider various types of attractors of system \eqref{sys:ode} (see, e.g. examples from \cite{HuangPWF-1996,BilottaP-2008}). Recently the \emph{classification of local attractors as being hidden or self-excited} was introduced in connection with the discovery of the first hidden attractor in the classical Chua model with the saturation nonlinearity \cite{KuznetsovLV-2010-IFAC,LeonovKV-2011-PLA,BraginVKL-2011,LeonovKV-2012-PhysD,KuznetsovKLV-2013,KiselevaKKKLYY-2017,StankevichKLC-2017}: an attractor is called a \emph{self-excited attractor} if its basin of attraction intersects any open neighborhood of an equilibrium; otherwise, it is called a \emph{hidden attractor} \cite{LeonovKV-2011-PLA,LeonovK-2013-IJBC,LeonovKM-2015-EPJST,Kuznetsov-2016}. For example, hidden attractors can be found in various memristive circuits; see, e.g. \cite{PhamVVJKH-2016,HuLLYZ-2017-HA,BaoBWCX-2017-HA,BaoBLWW-2016-HA,PhamVVHV-2016-HA,Vaidyanathan-2017-SCI-HA,VaidyanathanPV-2017-HA,PhamVVTT-2017-HA,PhamVVWH-2017-HA,RochaRK-2017-HA,SahaSRC-2015-HA,SemenovKASVA-2015,PhamVJWV-2014-HA,PhamVVLV-2015-HA,ChenYB-2015-HA-EL,ChenLYBXW-2015-HA}.
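To make the setup concrete, trajectories of system \eqref{chua-memristor} can be generated numerically, e.g. as in the following minimal Python sketch; the parameter values are illustrative placeholders chosen by us (not taken from \cite{CorintoF-2017}):
\begin{verbatim}
# Minimal integration sketch for the Chua memristor model (1).
# Parameter values are illustrative placeholders, not from the cited works.
from scipy.integrate import solve_ivp

alpha, beta, gamma = 9.0, 14.0, 0.1
m0, m1, x0 = 1.2, 0.05, 0.0      # note alpha*m1 > 0, as assumed above

def f(t, u):
    x, y, z = u
    return [alpha*(m0 - 1)*x + alpha*y - alpha*m1*x**3 + alpha*x0,
            x - y + z,
            beta*y - gamma*z]

sol = solve_ivp(f, (0.0, 100.0), [0.1, 0.0, 0.0], max_step=1e-2)
print(sol.y[:, -1])              # final state of the trajectory
\end{verbatim}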
Remark that hidden attractors are not connected with equilibria and, thus, are not related to the Shilnikov scenario of chaos \cite{AfraimovichGLShT-2014}. In the works \citep{BianchiKLYY-2015,BlagovKLYY-2015,LeonovKYY-2015-TCAS,KuznetsovLYY-2017-CNSNS} the difficulties of reliable simulation of phase-locked loop circuits in SPICE and MATLAB Simulink, caused by hidden attractors with narrow basins of attraction, are demonstrated. Consider the linearization of system \eqref{sys:ode} along the solution $u(t,u_0)=\varphi^t(u_0)$: \begin{equation} \label{sfl} \begin{aligned} & \dot v = J(\varphi^t(u_0))v, \quad J(u) = Df(u), \end{aligned} \end{equation} where $J(u)$ is the $3\!\times\!3$ Jacobian matrix \[ J(u) =\left( \begin{array}{ccc} \alpha(m_0-1)-3\alpha m_1x^2 & \alpha & 0 \\ 1 & -1 & 1 \\ 0 & \beta & -\gamma \\ \end{array} \right) \] and it can be represented as \( J=J(0)-3\alpha m_1x^2I_1 \) with \[ J_0= J(0) =\left( \begin{array}{ccc} \alpha(m_0-1)&\alpha & 0 \\ 1 & -1 & 1 \\ 0 & \beta & -\gamma \\ \end{array} \right), \ I_1 =\left( \begin{array}{ccc} 1&0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \end{array} \right). \] For any $u \in U$, let $\lambda_1(u) \ge\!\cdots\!\ge \lambda_n(u)$, where $\lambda_i(u) = \lambda_i\big(\tfrac{1}{2} (J(u) + J(u)^{*})\big)$, $i = 1,\dots,n$, be the ordered eigenvalues of the symmetrized Jacobian matrix \( \tfrac{1}{2} \left(J(u) + J(u)^{*} \right). \) \begin{lemma} \( \lambda_j(0) \geq \lambda_j(u), \quad j=1,2,3. \) \end{lemma} Then from Corollary~\ref{thm:dLKYeig} (see the Appendix) we get \begin{theorem} \( d_{\rm L}^{\rm KY}\big(\{\lambda_{i}(u)\}_{i=1}^3\big) \leq d_{\rm L}^{\rm KY}\big(\{\lambda_{i}(0)\}_{i=1}^3\big) \) \end{theorem} If $J(0)$ has simple real eigenvalues $\lambda_i(J(0))$, then $\lambda_i(J(0)) = \lambda_i(\tfrac{1}{2}(J(0)+J^*(0)))$ and we get the following result. \begin{corollary} Let $u^{eq}_0=(0,0,0)$ be one of the equilibria of system \eqref{chua-memristor} and let the matrix $J(0)$ have simple real eigenvalues. Then the exact Lyapunov dimension of any compact invariant set $K \ni u^{eq}_0$ is defined as \[ \dim_{\rm L}K = d_{\rm L}^{\rm KY}\big(\{\lambda_{i}(0)\}_{i=1}^3\big). \] \end{corollary} By Theorem~\ref{theorem:l1l2} (see the Appendix) we get \begin{theorem} If \( \lambda_{1}(0)+\lambda_{2}(0) <0, \) then any bounded solution of system \eqref{sys:ode} tends to the stationary set. \end{theorem} \newpage \section*{Appendix. Exact and finite-time Lyapunov dimension} Suppose that $\det J(u) \neq 0 \quad \forall u \in U$. Consider the fundamental matrix $D\varphi^t(u)$ of the linearized system \eqref{sfl} such that $D\varphi^0(u) = I$, where $I$ is the $3 \times 3$ identity matrix. Let $\sigma_i(t,u) = \sigma_i(D\varphi^t(u))$, $i = 1,2,3$, be the singular values of $D\varphi^t(u)$ (counted with their algebraic multiplicity), ordered so that $\sigma_1(t,u) \geq \sigma_2(t,u) \geq \sigma_3(t,u) > 0$ for any $u \in U$ and $t \geq 0$. Consider the set of \emph{finite-time Lyapunov exponents} at the point $u_0$, $\{\LEs_i(t,u_0)=\frac{1}{t}\ln\sigma_{i}(t,\!u_0)\}_{i=1}^3$, ordered decreasingly, for $t > 0$. Introduce the \emph{Kaplan-Yorke formula \cite{KaplanY-1979} with respect to the ordered set} $\lambda_1\geq \cdots \geq \lambda_n$:
\begin{equation}\label{lftKY} d^{\rm KY}(\{\lambda_i\}_{i=1}^3) = j+\frac{\sum_{i=1}^{j}\lambda_i}{|\lambda_{j+1}|}, \quad j = \max\{m : \sum_{i=1}^{m}\lambda_i \geq 0\}, \end{equation} where $d^{\rm KY}(\{\lambda_i\}_{i=1}^3)=0$ for $j=0$ and $d^{\rm KY}(\{\lambda_i\}_{i=1}^3)=3$ for $j=3$. Then the \emph{finite-time local Lyapunov dimension} \cite{Kuznetsov-2016-PLA,KuznetsovLMPS-2017} at a point $u_0$ can be defined as \[ \dim_{\rm L}(t,u_0) = d^{\rm KY}(\{\LEs_i(t,u_0)\}_{i=1}^3), \] and the \emph{finite-time Lyapunov dimension} of an invariant closed bounded set $K$ is as follows: \begin{equation}\label{DOmaptmax} \dim_{\rm L}(t, K) = \sup\limits_{u_0 \in K} \dim_{\rm L}(t,u_0). \end{equation} In this approach the use of the Kaplan-Yorke formula \eqref{lftKY} with the finite-time Lyapunov exponents is justified by the \emph{Douady--Oesterl\'{e} theorem} \cite{DouadyO-1980}, which implies that for any fixed $t > 0$ the Lyapunov dimension of the map $\varphi^t$ with respect to a closed bounded invariant set $K$, defined by \eqref{DOmaptmax}, is an upper estimate of the Hausdorff dimension of the set $K$: \( \dim_{\rm H}K \leq \dim_{\rm L}(t, K). \) For the estimation of the Hausdorff dimension of an invariant closed bounded set $K$ one can use the map $\varphi^t$ with any time $t$ (e.g. $t=0$ leads to the trivial estimate $\dim_{\rm H}K \leq 3$) and, thus, the best estimate is \( \dim_{\rm H}{K} \le \inf_{t\geq0}\dim_{\rm L} (t, K). \) The following property \begin{equation}\label{DOlim} \inf_{t\geq0}\sup\limits_{u \in K} \dim_{\rm L}(t,u) = \liminf_{t \to +\infty}\sup\limits_{u \in K} \dim_{\rm L}(t,u) \end{equation} allows one to introduce the \emph{Lyapunov dimension} \cite{Kuznetsov-2016-PLA} \begin{equation}\label{DOinf} \dim_{\rm L} K = \liminf_{t \to +\infty}\sup\limits_{u \in K} \dim_{\rm L}(t,u). \end{equation} If the maximum of the local Lyapunov dimensions on the global attractor, which involves all equilibria, is achieved at an equilibrium point $u^{cr}_{eq}$, i.e. $\dim_{\rm L} u^{cr}_{eq} = \max_{u_0 \in K} \dim_{\rm L} u_0$, then this allows one to get the \emph{exact Lyapunov dimension} (this term was suggested by Doering~et~al.~in~\cite{DoeringGHN-1987}). In general, a \emph{conjecture on the Lyapunov dimension of self-excited attractors} \cite{KuznetsovLMPS-2017} is that for a typical system the Lyapunov dimension of a self-excited attractor does not exceed the Lyapunov dimension of one of the unstable equilibria, the unstable manifold of which intersects the basin of attraction and visualizes the attractor. In contrast to the finite-time Lyapunov dimension \eqref{DOmaptmax}, the Lyapunov dimension \eqref{DOinf} is \emph{invariant under a smooth change of coordinates} \cite{KuznetsovAL-2016,Kuznetsov-2016-PLA}. This property and a proper choice of the smooth change of coordinates may significantly simplify the estimation of the Lyapunov dimension of a dynamical system. Consider an effective analytical approach proposed by Leonov \cite{Leonov-1991-Vest,LeonovB-1992,Kuznetsov-2016-PLA}. For any $t > 0$ and any $u_0 \in U$, let $\lambda_1(u_0, S) \ge \cdots \ge \lambda_n(u_0, S)$, $i = 1,\dots,n$, denote the ordered eigenvalues of the symmetrized Jacobian matrix \begin{equation} \label{SJS} \frac{1}{2} \left( S J(\varphi^t(u_0)) S^{-1} + (S J(\varphi^t(u_0)) S^{-1})^{*}\right). \end{equation}
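Before stating the main estimate, a direct code transcription of the Kaplan-Yorke formula \eqref{lftKY} may be helpful (a Python sketch; the sample exponents in the last line are made up for illustration):
\begin{verbatim}
# Kaplan-Yorke formula for an ordered list of exponents.
def kaplan_yorke(lams):
    """lams: exponents sorted in decreasing order."""
    n = len(lams)
    partial, j = 0.0, 0
    for m in range(1, n + 1):          # j = max{m : sum of first m >= 0}
        if partial + lams[m - 1] >= 0:
            partial += lams[m - 1]
            j = m
        else:
            break                      # sorted order: sums only decrease
    if j == 0:
        return 0.0
    if j == n:
        return float(n)
    return j + partial / abs(lams[j])  # j + (lam_1+...+lam_j)/|lam_{j+1}|

print(kaplan_yorke([0.5, 0.0, -1.0]))  # -> 2.5 (illustrative values)
\end{verbatim}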
\begin{theorem}\label{theorem:th1} If there exist an integer $j \in \{1,\ldots,n-1\}$, a real $s \in [0,1]$, a differentiable scalar function $V: U \subseteq \mathbb{R}^n \to \mathbb{R}^1$, and a nonsingular $n\times n$ matrix $S$ such that \begin{equation}\label{ineq:weilSVct} \sup_{u \in U} \big( \lambda_1 (u,S) + \cdots + \lambda_j (u,S) + s\lambda_{j+1}(u,S) + \dot{V}(u) \big) < 0, \end{equation} where $\dot{V} (u) = ({\rm grad}(V))^{*}f(u)$, then for a compact invariant set $K\subset U$ we have \[ \dim_{\rm H}K \leq \dim_{\rm L}(\{\varphi^t\}_{t\geq0},K) < j+s. \] \end{theorem} In the work \cite{Kuznetsov-2016-PLA} it is shown how the method can be justified by the invariance of the Lyapunov dimension of a compact invariant set with respect to the special smooth change of variables $h$ with $Dh(u)=e^{V(u)(j+s)^{-1}}S$, where $V$ is a differentiable scalar function and $S$ is a nonsingular $n \times n$ matrix. For $S=I$ and $V(u) \equiv 0$ we have \begin{corollary}[\cite{DouadyO-1980,Smith-1986,Leonov-1991-Vest,Kuznetsov-2016-PLA}] \label{thm:dLKYeig} \[ \dim_{\rm H}K\!\leq\!\dim_{\rm L}K \!\leq\! \sup_{u \in K}d_{\rm L}^{\rm KY}\!\big(\{\lambda_{i}(u)\}_{i=1}^3\big). \] \end{corollary} The following result~\cite{Leonov-2012-PMM} is useful for the study of global convergence. \begin{theorem}\label{theorem:l1l2} If there exist a continuously differentiable scalar function $V: U \subseteq \mathbb{R}^n \to \mathbb{R}^1$ and a nonsingular $n\times n$ matrix $S$ such that \begin{equation}\label{pmm36} \sup_{u \in U} \big( \lambda_1(u,S)+\lambda_2(u,S)+\dot V(u) \big) <0, \end{equation} then any bounded solution of system \eqref{sys:ode} with any initial data $u_0 \in U$ tends to the stationary set of the dynamical system $\{\varphi^t\}_{t\geq0}$ as $t\to+\infty$. \end{theorem} Remark that the stationary set can have any structure, e.g. be a line of equilibria. In \cite{BoichenkoL-1998,PogromskyM-2011} it is demonstrated how the above technique can be effectively used to derive a constructive upper bound on the sum of the positive Lyapunov exponents and the topological entropy \cite{AdlerKA-1965} (the topological entropy is an analogue of the entropy defined earlier by Kolmogorov and Sinai \cite{Kolmogorov-1959,Sinai-1959}). \section*{Acknowledgment} \vspace{-0.5cm} We would like to thank Leon Chua, Fernando Corinto, and Ronald Tetzlaff for drawing our attention to memristors and for fruitful discussions. \vspace{-0.5cm}
\section{Introduction}\label{sec:intro} \vspace{-2mm} Sound event detection (SED), as a fundamental task for recognizing acoustic events, has achieved significant progress in a variety of applications, such as unobtrusive health-care monitoring and surveillance. Recently, Deep Neural Network (DNN) based methods such as CRNN~\cite{mesaros2019sound} and Conformer~\cite{miyazaki2020conformer} have significantly improved event detection performance. However, these methods are usually designed for an offline setting in which the entire audio clip containing the sound events is fully observed. This assumption may not hold in many real-world applications that require real-time event detection. For example, event detection in audio surveillance~\cite{viet2013real} requires a low-latency reaction to potentially dangerous circumstances for life saving and protection. In this paper, we focus on the sound event early detection (SEED) problem, which is posed in an online setting that requires ongoing events to be recognized as early as possible. Despite the importance of the SEED problem, few existing works focus on detecting sound events with short delays from streaming audio. Some works design a monotonous detection function to achieve early detection, such as the random regression forests algorithm~\cite{phan2015early} and Dual-DNN~\cite{phan2018enabling}. Another work~\cite{mcloughlin2018early} proposes a detection front-end that identifies seed regions in spectrogram features in order to detect events at an early stage. However, the predictions of these methods are based on class probabilities, which can be unreliable (over-confident)~\cite{zhao2020uncertainty, sensoy2018evidential}. In particular, during the early stage of an ongoing event we have collected only a small number of streaming audio segments, which may not be enough to compose a clear event sound that supports a reliable prediction. Figure~\ref{fig:example} (a) shows an example in which a probability-based prediction is over-confident at the early stage. To address this issue, we propose a novel Polyphonic Evidential Neural Network (PENet) that estimates a Beta distribution instead of a bare class probability, so that we can attach an evidential uncertainty to each prediction. The attached evidential uncertainty makes it possible to detect ``over-confident'' predictions and to achieve reliable prediction. To further improve SEED performance, we propose a backtrack inference method that also considers forward information (i.e., waits for future information) about an ongoing event. \begin{figure}[!t] \centering \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{figs/fig1_a.png} \caption{Baseline} \label{fig1a} \end{subfigure} \begin{subfigure}[b]{0.23\textwidth} \centering \includegraphics[width=\linewidth]{figs/fig1_b.png} \caption{Ours} \label{fig1b} \end{subfigure} \vspace{-2mm} \small{ \caption{An example of sound event early detection. (a) The baseline (CRNN) model may give an over-confident prediction, resulting in a false positive detection; (b) to avoid over-confidence, our framework estimates vacuity uncertainty instead of entropy and makes a reliable prediction with low vacuity.} \label{fig:example} \vspace{-6mm} } \end{figure} \vspace{-2mm} \section{Methodology}\label{sec:method} \vspace{-2mm} In this section, we begin with the essential concepts of evidential uncertainty. We then introduce the proposed Polyphonic Evidential Neural Network together with its backtrack inference method.
\vspace{-2mm} \subsection{Subjective Logic and Evidential Uncertainty}\label{sec:SL} \textbf{Subjective Logic} (SL)~\cite{josang2016subjective} defines a subjective opinion by explicitly considering the dimension of uncertainty derived from vacuity (i.e., a lack of evidence). For a given binomial opinion about a proposition (e.g., an audio segment) ${\mathbf x}$, the opinion is expressed by two belief masses (i.e., belief $b$ and disbelief $d$) and one uncertainty mass (i.e., vacuity, $u$). Denote an opinion by $\omega= (b, d, u)$, where $b$ and $d$ can be thought of as positive (the event happens) vs. negative (the event does not happen) for a given audio segment. We have the property $b+d+u=1$ and $b, d, u \in [0, 1]$. An opinion, $\omega$, can always be projected onto a single probability distribution by removing the uncertainty mass. To this end, the expected belief probability $p$ is defined by $p = b+a\cdot u$, where $a$ refers to a base rate representing prior knowledge without commitment, such as neither agree nor disagree. A binomial opinion follows a Beta pdf (probability density function), denoted by $\text{Beta}(p | \alpha, \beta)$ in Eq.~\eqref{beta_pdf}, where $\alpha$ and $\beta$ represent the positive and negative evidence: \begin{eqnarray} \label{beta_pdf} \vspace{-1mm} \textbf{Beta}(p | \alpha, \beta) = \frac{1}{B(\alpha, \beta)}p^{\alpha-1}(1-p)^{\beta-1}, \end{eqnarray} where $B(\alpha, \beta) = \Gamma(\alpha)\Gamma(\beta)/ \Gamma(\alpha+\beta)$ and $\Gamma(\cdot)$ is the gamma function. In SL, the evidence $\alpha$ and $\beta$ is accumulated over time, and the opinion $\omega = (b, d, u)$ corresponding to $(\alpha, \beta)$ is obtained using the mapping rule in SL: \begin{eqnarray} \vspace{-2mm} b = \frac{\alpha-1}{\alpha +\beta }, \; d = \frac{\beta-1}{\alpha +\beta}, \; u = \frac{W}{\alpha +\beta}, \label{eq:beta-mapping-rule} \end{eqnarray} where $W$ is the amount of uncertainty evidence. In practice, we set $W=2$ for the binary case. \noindent {\bf Evidential Uncertainty.} The concept of evidential uncertainty has been discussed differently depending on the domain~\cite{josang2018uncertainty, zhao2020uncertainty,xu-etal-2021-boosting,shi2020multifaceted,hu2020multidimensional}. In this work, we adopt the concept of uncertainty based on SL in developing an uncertainty-based SEED framework whose input is a streaming audio signal. {\em Vacuity} refers to a lack of evidence, meaning that uncertainty is introduced because of no or insufficient information. High vacuity can occur at the early stage of an ongoing event due to the small number of collected audio segments, resulting in over-confident estimation. Table~\ref{tab:evidence} illustrates the difference between probability and evidence. For example, at the early stage of an ongoing event, suppose we have collected only 1 piece of positive evidence and 4 pieces of negative evidence. We can calculate the expected probability $p=[0.2, 0.8]$, which results in a high-confidence negative prediction. However, a prediction based on a small amount of evidence (i.e., with high vacuity) is not reliable. After collecting more evidence (e.g., $[\alpha, \beta]=[200, 4]$), we obtain a reliable prediction with low vacuity.
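As a sanity check, the mapping rule \eqref{eq:beta-mapping-rule} together with the projection $p = b + a\cdot u$ reproduces the entries of Table~\ref{tab:evidence} in a few lines of Python; the base rate $a=0.5$ is our assumption, consistent with the table values:
\begin{verbatim}
# Opinion (b, d, u) and expected probability p from evidence (alpha, beta),
# following the SL mapping rule with W = 2 and base rate a = 0.5 (assumed).
W, a = 2.0, 0.5

def opinion(alpha, beta):
    s = alpha + beta
    b, d, u = (alpha - 1) / s, (beta - 1) / s, W / s
    return b, d, u, b + a * u          # last entry: expected probability p

for alpha, beta in [(1, 4), (4, 4), (200, 4)]:
    b, d, u, p = opinion(alpha, beta)
    print(f"[{alpha},{beta}]  p={p:.2f}  vacuity={u:.3f}")
# [1,4]   p=0.20 vacuity=0.400  (high vacuity -> unreliable)
# [4,4]   p=0.50 vacuity=0.250
# [200,4] p=0.98 vacuity=0.010  (low vacuity -> reliable)
\end{verbatim}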
\begin{table}[ht!] \footnotesize \centering \vspace{-1mm} \caption{Difference between evidence and probability. A prediction with less evidence (high vacuity) is not reliable.} \vspace{-2mm} \begin{tabular}{l|c|c|c} \toprule \textbf{Evidence} & $[\alpha, \beta]=[1, 4]$ & $[\alpha, \beta]=[4, 4]$ & $[\alpha, \beta]=[200, 4]$ \\ \midrule \textbf{Probability} & $p=[0.2, 0.8]$ & $p=[0.5, 0.5]$ & $p=[0.98, 0.02]$ \\ \midrule \textbf{Uncertainty} &\textbf{High Vacuity} &\textbf{High Vacuity} & \textbf{Low} \\ \bottomrule \end{tabular} \label{tab:evidence} \vspace{-4mm} \end{table} \subsection{Polyphonic Evidential Neural Network} \vspace{-1mm} \begin{figure*}[!t] \centering \includegraphics[width=0.94\textwidth]{figs/SEED_framework.png} \caption{\textbf{PENet Overview.} Given the streaming audio segments (a), PENet is designed to estimate the Beta distribution (c), which can be transferred to a subjective opinion (d) with vacuity uncertainty.} \label{fig:framework} \end{figure*} Based on the intuition of evidential uncertainty in SEED, we propose a novel Polyphonic Evidential Neural Network (PENet) for reliable prediction. The overall framework is shown in Figure~\ref{fig:framework}. In the SEED setting, the audio signal is collected in a streaming manner. At each timestamp $t$, we collect an audio segment ${\mathbf x}^t$. The corresponding label of ${\mathbf x}^t$ is denoted as $y^t=[y^t_1, \ldots, y^t_K]$, where $y^t_k \in \{0, 1\}$. \noindent {\bf PENet.} For polyphonic sound event detection, most existing methods consider a binary classification for each class, such as a softmax output~\cite{Turpault2019_DCASE,hershey2021benefit}. As discussed in Section~\ref{sec:SL}, evidential uncertainty can be derived from binomial opinions or, equivalently, Beta distributions that model an event distribution for each class. Therefore, we design a Polyphonic Evidential Neural Network (PENet) $f$ to form binomial opinions through the class-level Beta distribution of a given audio segment ${\mathbf x}^t$. In addition, we consider a context of $m$ segments as sequential input. Then, the conditional probability $P(p^t_k|{\mathbf x}^{[t-m, t]};\theta)$ of class $k$ can be obtained by: \begin{eqnarray} \vspace{-2mm} P(p^t_k|{\mathbf x}^{[t-m, t]};\theta) &=&\textbf{Beta}(p^t_k|\alpha^t_k, \beta^t_k) \\ \alpha_k^t, \beta_k^t &=& f_k({\mathbf x}^{[t-m, t]};\theta) \label{eq:mlenn} \vspace{-2mm} \end{eqnarray} where ${\mathbf x}^{[t-m, t]}$ denotes a sequence of audio segments, i.e., $[{\mathbf x}^{t-m}, {\mathbf x}^{t-m+1},\ldots, {\mathbf x}^t]$, $f_k$ is the output of PENet for class $k$, and $\theta$ denotes the model parameters. The Beta probability function $\text{Beta}(p_k^t|\alpha_k^t, \beta_k^t)$ is defined by Eq.~\eqref{beta_pdf}. Note that PENet is similar to a classical polyphonic sound event detection model (e.g., CRNN~\cite{Turpault2019_DCASE}), except that we use an activation layer (e.g., ReLU) instead of the softmax layer (which only outputs class probabilities). This ensures that PENet outputs non-negative values, which are taken as the evidence for the predicted Beta distribution.
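A minimal sketch of such an evidential output head is given below. The backbone features, the layer sizes, and the $+1$ offset that keeps $\alpha_k^t, \beta_k^t \geq 1$ are our illustrative assumptions, not the exact architecture used in the experiments:
\begin{verbatim}
# Sketch of an evidential output head: non-negative evidence -> Beta params.
import torch
import torch.nn as nn

class EvidentialHead(nn.Module):
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 2 * num_classes)

    def forward(self, h):                      # h: (batch, feat_dim)
        ev = torch.relu(self.fc(h))            # non-negative evidence
        alpha, beta = ev.chunk(2, dim=-1)
        alpha, beta = alpha + 1.0, beta + 1.0  # keep Beta parameters >= 1
        vacuity = 2.0 / (alpha + beta)         # u = W/(alpha+beta), W = 2
        return alpha, beta, vacuity

head = EvidentialHead()
a, b, u = head(torch.randn(4, 128))            # toy backbone features
print(a.shape, u.min().item())
\end{verbatim}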
\noindent {\bf Training with Beta Loss.} We design and train the neural network to form its binomial opinions for the classification of a given audio segment as a Beta distribution. For the binary cross-entropy loss, we obtain the Beta loss by computing its Bayes risk for the class predictor, \begin{eqnarray} \vspace{-2mm} \mathcal{L}_{Beta} &=& \sum_{t=1}^T \sum_{k=1}^K \int \Big[\textbf{BCE}(y_k^t, p_k^t) \Big] \textbf{Beta}({p}_k^t;\alpha_k^t, \beta_k^t) d {p}_k^t \nonumber \\ &=& \sum_{t=1}^T \sum_{k=1}^K \Big[y_k^t\Big(\psi(\alpha_k^t+\beta_k^t) - \psi(\alpha_k^t)\Big) + \nonumber \\ &&(1-y_k^t)\Big(\psi(\alpha_k^t+\beta_k^t) - \psi(\beta_k^t)\Big) \Big], \label{eq:loss} \end{eqnarray} where $T$ is the number of segments decomposed from an audio clip, $K$ is the number of classes, $\textbf{BCE}(y_k^t, p_k^t)= -y_{k}^t \ln(p_{k}^t) - (1-y_k^t) \ln (1-p_{k}^t)$ is the binary cross-entropy loss, and $\psi(\cdot)$ is the \textit{digamma} function. The second equality follows from the expectation of the logarithm of a Beta-distributed variable.
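A direct transcription of Eq.~\eqref{eq:loss} into code might look as follows (a sketch; the tensor shapes and toy values are our assumptions):
\begin{verbatim}
# Beta loss: Bayes risk of binary cross-entropy under Beta(alpha, beta).
import torch

def beta_loss(y, alpha, beta):
    """y, alpha, beta: tensors of shape (T, K); y in {0, 1}."""
    s = torch.digamma(alpha + beta)
    loss = y * (s - torch.digamma(alpha)) \
         + (1 - y) * (s - torch.digamma(beta))
    return loss.sum()

y = torch.tensor([[1.0, 0.0]])
alpha = torch.tensor([[5.0, 1.5]])
beta = torch.tensor([[1.5, 5.0]])
print(beta_loss(y, alpha, beta))   # small loss: evidence agrees with labels
\end{verbatim}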
At the test stage, we collect an audio segment at each timestamp, which is transformed into a 2D time-frequency representation with a size of $(4 \times 128)$. \noindent {\bf Comparing Methods.} To evaluate the effectiveness of our proposed approach (PENet), we compare it with one state-of-the-art SEED method, Dual DNN~\cite{phan2018enabling}; two SED methods, CRNN~\cite{Turpault2019_DCASE} and Conformer~\cite{miyazaki2020conformer}; and three uncertainty methods: \textit{Entropy}, \textit{Epistemic} uncertainty~\cite{gal2016dropout} (which represents the uncertainty of the model parameters), and \textit{Aleatoric} uncertainty~\cite{depeweg2018decomposition} (which represents the uncertainty due to data noise). We use MC-drop~\cite{gal1506bayesian} to estimate epistemic and aleatoric uncertainties in the experiments. \noindent {\bf Evaluation Metrics.} Since the traditional offline sound event detection metrics cannot capture early detection performance, we use both the early detection F1 score and the detection delay to evaluate performance on the onsets of sound events. We define a prediction for event $k$ as a true positive only when the first prediction timestamp $d_p$ falls within an interval in which the event occurs, allowing an early-prediction tolerance $L$; otherwise, the prediction for this event is considered a false positive, \begin{eqnarray} \text{TP}_k =\begin{cases}1, & \text{if }y^{d_p}_k==1 \text{ and } d_p-d_t \ge L \\ 0 ,& \text{otherwise} \end{cases} \label{eq:tp} \end{eqnarray} where $d_t$ is the onset timestamp of the predicted event. The detection delay is measured only for true positive predictions and is defined as follows, \begin{eqnarray} \text{delay} =\begin{cases}d_p-d_t, & \text{if } d_p\ge d_t \\ 0 ,& \text{if } d_p< d_t \end{cases} \label{eq:delay} \end{eqnarray} \noindent {\bf Set up.} For all experiments, we use CRNN~\cite{turpault2020training} as the backbone, except for Conformer. We use the Adam optimizer for all methods and follow the same training setting as~\cite{turpault2020training}. For the uncertainty threshold, we set 0.5 for epistemic uncertainty and 0.9 for the other uncertainties (entropy, vacuity, aleatoric). \vspace{-2mm} \subsection{Results and Analysis}\label{sec:result} \noindent {\bf Early Detection Performance.} Table~\ref{tab:results1} shows that our evidential model with vacuity outperforms all baseline models in both detection delay and early detection F1 score for sound event early detection. The margin of vacuity-based detection is substantial, which confirms that low vacuity (a large amount of evidence) is key to maximizing early detection performance. In addition, we observe that the backtrack technique significantly improves the early detection F1 score, demonstrating that backtrack information is essential in SEED; however, it also increases the detection delay. Furthermore, the test inference time of our approach is around 5ms, less than the streaming segment duration (60ms), which indicates that our method satisfies the real-time requirement.
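The early-detection metrics of Eqs.~\eqref{eq:tp} and \eqref{eq:delay} can be computed per event as in the following sketch (a hypothetical helper added for illustration, not the official DESED scorer):
\begin{verbatim}
# Sketch of the true-positive test and the detection delay.
def early_detection_metrics(d_p, d_t, event_active_at, L):
    """d_p: first prediction timestamp; d_t: true onset timestamp;
    event_active_at: callable t -> bool, True while the event occurs;
    L: early-prediction tolerance."""
    is_tp = event_active_at(d_p) and (d_p - d_t >= L)
    delay = max(d_p - d_t, 0) if is_tp else None  # delay only for TPs
    return is_tp, delay
\end{verbatim}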
\begin{table}[ht!] \centering \caption{Early detection performance on the DESED dataset.} \begin{tabular}{l|c|c|c} \toprule \textbf{Model} & \textbf{Delay} $\downarrow$ & \textbf{F1} $\uparrow$ & \textbf{Time} \\ \midrule Conformer & 0.372 & 0.639 & 6.6ms \\ Dual DNN & 0.386 & 0.682 & 5.1ms \\ CRNN & 0.284 & 0.687 & 5.0ms \\ CRNN + entropy & 0.312 & 0.679 & 5.0ms \\ CRNN + epistemic & 0.278 & 0.647 & 27ms \\ CRNN + aleatoric & 0.281 & 0.643 & 27ms \\ \midrule PENet & \textbf{0.247} & 0.670 & 5ms \\ PENet + vacuity & 0.252 & 0.691 & 5ms \\ PENet + vacuity + backtrack & 0.310 & \textbf{0.725} & 5.2ms \\ \bottomrule \end{tabular} \label{tab:results1} \end{table} \noindent {\bf Uncertainty Analysis.} We explore the sensitivity of the vacuity threshold used in the evidential model. Figure~\ref{fig:vacuity} plots the detection delay and the early detection F1 score for varying vacuity threshold values. As the vacuity threshold increases, the detection delay decreases continuously, and the early detection F1 score reaches its maximum at a threshold of 0.9. \begin{figure}[!t] \centering \includegraphics[width=0.35\textwidth]{figs/delay_f1.png} \caption{Effect of vacuity threshold.} \label{fig:vacuity} \end{figure} \noindent {\bf Trade-off of Backtrack.} We analyze the sensitivity of our proposed backtrack method to the number of backtrack steps. Figure~\ref{fig:backtrack} shows a trade-off between detection delay and F1 score for varying numbers of steps. As the number of backtrack steps increases, the detection delay grows monotonically, while the detection F1 score increases until the number of steps reaches 6. The results demonstrate that backtrack information is critical to improving detection accuracy in SEED. \begin{figure}[!t] \centering \includegraphics[width=0.35\textwidth]{figs/Backtrack_delay_f1.png} \caption{Detection delay and F1 score for different numbers of backtrack steps.} \label{fig:backtrack} \vspace{-3mm} \end{figure} \vspace{-2mm} \section{Conclusion} In this paper, we propose a novel Polyphonic Evidential Neural Network that models the evidential uncertainty of the class probabilities with Beta distributions. The resulting evidential uncertainty enriches the uncertainty representation with evidence information, which plays a central role in reliable prediction, and the proposed backtrack inference method further improves event detection performance. The experimental results demonstrate that the proposed method outperforms competitive counterparts on the SEED task. \newpage \bibliographystyle{IEEEbib}
\section{Introduction} \label{intro} One of the most important recent discoveries in particle physics is the observation of neutrino oscillations in atmospheric\cite{1002.3471}, solar\cite{1109.0763}, reactor\cite{1009.4771} and accelerator \cite{hep-ex/0212007,1103.0340} neutrino experiments. Neutrino oscillations are a quantum-mechanical consequence of the neutrino mixing relation \begin{equation}\label{mixing} \nu_{lL}(x) = \sum^{3}_{i=1} U_{li} \, \nu_{iL}(x) \qquad (l=e,\mu,\tau) . \end{equation} Here $\nu_{i}(x)$ is the field of neutrinos with mass $m_{i}$, $U$ is the 3$\times$3 unitary PMNS\cite{Pontecorvo:1957cp,Pontecorvo:1958qd,Maki:1962mu} mixing matrix. The left-handed flavor field $\nu_{lL}(x)$ enters into the standard leptonic charged current \begin{equation}\label{lepCC} j^{CC}_{\alpha}(x) = 2 \sum_{l=e,\mu,\tau} \bar{\nu}_{lL}(x) \, \gamma_{\alpha} \, l_L(x) \end{equation} and determines the notion of a left-handed flavor neutrino $\nu_{l}$ which is produced in CC weak processes together with a lepton $l^{+}$. The flavor neutrino $\nu_{l}$ is described by the mixed state \begin{equation}\label{state} |\nu_{l}\rangle=\sum_{i=1}^{3} U^{*}_{li} \, |\nu_{i}\rangle , \end{equation} where $|\nu_{i}\rangle $ is the state of a neutrino with mass $m_{i}$ and a definite momentum. The probability of the transition $\nu_{l}\to \nu_{l'}$ in vacuum is given by the standard expression (see Ref.\cite{hep-ph/9812360}) \begin{equation}\label{probability} P(\nu_{l}\to\nu_{l'})= \left| \sum_{i} U_{l'i} \, e^{-iE_{i}t} \, U^{*}_{li} \right|^2 = \left|\delta_{l'l}+ \sum_{i\neq k}U_{l'i} \left(e^{-i\frac{\Delta{m}_{ki}^2L}{2E}}-1\right) U^{*}_{li} \right|^2 . \end{equation} Here $\Delta{m}_{ki}^2= m_{i}^2- m_{k}^2$, $L \simeq t$ is the distance between the neutrino detector and the neutrino source, and $E$ is the neutrino energy. In the standard parameterization, the $3\times 3$ PMNS mixing matrix is characterized by three mixing angles, $\vartheta_{12}$, $\vartheta_{23}$ and $\vartheta_{13}$, by a Dirac CP-violating phase $\delta$ and by two possible Majorana CP-violating phases $\lambda_{2}$ and $\lambda_{3}$: \begin{equation} U = \begin{pmatrix} c_{12} c_{13} & s_{12} c_{13} & s_{13} e^{-i\delta} \\ - s_{12} c_{23} - c_{12} s_{23} s_{13} e^{i\delta} & c_{12} c_{23} - s_{12} s_{23} s_{13} e^{i\delta} & s_{23} c_{13} \\ s_{12} s_{23} - c_{12} c_{23} s_{13} e^{i\delta} & - c_{12} s_{23} - s_{12} c_{23} s_{13} e^{i\delta} & c_{23} c_{13} \end{pmatrix} D(\lambda_{2},\lambda_{3}) \,, \label{MixMat} \end{equation} where $ c_{ab} \equiv \cos\vartheta_{ab} $ and $ s_{ab} \equiv \sin\vartheta_{ab} $. The diagonal matrix $ D(\lambda_{2},\lambda_{3}) = \mathrm{diag}(1,e^{i\lambda_{2}},e^{i\lambda_{3}}) $ is present only if massive neutrinos are Majorana particles. The Majorana phases have an effect in processes which are allowed only if massive neutrinos are Majorana particles and are characterized by a violation of the total lepton number, such as neutrinoless double-beta decay (see Section~\ref{effmass}). Since neutrino oscillations are flavor transitions without violation of the total lepton number, they do not depend on the Majorana phases\cite{Bilenky:1980cx,Doi:1980yb,Langacker:1986jv,1001.0760}. The neutrino oscillation probabilities depend only on the four mixing parameters $\vartheta_{12}$, $\vartheta_{23}$, $\vartheta_{13}$ and $\delta$, and on two independent mass-squared differences $\Delta{m}_{12}^2$ and $\Delta{m}_{23}^2$.
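As an illustration of Eq.~(\ref{probability}), the transition probability can be evaluated numerically; the following minimal sketch (added here for illustration, assuming NumPy and the usual conversion $m^2 L/(2E) \simeq 2.534 \, m^2[\mathrm{eV}^2] \, L[\mathrm{km}]/E[\mathrm{GeV}]$ in natural units) computes $P(\nu_l\to\nu_{l'})$ for a given PMNS matrix:
\begin{verbatim}
# Sketch of the vacuum oscillation probability; names are illustrative.
import numpy as np

def transition_probability(U, m2_eV2, L_km, E_GeV, l_init, l_final):
    """U: 3x3 complex PMNS matrix; m2_eV2: masses squared in eV^2.
    Only relative phases matter, so absolute m_i^2 can be used in
    place of mass-squared differences."""
    phases = np.exp(-2.534j * np.asarray(m2_eV2) * L_km / E_GeV)
    amplitude = np.sum(U[l_final, :] * phases * np.conj(U[l_init, :]))
    return float(np.abs(amplitude) ** 2)
\end{verbatim}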
From the analysis of the experimental data it follows that \begin{equation}\label{inequality} \Delta{m}_{12}^2\simeq \frac{1}{30} \, |\Delta{m}_{23}^2| . \end{equation} In the case of three-neutrino mixing assumed in Eqs.~(\ref{mixing}) and (\ref{state}), two neutrino mass spectra are possible: \begin{enumerate} \item Normal spectrum (NS) \begin{equation}\label{norspec} m_{1}< m_{2} < m_{3} ;\quad \Delta{m}^2_{12} \ll \Delta m^2_{23} . \end{equation} \item Inverted spectrum (IS) \begin{equation}\label{invspec} m_{3}< m_{1} < m_{2} ;\quad \Delta{m}^2_{12} \ll | \Delta{m}^2_{13}| . \end{equation} \end{enumerate} The existing experimental data do not allow one to establish which type of neutrino mass spectrum is realized in nature. Let us introduce the ``solar'' and ``atmospheric'' mass-squared differences $\Delta{m}^2_{s}$ and $\Delta{m}^2_{a}$, respectively. For both spectra we have $\Delta{m}^2_{12}=\Delta{m}^2_{s}$. For the normal (inverted) spectrum we have $\Delta{m}^2_{23}=\Delta{m}^2_{a}$ ($|\Delta{m}^2_{13}|=\Delta{m}^2_{a}$). From a three-neutrino analysis of the Super-Kamiokande data\cite{1002.3471}, the values of the neutrino oscillation parameters in the case of a normal (inverted) mass spectrum are, at 90\% C.L., \begin{eqnarray}\label{SKanalysis} & 1.9 \, (1.7)\cdot 10^{-3} \, \mathrm{eV}^2 \leq\Delta{m}_{a}^2 \leq 2.6 \, (2.7)\cdot 10^{-3} \, \mathrm{eV}^2, & \nonumber \\ & 0.407 \leq \sin^2\vartheta_{23} \leq 0.583,\quad \sin^2\vartheta_{13} < 0.04 \, (0.09) . & \end{eqnarray} The results of the Super-Kamiokande atmospheric neutrino experiment have been fully confirmed by the long-baseline accelerator neutrino experiments K2K\cite{hep-ex/0212007} and MINOS\cite{1103.0340}. From the two-neutrino analysis of the MINOS $\nu_{\mu}\to \nu_{\mu}$ data, for the parameters $\Delta{m}_{a}^2$ and $\sin^22\vartheta_{23}$ the following values were obtained: \begin{equation}\label{Minos1} \Delta{m}_{a}^2=(2.32 {}_{-0.08}^{+0.12})\cdot 10^{-3} \, \mathrm{eV}^2,\quad \sin^22\vartheta_{23}>0.90 . \end{equation} From the combined three-neutrino analysis of all solar neutrino data and the data of the reactor KamLAND experiment, it was found that\cite{1009.4771} \begin{equation}\label{solkam} \Delta{m}_{s}^2=(7.50 {}_{-0.20}^{+0.19})\cdot 10^{-5} \, \mathrm{eV}^2, \quad \tan^2\vartheta_{12}= 0.452{}^{+0.035}_{-0.033}, \quad \sin^2\vartheta_{13}=0.020\pm 0.016 . \end{equation} From a similar analysis performed by the SNO collaboration, it was obtained that\cite{1109.0763} \begin{equation}\label{solkam1} \Delta{m}_{s}^2=(7.41 {}_{-0.19}^{+0.21})\cdot 10^{-5} \, \mathrm{eV}^2,\quad \tan^2\vartheta_{12}= 0.446{}^{+0.030}_{-0.029},\quad \sin^2\vartheta_{13}=0.025 {}^{+0.018}_{-0.015} . \end{equation} The Daya Bay collaboration\cite{1203.1669} recently measured with high precision the mixing angle $\vartheta_{13}$: \begin{equation} \sin^2\vartheta_{13} = 0.024 \pm 0.004 . \label{s13db} \end{equation} This is $5.2\sigma$ evidence of a non-zero value of $\vartheta_{13}$, which confirms the previous measurements of T2K\cite{1106.2822}, MINOS\cite{1108.0015} and Double Chooz\cite{1112.6353}. It also confirms earlier indications of a non-zero value of $\vartheta_{13}$ found in the analysis of the data of solar and other neutrino experiments (see Eqs.~(\ref{solkam}) and (\ref{solkam1}) and Refs.\cite{0809.2936,0804.3345,0806.2649,0810.5443,1001.4524}). The Daya Bay measurement has important implications for theory\cite{1203.1672} and experiment (see Ref.\cite{1003.5800}).
It opens promising perspectives for the observation of CP violation in the lepton sector and matter effects in long-baseline experiments, which could allow one to determine the character of the neutrino mass spectrum. Several years ago an indication in favor of short-baseline $\bar\nu_{\mu}\to \bar\nu_{e}$ transitions was found in the LSND experiment \cite{hep-ex/0104049}. The LSND data can be explained by neutrino oscillations with $0.2 \, \mathrm{eV}^2<\Delta{m}^2< 2 \, \mathrm{eV}^2$ and $10^{-3}< \sin^2 2\vartheta<4\cdot 10^{-2}$. Recently an additional (2$\sigma$) indication in favor of short-baseline oscillations, compatible with the LSND result, was obtained in the MiniBooNE experiment\cite{1007.1150}. Moreover, the data obtained in old reactor short-baseline experiments can also be interpreted as indications in favor of oscillations\cite{1101.2755} by using a new calculation of the reactor neutrino fluxes\cite{1101.2663,1106.0687}. All these data (if confirmed) imply that the number of massive neutrinos is larger than three and that, in addition to the three flavor neutrinos $\nu_{e},\nu_{\mu},\nu_{\tau}$, mixed sterile neutrinos $\nu_{s_{1}},\ldots$ must exist. The problem of short-baseline neutrino oscillations and sterile neutrinos is a hot topic at the moment. Several new short-baseline reactor and accelerator experiments aim to check this possibility in the near future (see Ref.\cite{1012.4356}). The absolute values of neutrino masses are currently unknown. The Mainz\cite{hep-ex/0412056} and Troitsk\cite{1108.5034} experiments on the high-precision measurement of the end-point part of the $\beta$-spectrum of $^{3}H$ decay found the 95\% C.L. upper bounds \begin{equation}\label{MainzTr} m_{\beta} \leq 2.3 \, \mathrm{eV} \, (\mathrm{Mainz}), \qquad m_{\beta} \leq 2.1 \, \mathrm{eV} \, (\mathrm{Troitsk}), \end{equation} for the ``average'' neutrino mass (see Ref.\cite{Giunti-Kim-2007}) \begin{equation} m_{\beta}=\sqrt{\sum_{i}|U_{ei}|^2m^2_{i}} \,. \label{trimass} \end{equation} From neutrino oscillation and tritium $\beta$-decay data we conclude that \begin{enumerate} \renewcommand{\labelenumi}{(\theenumi)} \renewcommand{\theenumi}{\Alph{enumi}} \item Neutrino masses are different from zero. \item Neutrino masses are much smaller than the masses of charged leptons and quarks. \item Neutrino masses are not (or not only) of Standard Model (SM) Higgs origin. \end{enumerate} Several mechanisms of neutrino mass generation have been proposed. It is widely believed that the most plausible one is the seesaw mechanism\cite{Minkowski:1977sc,GellMann-Ramond-Slansky-SeeSaw-1979,Yanagida-SeeSaw-1979,Mohapatra:1980ia}. According to this mechanism, small neutrino masses are generated by new interactions beyond the SM which violate the total lepton number $L$ at a scale much larger than the electroweak scale $v=(\sqrt{2}G_{F})^{-1/2}\simeq 246$ GeV. If the seesaw mechanism is realized, {\em the neutrinos $\nu_{i}$ with definite masses are Majorana particles} and, consequently, the lepton number violating neutrinoless double-beta decay ($0\nu\beta\beta$-decay) of even-even nuclei, \begin{equation}\label{betabeta} N(A,Z) \to N(A,Z+2) +e^{-} +e^{-}, \end{equation} is allowed, where $N(A,Z)$ is a nucleus with nucleon number $A$ and proton number $Z$. The knowledge of the nature of neutrinos with definite masses (Majorana or Dirac?) is extremely important for the understanding of the origin of small neutrino masses.
Using large detector masses, high energy resolutions and low backgrounds, the experiments on the search for neutrinoless double-beta decay allow one to reach unparalleled sensitivities to extremely small effects due to the Majorana neutrino masses. In this brief review we consider this process (see also \cite{Bilenky:2002aw,Elliott:2002xe,Elliott:2004hr,0708.1033,1001.1946,1106.1334,GomezCadenas:2011it,Schwingenheuer:2012jt}). \section{Seesaw mechanism of neutrino mass generation} \label{seesaw} In this Section we briefly discuss the standard seesaw mechanism of neutrino mass generation\cite{Minkowski:1977sc,GellMann-Ramond-Slansky-SeeSaw-1979,Yanagida-SeeSaw-1979,Mohapatra:1980ia}. We consider a general approach based on the effective Lagrangian formalism\cite{Weinberg:1979sa}. Let us assume that the Standard Model is valid up to some scale $\Lambda$. If we include effects of physics beyond the SM, the total Lagrangian (in the SM region) has the form \begin{equation}\label{EffL} \mathcal{L}(\Lambda)= \mathcal{L}^{SM}+\sum_{n\geq 1} \frac{1}{\Lambda^{n}}\mathcal{O}_{4+n} . \end{equation} The second term is a nonrenormalizable part of the Lagrangian. It is built from SM fields and satisfies the requirement of $SU(2)\times U(1)$ invariance. The operator $\mathcal{O}_{4+n}$ has dimension $M^{4+n}$. In the expansion (\ref{EffL}) of the non-renormalizable part of the Lagrangian in powers of $1/\Lambda$, the most important term for neutrino physics is the first one, $\mathcal{L}^{\rm{eff}}_{I} = \mathcal{O}_{5} / \Lambda$, which contains an operator of dimension five. This term can be built from the lepton and Higgs doublets: \begin{equation}\label{effL1} \mathcal{L}^{\rm{eff}}_{I} = - \frac{1}{\Lambda} \sum_{l',l} \left[ \overline L_{l'L} \tilde{H} \right] Y_{l'l} \left[ \tilde{H}^{T} (L_{lL})^{c} \right] + \mathrm{h.c.} , \end{equation} for $l,l'=e,\mu,\tau$. Here \begin{equation}\label{heavyH1} L_{lL} = \left( \begin{array}{c} \nu_{lL} \\ l_L \end{array} \right) , \qquad H = \left( \begin{array}{c} H^{(+)} \\ H^{(0)} \end{array} \right) \end{equation} are the lepton and Higgs doublets, $\tilde{H}=i\tau_{2}H^{*}$ is the conjugated Higgs doublet, $(L_{lL})^{c} = C(\overline L_{lL})^{T}$ is the (right-handed) charge-conjugated lepton doublet and $Y_{l'l}=Y_{ll'}$ are dimensionless constants (presumably of order one). Here $C$ is the charge-conjugation matrix (which satisfies the relations $C\gamma^{T}_{\alpha}C^{-1}=-\gamma_{\alpha}$ and $C^{T}=-C$). The Lagrangian (\ref{effL1}) does not conserve the total lepton number $L$. Let us stress that this is the only Lagrangian term with a dimension-five operator which can be built with the SM fields. The electroweak symmetry is spontaneously broken by the vacuum expectation value of the Higgs field \begin{equation}\label{heavyH2} \tilde{H}_{0} = \frac{1}{\sqrt{2}} \left( \begin{array}{c} v \\ 0 \end{array} \right) .
\end{equation} From Eqs.~(\ref{effL1}) and (\ref{heavyH2}), we obtain the {\em left-handed Majorana neutrino mass term} \begin{equation}\label{Mjmass} \mathcal{L}^{\mathrm{M}}=- \frac{1}{2} \sum_{l',l} \overline \nu_{l'L} \, M^{L}_{l'l} \, (\nu_{lL})^{c} + \mathrm{h.c.}, \end{equation} where \begin{equation} M^{L}_{l'l} = \frac{v^2}{\Lambda} \, Y_{l'l} \,. \end{equation} After the diagonalization of the symmetric matrix $Y$ through the transformation \begin{equation} Y = U \, y \, U^{T}, \quad U^{\dag} U = 1, \quad y_{ik} = y_{i} \delta_{ik} , \end{equation} we obtain \begin{equation}\label{Mjmass1} \mathcal{L}^{\mathrm{M}} = - \frac{1}{2} \sum_{i}m_{i} \bar\nu_{i} \nu_{i}. \end{equation} Here \begin{equation}\label{Mjmass2} m_{i}=\frac{v^2}{\Lambda} \, y_{i} , \end{equation} and \begin{equation}\label{Mjmass3} \nu_{i}=\sum_{l}U_{il}^{\dag}\nu_{lL}+\sum_{l}(U_{il}^{\dag}\nu_{lL})^{c}. \end{equation} From Eq.~(\ref{Mjmass3}) it follows that the field $\nu_{i}$ satisfies the Majorana condition \begin{equation}\label{Mjmass4} \nu_{i}=\nu_{i}^{c}=C\bar\nu_{i}^{T}. \end{equation} Thus, {\em $\nu_{i}$ is the field of the Majorana neutrino with mass $m_{i}$} given by Eq.~(\ref{Mjmass2}). From Eq.~(\ref{Mjmass3}), one can see that the flavor field $\nu_{lL}$ is connected to $\nu_{iL}$ by the standard mixing relation \begin{equation}\label{Majmass1} \nu_{lL} = \sum_{i} U_{li} \, \nu_{iL}, \end{equation} where $U$ is the unitary PMNS mixing matrix given in Eq.~(\ref{MixMat}) in the standard parameterization, including the diagonal matrix of Majorana phases. The values of the neutrino masses are determined by the seesaw factor $v^2/\Lambda$. Assuming that $m_{3}\simeq 5\cdot 10^{-2}$ eV (which is the largest neutrino mass in the case of a neutrino mass hierarchy $m_{1}\ll m_{2}\ll m_{3}$), we have $\Lambda \simeq 10^{15}$ GeV. Thus, the standard seesaw mechanism of neutrino mass generation explains the smallness of neutrino masses by a violation of the total lepton number $L$ in interactions due to physics beyond the SM at a very large (GUT) scale. The local effective Lagrangian (\ref{effL1}) can be obtained by considering the possible existence of heavy Majorana leptons $N_{i}$ with masses $M_{i} \gg v$, which are singlets of the $SU(2)_{L} \times U(1)_{Y}$ gauge group of the SM. These heavy Majorana leptons can have the lepton number-violating Yukawa interaction with the standard lepton and Higgs doublets \begin{equation}\label{heavyH} \mathcal{L}^{Y}_{I} = - \sqrt{2} \sum_{i,l} Y_{li}\overline L_{lL} N_{iR} \tilde{H} + \mathrm{h.c.} \end{equation} At electroweak energies, the interaction (\ref{heavyH}) generates the effective Lagrangian (\ref{effL1}) at second order of perturbation theory. We have \begin{equation}\label{HeavyN} \sum_{i} Y_{l'i}\frac{1}{M_{i}}Y_{li} = \frac{1}{\Lambda}Y_{l'l}. \end{equation} From this relation it follows that the masses $M_{i}$ determine the scale of new physics. The seesaw mechanism based on the Lagrangian (\ref{heavyH}) is called ``type I seesaw''. There are two other well-studied\cite{hep-ph/9805219} ways to generate the effective Lagrangian $\mathcal{L}^{\rm{eff}}_{I}$ and, consequently, the left-handed Majorana mass term (\ref{Mjmass}): through the interaction of the lepton and Higgs doublets with a heavy triplet scalar boson (type II seesaw) or with a heavy Majorana triplet fermion (type III seesaw).
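As a quick numerical check of the seesaw relation (\ref{Mjmass2}) (an illustration added here, taking $y_{i}$ of order one, as discussed above):
\begin{verbatim}
# Quick check of the seesaw scale Lambda ~ v^2 / m_3 (y ~ 1).
v_GeV = 246.0            # electroweak scale v = (sqrt(2) G_F)^(-1/2)
m3_GeV = 5e-2 * 1e-9     # m_3 ~ 5e-2 eV expressed in GeV
print(f"Lambda ~ {v_GeV**2 / m3_GeV:.1e} GeV")   # ~1.2e+15 GeV
\end{verbatim}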
Summarizing, if small neutrino masses are generated by the standard seesaw mechanism, we have the following consequences: \begin{enumerate} \item Neutrinos with definite masses are truly neutral Majorana particles. \item Neutrino masses are given by the seesaw relation (\ref{Mjmass2}). Hence, the neutrino masses are suppressed with respect to the masses of charged leptons and quarks, which are proportional to $v$, by the small ratio $v/\Lambda$. \item The Majorana neutrino mass term is the only implication at the electroweak scale of the possible existence of heavy Majorana particles. \item $CP$-violating decays of heavy Majorana particles in the early Universe could be the origin of the baryon asymmetry of the Universe (see Ref.\cite{0802.2962}). \end{enumerate} \section{On the theory of $0\nu\beta\beta$-decay} \label{theory} In this Section we present a brief derivation of the matrix element of the neutrinoless double-beta decay process in Eq.~(\ref{betabeta}), assuming that this process is induced by Majorana neutrino masses and mixing (see Refs.\cite{Doi:1985dx,Bilenky:1987ty,1001.1946}). The standard effective Hamiltonian of the process has the form \begin{equation}\label{effHam} {\mathcal{H}}_{I}(x)= \frac{G_F}{\sqrt{2}} \, 2 \, \bar{e}_{L}(x) \gamma_{\alpha} \nu_{eL}(x) \, j^{\alpha}(x) + \mathrm{h.c.} \end{equation} Here $G_F$ is the Fermi constant and $j^{\alpha}(x)$ is the hadronic charged current which does not change strangeness. In terms of the quark fields, the current $j^{\alpha}(x)$ has the form \begin{equation}\label{effHam1} j^{\alpha}(x)=2\cos\vartheta_{C}\bar u_{L}(x)\gamma^{\alpha} d_{L}(x). \end{equation} The mixed flavor field $\nu_{eL}(x)$ is given by the relation (\ref{Majmass1}) with $l=e$: \begin{equation}\label{emixfield} \nu_{eL}(x)=\sum_{i}U_{ei} \, \nu_{iL}(x), \end{equation} where $U$ is the PMNS mixing matrix and $\nu_{i}(x)$ is the field of the Majorana neutrino with mass $m_{i}$, which satisfies the Majorana condition (\ref{Mjmass4}). The process (\ref{betabeta}) is of second order in $G_{F}$, with the exchange of virtual neutrinos. The matrix element of the process is given by \begin{eqnarray}\label{Smatelem} && \langle f|S^2|i\rangle=-4 \left(\frac{G_F}{\sqrt{2}}\right)^2N_{p_1}N_{p_2} \int d^{4}x_{1}d^{4}x_{2} \sum_{i} \bar u_{L}(p_1) e^{ip_{1}x_{1}} \gamma_{\alpha} U_{ei} \nonumber\\ && \times \langle 0|T(\nu_{iL}(x_{1}) \nu^{T}_{iL}(x_{2}))|0\rangle \gamma^{T}_{\beta} U_{ei} \bar u^{T}_{L}(p_2) e^{ip_{2}x_{2}} \langle N_{f}|T(J^{\alpha}(x_{1})J^{\beta}(x_{2}))|N_{i} \rangle. \end{eqnarray} Here $p_{1}$ and $p_{2}$ are electron four-momenta, $J^{\alpha}(x)$ is the hadronic charged current in the Heisenberg representation\footnote{In Eq.~(\ref{Smatelem}) strong interactions are taken into account.}, $N_{i}$ and $N_{f}$ are the states of the initial and final nuclei with respective four-momenta $P_{i}=(E_{i}, \vec{p}_{i})$ and $P_{f}=(E_{f}, \vec{p}_{f})$, and $N_{p}^{-1}=(2\pi)^{3/2}\sqrt{2p^{0}}$ is the standard normalization factor. \begin{figure}[t!] \begin{center} \includegraphics*[width=0.25\linewidth]{fig-01.eps} \end{center} \caption{\label{feybb1} Feynman diagram of the elementary particle transition which induces $0\nu\beta\beta$-decay. } \end{figure} Taking into account the Majorana condition (\ref{Mjmass4}), for the neutrino propagator we find the expression\footnote{The neutrino propagator is proportional to $m_{i}$. This is connected to the fact that only left-handed neutrino fields enter into the Hamiltonian of weak interactions.
Thus, in the case of massless neutrinos the matrix element of neutrinoless double $\beta$-decay is equal to zero. This is a consequence of the general theorem on the equivalence of the theories with massless Majorana and Dirac neutrinos\cite{Ryan-Okubo-NCS-2-234-1964,Case:1957}.} \begin{equation}\label{nupropag} \langle 0|T(\nu_{iL}(x_{1}) \bar\nu_{iL}(x_{2}))|0\rangle=-\frac{i}{(2\pi)^{4}} \int d^{4}q \, e^{-iq(x_{1}-x_{2})}\frac{m_{i}}{q^2-m^2_{i}} \, \frac{1-\gamma_{5}}{2} \, C. \end{equation} Performing the integration over $x^{0}_{1}$, $x^{0}_{2}$ and $q^{0}$ in Eqs.~(\ref{Smatelem}) and (\ref{nupropag}), the matrix element of the process takes the form \begin{eqnarray}\label{Smatelem2} &&\langle f|S^2|i \rangle=2i \left(\frac{G_F}{\sqrt{2}}\right)^2N_{p_1}N_{p_2} 2\pi\delta(E_{f}+p^{0}_{1}+p^{0}_{2}-E_{i}) \bar u(p_1)\gamma_{\alpha}\gamma_{\beta}(1+\gamma_{5})C \bar u^{T}(p_2)\nonumber\\ && \times\int d^{3}x_{1} d^{3}x_{2} e^{-i\vec{p}_{1}\vec{x}_{1}-i\vec{p}_{2}\vec{x}_{2}} \sum_{j} U^2_{ej} m_{j} \int \frac{d^{3}q}{(2\pi)^{3}} \, \frac{e^{i\vec{q}(\vec{x}_{1}-\vec{x}_{2})}} { q_{j}^{0}}\times\nonumber\\ &&\left(\sum_{n} \frac{\langle N_{f}|J^{\alpha}(\vec{x}_{1})|N_{n}\rangle\langle N_{n}| J^{\beta}(\vec{x}_{2})|N_{i}\rangle }{E_{n}+p^{0}_{2}+q^{0}_{j}-E_{i}-i\epsilon} +\sum_{n}\frac{ \langle N_{f}|J^{\beta}(\vec{x}_{2})|N_{n}\rangle\langle N_{n}| J^{\alpha}(\vec{x}_{1})|N_{i}\rangle} {E_{n}+p^{0}_{1}+q^{0}_{j}-E_{i}-i\epsilon} \right). \nonumber\\ \end{eqnarray} where $ q^{0}_{j} = \sqrt{|\vec{q}|^2 + m_{j}^2} $ and $E_{n}$ are the energy levels of the intermediate nuclear state. This is an exact expression for the matrix element of $0\nu\beta\beta$-decay at second order of perturbation theory. In the following we consider the dominant $0^{+}\to 0^{+}$ transitions of even-even nuclei, for which the following standard approximations\cite{Doi:1985dx} apply: \begin{enumerate} \item Effective Majorana mass approximation. $0\nu\beta\beta$-decay is due to the exchange of virtual neutrinos (see the diagram in Fig.\ref{feybb1}). Taking into account that the average distance between nucleons in a nucleus is about $10^{-13}$ cm, the uncertainty relation implies that the average neutrino momentum is $q \simeq 100$ MeV. On the other hand, from tritium experiments we have the upper bounds in Eq.~(\ref{MainzTr}), which constrain all the masses $m_{j}$ to be smaller than about 2 eV. Therefore, the neutrino masses can be safely neglected in the denominators in Eq.~(\ref{Smatelem2}) and we have $q^{0}_{j} = \sqrt{|\vec{q}|^2 + m_{j}^2} \simeq q$, with $q = |\vec{q}|$. Thus, from Eq.~(\ref{Smatelem2}) it follows that in the matrix element of $0\nu\beta\beta$-decay {\em the neutrino properties and the nuclear properties are factorized and the neutrino masses and mixing enter into the matrix element in the form of the effective Majorana mass} \begin{equation}\label{effMj} m_{\beta\beta}=\sum_{i}U^2_{ei}m_{i}. \end{equation} \item Long-wavelength approximation. We have $|\vec{p}_{k}\vec{x}_{k}| \leq |\vec{p}_{k}| R$ ($k=1,2$), where $R\simeq 1.2 \, A^{1/3}\cdot10^{-13}$ cm is the radius of a nucleus with nucleon number $A$. Taking into account that $|\vec{p}_{k}| \lesssim 1 \, \mathrm{MeV}$, we have $|\vec{p}_{k}\vec{x}_{k}| \ll 1$. Thus, we have $e^{-i\vec{p}_{1}\vec{x}_{1}-i\vec{p}_{2} \vec{x}_{2}}\simeq 1$ (this approximation means that electrons are produced in $S$-states). \item Closure approximation.
The energy of the virtual neutrino, $q\simeq 100$ MeV, is much larger than the excitation energy $E_{n}-E_{i}$. Thus, the energy of the intermediate states $E_{n}$ can be approximated by an average energy $\overline{E}$. In this approximation, called ``closure approximation'', we have \begin{equation}\label{Smatelem3} \frac{\langle N_{f}|J^{\alpha}(\vec{x}_{1})|N_{n}\rangle\langle N_{n}| J^{\beta}(\vec{x}_{2})|N_{i}\rangle }{E_{n}+p^{0}_{k}+q^{0}_{j}-E_{i}-i\epsilon}\simeq \frac{\langle N_{f}|J^{\alpha}(\vec{x}_{1}) J^{\beta}(\vec{x}_{2})|N_{i}\rangle }{\overline{E}+p^{0}_{k}+q-E_{i}-i\epsilon}. \end{equation} \end{enumerate} Taking into account these approximations and considering commuting hadronic currents (see Eqs.~(\ref{current1}) and (\ref{current2}) below), for the matrix element of $0\nu\beta\beta$-decay we obtain the expression \begin{eqnarray}\label{Smat1} &&\langle f|S^{(2)}|i \rangle=8\pi i \left(\frac{G_F}{\sqrt{2}}\right)^2 m_{\beta\beta}N_{p_1}N_{p_2} \bar u(p_1)(1+\gamma_{5})C \bar u^{T}(p_2)\times \nonumber\\ &&\int d^{3}x_{1}d^{3}x_{2} \langle N_{f}|J^{\alpha}(\vec{x}_{1})K(|\vec{x}_{1}-\vec{x}_{2}|) J_{\alpha}(\vec{x}_{2})|N_{i}\rangle \delta(E_{f}+p^{0}_{1}+p^{0}_{2}-E_{i}).\nonumber\\ \end{eqnarray} Here \begin{equation}\label{Smat2} K(|\vec{x}_{1}-\vec{x}_{2}|)=\frac{1}{(2\pi)^{3}} \int d^{3}q \, \frac {e^{i\vec{q} (\vec{x}_{1}-\vec{x}_{2})}} {q \Big[ \overline{E} + q - \left(M_{i}+M_{f}\right)/2 \Big]} , \end{equation} where $M_{i}$ ($M_{f}$) is the mass of the initial (final) nucleus. In the calculation of the hadronic part of the matrix element of $0\nu\beta\beta$-decay, the following approximate expression for the effective charged current $J^{\alpha}(\vec{x})=(J^{0}(\vec{x}),\vec{ J}(\vec{x}))$ is used\cite{hep-ph/9905509}: \begin{equation}\label{current1} J^{0}(\vec{x}) = \sum^{A}_{n=1}\tau_{n}^{+}\delta(\vec{x}-\vec{r}_{n}) g_{V}(q^2) \end{equation} and \begin{equation}\label{current2} \vec{ J}(\vec{x}) = -\sum^{A}_{n=1}\tau_{n}^{+}\delta(\vec{x}-\vec{r}_{n}) \left[ g_{A}(q^2)\vec{\sigma}_{n}+g_{M}(q^2)i\frac{\vec{\sigma}_{n} \times \vec{q}}{2m_{p}}-g_{P}(q^2) \frac{\left(\vec{\sigma}_{n}\cdot\vec{q}\right) \vec{q}}{2m_{p}} \right]. \end{equation} Here, $\sigma_{n}^{i}$ and $\tau_{n}^{i}$ are Pauli matrices acting, respectively, on the spin and isospin doublets of the $n$-th nucleon, $\tau_{+}=(\tau_{1}+i \tau_{2})/2$, $\vec{r}_{n}$ is the coordinate of the $n$-th nucleon, $m_{p}$ is the proton mass, and $g_{V}(q^2), g_{A}(q^2), g_{M}(q^2)$ and $g_{P}(q^2)$ are the vector, axial, magnetic and pseudoscalar weak form factors of the nucleon. From the conserved vector current (CVC) and partially conserved axial current (PCAC) hypotheses, it follows that \begin{equation}\label{current3} g_{V}(q^2)=F^{p}_{1}(q^2)-F^{n}_{1}(q^2), \quad g_{M}(q^2)=F^{p}_{2}(q^2)-F^{n}_{2}(q^2), \quad g_{P}(q^2)=\frac{2m_{p}g_{A}}{q^2+m^2_{\pi}}, \end{equation} where $F^{p(n)}_{1}$ and $F^{p(n)}_{2}$ are the Dirac and Pauli electromagnetic form factors of the proton (neutron) and $g_{A}\simeq 1.27$ is the axial coupling constant of the nucleon. The expressions (\ref{current1}) and (\ref{current2}) can be obtained from the one-nucleon matrix element of the hadronic charged current. For the number density of nucleons in a nucleus, the following approximate expression is used: \begin{equation}\label{current4} \bar\Psi(\vec{x})\gamma^{0}\Psi(\vec{x})= \sum^{A}_{n=1}\delta(\vec{x}-\vec{r}_{n}).
\end{equation} The nuclear matrix element (NME) $M^{0\nu}$, which is the integrated product of two hadronic charged currents and a neutrino propagator, is a sum of a Fermi (F), a Gamow-Teller (GT) and a tensor (T) term: \begin{equation}\label{MatEl} {M}^{0\nu} = \langle 0^+_f|\sum_{k,l} \tau^+_k \tau^+_l \left[ \frac{H_F(r_{kl})}{g^2_A} + H_{GT}(r_{kl}) \vec\sigma_k\cdot \vec\sigma_l - H_T(r_{kl}) S_{kl} \right] |0^+_i\rangle . \end{equation} Here $S_{kl} = 3(\vec\sigma_k\cdot \vec{r}_{kl}) (\vec\sigma_l \cdot \vec{r}_{kl}) - \vec\sigma_k\cdot \vec\sigma_l$, with $\vec{r}_{kl}=\vec{r}_{k}-\vec{r}_{l}$, and the neutrino potentials $H_{F,GT,T}(r_{kl})$ are given by the expressions \begin{equation} H_{F,GT,T}(r_{kl}) = \frac{2}{\pi} R \int_0^\infty \frac{j_{0,0,2}(q r_{kl}) h_{F,GT,T}(q^2) q}{q + \overline{E} -(M_i+M_f)/2 } dq, \end{equation} where $R$ is the radius of the nucleus, and the functions $h_{F,GT,T}(q^2)$ are combinations of different form factors\footnote{The functions $h_{F,GT,T}(q^2)$ can be found in Ref.\cite{0710.2055}.}. Taking into account the Coulomb interaction of the electrons with the final nucleus, for the total width of $0\nu\beta\beta$-decay we find the general expression \begin{equation}\label{totrate} \Gamma^{0\nu} = \frac{1}{T^{0\nu}_{1/2}} = G^{0\nu}(Q,Z) \, |M^{0\nu}|^2 \, \frac{|m_{\beta\beta}|^2}{m_{e}^2} , \end{equation} where $G^{0\nu}(Q,Z)$ is a known integral over the phase space, $ Q = M_{i} - M_{f} - 2 \, m_{e} $ is the $Q$-value of the process, and $m_{e}$ is the electron mass. The numerical values of $G^{0\nu}(Q,Z)$, $Q$ and the natural abundance of several nuclei of experimental interest are presented in Table~\ref{tabnuc}. \begin{table}[b!] \tbl{The values of $G^{0\nu}(Q,Z)$, $Q$ and natural abundance of the initial isotope for several $\beta\beta$-decay processes of experimental interest. Table adapted from Ref.\protect\cite{Schwingenheuer:2012jt}.} { \begin{tabular}{ccccl} $\beta\beta$-decay & $G^{0\nu}$ & $Q$ & nat.~abund. & experiments \\ & $[10^{-14}\,\mathrm{y}^{-1}]$ & [keV] & [\%] & \\ \hline $ {}^{48}\text{Ca} \to {}^{48}\text{Ti} $ & 6.3 & 4273.7 & 0.187 & CANDLES \\ $ {}^{76}\text{Ge} \to {}^{76}\text{Se} $ & 0.63 & 2039.1 & 7.8 & GERDA, Majorana \\ $ {}^{82}\text{Se} \to {}^{82}\text{Kr} $ & 2.7 & 2995.5 & 9.2 & SuperNEMO, Lucifer \\ $ {}^{100}\text{Mo} \to {}^{100}\text{Ru} $ & 4.4 & 3035.0 & 9.6 & MOON, AMoRe \\ $ {}^{116}\text{Cd} \to {}^{116}\text{Sn} $ & 4.6 & 2809 & 7.6 & Cobra \\ $ {}^{130}\text{Te} \to {}^{130}\text{Xe} $ & 4.1 & 2530.3 & 34.5 & CUORE \\ $ {}^{136}\text{Xe} \to {}^{136}\text{Ba} $ & 4.3 & 2461.9 & 8.9 & EXO, KamLAND-Zen, NEXT, XMASS \\ $ {}^{150}\text{Nd} \to {}^{150}\text{Sm} $ & 19.2 & 3367.3 & 5.6 & SNO+, DCBA/MTD \\ \end{tabular} \label{tabnuc} } \end{table} \section{Effective Majorana mass} \label{effmass} The effective Majorana mass $m_{\beta\beta}$ is determined by the neutrino masses, the mixing angles and the Majorana phases. In this Section we discuss the possible values of the effective Majorana mass, taking into account the information on the neutrino mass-squared differences and mixing angles obtained from neutrino oscillation data.
In the standard parameterization (\ref{MixMat}) of the mixing matrix, we have \begin{equation} |m_{\beta\beta}| = \left| \cos^2\vartheta_{12} \cos^2\vartheta_{13} m_{1} + e^{2i\alpha_{12}} \sin^2\vartheta_{12} \cos^2\vartheta_{13} m_{2} + e^{2i\alpha_{13}} \sin^2\vartheta_{13} m_{3} \right| \,, \label{mbbst} \end{equation} where $\alpha_{12}$ and $\alpha_{13}$ are, respectively, the phase differences of $U_{e2}$ and $U_{e3}$ with respect to $U_{e1}$: $\alpha_{12}=\lambda_{2}$ and $\alpha_{13}=\lambda_{3}-\delta$ in the standard parameterization (\ref{MixMat}) of the mixing matrix. Therefore, $0\nu\beta\beta$-decay depends not only on the mixing angles and Dirac CP-violating phase, but also on the Majorana CP-violating phases. This is in agreement with the discussion after Eq.~(\ref{MixMat}), since the total lepton number is violated in $0\nu\beta\beta$-decay. \begin{figure}[t!] \begin{minipage}[r]{0.47\textwidth} \begin{center} Before Daya Bay \\ \includegraphics*[width=0.99\textwidth]{fig-02a.eps} \end{center} \end{minipage} \hfill \begin{minipage}[l]{0.47\textwidth} \begin{center} After Daya Bay \\ \includegraphics*[width=0.99\textwidth]{fig-02b.eps} \end{center} \end{minipage} \caption{\label{bb0-plt} Value of the effective Majorana mass $|m_{\beta\beta}|$ as a function of the lightest neutrino mass in the normal (NS, with $m_{\mathrm{min}}=m_{1}$) and inverted (IS, with $m_{\mathrm{min}}=m_{3}$) neutrino mass spectra before and after the Daya Bay\protect\cite{1203.1669} measurement of $\vartheta_{13}$ in Eq.~(\ref{s13db}). The current upper bound on $|m_{\beta\beta}|$ (see Eqs.~(\ref{Hei-Mos1}), (\ref{Cuori1}) and (\ref{Nemo1})) and the cosmological bound (see Ref.\protect\cite{1007.0658}) on $\sum_{i} m_{i} \simeq 3 m_{\mathrm{min}}$ in the quasi-degenerate region are indicated. } \end{figure} In the case of an NS, the neutrino masses $m_{2}$ and $m_{3}$ are connected with the lightest mass $m_{1}$ by the relations \begin{equation}\label{norspec1} m_{2}=\sqrt{m^2_{1}+\Delta{m}^2_{s}}, \qquad m_{3}=\sqrt{m^2_{1}+\Delta{m}^2_{s}+\Delta{m}^2_{a}}. \end{equation} On the other hand, in an IS $m_{3}$ is the lightest mass and we have \begin{equation}\label{invspec1} m_{1}=\sqrt{m^2_{3}+\Delta{m}^2_{a}}, \qquad m_{2}=\sqrt{m^2_{3}+\Delta{m}^2_{a}+\Delta{m}^2_{s}}. \end{equation} Figure~\ref{bb0-plt} shows the value of the effective Majorana mass $|m_{\beta\beta}|$ as a function of the lightest neutrino mass\cite{hep-ph/9906525,hep-ph/0102265} in the normal and inverted neutrino mass spectra before and after the Daya Bay\cite{1203.1669} measurement of $\vartheta_{13}$ in Eq.~(\ref{s13db}). We used the values of the neutrino oscillation parameters obtained in the global analysis presented in Ref.\cite{Schwetz:2011zk}: \begin{equation} \Delta{m}^2_{12} = 7.59 {}^{+(0.20,0.40,0.60)}_{-(0.18,0.35,0.50)} \times 10^{-5} \, \text{eV}^2 , \; \sin^2 \vartheta_{12} = 0.312 {}^{+(0.017,0.038,0.058)}_{-(0.015,0.032,0.042)} , \label{arXiv:1103.0734-1} \end{equation} and in the NS \begin{equation} \Delta{m}^2_{13} = 2.50 {}^{+(0.09,0.18,0.26)}_{-(0.16,0.25,0.36)} \times 10^{-3} \, \text{eV}^2 , \; \sin^2 \vartheta_{13} = 0.013 {}^{+(0.007,0.015,0.022)}_{-(0.005,0.009,0.012)} , \label{arXiv:1103.0734-2} \end{equation} whereas in the IS \begin{equation} - \Delta{m}^2_{13} = 2.40 {}^{+(0.08,0.18,0.27)}_{-(0.09,0.17,0.27)} \times 10^{-3} \, \text{eV}^2 , \; \sin^2 \vartheta_{13} = 0.016 {}^{+(0.008,0.015,0.023)}_{-(0.006,0.011,0.015)} .
\label{arXiv:1103.0734-3} \end{equation} The three levels of uncertainties correspond to $(1\sigma,2\sigma,3\sigma)$. In the ``After Daya Bay'' plot in Fig.~\ref{bb0-plt} we replaced the value of $\vartheta_{13}$ in Eqs.~(\ref{arXiv:1103.0734-2}) and (\ref{arXiv:1103.0734-3}) with that measured by the Daya Bay Collaboration in Eq.~(\ref{s13db}). The uncertainties for $|m_{\beta\beta}|$ have been calculated using the standard method of propagation of uncorrelated errors, taking into account the asymmetric uncertainties in Eqs.~(\ref{arXiv:1103.0734-1})--(\ref{arXiv:1103.0734-3}). In the following we discuss the predictions for the effective Majorana mass in three cases with characteristic neutrino mass spectra: \begin{enumerate} \item Hierarchy of neutrino masses\footnote{Quarks and charged leptons have this type of mass spectrum.}: \begin{equation}\label{hierar} m_{1} \ll m_{2} \ll m_{3}. \end{equation} \item Inverted hierarchy of neutrino masses: \begin{equation}\label{invierar} m_{3} \ll m_{1} \lesssim m_{2}. \end{equation} \item Quasi-degenerate neutrino mass spectrum: \begin{equation}\label{quasi} \sqrt{ \Delta{m}^2_{\rm{a}}} \ll m_{0} \simeq \left\{ \begin{array}{rcl} \displaystyle m_{1} \lesssim m_{2} \lesssim m_{3} & \displaystyle \qquad & \displaystyle \mathrm{(NS)}, \\ \displaystyle m_{3} \lesssim m_{1} \lesssim m_{2} & \displaystyle \qquad & \displaystyle \mathrm{(IS)}, \end{array} \right. \end{equation} \end{enumerate} where $m_{0}$ is the absolute mass scale common to the three masses. As one can see from Fig.~\ref{bb0-plt}, the Daya Bay measurement of $\vartheta_{13}$ has a visible impact on the value of $|m_{\beta\beta}|$ only in the case of a hierarchy of neutrino masses, discussed in the following, because only in that case the contribution of the largest mass $m_{3}$, which is weighted by $\sin^2\vartheta_{13}$, is decisive. \subsection{Hierarchy of neutrino masses} \label{hierarchy} In this case we have \begin{equation}\label{hierar1} m_{1} \ll \sqrt{\Delta{m}^2_{s}}, \qquad m_{2}\simeq \sqrt{ \Delta{m}^2_{s}}, \qquad m_{3}\simeq \sqrt{ \Delta{m}^2_{a}}. \end{equation} Thus, $m_{2}$ and $m_{3}$ are determined by the solar and atmospheric neutrino mass-squared differences. Neglecting the contribution of $m_{1}$ to the effective Majorana mass, from Eq.~(\ref{mbbst}) we find \begin{equation}\label{hierar2} |m_{\beta\beta}| \simeq \left| \sin^2\vartheta_{12} \sqrt{\Delta{m}^2_{s}} + e^{2i\alpha_{23}} \sin^2\vartheta_{13} \sqrt{\Delta{m}^2_{a}} \right| , \end{equation} where $\alpha_{23}$ is the phase difference between $U_{e3}$ and $U_{e2}$: $\alpha_{23}=\alpha_{13}-\alpha_{12}=\lambda_{3}-\delta-\lambda_{2}$ in the standard parameterization (\ref{MixMat}) of the mixing matrix. The first term in Eq.~(\ref{hierar2}) is small because of the smallness of $\Delta{m}^2_{s}$. On the other hand, the contribution of the ``large'' $\Delta{m}^2_{a}$ is suppressed by the small factor $\sin^2 \vartheta_{13} $. Hence, both terms must be taken into account and cancellations are possible, as shown in Fig.~\ref{bb0-plt}. As one can see from Fig.~\ref{bb0-plt}, in the case of a hierarchy of neutrino masses we have the upper bound \begin{equation}\label{hierar4} |m_{\beta\beta}| \leq \sin^2\vartheta_{12} \sqrt{\Delta{m}^2_{s}} + \sin^2\vartheta_{13} \sqrt{\Delta{m}^2_{a}} \lesssim 5 \cdot 10^{-3} \, \mathrm{eV}, \end{equation} which is significantly smaller than the expected sensitivity of the future experiments on the search for $0\nu\beta\beta$-decay (see Section~\ref{exp}).
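As a quick numerical check of Eq.~(\ref{hierar4}) (an illustration added here, using the central values of the oscillation parameters quoted above and the Daya Bay value of $\sin^2\vartheta_{13}$):
\begin{verbatim}
# Quick check of the hierarchical upper bound on |m_bb|.
import numpy as np
s12sq, s13sq = 0.312, 0.024
dm2_s, dm2_a = 7.59e-5, 2.50e-3       # eV^2
bound = s12sq * np.sqrt(dm2_s) + s13sq * np.sqrt(dm2_a)
print(f"{bound:.1e} eV")              # ~4e-3 eV, below 5e-3 eV
\end{verbatim}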
The upper bound in Eq.~(\ref{hierar4}) corresponds to the case $e^{2i\alpha_{23}}=1$. It is slightly increased by the Daya Bay measurement of $\vartheta_{13}$ in Eq.~(\ref{s13db}), because the additive contribution of $\sin^2\vartheta_{13} \sqrt{\Delta{m}^2_{a}}$ in Eq.~(\ref{hierar2}) is increased. On the other hand, one can see from Fig.~\ref{bb0-plt} that the lower bound on $|m_{\beta\beta}|$ for $m_{1} \ll 10^{-3} \, \mathrm{eV}$, which corresponds to $e^{2i\alpha_{23}}=-1$, is slightly decreased by the Daya Bay measurement of $\vartheta_{13}$, because the increased contribution of $\sin^2\vartheta_{13} \sqrt{\Delta{m}^2_{a}}$ in this case is subtracted. From Fig.~\ref{bb0-plt} one can also see that when the contribution of $m_{1}$ is not negligible, there can be cancellations among the three mass contributions. The two extreme cases in which cancellations can occur, both corresponding to conserved CP, are the following: \begin{description} \item[$e^{2i\alpha_{12}}=-1\;\mathrm{and}\;e^{2i\alpha_{13}}=+1.$] The value of $m_{1}$ for which cancellations suppress $|m_{\beta\beta}|$ is slightly decreased by the Daya Bay measurement of $\vartheta_{13}$, because the larger value of $\sin^2\vartheta_{13} m_{3}$ adds to the contribution of $m_{1}$. Hence, a smaller value of $m_{1}$ is required to cancel the sum of the contributions of $m_{1}$ and $m_{3}$ with the opposite contribution of $m_{2}$. \item[$e^{2i\alpha_{12}}=-1\;\mathrm{and}\;e^{2i\alpha_{13}}=-1.$] The value of $m_{1}$ for which cancellations suppress $|m_{\beta\beta}|$ is slightly increased by the Daya Bay measurement of $\vartheta_{13}$, because the larger value of $\sin^2\vartheta_{13} m_{3}$ adds to the contribution of $m_{2}$. Hence, a larger value of $m_{1}$ is required to cancel the contribution of $m_{1}$ with the opposite sum of contributions of $m_{2}$ and $m_{3}$. \end{description} Figure~\ref{bb0-plt} shows that\footnote{ We are very grateful to Michele Frigerio for pointing out a mistake in the cancellation band presented in the first arXiv version of this paper and in its published version (Mod. Phys. Lett. A 27 (2012) 1230015). } the two effects lead to a slight widening of the cancellation band after the Daya Bay measurement of $\vartheta_{13}$. \subsection{Inverted hierarchy of the neutrino masses} \label{inverted} In this case, for the neutrino masses we have \begin{equation}\label{invierar1} m_{3} \ll \sqrt{\Delta{m}^2_{a}}, \quad m_{1} \simeq \sqrt{\Delta{m}^2_{a}}, \quad m_{2} \simeq \sqrt{\Delta{m}^2_{a}} \left( 1+ \frac{\Delta{m}^2_{s}}{2 \Delta{m}^2_{a}} \right) \simeq \sqrt{ \Delta{m}^2_{a}}. \end{equation} In the expression of $|m_{\beta\beta}|$, the contribution of the term $m_{3}\sin^2\vartheta_{13}$ can be safely neglected. Neglecting also $\sin^2\vartheta_{13}$ in the $\cos^2\vartheta_{13}$ factors, from Eq.~(\ref{mbbst}) we find \begin{equation}\label{invierar2} |m_{\beta\beta}| \simeq \sqrt{\Delta{m}^2_{a} \left(1-\sin^22\vartheta_{12}\,\sin^2\alpha_{12}\right)}. \end{equation} The phase $\alpha_{12}$ is the only unknown parameter in the expression for the effective Majorana mass in the case of an inverted mass hierarchy. From Eq.~(\ref{invierar2}) we find the following range for $|m_{\beta\beta}|$: \begin{equation}\label{invierar3} \cos 2\vartheta_{12} \,\sqrt{ \Delta{m}^2_{a}} \leq |m_{\beta\beta}| \leq\sqrt{ \Delta{m}^2_{a}}. \end{equation} The upper and lower bounds of this inequality correspond to the case of $CP$-invariance in the lepton sector (a numerical evaluation of this range is sketched below).
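\begin{verbatim}
# Quick numerical evaluation of the inverted-hierarchy range,
# Eq. (invierar3), using the central values quoted above.
import numpy as np
s12sq = 0.312
dm2_a = 2.40e-3                      # eV^2 (inverted-spectrum value)
upper = np.sqrt(dm2_a)               # alpha_12 = 0 or pi
lower = (1 - 2 * s12sq) * upper      # cos(2 theta_12), alpha_12 = +/- pi/2
print(lower, upper)                  # ~0.018 eV and ~0.049 eV
\end{verbatim}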
Indeed, $CP$ invariance implies that (see Refs.\cite{Bilenky:1987ty,Giunti-Kim-2007,Bilenky:2010zza}) \begin{equation}\label{CPMaj1} e^{2i\alpha_{12}}=\eta_{2} \, \eta^{*}_{1}, \end{equation} where $\eta_{k}=\pm i$ is the $CP$ parity of the Majorana neutrino $\nu_{k}$. If $\eta_{2}=\eta_{1}$, we have $\alpha_{12}=0,\pi$ (the upper bound in the inequality (\ref{invierar3})). If $\eta_{2}=-\eta_{1}$, we have $\alpha_{12}=\pm\pi/2$ (the lower bound in the inequality (\ref{invierar3})). From the existing neutrino oscillation data, we find the following range for the possible value of the effective Majorana mass: \begin{equation}\label{invierar5} 10^{-2} \, \mathrm{eV} \lesssim |m_{\beta\beta}| \lesssim 5 \cdot 10^{-2} \, \mathrm{eV}. \end{equation} The anticipated sensitivities to $|m_{\beta\beta}|$ of the future experiments on the search for the $0\nu\beta\beta$-decay are in the range (\ref{invierar5}) (see Section~\ref{exp}). Thus, the future $0\nu\beta\beta$-decay experiments will probe the Majorana nature of neutrinos if an inverted hierarchy of neutrino masses is realized in nature. \subsection{Quasi-degenerate neutrino mass spectrum} \label{quasi-degenerate} Neglecting the small contribution of $\sin^2\vartheta_{13}$ in Eq.~(\ref{mbbst}), in the case of a quasi-degenerate neutrino mass spectrum we obtain \begin{equation}\label{quasi1} |m_{\beta\beta}| \simeq m_{0} \sqrt{1-\sin^22\vartheta_{12}\,\sin^2\alpha_{12}}, \end{equation} where $m_{0}$ is the unknown absolute mass scale of neutrino masses (see Eq.~(\ref{quasi})) and $\alpha_{12}$ is the phase difference between $U_{e2}$ and $U_{e1}$: $\alpha_{12}=\lambda_{2}$ in the standard parameterization (\ref{MixMat}) of the mixing matrix. Thus, in this case $|m_{\beta\beta}|$ depends on two unknown parameters: $m_{0}$ and $\alpha_{12}$. From Eq.~(\ref{quasi1}), we obtain the following range for the effective Majorana mass: \begin{equation}\label{quasi2} \cos 2\vartheta_{12} \, m_{0} \leq |m_{\beta\beta}| \leq m_{0} . \end{equation} If $0\nu\beta\beta$-decay is observed and the effective Majorana mass turns out to be relatively large ($|m_{\beta\beta}| \gg \sqrt{\Delta{m}^2_{a}}$), we will have evidence that neutrinos are Majorana particles and their mass spectrum is quasi-degenerate. In this case, we have \begin{equation}\label{quasi5} |m_{\beta\beta}| \leq m_{0} \leq \frac{|m_{\beta\beta}|}{\cos 2\vartheta_{12}} \simeq 2.8\, |m_{\beta\beta}|. \end{equation} Information about the value of the mass scale will be inferred from the data of the future tritium $\beta$-decay experiment KATRIN \cite{hep-ex/0109033,Angrik:2005ep} and from future cosmological observations. The sensitivity of the KATRIN experiment to the neutrino mass scale is expected to be about 0.2 eV, which is the same as the sensitivity to $m_{\beta}$ in Eq.~(\ref{trimass}), since in the quasi-degenerate case $m_{\beta} \simeq m_{0}$. Cosmological observations give information on the value of the sum of the neutrino masses $\sum_{i}m_{i}\simeq3m_{0}$ in the quasi-degenerate case. The existing cosmological data imply the bound $\sum_{i}m_{i}\lesssim 0.5$ eV (see Ref.\cite{1007.0658}). It is expected that future cosmological observations will be sensitive to $\sum_{i}m_{i}$ in the range $(6\times 10^{-3}-10^{-1})$ eV (see, for example, Ref.\cite{1103.5083}). \section{Nuclear matrix elements} \label{nucmatel} The effective Majorana mass $|m_{\beta\beta}|$ is not a directly measurable quantity.
The measurement of the half-life of $0\nu\beta\beta$-decay gives {\em the product of the effective Majorana mass and the nuclear matrix element} (see Eq.~(\ref{totrate})). Hence, in order to determine the effective Majorana mass one must calculate the nuclear matrix elements (NMEs) of $0\nu\beta\beta$-decay, which is a complicated nuclear many-body problem. Five different methods are used at present. In this short review we do not describe these methods and we do not discuss the advantages and disadvantages of each of them. We only present the references to the original papers in Tab.~\ref{tabnme} and the latest results in Fig.~\ref{simkovic-nmes}. \begin{table}[b!] \tbl{Methods of calculation of nuclear matrix elements of $0\nu\beta\beta$-decay.} { \begin{tabular}{cc} Method & References \\ \hline Quasi-particle Random Phase Approximation (QRPA) & \cite{nucl-th/0305005,0706.4304,nucl-th/0012010,Simkovic:2011zz} \\ Energy Density Functional method (EDF) & \cite{1012.1783,1008.5260} \\ Projected Hartree-Fock-Bogoliubov approach (PHFB) & \cite{Rath:2010zz,Rath:2011zz} \\ Interacting Boson Model-2 (IBM-2) & \cite{Barea:2009zz,Iachello:2011zz,Iachello:2011zzb} \\ Large-Scale Shell Model (LSSM) & \cite{Menendez:2011zza,0801.3760} \end{tabular} \label{tabnme} } \end{table} \begin{figure}[t!] \begin{center} \includegraphics*[width=0.8\textwidth]{fig-03.eps} \end{center} \caption{\label{simkovic-nmes} Values of the NME calculated with the methods in Tab.~\ref{tabnme}~\protect\cite{Simkovic-private-12}. } \end{figure} From Fig.~\ref{simkovic-nmes} we reach the following conclusions: \begin{enumerate} \item The LSSM value of each NME is typically smaller than the corresponding one calculated with other approaches. Moreover, the LSSM value of each NME depends weakly on the nucleus, except for the double-magic nucleus ${^{48}\rm{Ca}}$. If $0\nu\beta\beta$-decay of different nuclei is observed in future experiments, this characteristic feature of the LSSM can be checked, because the LSSM predicts the following ratio of half-lives of different nuclei: \begin{equation} \frac{T^{0\nu}_{1/2}(Z_{1},A_{1})}{T^{0\nu}_{1/2}(Z_{2},A_{2})} \simeq \frac{G^{0\nu}(Q_{2},Z_{2})}{G^{0\nu}(Q_{1},Z_{1})} \end{equation} \item \label{item:ratnme} There is a large discrepancy between the values of NMEs calculated with different approaches. The ratios of the maximal and minimal values of each NME are 3.1 ($^{48}\mathrm{Ca}$), 2.4 ($^{76}\mathrm{Ge}$), 2.0 ($^{82}\mathrm{Se}$), 3.7 ($^{96}\mathrm{Zr}$), 1.8 ($^{100}\mathrm{Mo}$), 1.3 ($^{116}\mathrm{Cd}$), 1.8 ($^{124}\mathrm{Sn}$), 1.9 ($^{128}\mathrm{Te}$), 2.1 ($^{130}\mathrm{Te}$), 1.9 ($^{136}\mathrm{Xe}$), 2.3 ($^{150}\mathrm{Nd}$). Therefore, the situation with the calculation of the $0\nu\beta\beta$-decay NMEs is obviously not satisfactory at present. Further efforts and progress are definitely needed. \end{enumerate} \section{Neutrinoless double-beta decay experiments} \label{exp} Many experiments searched for neutrinoless double-beta decay without finding uncontroversial positive evidence. The most stringent lower bounds on the half-lives of the decays of $^{76}\mathrm{Ge}$, $^{130}\mathrm{Te}$ and $^{100}\mathrm{Mo}$ have been obtained, correspondingly, in the Heidelberg-Moscow\cite{Klapdor-Kleingrothaus:2001yx}, Cuoricino\cite{1012.3266} and NEMO3\cite{hep-ex/0601021,Barabash:2010bd} experiments.
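Equation~(\ref{totrate}) allows one to translate an experimental lower bound on the half-life into an upper bound on $|m_{\beta\beta}|$, namely $|m_{\beta\beta}| \leq m_{e}/\sqrt{T^{0\nu}_{1/2} \, G^{0\nu} \, |M^{0\nu}|^2}$. The following sketch is added for illustration; the NME values are illustrative of the spread of calculations in Fig.~\ref{simkovic-nmes}, not the specific values used in the analyses quoted below:
\begin{verbatim}
# Sketch: half-life lower bound -> upper bound on |m_bb| via Eq. (totrate).
import numpy as np

M_E_EV = 0.511e6  # electron mass in eV

def mbb_upper_bound(T12_y, G0nu_per_y, nme):
    return M_E_EV / np.sqrt(T12_y * G0nu_per_y * nme**2)

# 76Ge: T_1/2 > 1.9e25 y (Heidelberg-Moscow); G0nu = 0.63e-14 /y (table above).
for nme in (2.3, 6.6):  # illustrative span of NME calculations
    print(f"|M| = {nme}: |m_bb| < "
          f"{mbb_upper_bound(1.9e25, 0.63e-14, nme):.2f} eV")
\end{verbatim}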
In the Heidelberg-Moscow experiment\cite{Klapdor-Kleingrothaus:2001yx} germanium crystals with an 86\% enrichment in the $\beta\beta$-decaying isotope $^{76}\mathrm{Ge}$ were used. The total mass of $^{76}\mathrm{Ge}$ was 11 kg, with a low background of 0.11 counts/(kg~keV~y). After 13 years of running (with a 35.5 kg y exposure) no $\beta\beta$-peak at $Q$=2039 keV was found. The resulting half-life bound is \begin{equation}\label{Hei-Mos} T_{1/2}^{0\nu}(^{76}\mathrm{Ge})> 1.9 \times 10^{25} \, \mathrm{y} \quad (90 \% \mathrm{CL}), \end{equation} which implies that\footnote{Some participants of the Heidelberg-Moscow experiment claimed\cite{Klapdor-Kleingrothaus:2006ff} the observation of $0\nu\beta\beta$-decay of $^{76}\mathrm{Ge}$ with half-life $T_{1/2}^{0\nu}(^{76}\mathrm{Ge}) = (2.23^{+0.44}_{-0.31}) \times 10^{25}$ y (with 51.39 kg y exposure). From this result the authors found $|m_{\beta\beta}| = 0.32 \pm 0.03$ eV. This claim will be checked by the GERDA experiment\cite{Ur:2011zz} using the same $0\nu\beta\beta$-decaying nucleus.} \begin{equation}\label{Hei-Mos1} |m_{\beta\beta}| \lesssim (0.22-0.64) \, \mathrm{eV}. \end{equation} In the cryogenic experiment Cuoricino\cite{1012.3266} $\mathrm{TeO}_{2}$ bolometers were used, with a total mass of 11.34 kg of $^{130}\mathrm{Te}$. The background was 0.17 counts/(kg~keV~y). After a 19.75 kg y exposure the following lower bound was obtained: \begin{equation}\label{Cuori} T_{1/2}^{0\nu}(^{130}\mathrm{Te})> 2.8 \times 10^{24} \mathrm{y}\quad (90 \% \mathrm{CL}), \end{equation} which corresponds to \begin{equation}\label{Cuori1} |m_{\beta\beta}| \lesssim (0.30-0.71) \, \mathrm{eV}. \end{equation} In the NEMO3 experiment\cite{hep-ex/0601021,Barabash:2010bd} the cylindrical source was divided into sectors with enriched ${^{100}\mathrm{Mo}}$ (6914 g), ${^{82}\mathrm{Se}}$ (932 g) and other $\beta\beta$-decaying isotopes. The two emitted electrons were detected in drift cells and plastic scintillators. No $0\nu\beta\beta$-decay was observed. The half-life of $0\nu\beta\beta$-decay of $^{100}\mathrm{Mo}$ has been bounded by \begin{equation}\label{Nemo} T_{1/2}^{0\nu}(^{100}\mathrm{Mo})> 1.1 \times 10^{24} \, \mathrm{y} \quad (90 \% \mathrm{CL}). \end{equation} The corresponding limit for the effective Majorana mass is \begin{equation}\label{Nemo1} |m_{\beta\beta}| \lesssim (0.44-1.00) \, \mathrm{eV}. \end{equation} Several new experiments on the search for $0\nu\beta\beta$-decay of different nuclei are currently running or in preparation. In the following we discuss briefly some of them (for more detailed presentations of future experiments see Refs.\cite{GomezCadenas:2011it,Schwingenheuer:2012jt}). In the GERDA experiment\cite{Ur:2011zz}, started in 2011, 18 kg of enriched germanium crystals (with 86\% of the $\beta\beta$-decaying isotope $^{76} \mathrm{Ge}$) are used. The expected background in the Phase-I of the experiment is $ 10^{-2}$ counts/(kg~keV~y). After one year of running it is expected to reach a sensitivity of $T_{1/2}^{0\nu}(^{76}\mathrm{Ge})= 2.5 \times 10^{25}$ y, which should allow one to check the claim made in Ref.\cite{Klapdor-Kleingrothaus:2006ff}. During the Phase-II of the GERDA experiment (expected to start in 2013), an array of enriched germanium crystals (with 40 kg of $^{76} \mathrm{Ge}$) will be cooled and shielded by liquid argon of very high radiopurity. A low background ($10^{-3}$ counts/(kg~keV~y)) is expected.
After 5 years of data taking in Phase-II of the experiment, a sensitivity of $T_{1/2}^{0\nu}(^{76}\mathrm{Ge})\simeq 1.9 \times 10^{26}$ y is expected. The corresponding sensitivity to the effective Majorana mass is $ |m_{\beta\beta}|\simeq (7.3\times 10^{-2}-2.0\times 10^{-1})$ eV. In the cryogenic CUORE experiment\cite{0912.0452} $\mathrm{TeO_{2}}$ bolometers are used both as source and as detector. In Phase-I of the experiment (started at the end of 2011) the target mass is 10.8 kg of $^{130} \mathrm{Te}$. In Phase-II (expected to start in 2014) the target mass will be 206 kg of $^{130} \mathrm{Te}$. The expected background in this phase will be $10^{-2}$ counts/(kg~keV~y). After 5 years of data taking a sensitivity of $T_{1/2}^{0\nu}(^{130}\mathrm{Te})= 1.6 \times 10^{26}$ y will be reached, which corresponds to $ |m_{\beta\beta}| \simeq (4.0-9.4) \times 10^{-2}$ eV. In the KamLAND-Zen experiment\cite{1201.4664}, the $0\nu\beta\beta$-decay of $^{136} \mathrm{Xe}$ will be studied. In this experiment enriched xenon (with 91\% of the $\beta\beta$-decaying isotope $^{136} \mathrm{Xe}$) dissolved in liquid scintillator will be placed in a balloon (3.4 m in diameter) at the center of the KamLAND detector. In the first phase of the experiment (started in 2011), the source mass is 364 kg of $^{136} \mathrm{Xe}$. In the second phase (scheduled for 2013) 910 kg of $^{136} \mathrm{Xe}$ will be utilized. After 5 years of data taking it will be possible to reach a sensitivity to $|m_{\beta\beta}|$ in the region of the inverted hierarchy ($|m_{\beta\beta}| \simeq 2.5 \times 10^{-2}$ eV). In the running EXO experiment\cite{Gornea:2011zz} the decay ${}^{136}\mathrm{Xe}\to {}^{136}\mathrm{Ba}+e^{-}+e^{-}$ is searched for. In the first phase of the experiment (EXO-200) the mass of the fiducial volume is about 150 kg of liquid xenon enriched to 80.6\% in the $\beta\beta$-decaying isotope $^{136} \mathrm{Xe}$. After two years of data taking a sensitivity of $|m_{\beta\beta}| \simeq (8.7 \times 10^{-2} - 2.2 \times 10^{-1})$ eV is planned to be achieved. The full EXO experiment will consist of about 1 ton of enriched liquid xenon. With $ \mathrm{Ba}^{+}$ tagging, a very low background of about $10^{-4}$ counts/(kg~keV~y) will be reached. After 5 years of data taking, it is expected to reach a sensitivity of $T_{1/2}^{0\nu}(^{136}\mathrm{Xe}) \simeq 10^{27}$ y, which corresponds to $|m_{\beta\beta}|\simeq (1.6-4.0)\times 10^{-2}$ eV. \section{Conclusions} \label{conclusions} If massive neutrinos are Majorana particles, neutrinoless double-beta decay of $^{76}\mathrm{Ge}$, $^{100}\mathrm{Mo}$, $^{130}\mathrm{Te}$, $^{136}\mathrm{Xe}$ and other even-even nuclei is allowed. However, the expected probability of $0\nu\beta\beta$-decay is extremely small, because: \begin{enumerate} \item It is a process of second order in the Fermi constant $G_{F}$. \item Since the Hamiltonian of weak interactions conserves helicity, the amplitude of $0\nu\beta\beta$-decay is proportional to the very small factor \begin{equation} \frac{m_{\beta\beta}}{\overline{q}^2}, \label{smallfactor} \end{equation} which comes from the neutrino propagator. Here $m_{\beta\beta}=\sum_{i}U^2_{ei}m_{i}$ is the effective Majorana mass ($\lesssim 1$ eV) and $\overline{q}$ is the average neutrino momentum ($\sim 100$ MeV). \end{enumerate} The expected half-lives of $0\nu\beta\beta$-decays depend on the decaying nucleus and are typically larger than $10^{24}-10^{25}$ years. Therefore, the observation of this rare process is a real challenge.
The effective Majorana mass (and consequently the matrix element of the process) depends on the character of the neutrino mass spectrum. In the case of a quasi-degenerate spectrum, the expected value of $m_{\beta\beta}$ is relatively large. This case is partly excluded by the data of existing $0\nu\beta\beta$-decay experiments and by cosmological data (see Fig.~\ref{bb0-plt}). It will be further explored by GERDA, KamLAND-Zen, EXO, CUORE and other experiments. In order to reach the region of the inverted neutrino mass hierarchy, with $10^{-2} \lesssim |m_{\beta\beta}| \lesssim 5 \times 10^{-2}$ eV, the construction of large detectors ($\sim$ 1 ton) and about 5 years of data taking will be required. We considered here the $0\nu\beta\beta$-decay induced by the standard mechanism of exchange of light Majorana neutrinos between $n$-$p$-$e^{-}$ vertices. From neutrino oscillation data it follows that if neutrinos with definite masses are Majorana particles, this decay mechanism is realized, provided that there is no cancellation among the different mass contributions (as shown in Fig.~\ref{bb0-plt}, cancellations can happen in the normal scheme). As discussed in Section~\ref{seesaw}, the neutrino mass mechanism of $0\nu\beta\beta$-decay is predicted by the standard seesaw mechanism\cite{Minkowski:1977sc,GellMann-Ramond-Slansky-SeeSaw-1979,Yanagida-SeeSaw-1979,Mohapatra:1980ia}. However, additional sources of violation of the total lepton number $L$ are possible (see Ref.\cite{1103.6217} and references therein). If $L$ is violated at the TeV scale these additional mechanisms could give contributions to the matrix elements of the $0\nu\beta\beta$-decay comparable to the contribution of the light Majorana neutrino mass mechanism. Let us consider as an example the violation of $L$ due to R-parity violating interactions of SM and SUSY particles. In this case, $0\nu\beta\beta$-decay is induced by the exchange of a heavy Majorana SUSY neutralino. The product of $n$-$p$-$e^{-}$ vertices is given by the factor \begin{equation}\label{sUSY} \left(\frac{G_{F}}{\sqrt{2}}\right)^2 \left(\frac{m^2_{W}} {\Lambda^2}\right)^2\frac{1}{\Lambda}, \end{equation} where $\Lambda$ characterizes the scale of the masses of SUSY particles and $m_{W}$ is the mass of the $W$-boson. The factor (\ref{sUSY}) must be compared with the corresponding factor \begin{equation}\label{sUSY1} \left(\frac{G_{F}}{\sqrt{2}}\right)^2\left(\frac{m_{\beta\beta}}{\overline{q}^2}\right), \end{equation} which appears in the case of the Majorana neutrino mass mechanism. Taking into account that $\overline{q} \simeq 100$ MeV and assuming that $|m_{\beta\beta}|\simeq 10^{-1}$ eV, we come to the conclusion that Eqs.~(\ref{sUSY}) and (\ref{sUSY1}) are comparable if $\Lambda$ is of the order of a few TeV (a numerical check follows below).
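The TeV estimate can be checked with a few lines of arithmetic. Equating the factors~(\ref{sUSY}) and (\ref{sUSY1}) gives $m^4_{W}/\Lambda^5 = m_{\beta\beta}/\overline{q}^2$, i.e. $\Lambda = (m^4_{W}\,\overline{q}^2/m_{\beta\beta})^{1/5}$. The Python sketch below evaluates this expression for the values quoted above; it should be read as an order-of-magnitude estimate only.
\begin{verbatim}
# Python sketch: scale at which the R-parity violating SUSY factor
# (m_W^2/Lambda^2)^2 / Lambda equals the mass-mechanism factor
# m_bb / qbar^2.  All quantities in GeV (natural units).
m_W = 80.4      # W-boson mass
qbar = 0.1      # average neutrino momentum, ~100 MeV
m_bb = 1e-10    # effective Majorana mass, ~0.1 eV

# m_W^4 / Lambda^5 = m_bb / qbar^2
Lambda = (m_W**4 * qbar**2 / m_bb) ** 0.2
print("Lambda ~ %.1f TeV" % (Lambda / 1e3))  # ~1.3 TeV, i.e. TeV scale
\end{verbatim}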
If the $0\nu\beta\beta$-decay of different nuclei is observed in future experiments, it will be possible to probe the presence of the different mechanisms which can generate the process. Finally, let us emphasize that the search for $0\nu\beta\beta$-decay is a powerful practical way to solve one of the most fundamental problems of modern neutrino physics: {\em are neutrinos with definite masses $\nu_{i}$ truly neutral Majorana particles or are they Dirac particles possessing a conserved total lepton number?} \section*{Acknowledgments} We are grateful to F. \v{S}imkovic, who kindly provided us with Fig.~\ref{simkovic-nmes}. We are also thankful to him for useful discussions. \section*{Note Added} After the completion of this review, the EXO collaboration published in \texttt{arXiv:1205.5608} the important first result of EXO-200. With an exposure of 32.5 kg y, they obtained $T_{1/2}^{0\nu}(^{136}\text{Xe}) > 1.6 \times 10^{25} \, \text{y}$ at 90\% CL, corresponding to $|m_{\beta\beta}| \lesssim (0.14-0.38) \, \mathrm{eV}$.
\section{Introduction} \label{sec:intro} The escape fraction of ionizing photons, $f^{\rm ion}_{\rm esc}$, represents one of the key parameters describing cosmic reionization \citep[e.g.][]{HH03,Cen03,WL03,Mitra}. Observational constraints on $f^{\rm ion}_{\rm esc}$\hspace{1mm} are still weak \citep[see Fig~13 of][]{Smith16}. Ionizing photons, also known as Lyman Continuum (LyC) photons, have only been directly observed to escape for a handful of galaxies \citep[e.g.][also see Benson et al. 2013, Smith et al. 2016 and references therein]{B14,Izotov16,V16a}. Observations of the Ly$\alpha$ forest constrain the LyC volume emissivity (the rate at which LyC photons are {\it released} into the IGM per unit volume), while observations of the UV-luminosity function of star forming galaxies provide direct constraints on the {\it production} rate of LyC photons. These two constraints combined constrain the volume-averaged escape fraction of ionizing photons, denoted with $\langle$$f^{\rm ion}_{\rm esc}$$\rangle$, and show that $\langle$$f^{\rm ion}_{\rm esc}$$\rangle$ increases with redshift \citep[][]{Inoue06,Kuhlen,BB13}. The LyC escape fraction depends on more than just redshift. Various models and simulations predict that $f^{\rm ion}_{\rm esc}$\hspace{1mm} decreases with dark matter halo mass \citep[e.g.][but also see Gnedin et al. 2008, Ma et al. 2015, Sharma et al. 2016]{Yajima11,FL13,P13,Wise14}, which in turn correlates with observables such as the non-ionizing UV-continuum luminosity of galaxies. The reason that not all simulations agree on this mass dependence is partly that different studies focus on galaxies with very different masses, at very different redshifts, and adopt different implementations of the sub-grid physics associated with feedback, which can strongly affect the properties of the simulated interstellar medium. Ab initio modeling of $f^{\rm ion}_{\rm esc}$\hspace{1mm} still represents a major theoretical challenge (see e.g. Fernandez \& Shull 2011 for a discussion), and models may have to include additional physical processes such as X-ray heating/ionization (Benson et al. 2013), runaway stars \citep{CK12} and binary evolution (Ma et al. 2016), all of which can facilitate the escape of ionizing photons. Irrespective of theoretical and observational uncertainties, the escape of ionizing photons requires that paths exist which contain low column densities of atomic hydrogen, i.e. $N_{\rm HI} ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 1/\sigma_{\rm ion} \approx 10^{17}$ cm$^{-2}$, where $\sigma_{\rm ion}=6\times 10^{-18}$ cm$^{2}$ denotes the photoionization cross-section evaluated at the Lyman limit \citep[e.g.][]{Verner96}. These same low column density paths provide escape routes for Ly$\alpha$ photons (Behrens et al. 2014, Verhamme et al. 2015). LyC and Ly$\alpha$ escape are therefore expected to be correlated, at least at some level (e.g. Rauch et al. 2011, Erb et al. 2014, Micheva et al. 2016). If the escape of LyC photons is facilitated by (supernova-driven) winds that blow low column density holes (see e.g. Dove et al. 2000, Sharma et al. 2016), then this provides a physical mechanism connecting $f^{\rm ion}_{\rm esc}$ \hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$, as observations of Ly$\alpha$ emitting galaxies indicate that galactic outflows promote the escape of Ly$\alpha$ photons (Kunth et al. 1998, Atek et al. 2008, Wofford et al. 2013, Rivera-Thorsen et al. 2015, see Hayes 2015 for a review).
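As a quick numerical illustration of this threshold (a back-of-the-envelope sketch, separate from the radiative transfer calculations presented below), the transmission of LyC photons at the Lyman limit through a column $N_{\rm HI}$ is simply $\exp(-\sigma_{\rm ion}N_{\rm HI})$:
\begin{verbatim}
# Python sketch: LyC transmission exp(-sigma_ion * N_HI) at the Lyman
# limit, illustrating why N_HI ~ 1/sigma_ion ~ 1.7e17 cm^-2 marks the
# transition between transparent and opaque sightlines.
import math

sigma_ion = 6e-18   # cm^2, photoionization cross-section (Lyman limit)
print("1/sigma_ion = %.1e cm^-2" % (1.0 / sigma_ion))   # ~1.7e17

for N_HI in (1e16, 1e17, 1e18):   # cm^-2
    print("N_HI = %.0e cm^-2 -> transmission = %.3f"
          % (N_HI, math.exp(-sigma_ion * N_HI)))
\end{verbatim}
The transmission drops from $\approx 0.94$ at $N_{\rm HI}=10^{16}$ cm$^{-2}$ to $\approx 0.002$ at $N_{\rm HI}=10^{18}$ cm$^{-2}$, which is why LyC escape is, to good approximation, a question of whether low column density sightlines exist.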
The goal of this paper is to more quantitatively explore the correlation between Ly$\alpha$ and LyC photons, for which we use a large suite of simplified models of the multi-phase ISM that span the wide range of physical conditions encountered in observed galaxies (first presented in Gronke \& Dijkstra 2016). Yajima et al. (2014) previously found a clear correlation between $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and $f^{\rm ion}_{\rm esc}$\hspace{1mm} in their cosmological hydrodynamical simulations of a Milky Way-like galaxy. Their calculations should be viewed as a `bottom-up' (or ab-initio) approach to quantifying this correlation, while our work should be viewed as a `top-down' (or empirical) approach. As neither approach has converged yet (see \S~\ref{sec:model}), our work should be viewed as complementary to that of Yajima et al. (2014). Addressing the correlation between $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and $f^{\rm ion}_{\rm esc}$\hspace{1mm} has become (even) more relevant for cosmic reionization because, as we will argue in \S~\ref{sec:muvfesc}, there is increasing evidence that $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} increases towards lower UV-luminosities. The outline of this paper is as follows: In \S~\ref{sec:model} we present our models and show the predicted correlation between $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} and $f^{\rm ion}_{\rm esc}$\hspace{1mm} in \S~\ref{sec:results}. We discuss implications of our results in \S~\ref{sec:discuss}, before presenting our conclusions in \S~\ref{sec:conc}. \section{The Model} \label{sec:model} \begin{figure*} \centering \includegraphics[width=8.5cm,angle=0]{modelscheme.pdf} \vspace{0mm} \caption[]{Schematic representation of the adopted geometry in our `clumpy ISM' models, which represent simplified versions of multiphase interstellar media. A sphere of radius $r_{\rm gal}=5$ kpc is filled with outflowing, neutral, dusty clumps of gas embedded within a hot interclump medium. The covering factor $f_{\rm cl}$ denotes the average number of clumps along sightlines from the center to the edge of the sphere. The clumps surround a spatially extended Ly$\alpha$ (and LyC) source, both of which are characterized by an exponential volume emissivity profile with scale length $H_{\rm em}$. A fraction $P_{\rm cl}$ of all Ly$\alpha$ and LyC photons is emitted inside cold clumps.} \label{fig:scheme} \end{figure*} The escape of both ionizing and Ly$\alpha$ photons depends sensitively on the distribution of neutral gas throughout the interstellar medium. For Ly$\alpha$ photons, the kinematics of this neutral gas is possibly even more important (Kunth et al. 1998, Atek et al. 2008, Steidel et al. 2010, Wofford et al. 2013, Rivera-Thorsen et al. 2015). Modeling Ly$\alpha$ transfer on interstellar scales therefore requires a proper model for both the distribution and kinematics of the neutral gas in the ISM, which likely requires magneto-hydrodynamical simulations with sub-pc resolution (e.g. Fujita et al. 2009, Dijkstra \& Kramer 2012). This requirement underlines why it is important to have a complementary top-down approach to the bottom-up analysis by Yajima et al. (2014), whose simulations had a spatial resolution of $250h^{-1}$ comoving pc and a gas mass resolution of $M=3\times 10^5h^{-1}M_{\odot}$.
To circumvent the demanding requirements to properly model interstellar Ly$\alpha$ transfer from first principles, this process has been represented by highly simplified models, which include ({\it i}) the `shell' model, which consists of a Ly$\alpha$ source surrounded by a geometrically thin shell of neutral, dusty hydrogen, which is (typically) outflowing (see e.g. Ahn et al. 2003, Verhamme et al. 2006, Gronke et al. 2015a). The shell model (which contains seven free parameters) has been remarkably successful at reproducing observed Ly$\alpha$ spectral line profiles (e.g. Verhamme et al. 2008, Hashimoto et al. 2015, Yang et al. 2016, though some issues have been pointed out by Barnes \& Haehnelt 2010, Kulas et al. 2012, Chonis et al. 2013); and ({\it ii}) the `clumpy ISM' model, which consists of a (large) collection of spherical clumps that contain dusty, neutral hydrogen gas, embedded within a hot inter-clump medium, and which represent simplified versions of multiphase interstellar media (e.g. Neufeld 1991, Hansen \& Oh 2006, Laursen et al. 2013, Gronke \& Dijkstra 2014). Clumpy models naturally give rise to a non-zero porosity of the neutral gas, and a `continuum covering factor' of neutral gas that is less than $100\%$, both of which facilitate Ly$\alpha$ escape (e.g. Shibuya et al. 2014, Trainor et al. 2015, Rivera-Thorsen et al. 2015). Both sets of simplified models can be interpreted as `sub-grid' models that describe the Ly$\alpha$ transfer on scales that have not been modelled yet from first principles. In shell models, the shell completely surrounds the Ly$\alpha$ source. The escape fraction $f^{\rm ion}_{\rm esc}$\hspace{1mm} is determined by its HI column ($N_{\rm HI}$) as $f^{\rm ion}_{\rm esc}$$(\nu)=\exp[-\sigma_{\rm ion}(\nu)N_{\rm HI}]$, and $f^{\rm ion}_{\rm esc}$\hspace{1mm} is practically binary ($f^{\rm ion}_{\rm esc}$$\approx0$ for $N_{\rm HI} > 10^{17}$ cm$^{-2}$ or $f^{\rm ion}_{\rm esc}$$\approx1.0$ for $N_{\rm HI} ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 10^{17}$ cm$^{-2}$). However, the production rate of Ly$\alpha$ is zero for $f^{\rm ion}_{\rm esc}$$\approx 1.0$, as nebular luminosities depend on $f^{\rm ion}_{\rm esc}$\hspace{1mm} as $\propto (1-f_{\rm esc}^{\rm ion})$ (e.g. Schaerer 2003). The shell model therefore technically gives rise to Ly$\alpha$ emission with $f^{\rm ion}_{\rm esc}$$\neq 0$ only over a finely tuned narrow range of $N_{\rm HI}$ centered on $N_{\rm HI} \sim 10^{17}$ cm$^{-2}$. Here, we focus on the clumpy ISM models. We have recently constructed a large library of clumpy models (Gronke \& Dijkstra 2016). In these models, the clumps have HI column densities large enough to make them opaque to LyC photons. However, there exist sightlines that do not penetrate any clumps, and which allow LyC photons to escape. In clumpy models, $f^{\rm ion}_{\rm esc}$ \hspace{1mm} is related to the fraction of sightlines from the LyC source(s) which do not intersect any clumps (this corresponds to the `picket fence model' of Heckman et al. 2011). The geometry of the clumpy ISM model and its main parameters are based on those described in Laursen et al. (2013). We refer the interested reader to these papers for a more detailed description of how Laursen et al. (2013) constrain their parameters through observed galaxies. Here, we present only a brief description of the model. In the clumpy ISM model, the multiphase ISM is represented by a large number of neutral, spherical `clumps' which are embedded within a hot gas.
The neutral clumps are distributed in a sphere of radius $r_{\rm gal}=5\,$kpc. The clouds themselves have radius $r_{\rm cl}$. The cloud covering factor $f_{\rm cl}$ denotes the total number of clouds from the center of the sphere to its edge, averaged over all sightlines. The content of the cold [hot] clumps [inter-clump medium] is described by $T_{cl},\, n_{{\text{H\MakeUppercase{\romannumeral 1}}} , cl}$ [$T_{ICM},\hspace{1mm} n_{{\text{H\MakeUppercase{\romannumeral 1}}} , ICM}$] for temperature\footnote{The temperature is defined as $b^2\equiv 2k_{\rm B}T/m_{\rm p}$, where $b^2=v^2_{\rm th}+v^2_{\rm turb}$. Here $v_{\rm th}$ [$v_{\rm turb}$] denotes the thermal [turbulent] velocity of the gas.} and the number density of hydrogen, respectively. The dust optical depth through the clouds per unit path-length is given by $\sigma_d (Z_{\rm cl}/Z_{\rm sun})n_{{\text{H\MakeUppercase{\romannumeral 1}}} }$ where $\sigma_d=1.58\times 10^{-21} \, {\rm cm}^{2}$ \citep{Pei92,Laursen09}, and where $Z_{\rm cl}$ denotes the `metallicity' of the cloud (the ICM has metallicity $Z_{ICM}\equiv \zeta_Z Z_{\rm cl}$). Following previous analyses, we assume that there is no further structure to the cold clumps. That is, we do not further split up the neutral clumps into `warm' and `cold' neutral media, as is the case for realistic multiphase gases (e.g. McKee \& Ostriker 1977). The clumps are outflowing\footnote{Changing the sign of $v(r)$ only `flips' the emerging Ly$\alpha$ spectrum around $x=0$, and leaves our $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} unaffected.} with a velocity profile \begin{equation} v(r) = v_{\infty,{\rm cl}}\left\{1 - \left(\frac{r}{r_{\text{min}}}\right)^{1-\beta_{\rm cl}}\right\}^{1/2} \label{eq:velocity_profile} \end{equation} for $r>r_{\text{min}} = 1\,{\rm kpc}$ and otherwise zero (Steidel et al. 2010, Laursen et al. 2013). In addition to this, the clouds have a random, isotropic velocity distribution which is Gaussian with a standard deviation $\sigma_{cl}$. Ly$\alpha$ photons are emitted randomly following an exponential radial volume emissivity profile $\epsilon_{{\rm Ly}\alpha}(r)=\mathcal{N}\exp(-r / H_{\rm em})$ where $\mathcal{N}$ is a normalization constant, and $r$ is the distance to the center of the sphere. A photon is emitted {\it inside} a cloud with probability $P_{\rm cl}$, in which case it must first escape from its birth cloud. The frequency of the photon is drawn from a Gaussian with standard deviation $\sigma_i$. In all our models, we assume that the LyC emission traces the Ly$\alpha$ emission exactly, including that a fraction $P_{\rm cl}$ is emitted inside a neutral clump. The vast majority of the LyC photons that are emitted inside a cloud do not escape. We thus need $14$ parameters to completely characterize our models\footnote{Note that the parameters given here differ slightly from those used in Gronke \& Dijkstra (2014). There, we ignored the filling of the ICM since we were interested in the (enhancement of the) \ifmmode{{\rm Ly}\alpha}\else Ly$\alpha$\ \fi escape fraction.}. Laursen et al. (2013) discuss plausible ranges for each parameter based on theoretical models, and observations of the ISM in the Milky Way, nearby dwarf galaxies, Ly$\alpha$ emitters (LAEs) and drop-out galaxies out to $z\sim 6$. Our fiducial model adopts the central value of the range quoted in Laursen et al. (2013) as `reasonable', with the exception of the outflow velocity $v_{\infty, {\rm cl}}$ for which Laursen et al.
(2013) chose deliberately small values. Values for each parameter are listed in Table~\ref{tab:models} shown in the Appendix. We assembled a library of $2,500$ spectra (using $\sim 10,000$ \textit{escaped} photons each). We drew each parameter uniformly\footnote{Note that $n_{{\text{H\MakeUppercase{\romannumeral 1}}} , {\rm ICM}},\,n_{\rm d, ICM},\,T_{\rm ICM},\,T_{\rm cl},\,Z_{\rm cl}$ and $\zeta_Z$ were drawn uniformly in log-space.} from the range indicated in Table~\ref{tab:models}, which is loosely based on the `extreme' range in Laursen et al. (2013). This choice gives us a suite of empirical, simplified models of the multi-phase ISM that span the wide range of physical conditions encountered in observed galaxies. \section{Results: Correlation Between $f^{\rm ion}_{\rm esc}$ \hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$}\label{sec:results} Figure~\ref{fig:1} shows $f^{\rm ion}_{\rm esc}$ \hspace{1mm} as a function of $f^{{\rm Ly}\alpha}_{\rm esc}$. Each cross represents a Monte-Carlo radiative transfer simulation for one random realization of a clumpy ISM model. The color of the cross denotes $f_{\rm cl}$. \begin{figure} \includegraphics[width=9.0cm,angle=0]{fig1.pdf} \vspace{0mm} \caption[]{The ionizing photon escape fraction, $f^{\rm ion}_{\rm esc}$, as a function of the Ly$\alpha$ escape fraction, $f^{{\rm Ly}\alpha}_{\rm esc}$, for a suite of 2500 clumpy ISM models. Each {\it cross} represents the angle-averaged escape fraction for a complete Ly$\alpha$ Monte-Carlo radiative transfer calculation for one particular parametrization of the clumpy ISM model. The {\it color} of the {\it crosses} denotes the cloud covering factor $f_{\rm cl}$. This plot shows that there is a correlation between the two parameters: galaxies with low $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} have a low $f^{\rm ion}_{\rm esc}$, while galaxies with high $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} show a large spread in $f^{\rm ion}_{\rm esc}$, driven strongly by $f_{\rm cl}$. } \label{fig:1} \end{figure} There are several take-away points from this plot. \begin{enumerate}[leftmargin=*] \item The $2500$ models give rise to significant variation in $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} (which spans $\sim$ 3 orders of magnitude) and $f^{\rm ion}_{\rm esc}$\hspace{1mm} (which spans $\sim 4$ orders of magnitude). Models that give rise to $f^{{\rm Ly}\alpha}_{\rm esc}$ $~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 0.1-0.2$ would correspond to galaxies with relatively `strong' Ly$\alpha$ emission, such as Ly$\alpha$ emitters. Our models therefore give rise both to a population of `Ly$\alpha$ emitters' and to weaker Ly$\alpha$ sources such as drop-out galaxies with weak Ly$\alpha$ emission. Our results also indicate that at fixed $f^{{\rm Ly}\alpha}_{\rm esc}$, the dispersion in $f^{\rm ion}_{\rm esc}$\hspace{1mm} can be large. \item When $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} is small, then $f^{\rm ion}_{\rm esc}$\hspace{1mm} is small. Ly$\alpha$ photons are destroyed most efficiently when they encounter, and scatter in, many different clumps. The number of scattering events (or `cloud interactions') scales as $\mathcal{N}_{\rm cl}\propto f^2_{\rm cl}$ ($f_{\rm cl} \gg 1$, Hansen \& Oh 2006).
If $f_{\rm cl}$ is larger, then the Poisson probability of having clear sightlines becomes exponentially smaller: the Poisson probability that a sightline from $r=0$ intersects zero clumps equals\footnote{The expression for $P(N_{\rm clump}=0|f_{\rm cl})$ which properly includes the Ly$\alpha$ emissivity profile $\epsilon_{{\rm Ly}\alpha}(r)$ must take into account that the probability $P(N_{\rm clump}=0|f_{\rm cl},r)$ depends on emission direction for $r \neq 0$. This makes the expression for $P(N_{\rm clump}=0|f_{\rm cl})$ a bit more complicated but preserves the exponential dependence on $f_{\rm cl}$. } $P(N_{\rm clump}=0|f_{\rm cl},r=0)=(1-P_{\rm cl})\exp(-f_{\rm cl})$, where $1-P_{\rm cl}$ denotes the probability that the LyC photon was {\it not} emitted inside a cloud. This result may make it difficult to explain the inferred $f^{\rm ion}_{\rm esc}$$\sim 0.1-0.2$ for a small subset of LBGs (e.g. Iwata et al. 2009, Micheva et al. 2016). This apparent discrepancy can be alleviated in five ways: ({\it i}) resonant scattering of Ly$\alpha$ off residual HI gas in the diffuse IGM can suppress the observed Ly$\alpha$ flux by an additional factor of $1.5-2.0$ depending on redshift (e.g. Laursen et al. 2011), which should be applied to our predicted $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} when comparing to observations; ({\it ii}) the fraction of LBGs with claimed LyC detections is very small, which suggests this population is rare, and not captured by our analysis in spite of our coverage of a broad range of ISM physical conditions; ({\it iii}) while LBGs generally have smaller $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} than LAEs, $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} appears correlated with the Ly$\alpha$ EW (e.g. Trainor et al. 2015, Micheva et al. 2016), which itself scales as EW$\propto (1-$$f^{\rm ion}_{\rm esc}$$)$$f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm}(see \S~\ref{sec:lyaprod}). This suggests that those LBGs that show LyC leakage may in fact have larger $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} than the LBG population as a whole; ({\it iv}) for very large $f^{\rm ion}_{\rm esc}$\hspace{1mm}, the production rate of Ly$\alpha$ decreases, which mimics a low $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} (see \S~\ref{sec:lyaprod} for more discussion on this); ({\it v}) each cross in Figure~\ref{fig:1} represents an {\it angle-average} of the escape fractions for each of the 2500 models. The `apparent' $f^{\rm ion}_{\rm esc}$\hspace{1mm} can be larger along sightlines which do not intersect any clumps. The Ly$\alpha$ escape fraction can also be enhanced for these same sightlines, though scattering of Ly$\alpha$ photons suppresses the angular variation of $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} (see Gronke \& Dijkstra 2014). The angular variation of both escape fractions can be represented by replacing each cross in Figure~\ref{fig:1} with a distribution which is elongated along the $f^{\rm ion}_{\rm esc}$-direction, which may help explain why objects exist for which $f^{\rm ion}_{\rm esc}$\hspace{1mm} is high, while $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} is low. \item The dispersion in $f^{\rm ion}_{\rm esc}$\hspace{1mm} at fixed $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} increases with $f^{{\rm Ly}\alpha}_{\rm esc}$. In other words, as we increase $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} the probability of having a large $f^{\rm ion}_{\rm esc}$\hspace{1mm} increases. There are a number of ways to boost $f^{{\rm Ly}\alpha}_{\rm esc}$.
These include reducing the dust content of the neutral clumps, increasing the outflow velocity, and reducing $f_{\rm cl}$. As mentioned above, reducing $f_{\rm cl}$ enhances the Poisson probability that there exist sightlines that do not intersect any clumps, which increases $f^{\rm ion}_{\rm esc}$. \citet{Matthee16b} recently found $f^{\rm ion}_{\rm esc}$$~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 60\%$ for 8 H$\alpha$ emitters (HAEs) out of a sample of 191. Two of these LyC emitting HAEs have a high $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} (Matthee et al. 2016a). For the remaining 6 the Ly$\alpha$ data are not yet good enough to constrain $f^{{\rm Ly}\alpha}_{\rm esc}$. \item The color coding of the {\it crosses} in Figure~\ref{fig:1} shows clearly that the models with high $f^{\rm ion}_{\rm esc}$\hspace{1mm} are those with low $f_{\rm cl}$. This again reflects that a lower average number of clouds from the center to the edge of the `galaxy' boosts the Poisson probability of having clean sightlines. \end{enumerate} The strong $f_{\rm cl}$-dependence of $f^{\rm ion}_{\rm esc}$\hspace{1mm} is easily understood from analytic arguments (a numerical sketch is given at the end of this section). The simulations indicate that this result is not significantly affected by varying the other parameters. The {\it black dashed line} shows the best linear fit through the collection of data points. We stress that the purpose of this line is to illustrate that $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} are correlated. The exact `best-fit' correlation depends on how the 14 model-parameters were sampled: different PDFs for the model parameters would likely yield a different best-fit correlation. This may help explain why our correlation differs quantitatively from that found by Yajima et al. (2014), who found few objects with high $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and low $f^{\rm ion}_{\rm esc}$. This difference may also reflect that ({\it i}) the simulations do not resolve the multi-phase interstellar medium, and may therefore not properly capture that Ly$\alpha$ photons avoid destruction by dust by scattering off the surface of dense, neutral clumps which contain most of the dust, and ({\it ii}) our model artificially enhances this surface scattering effect by representing the multi-phase ISM as a two-phase medium. We stress that the purpose of our calculations was not to derive the correct correlation, which would be overambitious, but rather to show that for reasonable parameters for the multiphase ISM, a correlation exists. Finally, it is worth mentioning that the fact that both $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} are affected most strongly by $f_{\rm cl}$ implies that the precise structure of the clumps (i.e. the presence of a `cold neutral medium' inside the clumps) would introduce changes that are subdominant to those introduced by $f_{\rm cl}$.
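To make the analytic argument referenced above explicit, the sketch below evaluates the Poisson estimate for a central source, $f^{\rm ion}_{\rm esc} \approx (1-P_{\rm cl})\,\exp(-f_{\rm cl})$ (the $r=0$ expression quoted in the footnote above; the emissivity-weighted average is more complicated but retains the exponential dependence on $f_{\rm cl}$). The value of $P_{\rm cl}$ used here is an arbitrary illustrative choice.
\begin{verbatim}
# Python sketch: Poisson estimate of the LyC escape fraction for a
# central source in a clumpy medium,
#   f_esc^ion ~ (1 - P_cl) * exp(-f_cl),
# with f_cl the mean number of clumps per sightline and P_cl the
# probability that a photon is emitted inside a clump.
import math

P_cl = 0.35   # illustrative in-clump emission probability
for f_cl in (1, 2, 4, 8):
    f_esc = (1.0 - P_cl) * math.exp(-f_cl)
    print("f_cl = %d -> f_esc^ion ~ %.1e" % (f_cl, f_esc))
# f_cl = 8 gives f_esc^ion ~ 2e-4: models with the largest covering
# factors populate the low-f_esc^ion end of Figure 1.
\end{verbatim}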
\section{Discussion}\label{sec:discuss} \subsection{The $M_{\rm UV}$-dependence of $f^{{\rm Ly}\alpha}_{\rm esc}$}\label{sec:muvfesc} The `Ly$\alpha$ fraction' denotes the fraction of galaxies that have a Ly$\alpha$ emission line stronger than some threshold equivalent width. Observations indicate that the Ly$\alpha$ fraction increases with $M_{\rm UV}$ (e.g. Stark et al. 2010, Pentericci et al. 2011, Caruana et al. 2012, Ono et al. 2012, Schenker et al. 2012, Pentericci et al. 2014, Caruana et al. 2014). Gronke et al. (2015b) combined observations of the UV-LF with current constraints on the UV-dependence of the Ly$\alpha$ fraction, and predicted that Ly$\alpha$-LFs should have steeper faint ends than the UV-LFs. Specifically, if we denote the faint-end slope of the Ly$\alpha$ LF with $\alpha_{{\rm Ly}\alpha}$, then $\alpha_{{\rm Ly}\alpha}=\alpha_{\rm UV}-x$ where $x\sim 0.2-0.4$ (see Fig~2 of Gronke et al. 2015b). Recent measurements of the faint-end slope of the Ly$\alpha$ emitter luminosity function indicate that $\alpha_{{\rm Ly}\alpha}\sim -2.2\pm 0.2$ at $z=5.7$ (Dressler et al. 2015), and that $\alpha_{{\rm Ly}\alpha}\sim -1.75\pm 0.1$ at $z\sim 2$ (Konno et al. 2016). These measurements agree well\footnote{Gronke et al. (2015b) only predicted Ly$\alpha$ LFs at $z\geq 3$. We extrapolated their predictions for $\alpha_{{\rm Ly}\alpha}$ to $z\sim 2$. This same extrapolation would translate to a faint-end slope of the UV-luminosity function at $z\sim 2.3$ that is $\alpha_{\rm UV} \sim -1.5$, which agrees with recent determinations (see Fig~10 of Parsa et al. 2016), though not with all (see e.g. Reddy \& Steidel 2009, who found a steeper $\alpha_{\rm UV}=-1.73\pm 0.07$).} with predictions based on Ly$\alpha$ fraction constraints, and provide independent confirmation that more Ly$\alpha$ radiation emerges per `unit' UV-flux density towards lower UV luminosities. This enhanced emergence of Ly$\alpha$ flux from UV-faint galaxies implies that ({\it i}) the Ly$\alpha$ production rate increases, and/or ({\it ii}) $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} increases towards lower UV-luminosities. Recent work has shown that the ionizing photon production efficiency, $\xi_{\rm ion}$ (Robertson et al. 2013), appears to be independent of $M_{\rm UV}$ in the range $-21<M_{\rm UV} <-19$ at $z\sim 4-5$ (see Fig~1 of Bouwens et al. 2016, which also shows that there is still a large scatter). The Ly$\alpha$ production efficiency should then also not depend on $M_{\rm UV}$, as Ly$\alpha$ production is directly tied to ionizing photon production. In contrast, over this same range in $M_{\rm UV}$, the Ly$\alpha$ fraction rises rapidly (see Fig~13 of Stark et al. 2010). This suggests that the enhanced visibility of Ly$\alpha$ flux is mostly driven by an enhanced escape fraction, and provides the basis for our statement that there is observational support that $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} increases towards lower UV-luminosities (or towards higher $M_{\rm UV}$). Trainor et al. (2015) note that in LAEs with H$\alpha$ detections, the inferred $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} correlates significantly with Ly$\alpha$ EW, which provides independent confirmation that Ly$\alpha$ EW is an indicator of Ly$\alpha$ escape. \citet{Oya16} recently found that the Ly$\alpha$-EW PDF, and therefore the Ly$\alpha$ fraction, depends on stellar mass, $M_*$. This supports the notion that $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} increases towards lower $M_*$. Our finding of a correlation between $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and $f^{\rm ion}_{\rm esc}$\hspace{1mm} then implies that $f^{\rm ion}_{\rm esc}$\hspace{1mm} also increases towards lower $M_*$. Faisst (2016) independently came to this conclusion by combining the observed correlation between $f^{\rm ion}_{\rm esc}$\hspace{1mm} and the [O III]$\lambda$5007/[O II]$\lambda$3727 line ratio, and the (anti-)correlation of this line ratio with $M_*$ inferred from local high-z analogues.
We stress that we focus on the $M_{\rm UV}$-dependence of $f^{\rm ion}_{\rm esc}$\hspace{1mm} because this allows us to directly connect our results to the UV-LF of continuum selected galaxies, which is routinely used to quantify the LyC volume emissivity of galaxies during cosmic reionization. \subsection{Implications for Reionization}\label{sec:reionization} One of the main open questions in reionization is whether galaxies provided enough photons to reionize the Universe, and if so, which galaxies provided the dominant contribution to the ionizing background that drove reionization. These questions are commonly addressed by extrapolating the faint-end of the (non-ionizing) UV-LF of drop-out galaxies to some minimum UV luminosity (corresponding to a maximum $M^{\rm lim}_{\rm UV}$), and then checking whether these galaxies provided enough photons to either reionize the Universe, or to keep it ionized \citep[e.g.][]{Wilkins11,Shull12,Kuhlen,Fink,Robertson}. This approach introduces two parameters related to the UV-LF: ({\it i}) its faint end slope ($\alpha_{\rm UV}$), and ({\it ii}) its minimum cut-off luminosity ($M^{\rm lim}_{\rm UV}$). For a fixed set of parameters $(\alpha_{\rm UV},M^{\rm lim}_{\rm UV})$, the question whether galaxies reionized the Universe then translates to a constraint on $f^{\rm ion}_{\rm esc}$. This constraint on $f^{\rm ion}_{\rm esc}$\hspace{1mm} represents a (weighted) average over the full population of UV emitting galaxies. There have been numerous theoretical efforts to model the faint end slope of the UV-LF and where it may flatten (e.g. Jaacks et al. 2013, Mason et al. 2015, O'Shea et al. 2015, Liu et al. 2016). \begin{figure} \vspace{-10mm} \includegraphics[width=10.0cm,angle=0]{fig2.pdf} \vspace{-5mm} \caption[]{An increase in $f^{\rm ion}_{\rm esc}$\hspace{1mm} towards lower UV luminosities gives rise to a steepening of the LyC luminosity function (LF), which can be mimicked with a steeper UV LF and a constant $f^{\rm ion}_{\rm esc}$. The {\it top panel} of this Figure shows the relative contribution $d\epsilon_{\rm ion}/dM_{\rm UV}$ (in arbitrary units) to the ionizing volume emissivity $\epsilon_{\rm ion}$ at $z=6$ by galaxies with $M_{\rm UV} \pm dM_{\rm UV}/2$ for the measured $\alpha_{\rm UV}=-1.85$ ({\it black solid line}), and steeper LFs with $\alpha_{\rm UV}=-2.25$ ({\it blue dotted line}) and $\alpha_{\rm UV}=-2.05$ ({\it red dashed line}). While we cannot predict (yet) which $\alpha_{\rm UV}$ mimics the true $M_{\rm UV}$-dependence of $f^{\rm ion}_{\rm esc}$, this plot visually illustrates the enhanced contribution of UV-faint galaxies to cosmic reionization. The {\it lower panel} shows the ratio between the models in the {\it top panel} and the fiducial model (shown as the {\it black solid line}).} \label{fig:3} \end{figure} If Ly$\alpha$ and LyC escape are correlated, then we also expect $f^{\rm ion}_{\rm esc}$\hspace{1mm} to increase towards lower UV luminosities. Just like the case for Ly$\alpha$, if we were to plot the LyC luminosity function (i.e. the number density of galaxies as a function of LyC luminosity), it would be steeper than the UV luminosity function. This steepening can be mimicked by a model in which $f^{\rm ion}_{\rm esc}$\hspace{1mm} does not depend on $M_{\rm UV}$, and in which the faint-end slope of the UV-luminosity function is made steeper.
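The size of this effect can be sketched directly from the Schechter parametrization: the contribution per unit magnitude to the emissivity is $d\epsilon_{\rm ion}/dM_{\rm UV} \propto L\,\phi(M_{\rm UV})$. The Python sketch below compares the fiducial slope to a steepened one. The normalization convention (here a fixed $\phi^*$) is arbitrary and affects the exact ratios, which therefore differ in detail from those shown in Fig.~\ref{fig:3}; only the qualitative trend should be read off.
\begin{verbatim}
# Python sketch: relative contribution to the ionizing emissivity per
# unit magnitude, d(eps)/dM ~ L * phi(M), for a Schechter UV LF with
# the z=6 parameters quoted in the text (M* = -20.2) and faint-end
# slopes alpha_UV = -1.85 (fiducial) and -2.25 (steepened).
# phi* is held fixed and drops out of the ratios.
import math

M_star = -20.2

def deps_dM(M, alpha):
    x = 10 ** (-0.4 * (M - M_star))        # L / L*
    phi = x ** (alpha + 1) * math.exp(-x)  # Schechter shape per mag
    return x * phi                         # weight by luminosity

for M in (-20, -18, -16, -14):
    r = deps_dM(M, -2.25) / deps_dM(M, -1.85)
    print("M_UV = %d: steepened / fiducial = %.1f" % (M, r))
# The relative contribution of UV-faint galaxies grows rapidly with
# a steeper faint-end slope (cf. the lower panel of the figure).
\end{verbatim}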
Figure~\ref{fig:3} visually illustrates the impact of this steepening, and the {\it top panel} shows the relative contribution $d\epsilon_{\rm ion}/dM_{\rm UV}$ to the ionizing volume emissivity $\epsilon_{\rm ion}$ by galaxies in the range $M_{\rm UV} \pm dM_{\rm UV}/2$. The {\it black solid line} shows $d\epsilon_{\rm ion}/dM_{\rm UV}$ for the `standard' Schechter function parameters at $z=6$, $(\alpha_{\rm UV}, M_*)=(-1.85,-20.2)$ (using the fitting formula from Bouwens et al. 2015). For the {\it blue dotted line} [{\it red dashed line}] we increased $\alpha_{\rm UV} \rightarrow -2.25$, which represents the steepening relevant for the Ly$\alpha$ LF [$\alpha_{\rm UV} \rightarrow -2.05$, which represents an intermediate case]. While we do not know which $\alpha_{\rm UV}$ mimics the correct $M_{\rm UV}$ dependence of $f^{\rm ion}_{\rm esc}$, the figure does illustrate the possible enhanced contribution of UV-faint galaxies to cosmic reionization\footnote{Decreasing $\alpha$ reduces the contribution of UV-bright galaxies, i.e. $M_{\rm UV} < M_*$, to $d\epsilon_{\rm ion}/dM_{\rm UV}$ because reducing $\alpha$ also affects the bright end of the luminosity function.}. The enhancement is illustrated in the {\it lower panel} of Figure~\ref{fig:3}, which shows the ratio of the models shown in the {\it top panel}. This plot shows that in a model with $\alpha_{\rm UV}=-2.25$ galaxies with $M_{\rm UV} \sim -16$ contribute $~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} 10$ times more to the total ionizing photon production rate than when $\alpha=-1.85$. This model extends down to $M_{\rm UV}=-14$, which corresponds (roughly) to the limit down to which the UV-LF has been constrained to be a power-law \citep[see e.g.][also see O'Shea et al. 2015 and Liu et al. 2016 for theoretical arguments why the UV-LF may flatten at $M_{\rm UV} ~\rlap{$>$}{\lower 1.0ex\hbox{$\sim$}} -14$]{Alavi14,Parsa16,Livermore16}. \begin{figure*} \centering \includegraphics[width=16.0cm,angle=0]{fig3.pdf} \vspace{0mm} \caption[]{{\it Left panel}: Ly$\alpha$ spectra emerging from the 25 models with the lowest $f^{\rm ion}_{\rm esc}$$<10^{-4}$. {\it Right panel}: Ly$\alpha$ spectra emerging from the $25$ models with the highest $f^{\rm ion}_{\rm esc}$$>0.37$. This Figure shows that for models with low $f^{\rm ion}_{\rm esc}$\hspace{1mm} the Ly$\alpha$ spectra are redshifted, asymmetric, and broad. The width and velocity off-set of the models with the {\it lowest} $f^{\rm ion}_{\rm esc}$\hspace{1mm} are larger than what has been observed, which is likely because of the simplified representation of the multiphase ISM (see text). Although models with high $f^{\rm ion}_{\rm esc}$\hspace{1mm} exhibit a variety of spectral line shapes, their spectra are generally narrower and more symmetric.} \label{fig:spec} \end{figure*} Finally, Ly$\alpha$ escape in clumpy ISM models (which were introduced to reflect the multi-phase nature of the ISM) is most strongly regulated by the covering factor, and to a lesser extent by other parameters such as the dust content (Gronke \& Dijkstra 2016). Star forming galaxies are known to become bluer towards higher redshift, which is taken as evidence that they become less dusty towards higher redshifts \citep[e.g.][]{Finkcolors,Bouwenscolor}. Our conclusions would break down if Ly$\alpha$ escape were driven {\it entirely} by the changing dust content of an otherwise identical scattering medium.
In this case, however, we would expect both the width and (possibly) the velocity shift of the Ly$\alpha$ line to {\it increase} with $M_{\rm UV}$, because Ly$\alpha$ scattering causes photons to diffuse in frequency space, and to broaden the Ly$\alpha$ spectral line shape. If only dust were regulating Ly$\alpha$ escape, then dust would suppress this frequency diffusion and cause lines to be narrower (see e.g. Fig~8 of Laursen et al. 2009). The evolution in Ly$\alpha$ line width and shift predicted by the `pure dust' scenario is not consistent with observations, which indicate that Ly$\alpha$ spectra of Ly$\alpha$ emitting galaxies, if anything, tend to get narrower: Konno et al. (2016) have shown that shell-model fits to observed Ly$\alpha$ line profiles favor increasingly low HI column densities towards higher $z$ for otherwise identical shell model parameters. The reduced HI column density introduces less frequency diffusion, and makes Ly$\alpha$ line profiles narrower\footnote{Also, the velocity off-set of the peak flux density of the Ly$\alpha$ spectral line shape decreases towards UV-fainter galaxies \citep[e.g.][]{Erb14,Song14}.}. In addition, there is observational support that the covering factor of low-ionization metal lines decreases with $z$ (e.g. Jones et al. 2013). If these metals trace cold, neutral gas, then this supports the notion that Ly$\alpha$ escape increases towards higher redshift (at least partly) because of the evolution in the covering factor of neutral gas. \subsection{Connecting $f^{\rm ion}_{\rm esc}$\hspace{1mm} to the Ly$\alpha$ Spectrum}\label{sec:spectrum} Figure~\ref{fig:1} showed that $f^{\rm ion}_{\rm esc}$\hspace{1mm} depends sensitively on $f_{\rm cl}$, which was due to the exponential dependence on $f_{\rm cl}$ of the Poisson probability of having sightlines with no clumps. The parameter $f_{\rm cl}$ is known to play a key role in Ly$\alpha$ transfer through clumpy media (Hansen \& Oh 2006). We have demonstrated that $f_{\rm cl}$ is one of the 14 parameters of the clumpy models that most strongly affects the emerging Ly$\alpha$ spectrum (Gronke \& Dijkstra 2016). This implies immediately that $f^{\rm ion}_{\rm esc}$\hspace{1mm} should be closely correlated with spectral features of the Ly$\alpha$ line. Figure~\ref{fig:spec} compares Ly$\alpha$ spectra for 25 models with the {\it highest} $f^{\rm ion}_{\rm esc}$$>0.37$ ({\it right panel}) to 25 models with the {\it lowest} $f^{\rm ion}_{\rm esc}$$<10^{-4}$ ({\it left panel}). These two panels illustrate clearly that a high $f^{\rm ion}_{\rm esc}$\hspace{1mm} corresponds to having narrower, more symmetric Ly$\alpha$ lines. Models that have the highest $f^{\rm ion}_{\rm esc}$\hspace{1mm} show a variety in their spectra. We caution that the width and velocity off-set of the models with the {\it lowest} $f^{\rm ion}_{\rm esc}$\hspace{1mm} are larger than what has been observed. This is likely an artefact of the models: models with the lowest $f^{\rm ion}_{\rm esc}$\hspace{1mm} have the highest $f_{\rm cl} \sim 8$. Ly$\alpha$ photons typically scatter off $\sim f^2_{\rm cl}$ separate clouds before escaping (e.g. Hansen \& Oh 2006), and each `cloud-interaction' can impart a noticeable Doppler boost on the Ly$\alpha$ photon, which broadens the Ly$\alpha$ spectral line. The connection between the Ly$\alpha$ spectral shape and ionizing photon escape was pointed out previously by Behrens et al. (2014, in the context of modified shell models) and Verhamme et al. (2015, in the context of shell models).
In these models, LyC escape translated to ({\it i}) significant Ly$\alpha$ flux at the systemic velocity and/or ({\it ii}) a small peak separation ($\Delta v ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 300$ km s$^{-1}$). In multiphase models, it is not possible to point out features in the spectrum that guarantee a LyC detection, partly because of the larger variety in the spectra associated with models that have larger $f^{\rm ion}_{\rm esc}$. In addition, in clumpy models Ly$\alpha$ photons can escape after scattering off a single gas cloud, and close to the frequency at which they were initially emitted (also see Hansen \& Oh 2006, Laursen et al. 2013). Moreover, while narrow Ly$\alpha$ lines that are symmetric around the systemic velocity of the host galaxy translate to a higher probability of being a LyC emitting galaxy, LyC escape is highly anisotropic (Ly$\alpha$ escape less so, see Gronke \& Dijkstra 2014), which further complicates making robust predictions for whether we can observe LyC flux from a galaxy or not. However, anisotropic escape of LyC photons similarly affects other promising LyC-leakage indicators\footnote{On the other hand, LyC escape enhances the ionizing radiation field in close proximity to star forming galaxies, which can increase the surface brightness in fluorescent Ly$\alpha$ and H$\alpha$ emission (see Mas-Ribas \& Dijkstra 2016).} such as the [O III]$\lambda$5007/[O II]$\lambda$3727 line ratio (Jaskot \& Oey 2013, Nakajima \& Ouchi 2014). The low-redshift `Lyman Break Analogue' (LBA, Heckman et al. 2011, Borthakur et al. 2015) and `green pea galaxy' (Henry et al. 2015, Yang et al. 2016, Izotov et al. 2016) with reported detections of LyC escape had unusual Ly$\alpha$ spectra, in the sense that the spectra contained significant flux blueward of the Ly$\alpha$ resonance. These spectra were different from those shown in Figure~\ref{fig:spec} in that they had deep `absorption' troughs separating the blue and red peaks, which are absent from the spectra in Figure~\ref{fig:spec}. The absence of these absorption troughs in the theoretical spectra may reflect the lack of trace amounts of residual HI (possibly in the CGM) at the systemic velocity (see Gronke \& Dijkstra 2016). In any case, the presence of flux blueward of the Ly$\alpha$ resonance indicates that the lines are more symmetric around the Ly$\alpha$ resonance than is common. \subsection{Suppressed Ly$\alpha$ Production for large $f^{\rm ion}_{\rm esc}$}\label{sec:lyaprod} The Ly$\alpha$ production rate scales as $\propto (1-$$f^{\rm ion}_{\rm esc}$$)$. The total Ly$\alpha$ flux that we receive from a distant galaxy, as well as the equivalent width (EW) of the line, both scale as $\propto (1-$$f^{\rm ion}_{\rm esc}$$)$$f^{{\rm Ly}\alpha}_{\rm esc}$. Figure~\ref{fig:5} shows $\log$$f^{\rm ion}_{\rm esc}$ \hspace{1mm} as a function of $\log$[$f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)]$. The turnover at high-$f^{\rm ion}_{\rm esc}$\hspace{1mm} and high-$f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} reflects that the quantity $f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)$ cannot exceed $(1-$$f^{\rm ion}_{\rm esc}$$)$ (which is indicated as the {\it red dotted line}). At fixed $f^{\rm ion}_{\rm esc}$\hspace{1mm} there exists a distribution of $f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)$, which reflects the dispersion in $f^{{\rm Ly}\alpha}_{\rm esc}$. The average of this distribution peaks at some maximum $f^{\rm ion}_{\rm esc,max}$\hspace{1mm} (also see Dijkstra et al. 2014).
The value of $f^{\rm ion}_{\rm esc,max}$\hspace{1mm} is model-dependent, and even in the context of our model it depends on how we sampled our 14 parameters. It nevertheless seems reasonable to assume that $f^{\rm ion}_{\rm esc,max}$$\sim 0.1-0.5$. For large $f^{\rm ion}_{\rm esc}$$>$$f^{\rm ion}_{\rm esc,max}$\hspace{1mm} the Ly$\alpha$ luminosity drops again, which mimics a reduction in $f^{{\rm Ly}\alpha}_{\rm esc}$. We expect this `apparent' reduction in the escape fraction to translate to a reduction in the Ly$\alpha$ fraction and/or a flattening of the Ly$\alpha$ luminosity function at low Ly$\alpha$ luminosities\footnote{A flattening has possibly been detected at $z\sim 3$ by Rauch et al. (2008) at $L_{\alpha} \sim 10^{41}$ erg s$^{-1}$, which would probe galaxies with $M_{\rm UV} \sim -15 \pm 1$ (see Gronke et al. 2015 for a discussion).}. No evidence for either this drop or this flattening exists at $z\sim 6$ in current data (though it may be present at $z\sim 6.5$, see Fig~7 of Matthee et al. 2015), which implies that this effect is not important in current observations at $z\sim 6$. More quantitatively, Dressler et al. (2015) infer a steep faint-end slope of the LAE LF down to $L_{\alpha} < 10^{42}$ erg s$^{-1}$. Gronke et al. (2015b) show that a Ly$\alpha$ luminosity of $L_{\alpha} \sim 10^{42}$ erg s$^{-1}$ probes galaxies with $M_{\rm UV} \sim -18 \pm 1$ (see their Fig~3). This therefore implies that $f^{\rm ion}_{\rm esc}$$<$$f^{\rm ion}_{\rm esc,max}$, and that this effect is therefore not important, down to $M_{\rm UV} \sim -18 \pm 1$. At $z>6$ there is observational evidence for a reduction in the Ly$\alpha$ flux from star forming galaxies compared to expectations based on extrapolations from lower redshift observations (see e.g. Dijkstra 2014 and references therein). There are indications that this reduction is more severe for UV-faint galaxies (e.g. Ono et al. 2012, Pentericci et al. 2014), which is commonly interpreted as a signature of inhomogeneous reionization, but might also reflect that $f^{\rm ion}_{\rm esc}$$\rightarrow$$f^{\rm ion}_{\rm esc,max}$\hspace{1mm} in UV-faint galaxies at $z\sim 7$ (also see Dijkstra et al. 2014). It is theoretically possible to distinguish between these two scenarios: ({\it i}) reionization leaves a unique signature on the angular clustering of Ly$\alpha$ emitters (McQuinn et al. 2007, Mesinger \& Furlanetto 2008, Jensen et al. 2013, Sobacchi \& Mesinger 2015), which can be measured with Subaru's Hyper Suprime-Cam\footnote{\url{http://www.naoj.org/Projects/HSC/}} (see e.g. Jensen et al. 2014, Sobacchi \& Mesinger 2015); ({\it ii}) if Ly$\alpha$ disappears as a result of $f^{\rm ion}_{\rm esc}$\hspace{1mm} becoming large, then we should see a similar decrease in the line strength of other non-resonant nebular lines such as H$\alpha$ (and H$\beta$), something that can be tested with the James Webb Space Telescope\footnote{\url{http://www.jwst.nasa.gov/}} \citep{JWST}. Redshift $z\sim 6$ is particularly interesting as reionization likely had little impact on the observed Ly$\alpha$ flux from galaxies. Should future data reveal a flattening in the Ly$\alpha$ LF at low Ly$\alpha$ luminosities and/or a reduction in the Ly$\alpha$ fraction at lower UV-luminosities, then this may provide a valuable constraint on $f^{\rm ion}_{\rm esc}$\hspace{1mm} at this redshift.
In addition, understanding whether $f^{\rm ion}_{\rm esc}$\hspace{1mm} introduces a flattening in the Ly$\alpha$ LF at low Ly$\alpha$ luminosities and/or a drop in the Ly$\alpha$ fraction at faint UV luminosities would help us better constrain the role that reionization plays in suppressing the Ly$\alpha$ flux from galaxies at $z>6$. \begin{figure} \includegraphics[width=9.0cm,angle=0]{fig4.pdf} \vspace{0mm} \caption[]{This plot shows $f^{\rm ion}_{\rm esc}$\hspace{1mm} as a function of the `apparent' Ly$\alpha$ escape fraction, $f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)$, which reflects that the production rate of Ly$\alpha$ photons scales as $\propto (1-$$f^{\rm ion}_{\rm esc}$$)$. The {\it red-dotted line} shows the maximum apparent escape fraction $(1-$$f^{\rm ion}_{\rm esc}$$)$. Large $f^{\rm ion}_{\rm esc}$\hspace{1mm} thus also suppresses the observed Ly$\alpha$ flux, mimicking a reduction in $f^{{\rm Ly}\alpha}_{\rm esc}$. This Figure illustrates that there exists a maximum average $f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)$ at some $f^{\rm ion}_{\rm esc}$$\equiv$$f^{\rm ion}_{\rm esc,max}$$\sim 0.1-0.5$ (see text).} \label{fig:5} \end{figure} \subsection{Impact of Delayed Ly$\alpha$ Escape due to Trapping} The escape fraction of LyC photons from a galaxy can vary significantly on time-scales of $\sim 10$ Myr \citep[][]{Kimm14,Ma15}, which corresponds approximately to the life-time of massive stars. Trapping of Ly$\alpha$ photons by HI gas can introduce a lag between the escape of Ly$\alpha$ and LyC photons (Yajima \& Li 2014): Ly$\alpha$ photons scatter inside HI gas when $f^{\rm ion}_{\rm esc}$$ \ll 1$, but are `released' efficiently when low-column channels temporarily open up, which allow LyC photons to escape. Time-variations in $f^{\rm ion}_{\rm esc}$\hspace{1mm} and delayed escape of Ly$\alpha$ have only a minor, positive impact on our results, by slightly more tightly coupling $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and $f^{\rm ion}_{\rm esc}$, as we explain below. Trapping of Ly$\alpha$ photons is limited to time-scales $t_{\rm trap} \ll 10$ Myr, as the typical Ly$\alpha$ trapping time equals $t_{\rm trap}=|x_{\rm p}|t_{\rm cross}$, where $t_{\rm cross}\equiv R/c$ denotes the time it takes radiation to escape in the absence of scattering, and $|x_{\rm p}|\approx 12(N_{\rm HI}/10^{20}\hspace{1mm}{\rm cm^{-2}})^{1/3}(T/10^4\hspace{1mm}{\rm K})^{1/6}$ for a static, uniform, spherical gas cloud with an HI column density $N_{\rm HI}$ and temperature $T$ (Adams 1975). In reality, this estimate provides a strict upper limit to the delay time: velocity gradients and density inhomogeneities reduce $t_{\rm trap}$ (Bonilha et al. 1979, Dijkstra \& Loeb 2008). Laursen et al. (2013) estimated the typical trapping time for Ly$\alpha$ in the clumpy media considered here to be $t_{\rm trap} \sim 2 \times 10^4$ yr. Trapping of Ly$\alpha$ photons is therefore unlikely to introduce a lag between the escape of Ly$\alpha$ and LyC photons at a level where it has observable consequences. Moreover, if anything, this effect would serve to more tightly couple Ly$\alpha$ and LyC escape, as LyC escape would be accompanied by the escape of Ly$\alpha$ photons that were trapped inside the HI gas.
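As a consistency check on these numbers, the sketch below evaluates the static upper limit $t_{\rm trap}=|x_{\rm p}|\,R/c$ quoted above, for the 5 kpc sphere of our models and an illustrative column density (the $|x_{\rm p}|$ formula is normalized at $N_{\rm HI}=10^{20}$ cm$^{-2}$):
\begin{verbatim}
# Python sketch: upper limit on the Ly-alpha trapping time,
#   t_trap = |x_p| * R / c,
# with |x_p| ~ 12 (N_HI/1e20)^(1/3) (T/1e4)^(1/6) for a static,
# uniform sphere (Adams 1975).  Clumpiness and velocity gradients
# reduce t_trap well below this estimate.
KPC = 3.086e21   # cm
YEAR = 3.156e7   # s
C = 3.0e10       # cm/s

R = 5.0 * KPC    # radius of the model sphere
N_HI = 1e20      # cm^-2, illustrative column density
T = 1e4          # K

x_p = 12.0 * (N_HI / 1e20) ** (1. / 3.) * (T / 1e4) ** (1. / 6.)
t_cross = R / C / YEAR   # free-streaming crossing time in years
print("t_cross ~ %.1e yr, t_trap <~ %.1e yr" % (t_cross, x_p * t_cross))
# ~1.6e4 yr and ~2e5 yr: even the static upper limit is << 10 Myr.
\end{verbatim}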
\section{Conclusions} \label{sec:conc} The escape fraction of ionizing photons, $f^{\rm ion}_{\rm esc}$, represents one of the great unknowns in our understanding of cosmic reionization. Observational constraints on $f^{\rm ion}_{\rm esc}$\hspace{1mm} are still weak, and theoretical predictions remain incomplete owing to the challenging nature of the calculations. We have computed the correlation between the escape fractions of Ly$\alpha$ ($f^{{\rm Ly}\alpha}_{\rm esc}$) and ionizing (LyC) radiation ($f^{\rm ion}_{\rm esc}$) by performing Monte-Carlo simulations of Ly$\alpha$ radiative transfer through a suite of $2500$ models of dusty, clumpy interstellar media. This represents a `top-down' (empirical) approach to modelling LyC and Ly$\alpha$ transfer through realistic, multiphase interstellar media, and complements the previous `bottom-up' (ab initio) approach by Yajima et al. (2014), who used hydrodynamical simulations to generate models of the ISM. Our main results are: \begin{itemize}[leftmargin=0pt,itemindent=20pt] \item We find that $f^{\rm ion}_{\rm esc}$ \hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} are correlated. The dispersion in $f^{\rm ion}_{\rm esc}$ \hspace{1mm} at fixed $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} increases towards larger $f^{{\rm Ly}\alpha}_{\rm esc}$: galaxies with low $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} have a low $f^{\rm ion}_{\rm esc}$, while galaxies with high $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} show a large spread in $f^{\rm ion}_{\rm esc}$\hspace{1mm} (see Fig~\ref{fig:1}). The dispersion in $f^{\rm ion}_{\rm esc}$\hspace{1mm} is driven by the dispersion in $f_{\rm cl}$, which measures the cloud covering factor. Our results agree qualitatively with those obtained by Yajima et al. (2014, who also found a positive correlation), but quantitatively some differences remain, which reflects that neither approach has converged yet (see the discussion in \S~\ref{sec:model}). While predictions of both $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} are still highly uncertain, the existence of a correlation between the two quantities can be predicted more robustly, which is underlined by the fact that two different, independent approaches confirm the existence of this correlation. The $f^{{\rm Ly}\alpha}_{\rm esc}$-$f^{\rm ion}_{\rm esc}$\hspace{1mm} correlation reflects that the escape of ionizing photons requires that sightlines exist which contain low column densities of atomic hydrogen, i.e. $N_{\rm HI} ~\rlap{$<$}{\lower 1.0ex\hbox{$\sim$}} 1/\sigma_{\rm ion} \approx 10^{17}$ cm$^{-2}$. These same low column density paths provide escape routes for Ly$\alpha$ photons (also see Behrens et al. 2014, Verhamme et al. 2015). At a deeper level, the escape of Ly$\alpha$ is facilitated by outflows, which may also create low column density holes out of galaxies, which in turn permit LyC photons to escape. \item We argued that the positive correlation between $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} is directly relevant for studies of cosmic reionization, as there is increasing observational support from both continuum and line selected galaxies that Ly$\alpha$ escapes more easily from UV-faint galaxies (see \S~\ref{sec:muvfesc}). The correlation between $f^{\rm ion}_{\rm esc}$\hspace{1mm} and $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} then implies that ionizing photons also escape more easily from UV-faint galaxies. This implies that UV-faint galaxies contribute more to the volume emissivity of ionizing photons than implied by the faint-end slope of the UV-luminosity function (\S~\ref{sec:reionization}).
These conclusions may be invalidated if the escape of Ly$\alpha$ is regulated {\it purely} by dust. However, we argued in \S~\ref{sec:reionization} that observations do not support this picture. \item Because the `apparent' Ly$\alpha$ escape fraction, $f^{{\rm Ly}\alpha}_{\rm esc}$$(1-$$f^{\rm ion}_{\rm esc}$$)$, reaches a maximum value for $f^{\rm ion}_{\rm esc}$$=$$f^{\rm ion}_{\rm esc,max}$$\sim 0.1-0.5$ (see \S~\ref{sec:lyaprod}), we expect a drop in the Ly$\alpha$ fraction at lower UV-luminosities and/or a flattening of the Ly$\alpha$ LF at lower Ly$\alpha$ luminosities, if $f^{\rm ion}_{\rm esc}$\hspace{1mm} continues to rise monotonically. This has not been observed yet at $z\sim 6$ (but possibly at $z\sim 6.5$, see Matthee et al. 2015), which implies that $f^{\rm ion}_{\rm esc}$$<$$f^{\rm ion}_{\rm esc,max}$\hspace{1mm} in galaxies with $M_{\rm UV} \sim -18 \pm 1$. The observed reduction in Ly$\alpha$ flux from galaxies at $z>6$ may be partly due to $f^{\rm ion}_{\rm esc}$\hspace{1mm} approaching $f^{\rm ion}_{\rm esc,max}$\hspace{1mm} (also see Dijkstra et al. 2014). LAE clustering measurements and observations of Balmer lines can help determine the role of $f^{\rm ion}_{\rm esc}$\hspace{1mm} in the disappearance of Ly$\alpha$ emission from galaxies at $z \gtrsim 6$ (see \S~\ref{sec:lyaprod}). \item Figure~\ref{fig:1} also shows that the ionizing escape fraction is strongly affected by the cloud covering factor, $f_{\rm cl}$. As a result, $f^{\rm ion}_{\rm esc}$\hspace{1mm} is closely connected to the observed Ly$\alpha$ spectral line shape (see \S~\ref{sec:spectrum}), with LyC emitting objects typically having narrower, more symmetric Ly$\alpha$ lines (Fig~\ref{fig:spec}, also see Erb et al. 2014). In multiphase models, LyC emitting objects exhibit a wide range of spectral line profiles, and it is not possible to identify spectral features that `guarantee' a LyC detection. \end{itemize} Ly$\alpha$ emitting galaxies are valuable for constraining the ionization state of the intergalactic medium (see e.g. Dijkstra 2014, and references therein). Our work implies that these galaxies also provide unique insights into the nature of the sources that reionized the Universe, in spite of the fact that modeling interstellar Ly$\alpha$ radiative transfer remains highly challenging. We emphasize that our results differ from previous works which estimated the contribution of LAEs to cosmic reionization (see e.g. Yajima et al. 2014): LAEs represent a subset of galaxies within a limited range of $M_{\rm UV}$ {\it and} with a (relatively) large $f^{{\rm Ly}\alpha}_{\rm esc}$ \hspace{1mm} (and hence $f^{\rm ion}_{\rm esc}$), where the precise range in $M_{\rm UV}$ and $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} both depend on the minimum Ly$\alpha$ luminosity and Ly$\alpha$ EW of the LAE sample of interest. In addition, the contribution of LAEs to cosmic reionization depends sensitively on the Ly$\alpha$ EW-PDF as a function of $M_{\rm UV}$, which is not well constrained, especially at faint $M_{\rm UV}$. Here, we make a more general (and robust) point that the faint end of the LAE LF helps constrain the $M_{\rm UV}$-dependence of $f^{{\rm Ly}\alpha}_{\rm esc}$\hspace{1mm} and, by extension, $f^{\rm ion}_{\rm esc}$. In the next years, the number of Ly$\alpha$ emitting galaxies at $z\sim 5.7-7$ is anticipated to grow by $\sim 1-2$ orders of magnitude with surveys performed with Subaru's Hyper Suprime-Cam (HSC).
Moreover, integral-field unit spectrographs such as MUSE\footnote{\url{https://www.eso.org/sci/facilities/develop/instruments/muse.html}} will enable us to detect fainter Ly$\alpha$ emitting sources, better constrain the faint-end slope of the Ly$\alpha$ emitter luminosity function, and better characterize the (sometimes spatially resolved) spectra of Ly$\alpha$ emission lines. Recent spectroscopic observations of gravitationally lensed galaxies (e.g. Treu et al. 2015, Schmidt et al. 2016, Vanzella et al. 2016b) have uncovered several (intrinsically) UV-faint galaxies with a prominent Ly$\alpha$ emission line, and/or other spectral features such as a high [O III]$\lambda$5007/[O II]$\lambda$3727 line ratio, which favor LyC escape (Huang et al. 2016, Vanzella et al. 2016b). These observations, which support the case for an enhanced contribution of UV-faint galaxies to cosmic reionization, provide a preview of what will be routinely possible with the next generation of ground-based telescopes such as the European Extremely Large Telescope (E-ELT)\footnote{\url{https://www.eso.org/sci/facilities/eelt/}}, the Thirty Meter Telescope (TMT)\footnote{\url{http://www.tmt.org/}}, and the Giant Magellan Telescope (GMT)\footnote{\url{http://www.gmto.org/}}. {\bf Acknowledgements} We thank the Aspen Center for Physics, where part of this collaboration was initiated, and which is supported by National Science Foundation grant PHY-1066293. MD thanks the astronomy department at UCSB for their kind hospitality. MG thanks the Physics \& Astronomy department of JHU for their kind hospitality. AV gratefully acknowledges support from the University of San Francisco Faculty Development Fund. We thank Peng Oh and Crystal Martin for helpful discussions. Finally, we thank an anonymous referee for a constructive report which helped us improve the presentation of this work.
\section{Introduction} Future wireless networks define a very challenging environment for mobility management (MM) solutions, due to the significant increase in density (in terms of both users and deployed access points), in heterogeneity (given the various radio access technologies (RATs) supported), as well as in programmability (both the network and the environment can be programmable). To achieve a ubiquitous network service in such challenging environments, it is critical to devise effective MM strategies that facilitate seamless mobility by allowing users to traverse through the network without losing connectivity and service continuity. One of the traditional approaches for allowing applications to serve a user in mobile scenarios has been to maintain network connectivity through handovers \textcolor{red}{based on criteria such as Radio Signal Strength Indicator (RSSI), Signal to Interference and Noise Ratio (SINR), Reference Signal Received Quality (RSRQ), Reference Signal Received Power (RSRP), etc.} However, in addition to such signal-quality-centric handovers, modern-day applications necessitate that other parameters, such as available core network bandwidth, End-to-End (E2E) latency, backhaul bandwidth and backhaul reliability \cite{Sutton2018}, are also taken into consideration. Moreover, maintaining Quality of Service (QoS), e.g., provisioning service continuity, link continuity, required bit-rate and latency, during mobility scenarios has been one of the primary objectives for novel MM mechanisms. Multiple strategies to satisfy such QoS criteria, such as service migration \cite{Machen2018}, service replication \cite{Frangoudis2018}, path reconfiguration \cite{Yang2016}, etc., have been proposed by the research community. MM solutions for 5G and beyond networks are also expected to ensure E2E connectivity and session continuity through the maintenance/preservation of the user's IP address towards the core network entity that provisions the service for the corresponding user. \begin{figure*} \centering \includegraphics[scale = 0.45]{Figures/figure1new.pdf} \caption{An illustrative 5G and beyond network mobility scenario.} \end{figure*} \textcolor{red}{To motivate further, we consider} an illustrative example of a future mobility scenario as presented in Figure 1. It shows the extraordinary complexity that future networks will present for MM. As shown in Figure 1(a), a mobile user equipment (UE) is connected to multiple RATs (5G Access Point (AP)/ Long Term Evolution (LTE) eNode B (eNB)/visible light communications (VLC) and Light Fidelity (LiFi) small cells \cite{Boulogeorgos2018, Chowdhury2018, Zhang2019}, etc.), while having a delay-tolerant and a delay-sensitive application data stream (flows) with distinct QoS profiles. Also, the AP through which the delay-tolerant flow is being served to the user has a good wireless link with a meta-surface in the vicinity. While traditionally the environment between the user and an AP has been considered an adversary in all the generations of mobile communications, including 5G, in beyond 5G (B5G) networks the environment will be programmable and hence will be an ally, provisioning favorable transmission channels \cite{Basar2019, Renzo2019, Chung2011}.
These favorable channels will essentially consist of reflected signals, the phases and polarizations of which will be adjusted by thin (but electrically significant) surfaces, also known as meta-surfaces, so that they interfere constructively at the receiver \cite{Renzo2019, Basar2019}. In addition to the meta-surfaces, future networks will also consist of mobile APs such as drones, as shown in Figure 1(a). Note that the density of meta-surfaces and drone APs will also be extremely high in future networks. Further, in the scenario illustrated, we consider the use case wherein the drone AP is servicing a device-to-device (D2D) cluster, and connecting it to the core network through one of the ground-based APs. The D2D cluster, over the course of its existence, does not generate packets as frequently as the other users, since the cluster devices mainly host Internet of Things (IoT) applications. Next, in Figure 1(b), as the user moves, it starts to register wireless links with better signal quality from other APs compared to those of the APs it is already associated with. It is imperative to state here that the APs can be from the same or from different network operators. Hence, a careful and efficient RAT and AP selection for each flow will be necessary as part of future MM mechanisms. It is interesting to observe that while the AP used for serving the delay-tolerant flow in Figure 1(a) no longer has a good direct link quality, through the meta-surfaces and their programmable nature it still has a good wireless link to the user and hence is able to serve it. Following the new RAT/AP association, flows pertaining to the user are redirected through the optimal path. Novel MM mechanisms that aim to service 5G and B5G networks will require efficient route optimization methods to perform the same. Additionally, the MM mechanisms will also need to implement IP forwarding so as to ensure E2E link continuity. In Figure 1(c) we then observe that, as the user moves further, the RAT/AP selection and optimal routing methods are continually implemented. Further, when a new application request is generated, as seen in Figure 1(c), an appropriate RAT and AP for the given flow is selected alongside the route that satisfies the requested QoS. Lastly, in Figure 1(d), it can be seen that alongside the user's flows, the D2D cluster's flows are also being serviced by the network. However, the D2D cluster is first serviced by a drone AP, which then relays information to/from the ground-based APs. These ground-based APs assist in serving the data flows generated from the devices in the D2D cluster by relaying the data to the relevant servers in the core network. Given the complexity of the scenario presented in Figure 1, it is evident that no single MM mechanism will form the solution to all the possible situations and scenarios that will be prevalent. And, although current MM mechanisms propose methods for careful RAT and AP selection, IP packet forwarding, route optimization, and session management, a more than 10-fold increase in user density coupled with the heterogeneity in flow types and networks will greatly limit their capabilities, as explained in the subsequent sections in detail. New user applications such as Augmented Reality, Virtual Reality, Vehicle-to-Everything (V2X), etc., will present very restrictive delay requirements alongside exceptionally high reliability and bandwidth requirements \cite{Parvez2018}, which will severely challenge the capabilities of current MM strategies.
Further, the radio access network (RAN) technologies themselves are expected to undergo an important transformation in future networks, given the significant interest in VLC, LiFi, etc. \cite{Boulogeorgos2018, Chowdhury2018}. Whilst both LiFi and VLC, being TeraHertz (THz) bandwidth technologies, enable near Terabits per second (Tbps) speeds, they are significantly impaired by the environment. This consequently has significantly more detrimental effects on the user QoS during mobility scenarios, which we will discuss in further detail in the later sections. Also, owing to the telecom operators' desire to serve more industry verticals, a new set of mobility patterns will emerge. For example, a platoon of vehicles moving coherently together, vehicles disbanding from one platoon to join another, ultra-fast moving users (in excess of 500 km/h), moving access points (such as those on drones \cite{Sekander2018}), etc., thus introducing another dimension to the MM problem. Hence, the ability to serve devices with mobility patterns that are more diverse and challenging than those in current networks will be a significant challenge in the design, development and deployment of 5G and beyond MM mechanisms. An additional yet significant challenge will be to manage and potentially reduce the control plane (CP) signaling load \cite{Jain2019} due to mobility events. Thus, a fresh perspective is required, wherein MM solutions are decentralized and flexible, support multiple use cases simultaneously and account for the various other radical changes in 5G and B5G networks \textcolor{red}{with reliability}. Note that decentralization will permit MM mechanisms to service the exponentially increasing number of users coupled with different mobility profiles (e.g., static IoT devices and users in high-speed trains). On the other hand, flexibility will allow them to adapt to the user, network and/or environment context (e.g., QoS, user mobility profile, network load, flow types, meta-surfaces, etc.). \textcolor{red}{Additionally, reliability will aid in provisioning seamless mobility as well as in satisfying the ultra-reliable criterion for future wireless network applications.} References \cite{Akyildiz2015} and \cite{Andreev2014} aim to provide new MM strategies via Software Defined Networking (SDN) based MM and multi-RAT mobility. However, they do not elaborate on the myriad challenges that future MM mechanisms will encounter, such as time complexity, signaling overhead, etc. \textcolor{red}{Similarly, while in \cite{Fan2016} MM strategies, such as advanced cell association, group handovers, etc., have been discussed to address the heterogeneity in the mobility patterns and profiles that will arise in 5G, they fall short in addressing challenges such as core network signaling, complexity, etc., that 5G and beyond MM solutions will face.} Further, surveys such as \cite{Ferretti2016} and \cite{Zekri2012} are restricted to the current network architecture, and hence fail to provide a MM perspective for 5G and beyond networks. In addition, while \cite{Zhang2019} aims to provide insights into the requirements, architecture and key technologies for B5G networks, it does not address the critical issue of MM in B5G networks. Hence, to the best of our knowledge, no study has yet provided a comprehensive view of the functional requirements, challenges and potential solutions with regards to the future MM strategies essential to realizing the future networks.
\textcolor{red}{We now list the contributions of this paper, which aim to address the aforementioned gaps, as follows:} \begin{itemize} \item[\textcolor{red}{1.}] \textcolor{red}{We present a novel discussion on the functional requirements and design criteria for 5G and beyond MM mechanisms.} \item[\textcolor{red}{2.}] \textcolor{red}{We develop a novel qualitative analysis of the legacy mechanisms as well as the current state-of-the-art MM mechanisms on the basis of reliability, flexibility and scalability, towards their utility for 5G and beyond wireless networks.} \item[\textcolor{red}{3.}] \textcolor{red}{We provide a novel classification of the current state-of-the-art mechanisms based on where they are implemented or create an impact within the network, i.e., the core network (CN), the access network (AN) and the extreme edge network. Additionally, we also provide a mapping of these classifications onto the 5G service based architecture (SBA) defined by 3GPP \cite{3GPP2020}, which will help indicate explicitly the gaps that currently exist.} \item[\textcolor{red}{4.}] \textcolor{red}{We then provide the first discussion in the literature on how the current state-of-the-art strategies will fare as MM solutions for the envisioned B5G networks.} \item[\textcolor{red}{5.}] \textcolor{red}{Following the discussions and qualitative analysis, we elucidate the various challenges that the design and development of future MM mechanisms will face.} \item[\textcolor{red}{6.}] \textcolor{red}{We then provide a discussion on the potential strategies that will help overcome these persistent challenges. We accompany these discussions with a novel mapping between the potential strategies and the aforementioned challenges that they will help resolve.} \item[\textcolor{red}{7.}] \textcolor{red}{Lastly, we develop and provision a novel and unified vision for the 5G and beyond MM solution.} \end{itemize} \noindent \textcolor{red}{The rest of this paper is organized as follows: Section 2 presents the functional requirements and design criteria for the 5G and beyond MM mechanisms. Section 3 defines the criteria for the qualitative analysis as well as the parameters that govern the fulfillment of these criteria. Section 4 presents the novel qualitative analysis for the legacy mechanisms and establishes their pros and cons for 5G and beyond MM. Section 5 introduces a similar analysis for the current state-of-the-art mechanisms as well as their utility towards the MM solutions for future networks. Section 6 then presents the persistent challenges, the potential strategies that will assist in resolving these challenges whilst aiming to satisfy the requirements defined in Section 2, and the proposed framework for 5G and beyond MM. We then conclude this paper in Section 7.}
\section{5G and Beyond MM: Functional Requirements and Design Criteria} Future wireless networks, in addition to being dense, heterogeneous and extensively programmable, will serve multiple industry verticals as well as accommodate multiple tenants on the same network infrastructure \cite{Renzo2019,Rost2016}. These transformations, some of which are being discussed by the research community \cite{Basar2019, Akyildiz2016}, represent a paradigm shift from the current network architecture design. As a consequence, MM mechanisms need to be re-evaluated and/or re-designed.
For this, we first present the functional requirements of MM mechanisms for future wireless networks in Table 1, based on the characteristics we derive from the current and future network scenarios. From Table 1, it can be observed that the MM solutions for 5G and beyond networks will have to adapt and evolve, so as to be able to serve the future wireless networks efficiently. As seen from the table, MM solutions will need to be redesigned so that they are flexible, scalable and reliable, to ensure the requested QoS and seamless mobility. Apart from these requirements, there are certain criteria that will impact the design and development of future MM solutions. \textcolor{red}{Consequently, in the following text we present an insight into these myriad design criteria and their impact on 5G and beyond MM.} \begin{table*} \caption{Functional Requirements from 5G and beyond MM} \centering \begin{tabular}{|p{0.55cm}|p{4cm}|p{7cm}|p{5.5cm}|} \hline \textbf{Req \#} &\textbf{Current Scenario} & \textbf{5G and Beyond Scenario} & \textbf{Resulting MM Functional Requirement} \\ \hline \textcolor{red}{\textbf{R1}} & Single RAT connectivity & UE connected to multiple RATs & Provision support for multi-RAT MM as well as efficient RAT selection methods.\\ \hline \textcolor{red}{\textbf{R2}} & UEs with predominantly mobile broadband applications request MM support & UEs with enhanced mobile broadband (eMBB), massive machine type communications (mMTC) and ultra-reliable low latency communication (URLLC) applications will request MM support. These applications will have different QoS requirements \cite{Elayoubi2016}, for example: minimum data rate, latency, reliability, etc. & Provide MM support based on context, i.e., based on application requirements, user mobility, network conditions, etc. \\ \hline \textcolor{red}{\textbf{R3}} & Density of UEs in the current scenario is $10^5$ devices/km$^2$ \cite{ITU2015} & Density of UEs in 5G and beyond will be $\geq 10^6$ devices/km$^2$ \cite{ITU2015} & MM mechanisms should be able to scale and provision support for the increasing user density \\ \hline \textcolor{red}{\textbf{R4}} & Network is vendor driven \cite{Habibi2019} & Network is softwarized \cite{Habibi2019} & MM solutions should evolve to utilize the benefits provided by softwarized 5G and beyond enablers such as SDN, Network Function Virtualization (NFV), etc. \\ \hline \textcolor{red}{\textbf{R5}} & Network is predominantly ground based with static radio towers & APs and relay stations may be carried on drones in 5G and beyond networks \cite{Khawaja2019, Li2019} & MM solutions for 5G and beyond networks should support mobility of both UEs and APs \\ \hline \textcolor{red}{\textbf{R6}} & 4G, 3G and 2G are standardized and the MM protocols provision support for all these devices & 5G and beyond networks and devices will be gradually rolled out. They are fundamentally different from 4G, 3G and 2G networks & Backwards compatibility to support legacy devices will be needed from MM solutions for 5G and beyond. \\ \hline \textcolor{red}{\textbf{R7}} & Sub 6 GHz is the frequency range for data transfer & Sub 6 GHz, millimeter Wave (mmWave) \cite{Akyildiz2016} and Terahertz communication \cite{Boulogeorgos2018, Chowdhury2018} will be utilized in 5G and beyond networks & Increased robustness, given that VLC and mmWave will be significantly impacted by the environment, thus challenging seamless mobility in 5G and beyond networks.
\\ \hline \textcolor{red}{\textbf{R8}} & Finest granularity of tracking and localization is $<50$ m \cite{Wymeersch2017} & Finest granularity of tracking and localization is a beam ($<10$ cm) \cite{Wymeersch2017} & MM solutions should evolve to utilize the advanced level of granularity to provision better mobility and tracking performance in dense urban or high speed scenarios \\ \hline \textcolor{red}{\textbf{R9}} & The complexity is driven mainly via user requirements in a homogeneous network & The complexity in 5G and beyond networks is a combination of different user types, different QoS requirements, heterogeneous RAT scenarios, heterogeneous backhaul scenarios \cite{Jaber2016b} and the ultra dense nature of the network & MM mechanisms need to ensure adequate flexibility (they should accommodate the increased heterogeneity) and tractable solutions (fast and low computational complexity) with well managed power consumption for the increased network complexity \\ \hline \textcolor{red}{\textbf{R10}} & Requested services and data is always hosted in the IP Multimedia Subsystem (IMS) core & Requested services and data in 5G and beyond networks can now be hosted at the network edge, through MECs \cite{Habibi2019} & MM mechanisms should provide adequate Service Migration \cite{Machen2018}/ Service Replication \cite{Frangoudis2018} support to ensure the required QoS from the applications \\ \hline \textcolor{red}{\textbf{R11}} & Support for mobility up to 350 km/h & Support for mobility up to and beyond 500 km/h proposed \cite{ITU2015} & MM solutions need to ensure the required flexibility to accommodate multiple demanding mobility profiles, avoiding the \emph{one size fits all} approach \\ \hline \end{tabular} \label{tab:my_label} \end{table*}
\subsection{\textcolor{red}{Centralized vs. Hierarchical vs. Distributed Solution}} While a centralized solution might offer optimality given its global view, a distributed approach can offer more reliability by eliminating the Single Point of Failure (SPoF) problem as well as avoiding congestion at a specific network node. Alternatively, a hierarchical approach can incorporate the benefits of both aforesaid techniques. For example, LTE is a centralized solution: the Mobility Management Entity (MME) manages mobility, with the Serving Gateway (S-GW) acting as the mobility anchor. \textcolor{red}{However, Distributed Mobility Management (DMM) \cite{Liu2015} assists in decentralization of the traditional MM mechanisms, wherein instead of having a single MM anchor for all the flows on a UE, the anchors are now distributed. By distribution of MM anchors here we mean that when a flow is initiated to/from a UE, the anchor may be chosen depending on the flow requirements. For example, given a new flow originating to/from a UE, a MM anchor might be chosen very close to the UE to assist in network offloading, whereas pre-existing flows might still be served from the MM anchors to which they were first assigned, so as to avoid service disruptions. Hence, it would provide more reliability. The hierarchical method, on the other hand, combines the centralized and distributed approaches to offer the reliability of the distributed approach (through decentralization of mobility anchors) and the optimality of the centralized approach (e.g., through master and slave network management entities); a minimal sketch of such per-flow anchor selection is given below.}
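In the following sketch, all names and the one-dimensional distance metric are illustrative assumptions rather than part of any standardized DMM interface; it merely makes concrete the per-flow anchoring rule just described, where established flows keep their anchor while new flows are anchored near the UE.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Anchor:
    name: str
    position: float  # 1-D stand-in for topological distance

    def distance_to(self, x: float) -> float:
        return abs(self.position - x)

def select_anchor(existing_anchor, ue_position, anchors):
    """DMM-style per-flow anchoring: established flows keep their
    anchor to avoid service disruption, while new flows are anchored
    close to the UE's current attachment point to aid offloading."""
    if existing_anchor is not None:
        return existing_anchor
    return min(anchors, key=lambda a: a.distance_to(ue_position))
\end{verbatim}
A production policy would, of course, also weigh anchor load and flow QoS requirements when placing new flows, rather than distance alone.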
An example of such a distributed/hierarchical approach can be found in the upcoming 5G networks, wherein through SDN and NFV there is a separation between the CP, i.e., the Access and Mobility Function (AMF) and Session Management Function (SMF) for mobility management, and the data plane (DP), i.e., OpenFlow (OF) switches, etc. \cite{Yang2016, Chen2016}.
\subsection{\textcolor{red}{Computational Resources}} The computational resource locations and their corresponding computational power will determine the degree to which the mobility management mechanisms can be distributed. For example, edge clouds can aid not only in MM related computation (e.g., RAT and AP selection) but can also enable faster access to content through caching. In addition, for 5G and beyond networks, it will also be critical for the MM mechanisms to determine whether services need to be migrated or replicated \cite{Urgaonkar2015, Wang2018,Frangoudis2018}, so as to maintain service continuity and hence ensure the required QoS. Note that by service replication we mean that the services being requested by a user undergoing a mobility event are replicated to other edge servers. Further, by service migration \cite{Urgaonkar2015, Machen2018} we imply that the services being accessed by a user undergoing a mobility event are migrated to the next edge cloud server, to which the user is expected to move.
\subsection{\textcolor{red}{Backhaul Considerations}} Network densification and the prohibitively expensive nature of installing optical fibre as backhaul \cite{Jaber2016b} will render the backhaul scenario in 5G and B5G wireless networks extremely heterogeneous, i.e., it will be composed of both wired and wireless links. Further, the backhaul wireless links will consist of multiple radio access techniques, such as microwave, mmWave, VLC or LiFi, co-existing together \cite{Chowdhury2018}. These transformative trends will need to be taken into consideration while developing future MM mechanisms, as: \begin{itemize} \item Congestion or multiple hops in the backhaul can impact the E2E latency, and consequently, the perceived QoS \cite{Rony2017}. \item Backhaul reliability will be critical given the relatively poor penetration capability of mmWave \cite{Bai2013} and, additionally, the strong atmospheric absorption features for VLC \cite{Boulogeorgos2018}. Thus, during mobility, attaching to an AP with a poor backhaul link quality can correspondingly lead to degradation in QoS, since there can be increased packet loss or even an outage altogether. \end{itemize}
\subsection{\textcolor{red}{Context}} A multitude of parameters, such as user mobility profiles, type of flows, network and user policies, AP signal quality, network load, backhaul-fronthaul options, etc., constitute the context. Additionally, MM mechanisms for 5G and B5G networks will have to service users with different mobility profiles, accessing different services. Hence, the available contextual information will be valuable for any future MM mechanism. For example, in \cite{Andreev2014}, network load aware MM methods present an improvement of 75\% in throughput at the cell edge as compared to context-agnostic methods, thus reinforcing the aforesaid criterion; a toy example of folding such context into an AP choice is sketched below.
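The sketch below combines a few such contextual parameters into a single AP score; the attribute names, weights and latency threshold are purely illustrative assumptions, not values taken from the cited works, and any real policy would be tuned per deployment.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class APContext:
    signal_quality: float        # normalized, e.g. scaled SINR in [0, 1]
    load: float                  # fraction of capacity in use, in [0, 1]
    backhaul_reliability: float  # in [0, 1]
    e2e_latency_ms: float

def score_ap(ap, delay_sensitive, max_latency_ms):
    """Larger is better; QoS-infeasible APs are excluded outright."""
    if delay_sensitive and ap.e2e_latency_ms > max_latency_ms:
        return float("-inf")
    return (0.5 * ap.signal_quality
            + 0.3 * (1.0 - ap.load)            # prefer lightly loaded cells
            + 0.2 * ap.backhaul_reliability)   # avoid fragile backhaul

def select_ap(aps, delay_sensitive=True, max_latency_ms=10.0):
    return max(aps, key=lambda ap: score_ap(ap, delay_sensitive,
                                            max_latency_ms))
\end{verbatim}
The hard latency cut-off reflects that, for delay-sensitive flows, an AP that cannot meet the E2E budget is not a worse choice but an invalid one.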
\subsection{\textcolor{red}{Granularity of Service}} Granularity in MM services (e.g., based on flow, subscriber or mobility profile) will be an important component for MM methods to provision optimal solutions for 5G and B5G networks. Further, the type of granularity offered, i.e., per-flow based, mobility based, etc., will depend on the user context as well as the network conditions. Hence, innovative mechanisms like the Mobility Management-as-a-Service (MMaaS) paradigm \cite{Jain2017} will be required. In MMaaS, on-demand MM solutions can be employed by or assigned to UEs. For example, if one device is moving at a high speed ($\sim$300 km/h) and another device, say an IoT device, is stationary, then a mobility-based granularity of service can be adopted. Based on this service granularity provision, the high-mobility device can be allocated resources on macro-cells whilst the stationary device can be served by small cells. \textcolor{red}{Another important example is that of network slices. Network slicing typically refers to a resource-based logical slicing of the existing network infrastructure to support multiple verticals and the corresponding operators that serve them \cite{Zanzi2018}. In such scenarios, on-demand MM will be necessitated by the network slices, as they will cater to services with differing mobility requirements and patterns, such as the URLLC and eMBB services.}
\subsection{\textcolor{red}{D2D Service Availability}} The availability of D2D services will determine how the mobility management mechanism is executed, as D2D can assist in providing seamless mobility through CP information and/or DP data forwarding. \textcolor{red}{This will be especially relevant in scenarios involving V2X \cite{Molina-Masegosa2017}, wherein, for example, vehicles that are outside the coverage area of the infrastructure network (IN), or are experiencing a deep fade with the IN, can exchange data with it by relaying their information, over the PC5 interface \cite{Molina-Masegosa2017}, through other vehicles that are nearby and within the coverage area of the IN or that experience better channel conditions with it.}
\subsection{\textcolor{red}{Physical Layer Considerations}} The introduction of massive MIMO and mmWave technology will certainly impact current MM methods. Concretely, in urban environments mmWave links will face extensive blockage alongside a limited range due to their propagation characteristics. Hence, this will require densification, which introduces the possibility of frequent handovers (FHOs). Here, by FHOs we refer to the fact that in a dense network environment, such as that of 5G, users will be subjected to handover (HO) scenarios more frequently than in current networks. On the other hand, beamforming through massive MIMO antennas can be utilized to track moving users and hence provide them with high QoS through higher throughput and better localization services. Further, for B5G networks, VLC and meta-surfaces have emerged as the main enablers. Note that VLC will be extremely challenged by the existing environment, since it operates in the Terahertz range of frequencies, which turns most objects in the environment into blocking agents. Also, meta-surfaces will lead to programmable environments, which will create a dimensionality issue for any optimal solution. Hence, physical (PHY) layer techniques require consideration in any MM mechanism development for 5G and beyond networks.
\subsection{\textcolor{red}{Control Plane Signaling}} An important target of future MM mechanisms will be to reduce the CP signaling induced during handovers.
Studies such as \cite{Jain} have proposed enhanced handover signaling mechanisms for an SDN-based core network architecture, such that the transmission and processing costs as well as the overall latency during a handover process are reduced, whilst ensuring that the Capital Expenditure (CAPEX) does not rise significantly. Such a procedure will enhance the QoS for the user while switching access points and hence will be critical to the future MM suite. \newline \noindent \textcolor{red}{Although a complete overhaul of MM mechanisms for future wireless networks might result in optimal solutions, the time to develop and market them would be correspondingly longer. Hence, in the following sections, we perform a novel qualitative analysis of the various legacy as well as current state-of-the-art mechanisms and standardization efforts, and evaluate their suitability as \emph{enablers for MM} in 5G and beyond wireless networks.}
\section{\textcolor{red}{Qualitative Analysis Criteria}} Present day MM mechanisms and standards are extremely stable and also readily implementable. Given the challenging nature of 5G and B5G network scenarios, it is of significant interest that these mechanisms and standards be explored for their potential inclusion -- whole or in part -- as enablers for future MM solutions. Hence, we perform a novel qualitative analysis of these mechanisms on the basis of reliability, flexibility and scalability: the three pillars of any future MM strategy. \begin{sidewaystable*} \renewcommand{\arraystretch}{1.2} \caption{Governing Parameters for the Reliability, Scalability and Flexibility of a MM mechanism/standard} \centering \begin{tabular}{|p{0.7 cm}|p{3.8cm}|p{2.5cm}|p{0.7 cm}|p{3.5cm}|p{2.5cm}|p{0.7 cm}|p{3.5cm}|p{2.5cm}|} \hline \textbf{\#} & \textbf{Reliability} & \textbf{Contribution to Reqs.} & \textbf{\#} & \textbf{Flexibility} & \textbf{Contribution to Reqs.} & \textbf{\#} & \textbf{Scalability} & \textbf{Contribution to Reqs.} \\ \hline \textcolor{red}{RL1.} & Redundancy in the number of flows, connections, etc. & \textcolor{red}{R7} &\textcolor{red}{FL1.} & Granularity of service, e.g. per flow, per connection, per user, etc.& \textcolor{red}{R9, R11} & \textcolor{red}{SL1.} &Manageable number of connections with increasing number of users & \textcolor{red}{R3, R9} \\ \hline \textcolor{red}{RL2.} & Seamless handover capability$^\dagger$ & \textcolor{red}{R1, R7, R8} &\textcolor{red}{FL2.} & Capability to enable connectivity to multiple APs & \textcolor{red}{R1, R9} &\textcolor{red}{SL2.} &Manageable signaling load with increasing number of users & \textcolor{red}{R3, R9}\\ \hline \textcolor{red}{RL3.} & Decentralization & \textcolor{red}{R3, R4} &\textcolor{red}{FL3.} &Handover service support at multiple network levels, e.g. core network, access network, etc. & \textcolor{red}{R4, R9} &\textcolor{red}{SL3.} & Manageable processing load with increasing number of users/devices & \textcolor{red}{R3, R9}\\ \hline \textcolor{red}{RL4.} & Fast path re-routing at CN & \textcolor{red}{R5, R10} &\textcolor{red}{FL4.} & Handover decision making utilizing multiple parameters. E.g.
network load, requested QoS, etc.& \textcolor{red}{R1, R9} &\textcolor{red}{SL4.} & Decentralization & \textcolor{red}{R4}\\ \hline \textcolor{red}{RL5.} & Congestion aware & \textcolor{red}{R2} &\textcolor{red}{FL5.} &Context awareness & \textcolor{red}{R2, R9, R10} &\textcolor{red}{SL5.} &Ease of implementation and integration & \textcolor{red}{R6}\\ \hline \multicolumn{9}{l}{\textcolor{red}{$^\dagger$Here seamless handover capability refers to the ability of a MM mechanism to permit vertical (inter-RAT) as well as horizontal (intra-RAT) handover.}} \end{tabular} \end{sidewaystable*} \textcolor{red}{As part of this qualitative analysis, we first present a detailed description of these three criteria, as follows}: \begin{itemize} \item[1.] \emph{Reliability} will help to determine whether the MM mechanisms employed will be able to ensure guaranteed and continuous service in any given network topology. Such reliability requirements entail not only continuous connectivity whilst traversing a geographic area, but also reliable delivery of packets for critical and delay-sensitive services. Further, reliability from a MM mechanism also encompasses factors such as tolerance to congestion (through, for example, Distributed MM), ensuring faster yet trustworthy re-connection and authentication whilst mobile, ensuring appropriate levels of redundancy in the number of flows, connections, and hosts, and also ensuring appropriate resource allocation for users with myriad mobility and application profiles at the edge, access and core network. \item[2.] \emph{Flexibility} as a qualitative analysis tool helps to determine the adaptability that MM mechanisms will provide to the network, which as discussed will be heterogeneous and dense in all perceivable aspects. The flexibility provisioned by MM mechanisms for future networks hence encompasses factors such as the ability to formulate and deploy MM policies depending on individual user profiles, flow profiles, or slice profiles. Further, ensuring the possibility of multi-connectivity through various layers, such as the transport layer (Stream Control Transmission Protocol (SCTP)/ Multi-Path Transmission Control Protocol (MPTCP)), the IP layer (multi-homing) and the Medium Access Control (MAC)-PHY layer (Dual Connectivity), will be an important factor for ensuring a flexible MM policy. Additionally, multi-objective access point selection/user association, taking into account factors such as congestion, QoS requirements, backhaul reliability, etc., will be critical to a flexible MM mechanism. \item[3.] The \emph{Scalability} aspect allows one to determine whether the future MM mechanisms can serve the increasing number of user devices, with a corresponding increase in requested QoS, across heterogeneous mobility profiles. A measure of the scalability of MM mechanisms can be gained by analyzing factors such as the number of connections that can be managed given an increasing number of user devices, management of the signaling load generated due to mobility events, management of the increasing load due to processing the many CP messages generated in mobility events, as well as the ability to permit decentralization (which in essence would ensure scalability) and the ease with which a new MM mechanism can be deployed on a large scale. \end{itemize} \textcolor{red}{We summarize the aforesaid criteria into a list of parameters for each criterion and present them in Table 2.
Additionally, we also indicate the requirements (Table 1) to whose fulfilment each of these parameters contributes. Note that compliance with each of the stated parameters in Table 2 for the reliability, flexibility and scalability criteria will be essential towards ensuring that the MM mechanism under consideration satisfies the requirements (Table 1) defined for the upcoming 5G and beyond networks. We now elaborate upon the parameter-requirement relationships that have been illustrated in Table 2, with the objective of enhancing the comprehensiveness of the evaluation criteria. }
\subsection{\textcolor{red}{Reliability: Parameter to Requirement mapping}} \textcolor{red}{The provision of redundancy in the number of flows and connections, i.e., satisfying parameter \textit{RL1}, can help fulfil requirement \textit{R7} presented in Table 1. This is so because redundancy in connections will help overcome the fragile nature of wireless channels in the frequency bands that constitute VLC and mmWave communications. Next, satisfying the parameter \textit{RL2} will contribute towards fulfilling the requirements \textit{R1}, \textit{R7}, and \textit{R8} (Table 1). Here, the ability to provision seamless handover assists in supporting mobility amongst multiple RAT(s) (\textit{R1}), supporting multi-connectivity and thus reliability (\textit{R7}), and utilizing enhanced localization capabilities to accomplish the same in dense urban scenarios (\textit{R8}). Additionally, the \textit{RL3} parameter for the reliability criterion, when satisfied, will help to fulfill the \textit{R3} and \textit{R4} requirements (Table 1). The reason is that decentralization will allow for efficient handling of the number of devices (\textit{R3}). Moreover, to establish an effective level of decentralization, such as for accessing cached data at the edge and in the IMS core, enablers such as NFV and Mobile Edge Computing (MEC) will be utilized (\textit{R4}). Furthermore, the \textit{RL4} parameter holds significant relevance towards fulfilling the requirements \textit{R5} and \textit{R10} (Table 1). Specifically, fast path re-routing in the CN ensures that the increased dynamism, due to the mobility of both the UEs and the APs (\textit{R5}), is catered to in the CN. In addition, parameter \textit{RL4} also ensures that data path modifications, due to service migration and service replication, do not lead to extensive delays. Lastly, satisfying the \textit{RL5} parameter will help fulfil the \textit{R2} requirement (Table 1), since guaranteeing congestion awareness helps service the different QoS requirements of the applications, such as virtual reality and emergency services, with better reliability.}
\subsection{\textcolor{red}{Flexibility: Parameter to Requirement mapping}} \textcolor{red}{When a MM mechanism under study satisfies the flexibility parameter \textit{FL1}, it correspondingly helps to fulfil the \textit{R9} and \textit{R11} requirements (Table 1). This is so because \textit{FL1} states that a MM mechanism should support granularity of service. This will correspondingly assist in accommodating the multitude of service requirements independently (\textit{R9}) as well as avoiding the \textit{one size fits all} approach (\textit{R11}). Next, the \textit{FL2} parameter will help in satisfying the \textit{R1} and \textit{R9} requirements (Table 1).
Essentially, the capability to connect with multiple APs will assist in multi-RAT MM (\textit{R1}) as well as in provisioning enhanced agility for MM mechanisms in a dense and heterogeneous network (\textit{R9}). Further, when the \textit{FL3} parameter is satisfied, it helps to fulfil the \textit{R4} and \textit{R9} requirements. The reason is that enabling handover support at multiple levels of the network necessitates the usage of SDN, NFV and the MEC platform for efficient implementation (\textit{R4}). Moreover, such multi-level handover support will also provision flexibility for the network (\textit{R9}). Additionally, satisfying parameter \textit{FL4} enables the MM mechanism under study to contribute towards satisfying the \textit{R1} and \textit{R9} requirements (Table 1). Specifically, having a handover decision mechanism that utilizes multiple parameters aids in handling MM amongst multiple RAT(s) more flexibly and hence efficiently (\textit{R1}). Also, such strategies will ensure that, alongside flexibility, solutions are computationally tractable and energy efficient (\textit{R9}). Finally, parameter \textit{FL5}, when satisfied, will be relevant for the fulfilment of requirements \textit{R2}, \textit{R9} and \textit{R10} (Table 1). To elaborate, the context awareness feature of a MM mechanism will assist in provisioning MM support dependent on application, user and network context (\textit{R2}), provide the flexibility to handle the increased heterogeneity in the network (\textit{R9}), and ensure QoS whilst performing complex tasks, such as migrating or relocating services based on user mobility events (\textit{R10}), through appropriate path and resource management.}
\subsection{\textcolor{red}{Scalability: Parameter to Requirement mapping}} \textcolor{red}{For the scalability criterion, when parameters \textit{SL1}, \textit{SL2} and \textit{SL3} are satisfied by a MM mechanism, they correspondingly also assist in fulfilling the \textit{R3} and \textit{R9} requirements (Table 1). Concretely, the ability to manage an increasing number of connections, signaling load and processing load with an increasing number of users will correspondingly assist in handling a user density of more than $10^6$ devices per km$^2$ in 5G and beyond networks (\textit{R3}). Also, they will help in ensuring the required scalability to accommodate the increasing heterogeneity in the network as well as the corresponding tractability of the MM solution (\textit{R9}). Next, when parameter \textit{SL4} for the scalability criterion is met, it helps to fulfil the \textit{R4} requirement (Table 1). Specifically, to accomplish the decentralization objective, the MM mechanism under study will need to utilize enablers such as NFV and MEC. Lastly, satisfying parameter \textit{SL5} will help to meet the requirement \textit{R6} (Table 1). The reason is that ease of implementation usually arises from the fact that a MM mechanism has been used/deployed before, and is suitable to accommodate legacy devices whilst catering to a new set of services and devices.
Hence, satisfying the \textit{SL5} parameter will assist in ensuring that backwards compatibility requirements (\textit{R6}) are adhered to.\\} \noindent \textcolor{red}{And so, from the aforementioned mapping, it can be deduced that the criteria chosen for our qualitative analysis are comprehensive in nature and approach.} Moreover, considering only 5G networks, since their KPIs have already been defined \cite{Elayoubi2016}: reliability beyond 99.999\% during mobility scenarios will be ensured through the reliability criterion. Further, latency of less than 5 ms for connected cars and 10 ms for virtual reality and broadband applications will be guaranteed through the reliability and flexibility criteria. \textcolor{red}{Specifically,} the reliability criterion will help provision congestion awareness, reliable link selection, etc., while flexibility will allow multiple types and numbers of connections during mobility scenarios. In addition, support for nearly 1 million devices per km$^{2}$ with different application and mobility profiles will be ensured through the scalability criterion. \textcolor{red}{Consequently, this further reinforces the comprehensiveness of the criteria chosen for the qualitative analysis that follows.} \begin{comment} \item Mechanism/Standard satisfies 0 or 1 parameter(s) $\rightarrow$ Barely reliable/flexible/scalable \item Mechanism/Standard satisfies 2 parameters $\rightarrow$ Reliable/Flexible/Scalable \item Mechanism/Standard satisfies 3 parameters $\rightarrow$ Very reliable/flexible/scalable \item Mechanism/Standard satisfies 4 parameters $\rightarrow$ Highly reliable/flexible/scalable \item Mechanism/Standard satisfies 5 parameters $\rightarrow$ Ultra reliable/flexible/scalable \end{comment}
\section{\textcolor{red}{Legacy mechanisms and standards: 5G and Beyond MM enablers?}} \textcolor{red}{We evaluate certain widely employed/studied legacy standards and mechanisms based on the criteria (reliability, flexibility and scalability) listed in Table 2. It is important to state here that the goal of the following analysis is not to compare the considered standards and mechanisms against each other, but rather to highlight the extent of their suitability for 5G and beyond networks.} \subsection{IETF MPTCP-SCTP} \subsubsection{\textcolor{red}{Discussion}} \textcolor{red}{Being transport layer protocols, MPTCP (through multiple TCP connections) \cite{Ford2011, Ford2013} and SCTP (through its multi-homing capabilities) \cite{Stewart2007} can provide multiple TCP paths for flows originating at the host. Generally utilized for increasing data rates \cite{Klein2011} and improving QoS, these protocols provide multipath redundancy \cite{Zannettou2016, Phung2019, Liu2017a, Natarajan2009} and congestion awareness (at the transport layer) \cite{Raiciu2011,Wischik2011, Ignaciuk2018, Stewart2007}, which will facilitate reliability for 5G and beyond MM mechanisms. Additionally, MPTCP and SCTP satisfy the granularity of service criterion (by provision of per-flow level granularity of service), which will be essential for the future MM mechanisms. Further, according to \cite{Ford2011, Ford2013, wei-mptcp-proxy-mechanism-02}, for MPTCP to be implemented without altering the legacy systems, proxy servers supporting MPTCP will need to be installed in front of the legacy devices, such as the middleboxes installed by service providers.
The legacy systems can then communicate with the proxies using the legacy TCP protocol, while the proxies utilize MPTCP for communicating with the destination MPTCP capable device. However, it is the requirement of these additional proxies that will impact the scalability of the MPTCP solution for 5G and beyond MM mechanisms. Moreover, for SCTP, both the user and server protocol stacks need to be updated \cite{Stewart2007}. Given the number of users in future networks, this will pose a scalability challenge for the deployment of SCTP as part of the 5G and beyond MM mechanisms.} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{Given our objective of determining the suitability of MPTCP and SCTP for 5G and beyond MM mechanisms, we enlist their \textit{pros} and \textit{cons} as follows:} \begin{itemize} \item \textcolor{red}{MPTCP Pros} \begin{itemize} \item \textcolor{red}{Allows for multiple data flows at the transport layer level \cite{Ford2011, Ford2013, Liu2017a}, and hence provisions resiliency against connection failures, given the multipath feature \cite{Zannettou2016, Phung2019, Liu2017a}} \item \textcolor{red}{Provisions congestion awareness, with studies such as \cite{Raiciu2011} proposing specific congestion control methods for MPTCP} \item \textcolor{red}{Through its ability to divide a connection into multiple sub-flows, MPTCP provisions the capability to handle each flow independently \cite{Ignaciuk2018, Liu2017a}} \end{itemize} \item \textcolor{red}{MPTCP Cons} \begin{itemize} \item \textcolor{red}{The middleboxes installed by service providers are not optimized to support MPTCP \cite{Ford2011, Ford2013}} \item \textcolor{red}{MPTCP requires proxies to allow non-MPTCP devices to take full benefit of it \cite{wei-mptcp-proxy-mechanism-02}} \end{itemize} \item \textcolor{red}{SCTP Pros} \begin{itemize} \item \textcolor{red}{Allows for multiple data flows at the transport layer level \cite{Stewart2007, Natarajan2009}, and hence provisions resiliency against connection failures, given the multipath feature} \item \textcolor{red}{Provisions congestion awareness, wherein reference \cite{Stewart2007} establishes the presence of congestion avoidance methods within the SCTP suite} \item \textcolor{red}{Assists in network level fault tolerance through support for multi-homing \cite{Stewart2007, Natarajan2009}} \end{itemize} \item \textcolor{red}{SCTP Cons} \begin{itemize} \item \textcolor{red}{Requires both host and destination device protocol stacks to be updated with the SCTP protocol \cite{Stewart2007}} \end{itemize} \end{itemize} \noindent \textcolor{red}{From the \textit{pros} and \textit{cons} of both MPTCP and SCTP, as listed above, it can be concretely stated that the IETF MPTCP-SCTP methods satisfy parameters \textit{RL1} (allowing for multiple flows over the network for any given user) and \textit{RL5} (provisioning congestion awareness as part of the transport layer characteristic for MM) for the reliability criterion. Further, for flexibility, from our discussion above, it is clear that IETF MPTCP-SCTP only satisfies parameter \textit{FL1} (by allowing for multiple flows, flow level granularity can be induced).}
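To ground the above discussion, the sketch below shows how an application on a recent Linux kernel (MPTCP support was merged in kernel 5.6) might opt into MPTCP while retaining a plain-TCP fallback; the fallback logic is our illustrative assumption, not part of the surveyed proposals.
\begin{verbatim}
import socket

# IPPROTO_MPTCP is 262 on Linux; newer CPython versions expose it as
# socket.IPPROTO_MPTCP, so fall back to the raw value if it is absent.
IPPROTO_MPTCP = getattr(socket, "IPPROTO_MPTCP", 262)

def open_multipath_stream(host, port):
    """Try an MPTCP connection; fall back to legacy TCP if unsupported."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                          IPPROTO_MPTCP)
    except OSError:
        # Kernel without MPTCP: a plain TCP socket keeps the app working.
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    return s
\end{verbatim}
Note that whether additional subflows are actually established over other interfaces is governed by the kernel's path manager configuration, not by the application itself.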
\subsection{IEEE 802.21} \subsubsection{\textcolor{red}{Discussion}} \textcolor{red}{Network layer protocols will play a critical role in ensuring seamless mobility during inter-RAT mobility events, given the fact that a change in IP anchors/addresses invariably leads to a dropped session. A significant effort in this direction is provided by IEEE 802.21, which is an inter-RAT handover protocol allowing devices to move seamlessly between the various IEEE 802.x technologies \cite{Ferretti2016, DeLaOliva2008, Eastwood2008, IEEE2014, Leon2010}. Sitting just above the MAC layer, it provides information and command services to higher layers, thus permitting users to perform a media-independent handover. 3GPP technologies can also utilize this information and hence allow devices to handover from 3GPP to IEEE 802.x RATs and vice versa. Consequently, IEEE 802.21 can provision a certain degree of reliability and flexibility for 5G and beyond MM mechanisms. However, note that the protocol stack of all the users would have to be modified to implement the IEEE 802.21 mechanism.} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{For the purpose of analysis, we list the \textit{pros} and \textit{cons} of the IEEE 802.21 mechanism towards 5G and beyond MM strategies, as follows:} \begin{itemize} \item \textcolor{red}{IEEE 802.21 Pros} \begin{itemize} \item \textcolor{red}{Provisions seamless handover capability, as it allows users to switch between multiple RATs \cite{Ferretti2016, DeLaOliva2008, Leon2010}} \item \textcolor{red}{Provisions the possibility for a UE to connect to multiple APs \cite{Ferretti2016,Eastwood2008}} \end{itemize} \item \textcolor{red}{IEEE 802.21 Cons} \begin{itemize} \item \textcolor{red}{Requires the protocol stacks of both the host and destination devices to be modified, so as to enable the IEEE 802.21 functionality \cite{DeLaOliva2008, IEEE2014}} \end{itemize} \end{itemize} \noindent \textcolor{red}{And so, given the aforesaid \textit{pros} and \textit{cons} with regards to IEEE 802.21, it can be deduced that it satisfies parameter \textit{RL2} for the reliability criterion (allowing for seamless movement between different RATs) and \textit{FL2} for the flexibility criterion (allowing for the possibility to connect with multiple RATs).}
\subsection{IETF PMIPv6} \subsubsection{\textcolor{red}{Discussion}} \textcolor{red}{Proxy Mobile IPv6 (PMIPv6) is a layer-3 MM protocol that allows a network based MM solution by utilizing gateways and anchors, i.e., the Mobile Access Gateway (MAG) and the Local Mobility Anchor (LMA), respectively \cite{Gundavelli2008, Bernados2016}. An LMA manages multiple MAGs, and is responsible for the assignment of the IP prefix, which the UE retains for its entire stay within an LMA, i.e., a PMIPv6, domain \cite{Gundavelli2008, Bernados2016}. Concretely, it is the topological anchor for the UE. On the other hand, the MAG is responsible for performing mobility related signaling, on behalf of the UE, with the LMA. Furthermore, it ensures that the assigned IPv6 prefix is maintained as the UE roams among the MAGs within an LMA domain \cite{Gundavelli2008, Bernados2016}. It is noteworthy that PMIPv6 has also been adopted by 3GPP networks \cite{3GPP2010}, thus reflecting the maturity and reliability of the solution with regards to its utility for future MM solutions.} \textcolor{red}{However, being centralized in nature, it can impact the network scalability and reliability in dense and heterogeneous future network environments, as a large volume of the traffic will pass through a single anchor. This can consequently lead to a SPoF and congestion \cite{Nguyen2008}, thus making it less favorable for 5G and beyond MM mechanisms; the toy model below illustrates this single-anchor design.}
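In the following toy model, the class and method names are illustrative and do not correspond to the actual PMIPv6 (RFC 5213) message formats; it only captures the anchoring behaviour described above.
\begin{verbatim}
class LMA:
    """Toy Local Mobility Anchor: one binding cache for the whole
    domain, which is precisely what makes it a single point of
    failure and a potential congestion point."""

    def __init__(self, prefix_pool):
        self.prefix_pool = list(prefix_pool)
        self.binding_cache = {}  # ue_id -> (home prefix, serving MAG)

    def proxy_binding_update(self, ue_id, mag):
        """Invoked (conceptually) by a MAG when a UE attaches to it."""
        prefix, _ = self.binding_cache.get(ue_id, (None, None))
        if prefix is None:
            prefix = self.prefix_pool.pop(0)  # first attachment in domain
        self.binding_cache[ue_id] = (prefix, mag)  # handover: re-point tunnel
        return prefix  # the UE keeps this prefix across all MAGs
\end{verbatim}
Every binding update, and every tunnelled data path in the domain, converges on this single entity.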
\textcolor{red}{And so, certain studies, such as \cite{Giust2015, Nguyen2008}, provide discussions on scalable methods for PMIPv6. Specifically, in \cite{Giust2015} a PMIPv6 based DMM approach has been proposed. The DMM approach essentially aids in improving the reliability and scalability aspects, as it provides a decentralized method (without a single central mobility anchor) and eliminates SPoFs. Furthermore, in \cite{Nguyen2008}, a cluster based approach was proposed to enhance the scalability of the existing PMIPv6 protocol.} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{Based on the discussions carried out above, we now enlist the \textit{pros} and \textit{cons} of the PMIPv6 strategy with regards to its utility for 5G and beyond MM mechanisms, as follows:} \begin{itemize} \item \textcolor{red}{PMIPv6 Pros} \begin{itemize} \item \textcolor{red}{Given that PMIPv6 has been adopted by 3GPP and that it keeps mobility signaling largely transparent to the UE, it can provision seamless mobility \cite{3GPP2010, Gundavelli2008, Bernados2016}} \item \textcolor{red}{Through the DMM based PMIPv6 approach, decentralization can be introduced \cite{Giust2015}. Furthermore, other approaches, such as the clustering based approach in \cite{Nguyen2008}, can grant enhanced scalability and reliability to the PMIPv6 approach} \item \textcolor{red}{Given that it has already been adopted by 3GPP for LTE, the available implementation expertise will enhance the ease with which it can be adopted in future networks} \end{itemize} \item \textcolor{red}{PMIPv6 Cons} \begin{itemize} \item \textcolor{red}{In its original flavor, PMIPv6 suffers from scalability and reliability issues due to the SPoF formed by the LMA in its architecture \cite{Nguyen2008}} \item \textcolor{red}{An explicit treatment of PMIPv6 with regards to the parameters for the flexibility criterion is missing in \cite{3GPP2010, Gundavelli2008, Bernados2016, Giust2015, Nguyen2008}} \end{itemize} \end{itemize} \textcolor{red}{And so, it can be deduced that IETF PMIPv6 in its original flavor, given its maturity in development and deployment, satisfies the seamless handover parameter \textit{RL2} in the reliability criterion. However, with enhancements from the use of DMM and cluster based methods, PMIPv6 can be decentralized and scaled, thus satisfying parameters \textit{RL3} and \textit{SL4} in reliability and scalability, respectively. Furthermore, since it has already been explored and implemented in LTE networks, it satisfies parameter \textit{SL5} owing to its relative ease of implementation compared to any new protocol.}
\subsection{LTE MM mechanisms} \subsubsection{\textcolor{red}{Discussion}} \begin{itemize} \item[\textcolor{red}{A.}]\textcolor{red}{\emph{Handover:}} \textcolor{red}{Whilst LTE mobility management derives its characteristics from the PMIPv6 MM strategy \cite{Sanchez2016}, LTE-X2 offers a method to decentralize it. In the presence of an X2 interface between two LTE eNodeBs (eNBs), instead of involving the core network for resource negotiation and data forwarding tasks, the eNBs communicate amongst themselves. This allows for a fast handover and also reduces signaling in the core network \cite{Universal2015}. And so, due to the ability of LTE-X2 to provision seamless handover alongside decentralization, it can grant reliability and scalability to 5G and beyond MM mechanisms. Further, since it provisions decentralization and reduces CN signaling, it also reduces the processing load for the CN. Hence, LTE-X2 can facilitate scalability for 5G and beyond MM.
Lastly, since the LTE handover mechanisms only enable multi-level HO support, i.e., a HO can be executed either at the access network level (through X2 HO) or at the core network level (through S1 HO), they offer limited flexibility.} \setlength\parindent{8pt}\textcolor{red}{However, note that the LTE-S1 handover involves resource negotiation and routing decisions through the MME \cite{Universal2015}. Due to this centralized approach, there will be extensive CN signaling, which will lead to congestion and a SPoF. Thus, in its own capacity, the LTE-S1 handover is not foreseen as an enabler for future MM strategies.\\} \item[\textcolor{red}{B.}] \textcolor{red}{\emph{Traffic Offloading:}} \textcolor{red}{3GPP, through Release-10, introduced the Local IP Access (LIPA) and Selected IP Traffic Offload (SIPTO) protocols \cite{Sankaran2012}. Concretely, LIPA allows for a local breakout, wherein a mobile device can communicate with another device through a private network, i.e., the data flow does not pass through the 3GPP CN, or with a public network, if the private network connects to it \cite{Sankaran2012}. An important challenge of LIPA with regards to MM is that session continuity for LIPA connections during mobility events is not supported.} \setlength\parindent{8pt}\textcolor{red}{On the other hand, SIPTO is a more conventional traffic offloading mechanism, wherein the goal is to offload the IP traffic to an eNB or a gateway that is closer to the UE. Next, during 3GPP Release-10, the concept of IP Flow Mobility (IFOM) was also introduced. IFOM allows a UE to offload, if possible, data sessions to the Wi-Fi network from the 3GPP network. Consequently, through IFOM, a UE can maintain data flows belonging to the same packet data network (PDN) connection simultaneously on both the 3GPP and the Wi-Fi network \cite{Sankaran2012}.} \setlength\parindent{8pt}\textcolor{red}{These traffic offloading strategies can consequently aid in managing any increase in the traffic load within the network, as well as the processing load on specific network nodes, due to the increase in the number of users/devices. Thus, these mechanisms can provision scalability for 5G and beyond MM.\\} \item[\textcolor{red}{C.}] \textcolor{red}{\emph{Dual Connectivity and LTE-WiFi Aggregation:} The Dual Connectivity (DC) concept allows a user to camp on two APs simultaneously. Concretely, a UE can be connected to a Small-cell (SC) and a Macro-cell (MC) at the same time, wherein the MC and SC are connected to each other via the X2 interface, and the MC is the master eNB. According to 3GPP, all control plane communications, including resource allocation on the SC, are performed via the corresponding MC, i.e., the master eNB, with which the UE is associated. Note that DC was introduced by 3GPP for LTE during Release-12. But it was in Release-13 that this concept matured, wherein multiple usage scenarios, the architecture and the operational characteristics were defined. A detailed description of the same has been presented in \cite{Carrier2015}. Furthermore, during Release-13, the concept of LTE-WiFi Aggregation (LWA) was standardized \cite{Terrestrial2013}. Through LWA, a UE can simultaneously receive packets over both the LTE and the Wi-Fi interfaces, wherein the aggregation of these two physically distinct data streams takes place at the Packet Data Convergence Protocol (PDCP) layer in the protocol stack. However, note that the LWA functionality is defined only for the downlink \cite{IbarraBarreno2017}.
Hence, given that the DC and LWA strategies provision the ability to connect to multiple APs at the same time, they can grant reliability and flexibility to 5G and beyond MM mechanisms. \\} \end{itemize} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{For the 3GPP based MM mechanisms, we first highlight the \textit{pros} and \textit{cons} for the handover, traffic offloading, and DC and LWA strategies, as follows:} \begin{itemize} \item \textcolor{red}{LTE Handover Pros} \begin{itemize} \item \textcolor{red}{The LTE-X2 and S1 mechanisms together offer handover support at the access and core network levels \cite{Universal2015}} \item \textcolor{red}{Through the LTE-X2 handover mechanism, CN signaling can be avoided \cite{Universal2015}} \item \textcolor{red}{LTE-X2 permits handover decisions to be taken at the access network level. Hence, it reduces the processing load on the CN entities and also permits fast handover capabilities \cite{Oh2014, Universal2015}} \end{itemize} \item \textcolor{red}{LTE Handover Cons} \begin{itemize} \item \textcolor{red}{The S1 based handover mechanism involves signaling through the CN, which creates an increased load on the CN \cite{Oh2014} as well as introduces a SPoF} \end{itemize} \item \textcolor{red}{LTE Traffic Offloading Pros} \begin{itemize} \item \textcolor{red}{Provision a method for managing the traffic load, given that the number of users/devices will increase significantly \cite{Sankaran2012}} \item \textcolor{red}{Provision a method for managing the processing load on the network nodes \cite{Sankaran2012}} \end{itemize} \item \textcolor{red}{LTE Traffic Offloading Cons} \begin{itemize} \item \textcolor{red}{LIPA does not support session continuity during mobility events, and it also requires an additional gateway \cite{Sankaran2012}} \item \textcolor{red}{SIPTO is not helpful in mitigating radio congestion \cite{Sankaran2012}} \item \textcolor{red}{IFOM is significantly harder to implement, as it necessitates coordination with the non-3GPP networks \cite{Sankaran2012}} \end{itemize} \item \textcolor{red}{LTE DC and LWA Pros} \begin{itemize} \item \textcolor{red}{Provisions the ability to connect to multiple 3GPP as well as Non-3GPP RATs \cite{Carrier2015,Terrestrial2013, IbarraBarreno2017, He2010}} \item \textcolor{red}{Provisions the capability to have multiple physical paths for data transmission, and thus better fault tolerance \cite{Carrier2015,Terrestrial2013, IbarraBarreno2017, He2010}} \end{itemize} \item \textcolor{red}{LTE DC and LWA Cons} \begin{itemize} \item \textcolor{red}{3GPP LWA is only applicable to the downlink} \end{itemize} \end{itemize} \textcolor{red}{From the \textit{pros} and \textit{cons} for the LTE MM mechanisms, it is clear that they provision redundancy in data paths (through DC and LWA), decentralization (through X2 and traffic offloading) and seamless handover (through X2 and S1 handover), thus satisfying the \textit{RL1}, \textit{RL2} and \textit{RL3} parameters for the reliability criterion. Further, for the flexibility criterion, LTE MM mechanisms offer the possibility of multi-level HO support (through X2 and S1 handover) as well as the ability to connect to multiple APs/RATs at the same time (through DC and LWA), thus satisfying parameters \textit{FL2} and \textit{FL3} for flexibility.
Lastly, LTE MM mechanisms offer enhanced support with regards to the scalability criterion for 5G and beyond MM, as they satisfy parameters \textit{SL2} to \textit{SL5}, given their decentralization, ease of integration, multi-level handover mechanisms (X2 and S1 handover), and traffic offloading characteristics.} \subsection{Non-3GPP Multi-Connectivity Solutions} \subsubsection{\textcolor{red}{Discussion}} Multi-connectivity enables the users to establish and maintain physical and logical connections to multiple access points (possibly belonging to different RATs) at the same time. \textcolor{red}{Certain standards and mechanisms, apart from those developed by 3GPP (Section 4.4), that utilize this concept are ITU-Vertical Multihoming (ITU-VMH) and the Co-ordinated Multipoint (CoMP) strategy.} \textcolor{red}{Specifically, ITU-VMH permits the user to camp on more than one RAT, via multiple physical channels, at any given moment \cite{ITU-T2011}. Through such a provision, ITU-VMH ensures path redundancy. Further, through interactions between the various Open Systems Interconnection (OSI) layers, techniques such as MPTCP/SCTP in combination with ITU-VMH can also aid in the provision of path redundancy \cite{ITU-T2011}. And so, ITU-VMH, via its redundancy and seamless handover capabilities, ensures reliability for 5G and beyond MM mechanisms. Note that the seamless handover capability is facilitated by the ability of ITU-VMH to allow the user to connect to a multitude of APs, thus reducing the possibility of outage (as compared to the standard HO process) during mobility events. Further, via the provision of multi-connectivity, ITU-VMH also permits per-channel granularity of service. Hence, it also provisions flexibility for future MM mechanisms. However, ITU-VMH, like the IEEE 802.21 standard, would require a transformation of the protocol stack to permit efficient resource allocation at all the protocol layers \cite{ITU-T2011}. Such a transformation might be difficult to scale to all the user devices, and hence, ITU-VMH is not a very scalable solution for 5G and B5G networks.} \textcolor{red}{Lastly, the CoMP strategy involves multiple access points co-ordinating with each other to serve a given user \cite{Irmer2011}. Similar to ITU-VMH, CoMP can also provision path redundancy as well as seamless handover capability, owing to its coordinated nature. Hence, it is also a reliable option for future MM strategies. Further, similar to ITU-VMH, CoMP can configure multi-connectivity alongside per-channel granularity (multiple APs permit multiple channels for the transmission of data and hence, per-channel granularity of service can be provisioned) \cite{Irmer2011}. Consequently, it is qualitatively a flexible mechanism for 5G and B5G networks. However, since CoMP will involve centralized scheduling operations, it will lead to a SPoF as well as challenge the scalability of backhaul networks.
Consequently, this also renders CoMP not very scalable towards the objective of developing 5G and beyond MM solutions.} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{We now present the \textit{pros} and \textit{cons} for the ITU-VMH and CoMP strategies, as follows:} \begin{itemize} \item \textcolor{red}{ITU-VMH Pros} \begin{itemize} \item \textcolor{red}{Provisions path redundancy through multi-homing \cite{ITU-T2011}} \item \textcolor{red}{Provisions the capability to connect to multiple RATs at any given time \cite{ITU-T2011}} \item \textcolor{red}{Per-channel granularity of service is possible} \end{itemize} \item \textcolor{red}{ITU-VMH Cons} \begin{itemize} \item \textcolor{red}{It will require the transformation of the entire protocol stack \cite{ITU-T2011}} \end{itemize} \item \textcolor{red}{CoMP Pros} \begin{itemize} \item \textcolor{red}{Provisions path redundancy through its ability to coordinate data transmission from multiple APs, which may also belong to different RATs \cite{Irmer2011, Sun2018a}} \item \textcolor{red}{Provisions the capability to connect to multiple RATs at any given time \cite{Irmer2011, Sun2018a}} \item \textcolor{red}{Through the use of multiple APs for transmission, per-channel granularity of service is made possible} \end{itemize} \item \textcolor{red}{CoMP Cons} \begin{itemize} \item \textcolor{red}{Centralized processing introduces the possibility of a SPoF \cite{Irmer2011, Lee2012}} \item \textcolor{red}{Backhaul networks will need to have extremely high capacity and extremely low latency characteristics, so as to support CoMP whilst maintaining QoS \cite{Lee2012}} \end{itemize} \end{itemize} \textcolor{red}{Concretely, ITU-VMH and CoMP satisfy parameters \textit{RL1} (allowing for the possibility of redundant physical connections) and \textit{RL2} (allowing for seamless mobility) for the reliability criterion, and parameters \textit{FL1} (provisioning the possibility of per-channel granularity for MM) and \textit{FL2} (allowing for the possibility of connecting to multiple RATs/APs) for the flexibility criterion.} \subsection{RSS based AP selection methods} \subsubsection{\textcolor{red}{Discussion}} \textcolor{red}{The legacy Received Signal Strength (RSS) based methods employ a very simplistic approach to AP selection, i.e., comparing the detected AP link quality (RSSI/RSRP/RSRQ) levels \cite{3GPP2011,3GPP2010, Xenakis2014}. This simplicity can benefit the scalability of future MM mechanisms, as such methods are easy to implement and do not entail a high processing or signaling load. However, such an approach can be plagued by multiple issues. For example, APs with a good RSS might be overloaded (as more users will be assigned to them) whilst others may be under-utilized. Such a scenario also implies that RSS based methods are not reliable, as a better RSS does not always guarantee better QoS, since congestion will lead to degraded QoS. Moreover, in dense scenarios, even with the implementation of a hysteresis, UEs will be subject to FHOs due to the fluctuating RSS and the availability of multiple candidate APs. This further exemplifies the unreliability of RSS based methods. Additionally, these methods are one-dimensional, given that they consider only the RSS as a decision parameter. The RSS methods also do not provision any granularity of service, context awareness, multiple levels of HO support, etc.
Hence, they do not offer any flexibility to the MM mechanisms for 5G and B5G networks.} \begin{sidewaystable*} \renewcommand{\arraystretch}{1.1} \caption{\textcolor{red}{Compliance with the Reliability, Scalability and Flexibility criteria for the legacy MM mechanism/standard}} \centering \color{red}\begin{tabular}{|*{14}{>{\centering\arraybackslash}p{1.25 cm}|}} \cline{3-14} \multicolumn{2}{c|}{} & \multicolumn{12}{c|}{\textbf{Mechanisms}}\\ \cline{3-14} \multicolumn{2}{c|}{} & \multicolumn{2}{p{2.5cm}|}{\textbf{IETF MPTCP-SCTP}} & \multicolumn{2}{p{2.5cm}|}{\textbf{IEEE 802.21}} & \multicolumn{2}{p{2.5cm}|}{\textbf{IETF PMIPv6}} & \multicolumn{2}{p{2.5cm}|}{\textbf{LTE MM mechanisms}} & \multicolumn{2}{p{2.5cm}|}{\textbf{Non-3GPP Multi-connectivity solutions}} & \multicolumn{2}{p{2.5cm}|}{\textbf{RSS based handover methods}} \\ \hline \multicolumn{2}{|c|}{} & Cnf.$^\dagger$ & Refs.$^\delta$ & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. \\ \hline \multirow{5}{*}{\rotatebox{90}{\textbf{Reliability}}} & \textbf{RL1} & \Checkmark & \multirow{5}{*}[-0.2em]{\cite{Klein2011, Zannettou2016, Phung2019,Liu2017a, Natarajan2009,Raiciu2011, Wischik2011}} & $\times$ & \multirow{5}{*}[-0.2em]{\cite{Ferretti2016, DeLaOliva2008}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Gundavelli2008, Bernados2016}} & \Checkmark & \multirow{5}{*}[-0.1em]{\cite{Universal2015, He2010}} & \Checkmark & \multirow{5}{*}[-0.2em]{\cite{ITU-T2011,Irmer2011, Sun2018a}}& $\times$ & \multirow{5}{*}[-0.1em]{\cite{3GPP2011, 3GPP2010}} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{RL2} & $\times$ & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark& \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{RL3} & $\times$ & & $\times$ & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & \\\cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{RL4} & $\times$ & & $\times$ & \cite{Leon2010} & $\times$ & \cite{Giust2015} & $\times$ & \cite{Carrier2015, Terrestrial2013, IbarraBarreno2017} & $\times$ & & $\times$ & \cite{Xenakis2014}\\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{RL5} & \Checkmark & & $\times$ & & $\times$ & & $\times$ & & $\times$ & & $\times$ & \\ \hline \hline \multirow{5}{*}{\rotatebox{90}{\textbf{Flexibility}}} & \textbf{FL1} & \Checkmark & \multirow{5}{*}[-0.1em]{\cite{Zannettou2016, Ignaciuk2018}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Ferretti2016, Eastwood2008}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{3GPP2010, Gundavelli2008, Bernados2016, Giust2015, Nguyen2008}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Universal2015, He2010}} & \Checkmark & \multirow{5}{*}[-0.1em]{\cite{ITU-T2011,Irmer2011, Sun2018a}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{3GPP2010, 3GPP2011}}\\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{FL2} & $\times$ & & \Checkmark & & $\times$ & & \Checkmark & & \Checkmark & & $\times$ & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{FL3} & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & & $\times$ & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{FL4} & $\times$ & \cite{Liu2017a, Natarajan2009} & $\times$ & \cite{Leon2010} & $\times$ & & $\times$ & \cite{Carrier2015, Terrestrial2013, IbarraBarreno2017} & $\times$ & & $\times$ & \cite{Xenakis2014,Shen2017} \\ 
\cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{FL5} & $\times$ & & $\times$ & & $\times$ & & $\times$ & & $\times$ & & $\times$ & \\ \hline \hline \multirow{5}{*}{\rotatebox{90}{\textbf{Scalability}}} & \textbf{SL1} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Ford2011, Ford2013}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{DeLaOliva2008, IEEE2014}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Giust2015}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Sankaran2012, Universal2015}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{Irmer2011, Lee2012}} & $\times$ & \multirow{5}{*}[-0.1em]{\cite{3GPP2011, 3GPP2010}} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{SL2} & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & & \Checkmark & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{SL3} & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & & \Checkmark & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{SL4} & $\times$ & \cite{wei-mptcp-proxy-mechanism-02} & $\times$ & & \Checkmark & \cite{Gundavelli2008, Bernados2016} & \Checkmark & & $\times$ & & $\times$ & \cite{Xenakis2014, Ahmed2014} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13} & \textbf{SL5} & $\times$ & & $\times$ & & \Checkmark & & \Checkmark & & $\times$ & & \Checkmark & \\ \hline \multicolumn{14}{l}{$^{\dagger}$The conformance (Cnf.) of a given mechanism for a given criterion.} \\ \multicolumn{14}{l}{$^{\delta}$The corroborating references (Refs.), if any, for the specified conformance of a mechanism for a given criterion.} \end{tabular} \end{sidewaystable*} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{Based on the discussion, we present here the \textit{pros} and \textit{cons} of the RSS based AP selection methods, as follows: } \begin{itemize} \item \textcolor{red}{RSS based methods Pros} \begin{itemize} \item \textcolor{red}{Easy to implement, given that they have already been adopted by 3GPP \cite{3GPP2010,3GPP2011, Ahmed2014}} \item \textcolor{red}{Relatively low processing and signaling load, owing to their simplicity \cite{Ahmed2014}} \end{itemize} \item \textcolor{red}{RSS based methods Cons} \begin{itemize} \item \textcolor{red}{FHOs in ultra dense scenarios are a pertinent issue \cite{Shen2017}} \item \textcolor{red}{They are agnostic of other parameters related to the UE and the network, such as the load, UE context, etc., thus making them unreliable and one-dimensional \cite{3GPP2010, 3GPP2011, Shen2017}} \end{itemize} \end{itemize} \textcolor{red}{Given these \textit{pros} and \textit{cons}, the legacy RSS based methods, due to their existence and maturity, can ensure mobility between multiple RATs, hence,
satisfying parameter \textit{RL2} for the reliability criterion. Furthermore, owing to the aforementioned simplicity and maturity in development and deployment, they also satisfy parameters \textit{SL2}, \textit{SL3} and \textit{SL5} for the scalability criterion.\\} \noindent \textcolor{red}{To summarize, we introduce Table 3, wherein we indicate the parameters that each of the explored methods satisfies for the reliability, scalability and flexibility criteria. We also enlist the important references that have led us to the development of Table 3, as presented in this article. From the discussions, the analysis and Table 3, it can be deduced that none of the legacy mechanisms that have been studied achieves the requirements necessitated by 5G and B5G networks. Concretely, none of the studied mechanisms satisfies all the parameters of the criteria utilized for the qualitative analysis. Notably, the 3GPP based LTE MM mechanisms provision the best basis and support for 5G and beyond MM mechanisms, given that they collectively satisfy the most parameters amongst the explored strategies. } \textcolor{red}{Additionally, through this qualitative analysis, whilst we have presented the capabilities offered by the legacy mechanisms towards 5G and B5G MM, we have also exposed the gaps that exist. This reinforces the fact that a holistic MM strategy for future wireless networks still remains elusive. Hence, in the following section, we explore the current state of the art in MM solutions for 5G and beyond networks.} \section{5G and Beyond MM: Current State of the Art} \textcolor{red}{Global efforts have spun up consortiums that have provided impetus to the development of 5G, including that of MM strategies.} Further, for B5G networks, such as 6G, certain collaborative efforts have already started. References \cite{Basar2019, Renzo2019, Boulogeorgos2018, Chowdhury2018, Zhao2019} highlight the advances that have been made with regards to identifying the enablers and core principles of B5G networks. Hence, in this section, we first detail the current state of the art in MM mechanisms and the parameters they satisfy from Table 2. We then follow this with a first discussion in the literature that elaborates upon the utility of the current state of the art in MM for B5G networks. \textcolor{red}{As a prologue to the aforementioned discussion, we introduce Figure 2, wherein the 5G architecture standardized by 3GPP has been presented \cite{3GPP2020}. Correspondingly, we have also presented the classification of the various mechanisms that we explore in Sections 5.1 and 5.2 with respect to the 5G architecture in Figure 2. This classification is dependent on the portion of the network that is impacted (directly or indirectly) the most by a particular MM scheme. Furthermore, we have illustrated whether the studied mechanisms are control plane or data plane solutions. Concretely, a CP solution would primarily impact MM via either CP signaling or decisions, while a DP solution would entail provisioning alternate and more efficient data paths. A detailed discussion with regards to these classifications has been provided in Sections 5.1 and 5.2.} \textcolor{red}{Concretely, the 5G architecture, as shown in Figure 2, consists of two main core network functions, i.e., the Session Management Function (SMF) and the Access and Mobility Management Function (AMF). The SMF communicates with the User Plane Function (UPF) over the N4 interface, while the AMF is responsible for communicating with the RAN side over the N2 interface.
Furthermore, the AMF and SMF communicate with other network functions, such as the Policy Control Function (PCF), Authentication Server Function (AUSF), etc., to execute their defined functionalities within the ambit of the policies and the existing user and network context. For the sake of brevity, in Figure 2 we club all of these functions into a single entity box called \textit{Network Functions}. Moreover, the AMF also has an N26 interface that connects to the Evolved Packet Core (EPC) to facilitate Inter-RAT mobility, while an N32 interface exists in the event of a change in the Public Land Mobile Network (PLMN) with the 5G Core (5GC) as the CN for both the visited and home networks. Note that the interfaces \textit{N2}, \textit{N4}, \textit{N26} and \textit{N32} are all control plane paths, with the AMF, SMF and other network functions forming the control plane entities.} \begin{figure*} \centering \includegraphics[scale=0.42]{Figures/Figure3_new.pdf} \caption{Classification of the state of the art in MM strategies on the 5G architecture.} \end{figure*} \textcolor{red}{In addition, the AMF in 5G networks is the equivalent of the MME in LTE-4G networks. It focuses on handling mobility at the access network level (such as AP selection, resource allocation, etc.). The SMF, on the other hand, handles the CN related tasks during mobility events (such as path re-routing, etc.). Next, in Figure 2, it can be seen that the RAN interacts with the UPF through the interface N3, and the UPFs use the N9 interface to communicate amongst themselves. Also, the 5G networks provision a local breakout through the N6 interface from a UPF. The interfaces \textit{N3}, \textit{N9} and \textit{N6} constitute the data plane paths, with the RAN and UPF forming the data plane entities. Lastly, the UE, which is also a data plane entity, interacts with the AMF through the N1 interface. However, to maintain clarity, we have omitted the illustration of this interface from Figure 2. Thus, with this background, we now explore the 3GPP 5G MM mechanisms as well as other research efforts with regards to MM for 5G networks.} \subsection{\textcolor{red}{3GPP 5G MM Mechanisms}} \textcolor{red}{3GPP, through TS 23.501 \cite{3GPP2020}, TS 23.502 \cite{3GPP2020a} and TS 38.300 \cite{3GPP2020b}, has provided significant insights into the design and development of 5G MM strategies. New session management methods, service continuity states, UE mobility monitoring, provisioning for multi-homing, load balancing strategies, the provision of on-demand MM, resource allocation due to mobility events, the new MM module, i.e., the AMF, inter- and intra- next generation core (NGC) handovers, and LTE-EPC 5G-NGC interworking have been introduced in the aforesaid 3GPP specifications. These techniques, through the provision of a softwarized solution and a global view of the network scenario alongside the user context, appear to facilitate the efficient operation of 5G and beyond MM mechanisms. Consequently, in the discussions that follow, we investigate these newly defined 3GPP MM mechanisms and elaborate their \textit{pros} and \textit{cons} for future MM mechanisms.} \subsubsection{\textcolor{red}{Discussion}} \begin{itemize} \item[\textcolor{red}{A.}]\textcolor{red}{\textit{UE Mobility monitoring:} In TS 23.501 \cite{3GPP2020}, details on how the UE mobility is monitored, and the corresponding actions with regards to resource allocation and context updates, have been specified.
Concretely, when a UE is mobile, the 5G standards define that the AMF will be responsible for monitoring its movement and hence, its mobility pattern. Furthermore, during a UE mobility event, new resources on the destination AP are managed by the AMF through the RAT and Frequency Selection Parameter (RFSP). Such a process simplifies the identification of the required resources, as well as the migration of these resources to the destination network. Moreover, the AMF manages the UE mobility event notification, i.e., it provisions details with regards to the mobility event as well as the areas of interest (Tracking areas, Cells, RANs, etc., to which a UE might migrate). The other Network Functions (NF), such as the SMF, can subscribe to these notifications so as to employ their decisions and policies. } \item[\textcolor{red}{B.}]\textcolor{red}{\textit{Session Management:} Through TS 23.501 \cite{3GPP2020}, the various modes that can be utilized to manage the multiple heterogeneous sessions for a given user have been defined. Notably, if a UE is connected to multiple RATs then, for a given Protocol Data Unit (PDU) session, the UE has the choice to select the access network over which this PDU session will be served. In addition, the UE, in the event of mobility or congestion, can request a PDU session to be transferred from a 3GPP to a non-3GPP RAT. Furthermore, in roaming scenarios, PDU sessions can either avail a local breakout or be routed through the home network. Specifically, each PDU session can be granted, independently, different routing modes. To do so, the SMF in the 5G CN controls and monitors the status of the data paths. Moreover, the SMF also provisions the capability of performing selective traffic routing by the application of an Uplink Classifier (UL CL) on certain data plane entities, i.e., UPFs. A UPF essentially performs the function of a router in the 5G network.} \item[\textcolor{red}{C.}]\textcolor{red}{\textit{IPv6 multihoming:} The new 5G standards, as specified in TS 23.501 \cite{3GPP2020}, have formalized the use of IPv6 multi-homing so as to reap the benefits of the multiple physical channels that will be available for use through multi-connectivity. Specifically, according to TS 23.501, more than one session anchor can be specified for a PDU session. Note that a PDU session anchor's primary role is to assign the IPv6 prefixes that are used by the UE for a given PDU session to communicate with the public network. However, all these PDU session anchors will have a single UPF as a branching point. Next, during a mobility event, a make-before-break approach for a PDU session is adopted to provision service continuity. It must be stated here that service continuity is ensured through the Session and Service Continuity (SSC) modes, which we will discuss next.} \item[\textcolor{red}{D.}]\textcolor{red}{\textit{Session and Service Continuity Modes:} 3GPP, through TS 23.501, defines the SSC modes, which are critical for the network in determining the level of service continuity offered to a PDU session \cite{3GPP2020}. Concretely, three modes are defined, i.e., \textit{SSC mode 1}, \textit{SSC mode 2} and \textit{SSC mode 3}. We briefly describe them as follows:} \begin{itemize} \item \textcolor{red}{\textit{SSC mode 1:} This mode ensures that the IP address is preserved. Specifically, the PDU session anchor is maintained regardless of the access technology being used by the PDU session after the mobility event.
Furthermore, the IP address is maintained throughout the lifetime of the PDU session. Additionally, more PDU session anchors might be allocated for additional IP addresses; however, it is not necessary that they be maintained just like the initial IP address and PDU session anchor.} \item \textcolor{red}{\textit{SSC mode 2:} In this mode, if needed, the network can release a PDU session and request the UE to immediately establish a new PDU session with the same network. Moreover, if the UE has multiple PDU session anchors, the additional anchors can be released or allocated (for new IP prefixes/addresses).} \item \textcolor{red}{\textit{SSC mode 3:} In this mode, the IP address is not preserved. This consequently makes any changes in the user plane visible to the UE. However, to ensure that an acceptable level of QoS, and hence, service continuity, is maintained, a \textit{make-before-break} approach is followed. This essentially determines the destination PDU session anchor before releasing the resources the PDU session occupies at its current anchor.} \end{itemize} \textcolor{red}{It must be stated here that the SSC mode for a UE is selected by the SMF depending on the UE subscription details as well as the PDU session type.} \item[\textcolor{red}{E.}]\textcolor{red}{\textit{User Plane aspects:} In 5G networks, UPFs will be utilized to handle the data plane traffic. Concretely, they can be thought of as routers, on whom the routing rules are programmed by the SMF. In TS 23.501 \cite{3GPP2020}, the aforesaid specifics have been defined. However, note that the methodology to establish these paths still involves exchanging Tunnel Endpoint Identifiers (TEIDs) between the CN entities. This, as we will state in the analysis, can be a cause of increased network load. Additionally, traffic re-routing, in the event of mobility or load balancing, is handled by the SMF, wherein it sends the necessary information, such as the forwarding target information, to the UPFs. Lastly, in the event of the mobility of a UE, packet buffering is also provisioned so as to minimize the loss of packets and hence, the degradation of QoS. } \item[\textcolor{red}{F.}]\textcolor{red}{\textit{Dual Connectivity:} Through TS 23.501 \cite{3GPP2020} and TS 37.340 \cite{ETSI2019}, 3GPP has also concretized and standardized the integration of Multi-RAT Dual Connectivity (MR-DC) into 5G. Concretely, the UEs will now have the capability and possibility to connect to two APs belonging to the same RAT (LTE-LTE, 5G New Radio (NR) - 5G NR) or to different RATs (LTE - 5G NR). As in LTE-DC, this can be configured to allow fast-switching (fast HO), since the control plane is not changed unless the Master Node is changed. Also, the UP is terminated at the MC, so no CN signaling is necessary for an intra-MC HO.} \item[\textcolor{red}{G.}]\textcolor{red}{\textit{Edge Computing:} TS 23.501 \cite{3GPP2020} defines the support for edge computing platforms in 5G networks. Concretely, these are utilized in the non-roaming or local breakout roaming modes. By local breakout, we mean that a UE can access the public network, without traversing the entire core network, via additional gateways that are placed within the network. Note that the 5G CN is responsible for selecting a UPF that is close to the UE and also has access to an edge compute node.
Consequently, traffic steering is performed at this UPF towards the edge compute node.} \item[\textcolor{red}{H.}]\textcolor{red}{\textit{Network Slicing:} Network slicing refers to the concept of enabling a telecom operator to slice its infrastructure network into logically separated networks, wherein the logical separation involves the dynamic allocation of network resources, and to consequently service multiple tenants, e.g., virtual network operators, services (eMBB, URLLC, mMTC), etc., using the same infrastructure \cite{Zanzi2018}. 3GPP, in TS 23.501 \cite{3GPP2020}, has discussed network slicing in detail, wherein its support for roaming as well as its involvement in the inter-working process between the 5G CN and the LTE EPC has been elaborated. Specifically, support for migrating and translating the Single Network Slice Selection Assistance Information (S-NSSAI), which consists of the necessary information with regards to an assigned network slice for a UE, between the Home PLMN (H-PLMN) and the Visited PLMN (V-PLMN) has been detailed. Similarly, for the inter-working process, 3GPP charts out the principles for the migration, translation and creation of S-NSSAIs whenever a UE undergoes mobility and changes from a 5G network to an LTE network, and vice versa. Moreover, support has been defined for scenarios where the N26 interface, which is the standard 5G CN and LTE EPC inter-working interface, may or may not be present \cite{3GPP2020}. } \setlength\parindent{8pt}\textcolor{red}{On the other hand, and importantly, the concept of network slicing also assists in provisioning tailor-made MM solutions for the tenants that each network slice will cater to. This consequently helps to deploy on-demand MM strategies.} \item[\textcolor{red}{I.}]\textcolor{red}{\textit{Load Balancing and Congestion Awareness:} In TS 23.501 \cite{3GPP2020}, 3GPP has defined procedures for load balancing at the AMF and SMF, as well as for congestion awareness within the core network. Concretely, two specific strategies, i.e., load balancing and load re-balancing, have been provisioned. Within the load balancing paradigm, new users entering an AMF region are, if necessary, directed to an appropriate AMF in order to manage the load of the AMFs. To do this, appropriate weights, indicative of the load on each AMF, are assigned and updated at appropriate intervals (typically on a monthly basis). On the other hand, if an AMF becomes overloaded, then load re-balancing is performed. Here, already registered users are migrated to other AMFs that are not overloaded, while ensuring minimum service disruption \cite{3GPP2020}. Note that the new AMF chosen should belong to the same AMF set. An AMF set is defined as the AMFs which belong to the same PLMN, have the same AMF region ID and the same AMF set ID value \cite{3GPP2020}. These parameters are pre-configured by the network operator. Lastly, 3GPP also provisions extensive details with regards to handling congestion control for the Non Access Stratum (NAS) messages. This is important from the perspective of MM, as MM messages are carried over NAS to the CN nodes. For further details with regards to the specifics of the congestion control procedures, the reader is referred to TS 23.501 \cite{3GPP2020}.} \item[\textcolor{red}{J.}]\textcolor{red}{\textit{Cell, Beam and Network Selection:} Through TS 23.501 \cite{3GPP2020} and, in particular, through TS 38.300 \cite{3GPP2020b}, details with regards to cell, beam and network selection have been specified.
For \textit{cell selection}, these standards documents, developed by 3GPP, specify support for cell selection procedures when the UE is in either the Radio Resource Control (RRC) idle, the RRC inactive or the RRC connected state. Note that the RRC idle state refers to a UE that can listen to paging channels, broadcasts and multicasts, as well as perform cell quality measurements. The RRC inactive state refers to a UE that can roam within the RAN-based notification area (RNA) without informing the NG-RAN. The RRC connected state for a UE implies that it has an active connection and data flow. Most notably, for the RRC connected state, cell mobility and beam mobility have been specified. As the names suggest, a UE can either undergo a cell handover or it can switch between the beams that a given AP uses. To perform this, procedures for beam quality and cell quality measurements have also been defined in \cite{3GPP2020b}. The beam quality measurements are performed in the physical layer for the multiple beams being transmitted by a given cell. These measurements are filtered and aggregated at the RRC layer to obtain the cell quality measurements. Note that these quality measurements are still performed using the RSSI/RSRP/RSRQ/SINR metrics. Furthermore, in \cite{3GPP2020b}, procedures for cell selection and handover involving intra- and inter-frequency handover in 5G NR, Inter-RAT handover within the 5G CN, and Inter-RAT handover from 5GC to EPC and vice versa, have been specified. For the sake of brevity, we do not detail these procedures and refer the reader to TS 38.300 \cite{3GPP2020b}. Moreover, for Inter-RAT handovers, procedures for packet buffering and forwarding as well as data path switching, to ensure the requested QoS, have also been defined. Lastly, roaming and access restrictions are also appropriately defined based on the user subscription to both the SMF and AMF. This facilitates the selection of the right AP and PLMN for a given user \cite{3GPP2020,3GPP2020b}. } \item[\textcolor{red}{K.}]\textcolor{red}{\textit{Inter-Working, Migration and Handover signaling:} While TS 38.300 \cite{3GPP2020b} specified certain handover procedures for both the CP and DP, a detailed description of the handover signaling, the inter-working between the 5G CN and the EPC, and the migration of PDU sessions has been provided in TS 23.502 \cite{3GPP2020a} and TS 23.501 \cite{3GPP2020}. Concretely, through \cite{3GPP2020a}, the CN signaling process for the various stages in a handover, i.e., handover request, handover preparation, and handover complete/cancel/reject, has been presented in detail. These handover signaling strategies have been detailed for Intra-RAT HOs (N2 and Xn handovers) as well as for Inter-RAT handovers (involving the 5GC and EPC). Moreover, the handover signaling procedures have also been defined for the scenarios wherein the EPC-5GC inter-working interface, i.e., N26, may or may not be present. Also note that the 5G-N2 handover is similar to the LTE-S1 handover (specified in Section 4.4.1) and the 5G-Xn handover is similar to the LTE-X2 handover (specified in Section 4.4.1). Next, for the 5GC and EPC inter-working, in TS 23.501 \cite{3GPP2020}, the principles for maintaining IP address continuity in the event of UE mobility from the 5GC to the EPC, or vice versa, have been provisioned. However, it is also specified that in the event a UE transitions from 5G to 3G or 2G, and vice versa, the IP address continuity might not be maintained.
Furthermore, procedures for transferring the PDN/PDU sessions established by a UE over a 4G/5G network, when it transitions to the 5GC/EPC, over the N26 interface have been provisioned in \cite{3GPP2020}. Traffic steering and forwarding procedures have also been elaborated. Lastly, procedures for migrating PDU sessions from non-3GPP access to 3GPP access, when a UE undergoes a mobility event from the 5GC to the EPC, are also supported \cite{3GPP2020}.} \item[\textcolor{red}{L.}]\textcolor{red}{\textit{D2D mobility support:} With the standardization of Proximity Services (ProSe) in 3GPP Release-12 and 13 \cite{Terrestrial2013}, 5G networks can utilize the capability to orchestrate data forwarding/relaying in both the DP and CP. This can consequently enhance the ability of the network to provide a proactive and seamless handover procedure \cite{Jung2016}.} \end{itemize} \subsubsection{\textcolor{red}{Analysis}} \textcolor{red}{Given the extensive overview with regards to the MM solutions that have been provisioned by the 5G standards \cite{3GPP2020, 3GPP2020a, 3GPP2020b}, we now, as part of our qualitative analysis, present the \textit{pros} and \textit{cons} for the same, as follows:} \begin{itemize} \item \textcolor{red}{3GPP 5G MM Pros} \begin{itemize} \item \textcolor{red}{Provisions the monitoring of UE mobility, mobility event notifications and resource negotiation mechanisms at destination networks \cite{3GPP2020}} \item \textcolor{red}{Employs flexible session management strategies, wherein the provision of per-PDU session granularity, through path selection, roaming support and traffic steering, has been detailed \cite{3GPP2020}} \item \textcolor{red}{Support for IPv6 multi-homing \cite{3GPP2020}} \item \textcolor{red}{Provision of multiple Session and Service Continuity modes \cite{3GPP2020}} \item \textcolor{red}{Support for Multi-RAT DC \cite{3GPP2020}} \item \textcolor{red}{Support for Edge Computing \cite{3GPP2020}} \item \textcolor{red}{Network slicing information migration support in the event of inter-/intra- RAT mobility \cite{3GPP2020}} \item \textcolor{red}{Network slicing support for provisioning on-demand MM} \item \textcolor{red}{Ability to provision context awareness via network slicing} \item \textcolor{red}{Provision for managing the core network load by introducing load balancing and re-balancing principles on the AMF \cite{3GPP2020}} \item \textcolor{red}{Provision of congestion awareness on the CP handling MM messages, i.e., NAS \cite{3GPP2020}} \item \textcolor{red}{Introduction of beam level MM support \cite{3GPP2020b}} \item \textcolor{red}{Intra-RAT (5GC to 5GC) and Inter-RAT (5GC to EPC and vice versa) HO support \cite{3GPP2020, 3GPP2020a}} \item \textcolor{red}{Well defined EPC and 5GC inter-working interface, i.e., N26 \cite{3GPP2020, 3GPP2020a}} \item \textcolor{red}{Mobility support at the D2D level \cite{Terrestrial2013, Jung2016}} \end{itemize} \item \textcolor{red}{3GPP 5G MM Cons} \begin{itemize} \item \textcolor{red}{Handover signaling in the CN is extremely sub-optimal \cite{Jain2019}} \item \textcolor{red}{RAT selection still relies on received signal quality fundamentals only \cite{3GPP2020b}} \item \textcolor{red}{A unified framework for cross-layer mechanisms, such as MPTCP-SCTP (transport layer), IPv6 multi-homing (network layer) and MR-DC (Physical and MAC layer) working together, has not been provisioned} \item \textcolor{red}{In IPv6 multi-homing, a single point of failure (SPoF) still exists, as the multiple PDU session anchors are still connected to a
single UPF from where the paths branch out \cite{3GPP2020}} \item \textcolor{red}{Co-ordination between D2D peers for enacting an efficient MM strategy is not explored explicitly in the standards} \end{itemize} \end{itemize} \textcolor{red}{From the \textit{pros} and \textit{cons}, it can be deduced that the 3GPP 5G MM mechanisms will be able to support reliability parameters \textit{RL1} (owing to the support for MR-DC and IPv6 multi-homing, and hence, redundancy in the number of connections and flows), \textit{RL2} (owing to the support for MR-DC and the handover procedures defined, thus ensuring seamless handover capability), \textit{RL3} (owing to managing mobility at the access, core and extreme edge network levels as well as local breakouts, thus introducing decentralization) and \textit{RL5} (owing to the congestion awareness feature in NAS). Next, for the flexibility parameters, 3GPP 5G MM mechanisms satisfy \textit{FL1} (owing to the granularity of service support per PDU session as well as per mobility level, and the ability to support on-demand MM through network slicing support), \textit{FL2} (owing to the ability to connect to multiple APs through MR-DC and IPv6 multi-homing support), \textit{FL3} (owing to the handover support at the access, core and extreme edge network levels via the Xn handover, N2 handover and 3GPP ProSe, respectively) and \textit{FL5} (owing to the ability to take into account the context of the tenant via network slicing). Lastly, 3GPP 5G MM mechanisms, for the scalability criterion, satisfy parameters \textit{SL1} (owing to the AMF load balancing strategies, local breakout strategies, multi-level handover support as well as the granularity in service per mobility level), \textit{SL4} (owing to local breakout and the support for edge computing, thus leading to decentralization) and \textit{SL5} (since these are standards, implementation and integration is not a bottleneck).} \textcolor{red}{Note that scalability parameters \textit{SL2} and \textit{SL3} are not supported, owing to the sub-optimality in CN handover signaling as well as the presence of SPoFs, as stated in the \textit{cons} for the 3GPP 5G MM mechanisms. Also, given that the 3GPP 5G MM mechanisms provision both CP and DP related strategies as well as core, access and extreme edge network related mechanisms, in Figure 2 they have been classified as illustrated.} \subsection{\textcolor{red}{Other Research Efforts: Core, Access and Extreme Edge Network Solutions}} From the perspective of MM strategies in 5G networks, the main objective of the ongoing academic and industrial research efforts has been to provision mechanisms that cater to the myriad user mobility and application profiles, as well as to ensure context/on-demand based service provision and continuity \cite{Kantor2015}. \textcolor{red}{For example, in \cite{Gramaglia2016}, a wide swathe of avenues that exist in the 5G MM design has been explored. It discusses an SDN based framework that can encompass strategies and techniques which grant a certain level of adaptability (feedback based), flexibility (in terms of granularity provisions) and reliability (through the availability of multiple paths) to 5G MM solutions.} Notably, and apart from the aforementioned broad study, specific areas of MM have also been tackled through research efforts such as \cite{Jain2019}, wherein optimal handover signaling strategies for 4G-5G networks have been proposed.
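To put the potential gains from such signaling optimizations in perspective, the following minimal Python sketch contrasts the aggregate control message volume generated by access-level (X2/Xn-style) and CN-anchored (S1/N2-style) handovers. The per-handover message counts are purely illustrative assumptions of ours, and are not drawn from \cite{Jain2019} or the 3GPP specifications.

\begin{verbatim}
# Toy comparison of access-level vs CN-anchored handover signaling.
# The per-handover message counts are illustrative assumptions only;
# they are NOT taken from 3GPP specifications or the cited studies.

MSGS_ACCESS_LEVEL = 4   # assumed messages between source/target APs (Xn/X2-style)
MSGS_CN_ANCHORED = 10   # assumed messages traversing the CN entities (N2/S1-style)

def signaling_load(num_ues: int, handovers_per_ue: int,
                   cn_anchored_fraction: float) -> float:
    """Total control messages generated by all handovers, given the
    fraction of handovers that must be anchored at the core network."""
    total_hos = num_ues * handovers_per_ue
    cn_hos = total_hos * cn_anchored_fraction
    access_hos = total_hos - cn_hos
    return access_hos * MSGS_ACCESS_LEVEL + cn_hos * MSGS_CN_ANCHORED

if __name__ == "__main__":
    # Pushing handovers to the access level shrinks the signaling burden.
    for frac in (1.0, 0.5, 0.1):
        load = signaling_load(num_ues=10_000, handovers_per_ue=5,
                              cn_anchored_fraction=frac)
        print(f"CN-anchored fraction {frac:.0%}: {load:,.0f} messages")
\end{verbatim}

Even under these toy numbers, the direction of the trade-off discussed above is apparent: the fewer handovers that must be anchored in the CN, the lower the total signaling load.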
\textcolor{red}{Hence, given that we will be analyzing a wide range of mechanisms and strategies, we have broadly classified them as being \emph{Core Network}, \emph{Access Network} and \emph{Extreme Edge Network} based solutions, as shown in Figure 2. These classifications reflect the regions in the network where the respective mechanisms generate the most impact. Concretely, \emph{Core network} based solutions primarily assist in the provision of MM services through the core network. Similarly, the \emph{Access network} and \emph{Extreme Edge network} solutions assist in the provision of MM services through the access and extreme edge portions of the wireless network, respectively. And so, we now present a detailed discussion of these solutions alongside their efficacy in satisfying the criteria listed in Table 2.} \subsubsection{Core Network Solutions} \paragraph{\textcolor{red}{Discussion\\}} Core network solutions have been categorized further as either being \emph{SDN}, \emph{DMM} or \emph{Edge cloud} based. Solutions that utilize SDN to implement MM can be equipped with a global or locally-global network view. This top-down view of the network enables MM solutions to offer a high degree of optimality. However, as a result of the complex 5G network scenario, the design of the SDN CP also becomes increasingly crucial. Hence, the placement of the SDN controller(s) (SDN-C) in the overall network topology is an important factor to consider \cite{GSchulz}. Consequently, we present a brief discussion on the SDN based solutions, which might be Centralized, Semi-Centralized or Hierarchical \cite{Li2014, Meneses2018, Assefa2017}. A centralized MM solution will consist of a single global SDN-C which monitors and manages the entire network. With the global view, it enables the formulation of optimal MM solutions. However, the centralized nature might not offer the scalability and reliability (the SDN-C can be a SPoF) \cite{Li2014,Basloom} needed by 5G MM solutions. Note that even though SDN-Cs might appear as SPoFs, corresponding clustering for load sharing and redundancy can help alleviate this issue. Specifically, and similar to the method proposed by 3GPP to pool the Mobility Management Entities (MMEs) to avoid the SPoF problem and to share the workload between MME instances, SDN-Cs can be clustered together to provision redundancy (and hence no SPoF) and workload sharing. Next, semi-centralized approaches divide the entire geographical region into smaller domains, each managed by a separate SDN-C.
This SDN-C, responsible for handling MM in its domain, helps to enhance the network scalability. However, since each domain still has a single SDN-C managing it, the SPoF issue might become a limiting factor. Further, for an inter-domain HO, extensive signaling would be required between two SDN-Cs, whilst the same would be non-existent in a centralized approach \cite{Li2014}. On account of this trade-off, a semi-centralized approach can be successful if an appropriate number of SDN domains are created, which do not increase the signaling burden while reinforcing the network reliability and scalability characteristics \cite{Basloom}. A combination of the aforementioned approaches, i.e., the hierarchical approach, consists of SDN-Cs at multiple levels \cite{Li2014}. Whilst the global SDN-C behaves as a master (tuning HO parameters, managing inter-domain HOs, etc.), the SDN-Cs in the lower hierarchical levels manage MM within their domains and function as slaves. Such an approach can hence provide the scalability and reliability required by 5G MM solutions. Next, similar to the SDN based solutions, DMM based approaches will contribute significantly to the design and functioning of 5G networks. With the ability to provide a distributed DP in conjunction with a distributed/centralized CP \cite{Yang2016, Nguyen2016, Elgendi2016, Battulga2017, Liu2015}, DMM can enhance the scalability (by removing the anchors prevalent in current MM solutions, i.e., decentralization) and flexibility (by allowing the most optimal access router to be chosen for each flow independently) of the 5G networks. These approaches can be classified as being fully distributed, partially distributed and SDN based. The fully distributed approach, whilst ensuring reliability and scalability by distributing both the DP and CP, will encounter an extensive amount of handover signaling between access routers (ARs) during a mobility event \cite{Yang2016, Nguyen2016}. \textcolor{red}{Note that the DP functionalities and location of the ARs are the same as those of the UPFs. However, depending on the type of DMM approach, the CP is fully or partially located on the ARs themselves, instead of being located in a centralized controller.} And so, while the fully distributed approach is challenged by the signaling between ARs, the partially distributed (P-DMM) approach centralizes the CP, hence alleviating this concern \cite{Yang2016, Nguyen2016}. The P-DMM approach also maintains the benefit of avoiding a single mobility anchor. However, an enhancement of this approach is the SDN based approach. Similar to the P-DMM approach, the CP is still at a central controller, i.e., the SDN-C; however, the signaling between the controller and the DP devices is far simpler than in the partially distributed approach. The reason is that, in an SDN based approach, the ARs are reduced to mere forwarding devices, and it is the SDN-C that orchestrates the forwarding rules (routing tables) on them to realize the data paths for the existing sessions in the network. Concretely, in the SDN based approach, the DP devices no longer need to perform a handshake with the central controller to establish a route, as in the P-DMM approach; instead, the routing information is now fed to the DP devices by the SDN controller \cite{Nguyen2016, Elgendi2016}, as illustrated by the sketch below. These enhancements are further quantified in \cite{Nguyen2016} by the fact that the mean HO latency for SDN based DMM is reduced by 3.94\% as compared to P-DMM, while the E2E delay is reduced by 39.55\%.
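As a minimal illustration of this last point, the following Python sketch mimics how an SDN-C could re-point the forwarding rules on the ARs during a handover, without any AR-AR or AR-to-controller handshake. All class, method and flow names are our own illustrative choices; they do not correspond to an API from the cited works or from any SDN controller implementation.

\begin{verbatim}
# Sketch of the SDN-based DMM idea: access routers (ARs) act as plain
# forwarding devices, and a logically centralized SDN controller
# installs/updates their forwarding rules upon a handover. All names
# are illustrative assumptions, not an API from any cited work.

class AccessRouter:
    def __init__(self, name: str):
        self.name = name
        self.rules: dict[str, str] = {}  # flow_id -> next_hop

    def install_rule(self, flow_id: str, next_hop: str) -> None:
        # The AR does not negotiate routes itself; it simply stores
        # whatever the controller pushes (no AR-AR handshake).
        self.rules[flow_id] = next_hop

class SdnController:
    """Holds the global view: which AR currently serves each flow."""
    def __init__(self):
        self.flow_location: dict[str, AccessRouter] = {}

    def attach(self, flow_id: str, ar: AccessRouter) -> None:
        self.flow_location[flow_id] = ar
        ar.install_rule(flow_id, next_hop="local-UE")

    def handover(self, flow_id: str, target: AccessRouter) -> None:
        # Re-point the old AR towards the new one and serve the flow
        # locally at the target; only controller->AR pushes are needed.
        source = self.flow_location[flow_id]
        source.install_rule(flow_id, next_hop=target.name)
        target.install_rule(flow_id, next_hop="local-UE")
        self.flow_location[flow_id] = target

if __name__ == "__main__":
    ar1, ar2 = AccessRouter("AR1"), AccessRouter("AR2")
    ctrl = SdnController()
    ctrl.attach("ue42-flow1", ar1)
    ctrl.handover("ue42-flow1", ar2)
    print(ar1.rules, ar2.rules)
\end{verbatim}

Note how no mobility anchor appears in this sketch: each flow is served locally at its current AR, which is precisely the decentralization benefit attributed to DMM above.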
\textcolor{red}{Subsequent to these discussions, and given that the current standardization in 5G \cite{3GPP2020, 3GPP2020a} stipulates the functionality for mobility management to be split up between the AMF and the SMF NFs, it is noteworthy that the decoupling of the CP and DP and the subsequent utilization of the aforesaid NFs via an SDN-C can provision the capability to implement fast and efficient MM solutions for 5G and beyond networks. Such solutions, on the basis of the discussions thus far, will be reliable, flexible and, to an extent, scalable. Since CN signaling during mobility events will still be a challenge, given the future network scenario, there remains a possibility for the SDN and DMM based 3GPP 5G MM solutions to be rendered sub-optimal.} Lastly, edge clouds, which essentially refer to data clouds/processing centers close to the RAN within a given network infrastructure, can have a profound impact on the user QoS during mobility scenarios (through fast access to data and compute resources) \cite{Li2014a}. Hence, several studies, such as \cite{Urgaonkar2015,Wang2018,Architecture2016,Machen2018,Mtibaa2018, Mach2017}, alongside 3GPP and ETSI \cite{ETSI2018}, have studied the fundamental concepts of utilizing the edge clouds for fast data access (via data caching) as well as for processing capabilities (i.e., performing certain MM operations without the messages having to traverse the entire CN). Note that we classify the edge clouds as a CN solution, even though we state that they are most likely to be closer to the RAN, because certain topology designs might entail a hierarchical setup. In this hierarchical setup, there will be some edge clouds placed close to the RAN and some placed further away from the RAN, say close to the S-GW and Packet Gateway (P-GW) in an LTE network \cite{Li2014a}. Such an approach can help in caching data according to their level of popularity, taking into account the CN traffic as well as the latency to retrieve the requested content \cite{Li2014a}, as sketched below.
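The following short Python sketch conveys this hierarchical, popularity-based placement idea. The tier sizes and latency figures are illustrative assumptions only, and the two-tier layout is merely one possible instantiation of the hierarchical setup described in \cite{Li2014a}.

\begin{verbatim}
# Sketch of two-tier (hierarchical) edge caching by content popularity:
# the most popular items sit in the RAN-edge cloud, the next tier sits
# deeper in the network (e.g., near the gateways), the rest at the
# origin. Tier sizes and latencies are illustrative assumptions only.

RAN_EDGE_SIZE = 10    # assumed number of items cached at the RAN edge
CN_EDGE_SIZE = 100    # assumed number of items cached deeper in the CN
LATENCY_MS = {"ran_edge": 2, "cn_edge": 10, "origin": 50}  # assumed

def place_by_popularity(ranked_items: list[str]):
    """Split a popularity-ranked catalogue across the two cache tiers."""
    ran_edge = set(ranked_items[:RAN_EDGE_SIZE])
    cn_edge = set(ranked_items[RAN_EDGE_SIZE:RAN_EDGE_SIZE + CN_EDGE_SIZE])
    return ran_edge, cn_edge

def fetch_latency(item: str, ran_edge: set, cn_edge: set) -> int:
    # Look up the closest tier first; fall back towards the origin.
    if item in ran_edge:
        return LATENCY_MS["ran_edge"]
    if item in cn_edge:
        return LATENCY_MS["cn_edge"]
    return LATENCY_MS["origin"]

if __name__ == "__main__":
    catalogue = [f"content-{i}" for i in range(1000)]  # rank 0 = most popular
    ran_edge, cn_edge = place_by_popularity(catalogue)
    for item in ("content-3", "content-42", "content-500"):
        print(item, fetch_latency(item, ran_edge, cn_edge), "ms")
\end{verbatim}

Under this toy model, the most popular content is retrieved with the lowest latency, which is exactly the QoS benefit during mobility events that the edge cloud literature cited above targets.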
\paragraph{\textcolor{red}{Analysis}\\} \textcolor{red}{For analyzing the core network solutions we utilize the generic classifications, i.e., SDN based, DMM based and Edge Cloud solutions, and firstly list their \textit{pros} and \textit{cons}.} \begin{itemize} \item \textcolor{red}{SDN based mechanism Pros} \begin{itemize} \item \textcolor{red}{Provisions a global view of the network \cite{Li2014, Assefa2017}} \item \textcolor{red}{Provisions hierarchical solutions, thus enabling decentralization \cite{Li2014}} \item \textcolor{red}{Provisions the ability to manage CN signaling, and hence, DP paths during mobility events \cite{Li2014, Meneses2018, Assefa2017}} \item \textcolor{red}{Provisions a single point of collection for network statistics, thus enabling the design and development of context based MM mechanisms \cite{Basloom}} \end{itemize} \item \textcolor{red}{SDN based mechanism Cons} \begin{itemize} \item \textcolor{red}{Extensive CN signaling for managing handovers in a centralized/semi-centralized approach \cite{Li2014}} \item \textcolor{red}{Does not alleviate the issue of mobility anchors, which can lead to SPoFs in the DP} \end{itemize} \item \textcolor{red}{DMM based mechanism Pros} \begin{itemize} \item \textcolor{red}{Provisions decentralization of the mobility management anchors \cite{Yang2016, Nguyen2016, Elgendi2016, Battulga2017}} \item \textcolor{red}{Assists the CN in implementing efficient data paths for UEs undergoing mobility \cite{Nguyen2016, Liu2015, Yang2016}} \end{itemize} \item \textcolor{red}{DMM based mechanism Cons} \begin{itemize} \item \textcolor{red}{The fully distributed solution introduces extensive CN signaling in order to manage the changes in data paths and mobility anchors, and hence, handovers \cite{Nguyen2016}} \item \textcolor{red}{The partially distributed solution, while resolving the extensive CN signaling, introduces a central controller, and hence, an SPoF \cite{Nguyen2016}} \item \textcolor{red}{Co-existence and integration with already deployed networks and devices will be a significant challenge \cite{Liu2015}} \end{itemize} \item \textcolor{red}{Edge clouds Pros} \begin{itemize} \item \textcolor{red}{Ensure data offloading opportunities, and hence, a reduction in the CN traffic load \cite{ETSI2018, Urgaonkar2015}} \item \textcolor{red}{Facilitate processing of MM related tasks without the messages having to traverse the CN \cite{Mtibaa2018}} \item \textcolor{red}{Context awareness \cite{Mtibaa2018,Mach2017}} \end{itemize} \item \textcolor{red}{Edge clouds Cons} \begin{itemize} \item \textcolor{red}{Require dedicated infrastructure and appropriate placement \cite{ETSI2018, Urgaonkar2015, Machen2018}} \item \textcolor{red}{Require fast service migration strategies to ensure seamless mobility \cite{Wang2018}} \end{itemize} \end{itemize} \textcolor{red}{From these \textit{pros} and \textit{cons} as well as the preceding discussions, it is evident that the SDN based solutions satisfy parameters \textit{RL2} (allowing for seamless mobility), \textit{RL3} (through the provision of decentralized solutions), \textit{RL4} (through the ability to re-program paths in the CN via the orchestration of OF rules) and \textit{RL5} (through the ability to utilize network statistics for traffic steering within the CN) for the reliability criterion. 
For the flexibility criterion, the SDN based mechanisms satisfy parameters \textit{FL1} (through the capability of orchestrating policies dependent on flow type, slice, etc.), \textit{FL3} (by allowing for CN based MM solutions that will work in synergy with the access network based solutions) and \textit{FL4} (through the global view of the network, wherein a variety of parameters such as network load, QoS requirements, etc., are considered). In terms of scalability, SDN based solutions satisfy parameters \textit{SL1} to \textit{SL3} (given the ability to manage and steer traffic flows while having a distributed, hierarchical or centralized implementation) and \textit{SL4} (due to the possibility of having a decentralized configuration).} \textcolor{red}{The DMM based solutions, however, only satisfy parameters \textit{RL2} (allowing for seamless handovers) and \textit{RL3} (due to their decentralized nature) in the reliability criterion. Further, for the flexibility criterion, DMM based solutions only satisfy parameter \textit{FL1}, i.e., they only offer granularity of service by avoiding any single mobility anchor. It is noteworthy, though, that from the scalability aspect DMM based solutions, like SDN based solutions, satisfy parameters \textit{SL1} to \textit{SL4}, and for the same reasons.} \textcolor{red}{Lastly, for the edge cloud based solutions, parameters \textit{RL2} (allowing for seamless mobility through fast access to data/processing capabilities upon migration to the target network) and \textit{RL3} (allowing decentralization of MM based services) are satisfied for the reliability criterion. For the flexibility criterion, parameters \textit{FL1} (due to the ability to provision services based on mobility and application profiles), \textit{FL3} (by allowing for MM methods at the edge network level in addition to the access and core network based solutions), \textit{FL4} (by provisioning processing capabilities for user association/AP selection services) and \textit{FL5} (by allowing for context awareness in data caching according to user mobility) are satisfied. Additionally, for the scalability criterion, parameters \textit{SL1} to \textit{SL4} are satisfied by the edge cloud solutions. The reason is that they allow for decentralization, which consequently permits a better capability to manage the connections and control messages arising from an increasing number of users.} \textcolor{red}{It is important to state here that, given that the SDN based mechanisms assist in MM through CP procedures, the DMM based solutions assist through CP procedures as well as by provisioning alternate and effective DP paths, and the edge clouds provision alternate and effective data paths, they have been classified as CP, CP/DP and DP procedures, respectively, in Figure 2.} \subsubsection{Access Network Solutions} \paragraph{\textcolor{red}{Discussion}\\} As part of the access network strategies, one of the key approaches that has been proposed, similar to LTE dual connectivity, is the concept of the phantom cell \cite{Nakamura2013}. It allows the UE to camp its CP on an MC, while its DP is handled at the small cells that lie within the coverage of the aforementioned MC. This, in essence, offers a low signaling cost regime to perform the intra-MC HOs, as the UE does not need to access the CN for radio resource management operations during HO. 
Concretely, the MC handles the radio resource allocation operations for the phantom cells, and hence, during HOs between the phantom cells the CN signaling is avoided \cite{Carrier2015}. Moreover, owing to the softwarization of the complete network, the process of exchanging information between the various OSI layers, i.e., the implementation of the cross-layer strategy, is eased. This in turn allows the network to formulate solutions that are optimal, taking into account the impact and benefits that the solution will produce at various levels of the network \cite{Emam2020, Emam2020a, Al-rubaye2016}. \textcolor{red}{However, to realize cross-layer techniques, significant modifications to the software architecture of the protocol stack will be necessary \cite{Emam2020, Emam2020a, Al-rubaye2016}.} Another consequence of the softwarization process is RAN-as-a-Service (RANaaS), also known as Cloud-RAN (C-RAN), which allows on-demand allocation of access network resources (e.g., the Baseband Unit (BBU) pool, BBU--Remote Radio Head (RRH) functional splitting) depending on the network and user context \cite{Nikaein2015, Outtagarts2015, Sabella2013}. Additionally, the BBU pool, through the close interaction of various RATs at a single location, can orchestrate fast handovers on demand \cite{Liu2012}. \indent However, in order to choose the best APs to connect to in a multi-RAT scenario, computationally tractable RAT selection mechanisms need to be adopted. The multi-RAT solutions are a broad classification for the myriad RAT selection processes (optimization based, fuzzy logic and genetic algorithm based, RSSI based, etc. \cite{Zekri2012, Passast2019, Goudarzi2019, Wang2019}) that have been proposed. From our earlier discussions it is evident that RSSI based methods, although simple, do not factor in other parameters such as network load, backhaul conditions, or user/network policies for a RAT selection decision. This will most certainly result in sub-optimal solutions. However, optimization based mechanisms that facilitate closed form solutions and are computationally tractable will be able to capture more features of the network. Consequently, context aware mechanisms, such as \cite{Calabuig2017,jain2020user}, will lead to optimal solutions that can be implemented in real-time scenarios. It must be stated here that the aforesaid HO decision may be executed either at the UE (user-centric) \cite{Calabuig2017}, at the network, or as a joint effort between the UE and the network (hybrid decision process). 
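\textcolor{red}{To make the preceding contrast concrete, the following minimal Python sketch compares a pure RSSI based AP ranking with a simple context aware utility that additionally weighs the AP load and backhaul capacity. The candidate attributes, weights and normalization ranges are illustrative assumptions, and not taken from any of the cited works; an operator would calibrate (or learn) them according to policy.}
\begin{verbatim}
# Toy contrast between RSSI-only AP ranking and a context-aware
# utility that also weighs AP load and backhaul capacity. The
# attributes, weights and normalizations are assumptions that a
# real deployment would calibrate or learn per operator policy.

def rssi_only(candidates):
    return max(candidates, key=lambda ap: ap["rssi_dbm"])

def context_aware(candidates, w=(0.4, 0.3, 0.3)):
    w_sig, w_load, w_bh = w
    def score(ap):
        sig = (ap["rssi_dbm"] + 100) / 70   # ~[-100,-30] -> [0,1]
        return (w_sig * sig
                + w_load * (1 - ap["load"])         # light load
                + w_bh * ap["backhaul_gbps"] / 10)  # backhaul
    return max(candidates, key=score)

aps = [{"name": "wifi-1", "rssi_dbm": -45,
        "load": 0.9, "backhaul_gbps": 1},
       {"name": "mmwave-1", "rssi_dbm": -60,
        "load": 0.2, "backhaul_gbps": 10}]
print(rssi_only(aps)["name"])       # wifi-1 (strongest signal)
print(context_aware(aps)["name"])   # mmwave-1 (better context)
\end{verbatim}
\textcolor{red}{In this toy example the RSSI-only rule picks the strongest but heavily loaded AP, whereas the utility based rule prefers the lightly loaded AP with the stronger backhaul, which is precisely the behavior that context aware RAT selection aims for.}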
\paragraph{\textcolor{red}{Analysis}\\} \textcolor{red}{As part of the analysis for the access network solutions, we firstly present the \textit{pros} and \textit{cons} for each mechanism discussed above, as follows:} \begin{itemize} \item \textcolor{red}{Phantom Cell method Pros} \begin{itemize} \item \textcolor{red}{Grants the ability to a UE to connect to multiple APs simultaneously, thus also providing redundancy in physical layer connections \cite{Nakamura2013}} \item \textcolor{red}{Provisions the ability to allow per-flow and per-user granularity of service \cite{Nakamura2013, Jain2017}} \item \textcolor{red}{Handover support at the access network level \cite{Nakamura2013}} \item \textcolor{red}{Ease of implementation due to existing standards on MR-DC \cite{3GPP2020,Nakamura2013}} \end{itemize} \item \textcolor{red}{Phantom Cell method Cons} \begin{itemize} \item \textcolor{red}{Handovers between different MC domains will still entail service disruption \cite{3GPP2020,Nakamura2013}} \item \textcolor{red}{Inter-MC domain handover signaling will still be a significant burden on the CN \cite{Nakamura2013,Jain2019}} \end{itemize} \item \textcolor{red}{RANaaS Pros} \begin{itemize} \item \textcolor{red}{Provisions on-demand allocation of network resources at the RAN level \cite{Sabella2013, Nikaein2015, Outtagarts2015}} \item \textcolor{red}{Provisions the ability to execute on-demand handovers, through close interaction between the various RATs that are integrated at a BBU pool \cite{Liu2012}} \item \textcolor{red}{Assists in allowing UEs to camp on more than one AP} \item \textcolor{red}{Introduces support for executing handovers at the access network level \cite{Liu2012}} \item \textcolor{red}{Introduces the ability to utilize per-flow/channel granularity of service by being able to manage the physical connections more centrally \cite{Liu2012,Nikaein2015,Sabella2013,Outtagarts2015}} \end{itemize} \item \textcolor{red}{RANaaS Cons} \begin{itemize} \item \textcolor{red}{Requires a complete architectural overhaul at the RAN side of the network \cite{Sabella2013, Nikaein2015, Outtagarts2015}} \end{itemize} \item \textcolor{red}{Cross layer Pros} \begin{itemize} \item \textcolor{red}{Allows for the sharing of network statistics between the various OSI layers \cite{Emam2020, Emam2020a, Al-rubaye2016}} \item \textcolor{red}{Allows for interaction between multiple OSI layers, thus facilitating efficient utilization of multi-homing \cite{Emam2020, Emam2020a, Al-rubaye2016, ITU-T2011}} \end{itemize} \item \textcolor{red}{Cross layer Cons} \begin{itemize} \item \textcolor{red}{Requires significant software modifications to the existing modular nature of the protocol structure \cite{Emam2020, Emam2020a, Al-rubaye2016}} \end{itemize} \item \textcolor{red}{Intelligent RAT selection Pros} \begin{itemize} \item \textcolor{red}{Optimized RAT selection strategies \cite{Zekri2012, Passast2019, Goudarzi2019, Wang2019, Calabuig2017}} \item \textcolor{red}{Utilization of parameters such as AP load, UE context, etc., for RAT selection \cite{Zekri2012, Passast2019, Goudarzi2019, Wang2019, Calabuig2017}} \item \textcolor{red}{Provisions the ability to select RATs per-slice/user/flow \cite{Calabuig2017}} \item \textcolor{red}{Provisions the ability to select multiple APs (possibly belonging to multiple RATs) \cite{jain2020user}} \end{itemize} \item \textcolor{red}{Intelligent RAT selection Cons} \begin{itemize} \item \textcolor{red}{Requires rapid collection of network statistics to perform 
well-informed selections} \item \textcolor{red}{Computational complexity and convergence time of RAT selection algorithms will be critical, given the QoS requirements in 5G \cite{jain2020user}} \end{itemize} \end{itemize} \textcolor{red}{Given the discussions in Section 5.2.2.1 and the \textit{pros} and \textit{cons} listed above, we now determine the parameters, listed in Table 2, satisfied by each of the explored mechanisms. Concretely, for the phantom cell method, parameters \textit{RL1} (redundancy in physical layer connections) and \textit{RL2} (seamless mobility) are satisfied for the reliability criterion. For the flexibility criterion, parameters \textit{FL1} (by permitting the possibility of per-flow and per-user based MM), \textit{FL2} (allowing for connectivity to multiple APs potentially belonging to different RATs) and \textit{FL3} (provisioning handover support at the access network level that will work in synergy with CN based mechanisms) are satisfied. In terms of scalability, the phantom cell method satisfies parameters \textit{SL1} to \textit{SL3} (owing to the handling of handover related computation and decision making at the access network) and \textit{SL5} (owing to the existing standards on MR-DC, as discussed in Section 3).} \textcolor{red}{Next, the RAN-as-a-Service concept satisfies parameters \textit{RL2} (allowing for seamless handovers) and \textit{RL5} (the softwarized nature enables dynamic initiation of RAN functionality, such as BBU resources, functional splits, etc., depending on the network and user context) for reliability; parameters \textit{FL1} (allowing for per-flow, per-user, per-slice, etc., service granularity through its softwarized nature), \textit{FL2} (allowing the possibility of connecting a user to multiple APs through its softwarized nature), \textit{FL3} (provisioning handover support at the access network, which will work in synergy with the CN and edge network based methods) and \textit{FL4} (enabling the possibility of collecting and utilizing RAN based information and generating intelligent AP selection/user association decisions) for flexibility; and parameters \textit{SL1} to \textit{SL3} (by offloading handover decision making and signaling to the access network) for scalability.} \textcolor{red}{On the other hand, the cross-layer method only satisfies parameters \textit{RL2} (allowing for seamless handover) and \textit{RL5} (allowing for congestion aware methods by sharing statistics about queue lengths, buffer sizes, etc., amongst the various layers) for the reliability criterion. Further, for the flexibility criterion it satisfies only parameters \textit{FL2} (by allowing for the possibility of multi-homing, etc.) and \textit{FL4} (allowing for the possibility of sharing statistics and other information amongst the various OSI layers and enabling joint optimization for AP selection, path re-routing, etc.).} \textcolor{red}{Lastly, for the intelligent RAT selection methods, parameter \textit{RL2} (allowing for seamless handover through optimized decisions on RAT selection) is satisfied for the reliability criterion. 
For the flexibility criterion, parameters \textit{FL1} (allowing for the possibility of flow/user/slice based RAT selection), \textit{FL2} (allowing for the possibility of selecting multiple RATs for a given user) and \textit{FL5} (via the ability to utilize user and network context for RAT selection) are satisfied, while for scalability only parameter \textit{SL5} (owing to the extensive body of research on optimal RAT selection strategies) is satisfied.} \textcolor{red}{It is important to state here that, given that the intelligent RAT selection mechanism assists in MM through RAT selection (which is a CP task) and the provision of effective and alternate DP paths, the phantom cell method provisions support for MM by handling the CP signaling for SC selection as well as by provisioning alternate and effective DP paths via SCs, and RANaaS and cross-layer strategies assist through efficient resource allocation decisions (which is a CP task), they have been classified as CP/DP, CP/DP, CP and CP procedures, respectively, in Figure 2.} \subsubsection{Extreme Edge Network Solutions} \paragraph{\textcolor{red}{Discussion}\\} In contrast to the design and implementation of access and core network based methods, the extreme edge network based solutions consider the potential of utilizing D2D techniques for facilitating seamless HO. \textcolor{red}{Multiple research efforts, such as \cite{Yilmaz2014a,Ouali2020,Klempous2020,Barua2017, Barua2016}, have provisioned methodologies to handle the mobility of D2D pairs. Concretely, in \cite{Yilmaz2014a} two types of handovers for D2D pairs have been provisioned, namely \textit{D2D aware} and \textit{D2D triggered} handovers. They take into account the fact that the control of the D2D pair can be handed over independently of the actual cellular handover. And so, for the \textit{D2D aware} handover, the D2D pair control (and if possible the cellular control) is handed over from the source eNB to the target eNB only after both the devices in the D2D pair satisfy the conditions to hand over to the target eNB. On the other hand, the \textit{D2D triggered} handover mechanism aims at clustering the devices of a D2D group in the minimum number of cells. Hence, during mobility events the algorithm tries to determine the cell to which the majority of devices within the D2D group belong.} \textcolor{red}{Similarly, in \cite{Ouali2020} two handover management mechanisms have been proposed. While the joint handover strategy aims at migrating both the devices in a D2D pair simultaneously to the target eNB, the half handover stipulates that such a migration can be asynchronous. Furthermore, the D2D handover decision has also been specified in \cite{Ouali2020}, with the Channel Quality Indicator (CQI) criterion being utilized for the same. Next, in \cite{Klempous2020}, a Markov chain based model has been proposed for D2D mobility.} \textcolor{red}{Lastly, the work done in references \cite{Barua2017, Barua2016} develops a model and simulation framework for analyzing D2D mobility. Specifically, it considers a D2D pair with one device being a transmitter (TX) and the other being just a receiver (RX). Thus, a handover procedure is defined for the scenario when the TX moves to the target eNB. 
In this procedure, the control of the D2D pair is transferred to the target eNB as soon as the TX migrates to it.} \paragraph{\textcolor{red}{Analysis}\\} \textcolor{red}{We firstly present the \textit{pros} and \textit{cons} for the D2D strategies as follows:} \begin{itemize} \item \textcolor{red}{D2D strategy Pros} \begin{itemize} \item \textcolor{red}{Provisions D2D handover management strategies \cite{Yilmaz2014a,Ouali2020, Barua2016}} \item \textcolor{red}{Provisions MM support at the extreme edge network level \cite{Yilmaz2014a,Ouali2020,Klempous2020,Barua2017, Barua2016}} \item \textcolor{red}{Provisions the ability to decentralize MM functionality\\ \newline} \end{itemize} \item \textcolor{red}{D2D Strategy Cons} \begin{itemize} \item \textcolor{red}{Control signaling overhead will be a challenge \cite{Ouali2020, Yilmaz2014a}} \item \textcolor{red}{The viability with regards to the energy efficiency of D2D peers, as well as the latency incurred in conveying the MM related decisions, remain unexplored questions} \end{itemize} \end{itemize} \textcolor{red}{Based on the discussions and the aforesaid \textit{pros} and \textit{cons}, the device-to-device methods satisfy parameter \textit{RL2} (through the provision of various seamless handover management studies) for reliability, parameter \textit{FL3} (provisioning mobility support at the edge network level, which will work in synergy with access and core network based methods) for flexibility, and parameter \textit{SL4} (allowing for the decentralization of MM functionality) for scalability.} \textcolor{red}{Note that, given that the D2D mechanisms assist in MM through the provision of CP assistance, they have been classified as a CP procedure in Figure 2.} \subsection{B5G Networks} \textcolor{red}{In addition to the discussions in Sections 5.1 and 5.2, in this section we present a short study detailing the challenges that the current state-of-the-art mechanisms will continue to face in B5G networks. Furthermore, given the special characteristics that B5G networks will exhibit, as shown in Figure 1, we also list potential research areas for MM in B5G networks. Note that these are then utilized in the subsequent section, wherein we define challenges and potential solutions for 5G and beyond MM.} \textcolor{red}{Concretely, while \emph{SDN} and \emph{NFV} will provide the tools for the B5G networks to provision rapid programmability of the meta-surfaces, they will be critically challenged during mobility scenarios. The reason is that, while current networking paradigms permit a time interval of anywhere between 1 ms--10 ms for performing any programmability task (owing to the latency restrictions on most services, as specified for current 5G networks \cite{Parvez2018}), in B5G networks this will be constrained even further as additional surfaces need to be programmed and orchestrated. Specifically, an increased number of surfaces/network nodes leads to more data that needs to be processed for generating appropriate programmability decisions. These decisions then need to be sent out (orchestrated) to the relatively large number of network nodes (including meta-surfaces) to execute the given task. Hence, this places a tighter latency constraint on the network programmability aspect. Further, while the meta-surfaces provide a higher degree of freedom to the operator, they need to be programmed, as mentioned above. This introduces the challenging aspect of managing the SDN domains, the NFV orchestration and the related signaling. 
As a consequence, the compactness as well as the efficiency of the current state-of-the-art SDN and NFV procedures will be challenged.} \textcolor{red}{Next, with techniques such as DC, the challenge will be multi-fold, as B5G networks will not just comprise meta-surfaces, which can also act as a MIMO array, but will also be equipped with Terahertz and mobile AP based multi-tier networks. And while DC and multi-RAT procedures, as stated in Sections 5.1 and 5.2, will aid in ensuring a context-aware network selection procedure, the complexity of the access network techniques will be compounded by the fact that not only will they need to ensure the QoS requirements, but they will also have to ensure sufficient available access bandwidth as well as backhaul bandwidth. Note that with the backhaul bandwidth there will be a significant design challenge, since VLC technology is capable of carrying data rates of up to 1 Tbps, and current backhaul technologies cannot provision such high bandwidths \cite{Jaber2016b}. Further, it is important to reiterate that the network will be composed of not only 4G-LTE and mmWave APs, but also VLC and drone based APs, which essentially are the main reason for the increased complexity discussed above.} \textcolor{red}{Moreover, for the edge clouds, while they aid in allowing low latency access to cached content as well as to compute resources, the deployment strategies will need to be rethought given the ongoing growth pattern for data usage as well as the number of served devices, coupled with more resource-hungry services. Certain important recent studies in this direction have been provided via references \cite{Santoyo-Gonzalez2018, Leyva-Pupo2019}.} \textcolor{red}{Given these significant shortcomings of the current state-of-the-art mechanisms towards B5G networks, as well as taking into account the seminal works in the area of B5G techniques \cite{Boulogeorgos2018, Renzo2019, Basar2019, Chowdhury2018, Sekander2018}, the potential areas of research in MM for these networks are as follows:} \begin{itemize} \item Characterization of the channel between the meta-surface and the users, and the meta-surface and the AP, in the event of the user/AP being mobile, for the purpose of MM decisions \item Consideration of the reliability and coverage of the VLC link for MM decisions \item Characterization of the computational complexity of re-calibrating the meta-surfaces alongside the network during mobility events \item Impact of mobility upon the programmable environment\footnotemark[1] \footnotetext[1]{By environment, we refer to the physical environment that lies between the transmitter and receiver.} concept, drone based communication and VLC \item Optimal RAT and AP selection with a programmable environment \item Optimal RAT and AP selection in scenarios where both the UE and the AP (drone based) are mobile \item Characterizing the computational complexity of optimization methodologies for user association \item Methods to handle the possible increase in handover signaling/messaging during other network processes, such as reprogramming meta-surfaces to serve mobile users \item Formulation of a sound heterogeneous RAT strategy, just like the 4G-5G concept, given mmWave and Terahertz technologies and their associated coverage related challenges. \end{itemize} Note that the aforementioned research areas do not form an exhaustive list, but are broadly indicative of what aspects remain to be explored with regards to MM in B5G networks. 
\\ \newline \begin{sidewaystable*} \renewcommand{\arraystretch}{1.1} \caption{\textcolor{red}{Compliance with the Reliability, Scalability and Flexibility criteria of the Current state-of-the-art MM mechanism/standard}} \centering \color{red}\begin{tabular}{|*{10}{>{\centering\arraybackslash}m{0.6 cm}|>{\centering\arraybackslash}m{1.1 cm}|}} \cline{3-20} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{16}{c|}{\textbf{Other Research Efforts}}\\ \cline{5-20} \multicolumn{2}{c|}{} & \multicolumn{2}{m{2cm}|}{\textbf{3GPP 5G MM mechanism}} & \multicolumn{6}{c|}{\textbf{Core Network Solutions}} & \multicolumn{8}{c|}{\textbf{Access Network Solutions}} & \multicolumn{2}{m{2cm}|}{\textbf{Extreme Edge Network Solutions}} \\ \cline{5-20} \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{} & \multicolumn{2}{m{1.5cm}|}{\textbf{SDN based}} & \multicolumn{2}{m{1.5cm}|}{\textbf{DMM based}} & \multicolumn{2}{m{1.5cm}|}{\textbf{Edge Clouds}} & \multicolumn{2}{m{1.5cm}|}{\textbf{Phantom Cell Method}} & \multicolumn{2}{m{1.5cm}|}{\textbf{RAN-as-a-Service}} & \multicolumn{2}{m{1.5cm}|}{\textbf{Cross layer}} & \multicolumn{2}{m{1.5cm}|}{\textbf{Intelligent RAT selection}} & \multicolumn{2}{m{1.5cm}|}{\textbf{Device-to-Device}} \\ \hline \multicolumn{2}{|c|}{} & Cnf.$^\dagger$ & Refs.$^\delta$ & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. & Cnf. & Refs. \\ \hline \multirow{5}{*}[-1em]{\rotatebox{90}{\textbf{Reliability}}} & \textbf{RL1} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{3GPP2020,3GPP2020a}} &$\times$ & \multirow{5}{*}[0.1em]{\cite{Li2014,Meneses2018, Assefa2017, Basloom}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Liu2015,Yang2016}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Urgaonkar2015}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Nakamura2013}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Nikaein2015}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{ITU-T2011,Emam2020}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Zekri2012, Passast2019}} & $\times$ & \multirow{5}{*}[1.5em]{\cite{Yilmaz2014a}} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{RL2} & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark& & \Checkmark & & \Checkmark & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{RL3} & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & & $\times$ & & $\times$ & & $\times$ & \cite{Ouali2020} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{RL4} & $\times$ & \cite{Terrestrial2013,Jung2016} & \Checkmark & & $\times$ & \cite{Nguyen2016,Elgendi2016, Battulga2017} & $\times$ & \cite{Mtibaa2018,Mach2017,ETSI2018} & $\times$ & & $\times$ & \cite{Outtagarts2015,Sabella2013} & $\times$ & \cite{Emam2020a,Al-rubaye2016} & $\times$ & \cite{Goudarzi2019,Wang2019, Calabuig2017}& $\times$ & \cite{Barua2016, Barua2017} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{RL5} & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & \\ \hline \hline \multirow{5}{*}[-1em]{\rotatebox{90}{\textbf{Flexibility}}} & \textbf{FL1} & \Checkmark & \multirow{5}{*}[1.5em]{\cite{3GPP2020, Terrestrial2013}} & \Checkmark & 
\multirow{5}{*}[0.1em]{\cite{Li2014,Meneses2018}}& \Checkmark & \multirow{5}{*}[0.1em]{\cite{Liu2015,Nguyen2016}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Urgaonkar2015}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Nakamura2013,Jain2017}}& \Checkmark & \multirow{5}{*}[0.1em]{\cite{Nikaein2015}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{ITU-T2011,Emam2020}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Passast2019}} & $\times$ & \multirow{5}{*}[1.5em]{\cite{Yilmaz2014a}} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{FL2} & \Checkmark & & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & $\times$ & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{FL3} & \Checkmark & \cite{3GPP2020a, 3GPP2020b} & \Checkmark & & $\times$ & & \Checkmark & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & & \Checkmark & \cite{Ouali2020, Klempous2020} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{FL4} & $\times$ & \cite{Jung2016} & \Checkmark & & $\times$ & & \Checkmark & \cite{Mtibaa2018,Mach2017,ETSI2018} & $\times$ & & \Checkmark & \cite{Outtagarts2015,Sabella2013,Liu2012} & \Checkmark & \cite{Emam2020a,Al-rubaye2016} & $\times$ & \cite{Goudarzi2019,Wang2019,Calabuig2017}& $\times$ & \cite{Barua2017, Barua2016} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{FL5} & \Checkmark& & $\times$ & & $\times$ & & \Checkmark & & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & \\ \hline \hline \multirow{5}{*}[-1em]{\rotatebox{90}{\textbf{Scalability}}} & \textbf{SL1} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{3GPP2020, Terrestrial2013}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Assefa2017, Basloom}}& \Checkmark & \multirow{5}{*}[0.1em]{\cite{Liu2015}}& \Checkmark & \multirow{5}{*}[0.1em]{\cite{Urgaonkar2015}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Nakamura2013,3GPP2020}} & \Checkmark & \multirow{5}{*}[0.1em]{\cite{Nikaein2015}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Emam2020}} & $\times$ & \multirow{5}{*}[0.1em]{\cite{Zekri2012, Passast2019}}& $\times$ & \multirow{5}{*}[1.5em]{\cite{Yilmaz2014a}} \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{SL2} & $\times$ & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & & $\times$ & \\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{SL3} & $\times$ & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & \Checkmark & & $\times$ & & $\times$ & & $\times$ & \cite{Ouali2020,Klempous2020}\\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{SL4} & \Checkmark & \cite{3GPP2020b} & \Checkmark & & \Checkmark & \cite{Nguyen2016,Elgendi2016, Battulga2017}& \Checkmark & \cite{Mtibaa2018,Mach2017,ETSI2018} & $\times$ & & $\times$ & \cite{Outtagarts2015,Sabella2013,Liu2012} & $\times$ & \cite{Al-rubaye2016,Emam2020a} & $\times$ & \cite{Goudarzi2019,Wang2019,Calabuig2017}& \Checkmark & \cite{Barua2017,Barua2016}\\ \cline{2-3}\cline{5-5}\cline{7-7}\cline{9-9}\cline{11-11}\cline{13-13}\cline{15-15}\cline{17-17}\cline{19-19} & \textbf{SL5} & 
\Checkmark & & $\times$ & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & & $\times$ & & \Checkmark & & $\times$ & \\ \hline \multicolumn{20}{l}{$^{\dagger}$The conformance (Cnf.) of a given mechanism for a given criterion.} \\ \multicolumn{20}{l}{$^{\delta}$The corroborating references (Refs.), if any, for the specified conformance of a mechanism for a given criterion.} \end{tabular} \end{sidewaystable*} \noindent \textcolor{red}{To summarize, in this section we firstly introduced the 5G service based architecture and the classification of the various mechanisms that we analyzed, through Figure 2. Following this, we qualitatively analyzed the 3GPP 5G MM mechanisms as well as other research efforts with regards to their efficacy towards 5G and beyond MM solutions. Consequently, we introduce Table 4, wherein we indicate the parameters that each of the explored methods satisfies for the reliability, scalability and flexibility criteria (Table 2). We also enlist the important references that have led us to the development of Table 4, as presented in this article. And so, from the capability profiles of each mechanism, as illustrated in Table 4, it is evident that even after significant efforts none of them completely meets the specified requirements as expected for the 5G and beyond MM mechanisms. Concretely, neither the 3GPP 5G MM mechanisms nor the other academic and industrial research efforts satisfy all the criteria completely. Thus, it is deduced that none of the analyzed mechanisms satisfy the requirements for the future MM mechanisms, as listed in Table 1. Hence, through the aforesaid qualitative analysis we have further exposed the gaps in the design and development of 5G and beyond MM mechanisms.} \section{Challenges, Potential Solutions and Future framework} \textcolor{red}{From our discussions in Sections 2 to 5, we have highlighted the requirements from MM mechanisms as well as the criteria that future MM mechanisms should satisfy to meet these requirements in Tables 1 and 2, respectively. Further, we have analyzed the legacy mechanisms and the current state of the art towards their utility for 5G and B5G networks in Tables 3 and 4, respectively. However, we have observed that gaps in fulfilling the requirements still persist. Concretely, we have demonstrated that none of the strategies evaluated satisfy the reliability, flexibility and scalability criteria in their entirety. Hence, to be able to design and develop a holistic MM mechanism, it is essential for our study to understand the challenges/questions that persist. We consolidate these key challenges/questions, drawn from earlier works in the literature and the discussions in Sections 2-5, in the text that follows.} \subsection{\textcolor{red}{Challenges}} \subsubsection{\textcolor{red}{Handover Signaling}} \textcolor{red}{Even after the release of the 3GPP specifications for 5G \cite{Specification2017a}, HO signaling is still a challenge. Hence, reducing HO signaling to ensure system scalability and reliability will be one of the key challenges. Certain studies, such as \cite{Jain2019}, have provided methods to help overcome this challenge, which can hence be actively pursued by the research and industrial communities.} \subsubsection{\textcolor{red}{Network Slicing}} \textcolor{red}{Network slices have been defined to ensure that different service types are served according to their own resource demands. 
Hence, it will be a key challenge to design MM strategies that either jointly take into account the requirements of multiple network slices or provide individual solutions for each network slice.} \subsubsection{\textcolor{red}{Integration framework for MM solutions}} \textcolor{red}{The state of the art and the 3GPP specifications ensure, to some extent, the provision of flexibility, reliability and scalability for 5G MM solutions, as discussed earlier. However, since these solutions function at different sections of the network (Figure 2), the challenge will be to design them such that collectively they ensure the appropriate levels of flexibility, scalability and reliability in MM mechanisms, so as to cope with the diversity in the mobility profiles and applications the devices will access. Also, a part of this challenge will be to ensure that the CAPEX and Operating Expenditure (OPEX), owing to the architectural (software or hardware) transformations stemming from these redesigned MM mechanisms, remain manageable.} \subsubsection{\textcolor{red}{Ensuring Context Awareness}} \textcolor{red}{Context based MM solutions, accounting for factors such as network load, user preference, network policy, mobility profiles, etc., to ensure the best possible provision of the requested QoS, will be important. The criticality of this challenge is enhanced by the fact that low computational complexity whilst executing these solutions will be of the essence to meet the strict latency constraints.} \subsubsection{\textcolor{red}{Architectural Evolution Costs}} \textcolor{red}{SDN and edge cloud capabilities will be important for enhancing the user experience during mobility, as discussed in Section 5. However, a key challenge will be to ensure appropriate scalability while maintaining a manageable CAPEX and OPEX.} \subsubsection{\textcolor{red}{Frequent Handovers}} \textcolor{red}{Reducing frequent handovers and ping-pong effects, and devising an optimized HO strategy, will still be a key challenge, given the dense and heterogeneous future network environment. This is further exacerbated by the fact that current methods, such as IEEE 802.21 and the 3GPP specifications, fail to integrate cellular and non-3GPP networks effectively for seamless HO between them. For example, while methods such as LWA have been explored extensively \cite{Ratasuk2014, Alkhansa2014}, an effective handover methodology between 3GPP and non-3GPP networks still remains elusive.} \subsubsection{\textcolor{red}{Security}} \textcolor{red}{An important challenge for ensuring service continuity and seamless mobility in an extremely dense and heterogeneous network environment, such as 5G and beyond networks, will be to ensure that security related tasks, such as authenticating the user as well as the network, are completed as efficiently as possible. By efficiently we mean that the authentication should guarantee a required level of security whilst provisioning low computational complexity \cite{Ferrag2017} as well as low latency \cite{JawadAlam2018}. Again, this task will become even more critical in scenarios where mobility occurs between 3GPP and non-3GPP networks.} \subsubsection{\textcolor{red}{Energy Efficiency}} \textcolor{red}{Given that one of the goals of 5G is to ensure enhanced battery lives for the devices, it will be a critical component of 5G MM services to ensure that the mobility of the devices is handled in an energy efficient way \cite{Qiao2017}. 
Additionally, 5G MM services will also need to ensure that the energy footprint goal for 5G networks is achieved via techniques such as smart AP selection methodologies \cite{Habbal2017} and reduced CN signaling \cite{Jain2019}. By smart AP selection methodologies we refer to the ability to not only account for the user's energy consumption over the course of its mobility, but to also account for the energy consumed whilst performing such selections.} \subsubsection{\textcolor{red}{Meta-surface Reconfiguration for mobility support}} \textcolor{red}{For the B5G networks, finding the optimal configuration of the meta-surfaces during mobility related scenarios will be challenging. This is because the physical characteristics of the surfaces will have to be altered rapidly so as to have the signals arrive at the user in a constructive manner.} \subsubsection{\textcolor{red}{Beyond 5G Network: Handovers}} \textcolor{red}{A fundamental question that will be posed in B5G is: how frequently, and when, will handovers be needed? The reason this question is a challenge is that, up until now, the rate of power loss in an urban environment has been characterized by an $R^4$ factor (where $R$ is the distance between the transmitter and receiver), given the destructive interference encountered. However, with programmable environments, according to \cite{Renzo2019}, this decay will now be similar to the free space scenario, i.e., $R^2$, since all signals can be modulated in phase and polarization to interfere at the receiver in a constructive manner. And so, in mobile environments, the power decay will not be significant even at larger distances. Hence, the handover triggering methods and their execution procedures need to be revisited, as currently they do not expect such a reliable behavior from the channel.} \subsubsection{\textcolor{red}{Beyond 5G Network: Protocol stack}} \textcolor{red}{The next fundamental question posed in B5G, with reference to meta-surfaces, is: what is the impact on the existing layers? The reason this question is a challenge is that the MAC, Radio Link Control (RLC), PDCP and TCP layers all have error control, packet re-ordering, retransmission request and other reliability control mechanisms in-built. These were designed keeping in mind that the environment is unreliable and randomly varying. However, with programmable surfaces the environment will be much more deterministic and reliable. Thus, there arises a case for either eliminating/modifying some of these layers, which play a critical part in MM procedures, or revisiting their original implementation to adapt it to these programmable environments. For example, a lightweight version of TCP may be utilized, as the channel is deterministic and the probability of losing packets due to error or timeout is significantly lower, since the multipaths can be redirected by the meta-surfaces to interfere constructively at the receiver; alternatively, the User Datagram Protocol (UDP) can be utilized with much more reliability.} \subsubsection{\textcolor{red}{Dynamic Network Topology}} \textcolor{red}{In terms of user association for B5G networks, the challenge will now not just be to choose the AP with the best SINR/RSSI/RSRP/RSRQ, but rather to choose or program an AP/programmable surface configuration/drone, depending on the user mobility, location and the coverage from these sources. 
While this still reduces to the problem presented for 5G networks, the increased dimensionality and heterogeneity of the problem will pose formidable challenges to existing methods.} \subsubsection{\textcolor{red}{Edge Node configuration in B5G networks}} \textcolor{red}{The placement of edge nodes for supporting user mobility will also be challenged. This is so because the possibility of supporting better QoS over longer distances can reduce the requirements for service replication/service migration. This is a consequence of the fact that the handovers would be impacted, given the programmability of the environment and the squared decay, instead of a fourth-power decay, in the received signal power.} \subsubsection{\textcolor{red}{IP address continuity}} \textcolor{red}{The vision for near-zero latency by 3GPP \cite{3GPP261} necessitates that E2E link continuity be ensured in any network and mobility scenario. Hence, maintaining IP address continuity during mobility events will remain a critical challenge as the complexity of the networks increases in 5G and B5G.\\ } \noindent \textcolor{red}{The aforementioned key challenges define the technology gap towards fulfilling the MM governing parameters listed in Table 2. In the following subsection we list the potential solutions that can fill this technology gap.} \subsection{Potential Solutions} \subsubsection{\textcolor{red}{Smart CN signaling}} \textcolor{red}{Utilizing the properties of SDN, the signaling performed within the CN for handover and re-routing purposes can be optimized further. This will enable more scalability and better support for users with high mobility. Concretely, techniques such as graph theory, machine learning \cite{Prieto2017} as well as the recently established intelligent Information Element (IE) mapping methods \cite{Jain2019} can enable faster and more efficient CN signaling, as mentioned above. Here, by efficiency we imply that the transmission cost, processing cost and other CN signaling related metrics \cite{Jain2019} are reduced/optimized.} \subsubsection{\textcolor{red}{On demand MM}} \textcolor{red}{Given the functional requirements (Section 2), the legacy methods (Section 4) and the state of the art (Section 5), on demand MM strategies (such as \cite{Jain2017}) will allow future MM mechanisms to more effectively serve users with different mobility profiles, accessing different services and accessing networks with differing loads. As an example, slice based MM strategies can enable independent strategies for the various network slices that the 5G networks will serve. This will help cater to the different network slices according to their mobility demands, and avoid the sub-optimal \emph{one size fits all} approach.} \subsubsection{\textcolor{red}{Deep learning}} \textcolor{red}{Learning network parameters, such as the network load, congestion statistics at the access and core network, user mobility trends, etc., enables the network to devise effective and optimal MM strategies for a highly dynamic network environment such as that of 5G and B5G networks. Hence, deep learning methods, such as deep reinforcement learning, can assist in such tasks.} \subsubsection{\textcolor{red}{SDN-NFV integrated DMM}} \textcolor{red}{DMM facilitates the distribution of MM functionality throughout the network and the avoidance of single MM anchors, which consequently assists in alleviating issues such as SPoFs and congestion. 
Note that SDN and NFV will assist in DMM, as network programmability facilitates fast switching while the user/device transits through the network.} \subsubsection{\textcolor{red}{D2D CP-DP extension}} \textcolor{red}{D2D clustering and support for communication with devices in such clusters have been formalized since 3GPP Release-13. Thus, through an extension of the CP-DP capabilities of the current D2D framework, i.e., by utilizing the relaying strategies for CP/DP information, the handover performance for devices migrating within the network and in such clusters can be enhanced. Further, policy based methods, which take into account the presence of D2D communications between vehicles and other V2X scenarios, will also enable future MM mechanisms to better serve the complex scenarios that will prevail in 5G and B5G networks.} \begin{table*} \color{red} \caption{Mapping potential solutions to MM challenges} \centering \begin{tabular}{|>{\centering\arraybackslash}m{2cm}|>{\centering\arraybackslash}m{2.5cm}|m{11cm}|>{\centering\arraybackslash}m{1.5cm}|} \hline \textbf{Challenges} & \textbf{Recommended Potential Solutions} & \centering\arraybackslash\textbf{Comments} & \textbf{Param. Satisfied$^*$} \\ \hline Handover Signaling & Smart CN Sig. \& SDN-NFV integ. DMM & In addition to the existing strategies, a smart CN signaling method, such as that in \cite{Jain2019}, will assist in relieving the handover signaling load significantly. DMM strategies will assist in the decentralization of MM anchors, and hence, in more reliability in mobile environments & RL3, RL5, SL1 -- SL4 \\ \hline Network Slicing & On demand MM & An on demand strategy will assist the network slices in provisioning tailor-made mobility solutions for the corresponding tenants & FL1, FL5\\ \hline Integration framework for MM solutions & \textit{Design} & This is a design challenge and hence should collectively take into account all the other non-design challenges as well as other necessary factors, such as efficacy and delays & SL5\\ \hline Ensuring Context Awareness & On demand MM & It will ensure that the user, network and application context is taken into account and an appropriate MM solution is provisioned as and when needed & FL5\\ \hline Architectural Evolution Costs & \textit{Design} & This is a design challenge and hence should collectively take into account all the other non-design challenges as well as other necessary factors, such as the cost of infrastructure & SL5\\ \hline Frequent Handovers & Deep learning & Learning the network conditions, mobility profiles and the corresponding impact on the handovers is a complex task. 
Deep learning can help predict/estimate valuable system parameters, such as the SINR, to avoid the frequent handover condition via appropriate AP-user association & RL1, RL2, FL2, FL3, FL4 \\ \hline Security & Smart CN Signaling & Effective CN signaling will assist in maintaining/migrating the security context when required, thus reducing the latency as well as the complexity of ensuring the same & RL2, SL3\\ \hline Energy Efficiency & Deep learning and Smart CN Signaling & While deep learning methodologies can in general provision an optimal solution for handling user mobility whilst adhering to the energy constraints, smart CN signaling, via a reduction in signaling messages during mobility, can enhance the energy efficiency of the MM strategy & SL1 \\ \hline Meta-surface Reconfiguration for mobility support & Deep learning & Based on the user mobility, deep learning algorithms can assist in understanding how the meta-surface configurations have to be adjusted so as to ensure the requested QoS for the users & RL1, RL2, FL3 \\ \hline B5G: Handovers & Smart CN Sig., Serv. Cont. through Edge Comp. \& D2D CP-DP Ext. & Edge compute platforms can assist in faster and more effective handover decisions, given their capability to provision compute power closer to the access network. Smart CN signaling can assist in efficient and low latency handover signaling in the CN. D2D networks can assist in extended coverage, and hence, smoother handovers & RL2, RL4, FL3, SL1 -- SL4\\ \hline B5G: Protocol Stack & \textit{Design} & This is a design challenge and hence should collectively take into account all the other non-design challenges as well as other necessary factors, such as efficacy and delays & SL5\\ \hline Dynamic Network Topology & Deep learning & The ability to understand complex associations will make deep learning methodologies essential in determining the optimal user-AP association in an increasingly dynamic and multi-dimensional network, such as the B5G networks & RL1, RL2, FL3, FL4\\ \hline Edge Node Configuration in B5G Networks & \textit{Design} & This is a design challenge and hence should collectively take into account all the other non-design challenges as well as other necessary factors, such as efficacy and infrastructure cost & SL5\\ \hline IP address continuity & Clean Slate Methods & Given their ability to resolve destinations based on names and not on IP addresses, clean slate methods can assist in maintaining a single IP address with respect to the destination server throughout & RL2, RL4 \\ \hline \multicolumn{4}{l}{$^*$ Details regarding the parameters and the requirements that they help satisfy are provided in Table 2.} \end{tabular} \end{table*} \subsubsection{\textcolor{red}{Service Continuity through Edge Computing}} \textcolor{red}{For serving fast moving users, such as vehicles, and satisfying their latency and bandwidth requirements, edge computing solutions for MM will play a major role in 5G and B5G networks \cite{Boban2017}. And while service migration strategies will play a critical role in ensuring seamless connectivity, a fine balance between service replication and service migration will help mitigate the multitude of challenges that arise for such strategies. 
Further, given that users might cross over to other PLMNs during the course of their mobility \cite{Report2018}, which can lead to a change in the edge cloud that serves them, effective service migration strategies will greatly enhance the QoS during mobility.} \begin{figure*} \centering \includegraphics[scale=0.38]{Figures/Figure6.pdf} \caption{Proposed 5G and beyond MM framework.} \end{figure*} \subsubsection{\textcolor{red}{Clean Slate Methods}} \textcolor{red}{Current networks rely on resolving the IP addresses of the hosts for the applications requested by the users. However, such a resolution can lead to delays \cite{Zhang2014}. And so, Information Centric Networking (ICN), and specifically the Named Data Networking (NDN) paradigm, avoid this process, thus making the network more flexible and faster. Additionally, with the proposition of having in-network caching, the ICN and NDN paradigms enable caching capabilities near the users.} \textcolor{red}{Another class of such clean slate methods is MobilityFirst \cite{Raychaudhuri2012}. In MobilityFirst, a new networking paradigm, as in ICN and NDN, has been proposed. In this paradigm, the IP based resolution of nodes has been deprecated and name based resolution is proposed. Further, concepts similar to ICN and NDN, such as in-network caching, have also been proposed. Additionally, and in contrast to the ICN-NDN paradigm, ensuring security in a fully dynamic scenario has been considered as one of the guiding principles of MobilityFirst. Further, MobilityFirst also introduces support for the migration of entire networks and not just the end nodes.} \textcolor{red}{Consequently, such methods together can provision more scalable, flexible and reliable MM strategies.\\} \noindent \textcolor{red}{And so, up until now in this section, we have highlighted the multiple challenges that the 5G and beyond MM mechanisms will face, given our qualitative evaluation of the legacy and current state-of-the-art methods in Sections 2-5. We have then provisioned a brief discussion on the potential solutions that can assist in addressing these challenges. We illustrate a novel mapping between these challenges and the potential solutions in Table 5. Additionally, we have also listed the parameters for the qualitative analysis (and hence the requirements specified in Table 1) that they satisfy. This, as a result, reinforces the completeness of our current study. Hence, in the next subsection, utilizing the inferences from Sections 2-5 and Table 5, we propose a framework for 5G and beyond MM.}
Additionally, with the proposition of having in-network caching, ICN and NDN paradigms enable caching capabilities near the users. Another class of such clean slate methods is MobilityFirst \cite{Raychaudhuri2012}. In MobilityFirst, a new paradigm to networking, like in ICN and NDN, has been proposed. In this paradigm, IP based resolution of nodes has been deprecated and name based resolution is proposed. Further, concepts similar to ICN and NDN, such as in-network caching etc., have also been proposed. Additionally, and different to the ICN-NDN paradigm, ensuring security in a fully dynamic scenario has been considered as one of the guiding principles of MobilityFirst. Further, MobilityFirst also introduces support for migration of entire networks and not just the end nodes. Consequently, such methods together can provision more scalable, flexible and reliable MM strategies. \end{itemize} \end{comment} \begin{comment} \begin{table*} \renewcommand{\arraystretch}{1.1} \caption{Potential 5G MM Strategies} \centering \begin{tabular}{|>{\centering}m{3cm}|*{6}{p{1.5cm}|}} \hline \textbf{Strategies}& \multicolumn{6}{>{\centering}p{9cm}|}{\textbf{Capabilities Offered}} \\ \hline \multirow{1}{*}[-1.5em]{\textbf{On demand MM}} & \multicolumn{6}{p{9cm}|}{The ability to provide MM services to users/devices ubiquitously and whenever requested will be an important feature in the 5G MM mechanisms. 3GPP through TR23.501 and 799 has already established the requirement to include on demand MM. Thus, concepts such as MM as a service \cite{Jain2017} will allow 5G MM mechanisms to serve users with different mobility profiles, accessing different services and accessing networks with differing loads, more effectively. } \\ \hline \multirow{1}{*}[-1.15em]{\textbf{DMM}} & \multicolumn{6}{p{9cm}|}{DMM facilitates the distribution of MM functionality throughout the network and avoiding single MM anchors, which consequently assists in alleviating issues such as SPoF and congestion. Note that, SDN and NFV will assist in DMM as network programmability facilitates fast switching while the user/device transits through the network.} \\ \hline \multirow{1}{*}[-1em]{\textbf{Cross layer strategy}} & \multicolumn{6}{p{9cm}|}{Enabling the network layers to exchange information between themselves as well as combining techniques at multiple network layers allow for solutions that are flexible and reliable. For example, MPTCP/SCTP at the transport layer along side ITU-VMH from IP through PHY layer.} \\ \hline \multirow{1}{*}[-0.5em]{\textbf{On-line learning}} & \multicolumn{6}{p{9cm}|}{Learning network parameters such as network load, congestion statistics at access and core network, user mobility trends, etc., enable the network to devise effective and optimal MM strategies for a highly dynamic network environment such as that in 5G.} \\ \hline \multirow{1}{*}[-0.5em]{\textbf{Smart CN signaling}} & \multicolumn{6}{p{9cm}|}{Utilizing the properties of SDN the signaling performed within the CN for handover and re-routing purposes can be optimized further. This will enable more scalability and better support to users with high mobility.} \\ \hline \multirow{1}{*}[-1em]{\textbf{D2D assistance}} & \multicolumn{6}{p{9cm}|}{D2D clustering and support for communication with devices in such clusters has been formalized since 3GPP Rel-13. 
\subsection{Proposed 5G and beyond MM framework} We utilize the earlier established classification process for the current state-of-the-art strategies to define our vision for 5G and beyond MM in Figure 3. Concretely, we have categorized the MM mechanisms as \textit{Core Network level}, \textit{Access Network level} and \textit{Extreme Edge Network level}, depending on where they will be creating an impact. The specific entities (based on the 5G architecture illustrated in Figure 2) to which these levels correspond are also indicated in Figure 3.

To elaborate, the core network strategies encompass the DMM, SDN and Network slicing paradigms to provision the necessary reliability, flexibility and scalability from a more global perspective. Additionally, these core network strategies need to be well complemented by an efficient CN signaling strategy. Next, handover management, on-demand MM, IPv6 multi-homing and edge cloud related MM strategies will be enacted not only at the core network or the access network level, but jointly at both levels, thus provisioning the necessary flexibility and reliability. Further, RAN-as-a-Service and multi-connectivity provisions at the access network level will assist in utilizing the multiple RATs and APs effectively. Moreover, it is envisioned that the RAT selection process may reside either at the access network or at the device level. The D2D techniques, on the other hand, are expected to provide added assistance for mobility at the device level through DP as well as CP functionality.

Complementing these mechanisms, NDN-ICN support will be provisioned at all levels, thus assisting in maintaining IP addresses/prefixes during mobility whilst resolving destinations via names. Note that traditional IP address/prefix allocation strategies are not intended to be changed; instead, the NDN-ICN concept provisions an over-the-top assistance. Further, the cross layer strategies, as the name suggests, will span the multiple levels and enact policies, utilizing the available information at each of these levels, which assist in optimal MM related decisions across the network. Lastly, the deep learning strategies will again assist across the multiple levels by learning the complex features of the network context, user mobility and overall QoS requirements, and formulating effective MM related decisions.

Hence, given that we utilize the potential solutions for overcoming the technology gap specified in Section 6.2, alongside certain strategies from the state-of-the-art and legacy MM mechanisms specified in Sections 4 and 5, it can be inferred from Tables 2-5 that our proposed framework will satisfy all the parameters for the reliability, flexibility and scalability criteria. Consequently, the proposed framework in Figure 3 will also satisfy all the requirements defined in Table 1, thus provisioning a holistic solution.
With this vision, in the following section we summarize the main findings of this article and conclude the paper. \section{Conclusions} Given the complexity of future network scenarios, i.e., 5G and B5G, a full view of the MM strategies, their capabilities, the persistent challenges and the possible solutions to them, will enable the research community to design better MM strategies. In this paper, through Section 2 and Table 1, we first presented the important functional requirements and design criteria to be considered when devising 5G and B5G MM solutions. We then presented, in Section 3 and Table 2, the multiple parameters that future MM mechanisms need to satisfy for each of the evaluation criteria, i.e., scalability, flexibility and reliability. Next, from our discussion in Section 4 it is clear that the legacy MM solutions fail to provision scalability, flexibility and reliability simultaneously. Nevertheless, the current standards and research efforts explored in Section 5 are promising, as they provide enhanced capabilities towards future MM solutions. We have summarized these conclusions in Tables 3 and 4. As a consequence of this qualitative analysis, the various benefits and shortcomings of the legacy and current state-of-the-art mechanisms studied in this paper can be easily understood by the research community. Subsequently, we established that none of the mechanisms fulfills the complete set of requirements for a 5G and beyond MM mechanism.

It is thus evident that a holistic MM mechanism for 5G and B5G networks remains elusive. The challenges that will still persist for the design, development and deployment of future MM mechanisms have been detailed in Section 6.1. Furthermore, in Section 6.2 we have provided a concise discussion of the potential MM strategies that the research community can explore to solve these persistent challenges and the technological gaps they present. Following this, we have also provided a novel mapping between the potential strategies and the persistent challenges in Table 5, thus highlighting the efficacy of our study. Based on the inferences drawn, we have concluded our study by proposing a novel framework for 5G and beyond MM strategies in Section 6.3 and Figure 3. \section*{References} \bibliographystyle{ieeetran} \input{./main.bbl} \end{document}
\section{Introduction} \label{Introduction} In the study of bioinformatics, one important problem is the prediction of clinical outcomes using profiling datasets with a large number of variables, such as gene expression data. In such datasets, major challenges lie in the relatively small number of samples compared to the large number of predictors (genes), namely the ``$n\ll p$'' issue. In addition, the complex unknown correlation structure among predictors results in more difficulty in prediction and feature selection. In order to tackle this challenging situation, machine learning approaches have been introduced for the prediction task \cite{cai2015classification,chen2014risk,kursa2014robustness,liang2013sparse,vanitha2015gene}. While the primary interest of these studies is to achieve high prediction accuracy, contributions have also been made to feature selection and learning effective feature representations \cite{cai2015classification,kursa2014robustness}.

Based on a key property of gene expression data, namely that functionally associated genes tend to be statistically dependent and contribute to a biological outcome in a synergistic manner, a branch of classification research has focused on integrating prior knowledge on the relations between genes into predictive models, in order to improve both classification performance and the learning of the feature space structure. A critical data source to achieve this goal is the gene network constructed from existing biological knowledge, such as a signal transduction network or a protein-protein interaction network \cite{pmid25632107,pmid25859942}. A gene network is a graph-structured dataset with genes as the graph vertices and their functional relations as graph edges. In terms of classification tasks, each vertex in the gene network corresponds to a predictor, and it is expected that the gene network can provide useful information for a learning process. Motivated by this idea, certain classification methods have been developed where gene networks are integrated as additional information for the prediction and feature selection procedure. For example, support vector machines and traditional linear models such as the logistic regression classifier can be modified by adding penalty terms to the objective function, where the penalty is defined according to pairwise distances between genes in a gene network \cite{kim2013network,lavi2012network,zhu2009network}. \cite{dutkowski2011protein} develops a random forest-based method, called Network-Guided Forest, where the feature sub-sampling in building decision trees is guided by graph search on the given gene network. Also, a recent study \cite{kong2018graph} brings gene networks to deep learning, whose applications to omics data had been limited primarily due to the $n\ll p$ issue \cite{min2016deep}. In \cite{kong2018graph}, a deep learning model, the Graph-Embedded Deep Feedforward Network (GEDFN), is proposed, with the gene network embedded as a hidden layer in deep neural networks to achieve an informative sparse structure. In GEDFN, the graph-embedded layer helps achieve two effects: one is model sparsity, and the other is an informative flow of information for prediction and feature evaluation. These two effects allow GEDFN to outperform other methods in gene expression classification given an appropriately specified feature graph. The authors of these methods have demonstrated that combining gene networks with expression data results in better classification performance and more interpretable feature selection.
However, these methods bear a common limitation, which is the potential mis-specification of the required gene network. In practice, gene expression data are used for various clinical outcomes, and the mechanistic relations between genes and different clinical outcomes can be quite different. Hence, there does not exist a known gene network that uniformly fits all classification problems. Thus, gene networks used in graph-embedded methods can only be ``useful'' but not ``true''. Consequently, how to decide whether a known gene network is useful in predicting a certain clinical outcome with a certain gene expression dataset remains an unsolved problem, causing difficulties in applying graph-embedded methods in practice. \cite{kong2018graph} discusses the feature graph mis-specification issue of the GEDFN model and shows that the method is robust to mis-specified gene networks. Nevertheless, it is unrealistic to guarantee that this robustness applies in a broad sense, as feature graph structures are so diverse that simulations cannot cover all scenarios.

To address these issues, in this paper we aim at developing a method that does not rely on a given feature network, yet can still benefit from the idea of building a model with a sparse and informative flow of information. Instead of using known feature graphs, we construct a feature graph within the feature space. We propose a supervised feature graph construction framework using tree-based ensemble models, as the literature shows that tree-based ensemble methods such as the Random Forest (RF) \cite{breiman2001random} and the Gradient Boosting Machine (GBM) \cite{friedman2002stochastic} are excellent tools for feature selection \cite{tang2014qualitative,vens2011random}. These tree-based methods also provide relational information between features in terms of how they compensate each other in the classification task. We develop the \underline{for}est \underline{g}raph-\underline{e}mbedded deep feedforward \underline{net}work (forgeNet) model, with a built-in tree-based ensemble classifier as a feature graph extractor on top of a modified GEDFN model. The feature extractor selects features that span a reduced feature space, and constructs a graph between the selected features based on their directional relations in the decision tree ensemble. The application of tree-based ensemble methods as the feature graph extractor is mainly based on two considerations: 1) the extractor selects effective features in a supervised manner, so the target outcome directly participates in the feature graph construction; compared to unsupervised graph construction, such as using marginal or conditional correlation graphs, the resulting graph from trees is more informative and relevant to the specific classification task; 2) the feature extraction procedure helps reduce the dimension of the original feature space, alleviating the $n\ll p$ problem for the downstream neural network model.

The paper is organized as follows: Section \ref{Methods} reviews the GEDFN model and illustrates our proposed forgeNet architecture. Section \ref{se} presents simulation experiments comparing our new method with existing cutting-edge classifiers, followed by the real data analysis of a breast cancer dataset in Section \ref{rda}. Finally, a short conclusion is presented in Section \ref{Conclusion}.
\section{Methods} \label{Methods} \subsection{Review of graph-embedded deep feedforward networks} \label{gedfn} We first briefly review the GEDFN model, as our new method utilizes a similar neural network architecture. Recall a deep feedforward network with $l$ hidden layers: \begin{align*} Pr(\mathbf{y}|\mathbf{X},\mathbf{\Psi})&=softmax(\mathbf{Z}_{out}\mathbf{W}_{out}+\mathbf{b}_{out}) \\ \mathbf{Z}_{out}&=\sigma(\mathbf{Z}_{l}\mathbf{W}_{l}+\mathbf{b}_l) \\ \dots \\ \mathbf{Z}_{k+1}&=\sigma(\mathbf{Z}_{k}\mathbf{W}_{k}+\mathbf{b}_k) \\ \dots \\ \mathbf{Z}_{1}&=\sigma(\mathbf{X}\mathbf{W}_{in}+\mathbf{b}_{in}), \end{align*} where $\mathbf{X}\in \mathcal{R}^{n\times p}$ is the feature matrix with $n$ samples and $p$ features, $\mathbf{y}\in \mathcal{R}^n$ is the outcome containing classification labels, $\mathbf{\Psi}$ denotes all parameters, and $\mathbf{Z}_{k}$ ($k=1,\dots,l-1,out$) are hidden layers with corresponding weights $\mathbf{W}_{k}$ and biases $\mathbf{b}_{k}$. The dimensions of $\mathbf{Z}$ and $\mathbf{W}$ depend on the number of hidden neurons $h_k$ ($k=1,\dots,l,in$) of each hidden layer, as well as the input dimension $p$ and the number of classes $h_{out}$. We mainly focus on binary classification problems; hence the elements of $\mathbf{y}$ simply take binary values and $h_{out}\equiv 2$. The function $\sigma(\cdot)$ is the nonlinear activation such as the sigmoid, the hyperbolic tangent or rectifiers. The $softmax(\cdot)$ function converts values of the output layer into probability predictions. The graph-embedded feedforward net is a variant of the regular feedforward net with a modified first hidden layer \begin{equation} \mathbf{Z}_{1}=\sigma(\mathbf{X}(\mathbf{W}_{in}\odot A)+\mathbf{b}_{in}) \label{g-layer} \end{equation} where $A$ is the adjacency matrix of a feature graph and $\odot$ is the Hadamard (element-wise) product. As in regular deep neural networks, the parameters to be estimated are all the weights and biases. The model is trained using a stochastic gradient descent (SGD) based algorithm by minimizing the cross-entropy loss function \cite{goodfellow2016deep}.
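To make Eq. \eqref{g-layer} concrete, the following minimal NumPy sketch (our illustration, not the released implementation; all names, shapes and values are hypothetical) computes the masked forward pass of a graph-embedded layer: the Hadamard product with the adjacency matrix zeroes out every weight that does not correspond to an edge of the feature graph.

\begin{verbatim}
import numpy as np

def graph_embedded_layer(X, W_in, b_in, A):
    # Forward pass of Eq. (1): mask the input weights element-wise by
    # the feature-graph adjacency matrix A, so only connections present
    # in the graph carry nonzero weights; then apply a ReLU activation.
    Z = X @ (W_in * A) + b_in
    return np.maximum(Z, 0.0)

# Toy illustration: p = 4 features, a chain-shaped feature graph with
# self-connections kept on the diagonal (an assumption of this sketch).
rng = np.random.default_rng(0)
p = 4
A = np.eye(p) + np.diag(np.ones(p - 1), 1) + np.diag(np.ones(p - 1), -1)
X = rng.normal(size=(8, p))                   # n = 8 samples
W_in = 0.1 * rng.normal(size=(p, p))
b_in = np.zeros(p)
Z1 = graph_embedded_layer(X, W_in, b_in, A)   # shape (8, 4)
\end{verbatim}

In an actual implementation the mask would be applied inside the computation graph, so that gradient updates flow only through the retained connections.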
\subsection{The forgeNet model} \label{fgedfn} Our newly proposed forest graph-embedded deep feedforward network (forgeNet) model consists of two components: the extractor component and the neural network component. The extractor component uses a forest model to select useful features from the raw inputs under the supervision of the training labels, and constructs a directed feature graph according to the splitting order in the individual decision trees. The neural network component feeds the generated feature graph and the raw inputs to GEDFN, and serves as the learner to predict outcomes. In forgeNet, a forest is defined as any ensemble of decision trees, not limited to random forests. In fact, any tree-based ensemble approach is applicable within the forgeNet framework. Besides RF and GBM mentioned in Section \ref{Introduction}, their variants with similar outputs are also possible options, or the forest can simply be built by bagging trees \cite{breiman1996bagging}. However, since RF and GBM are the most commonly used tree ensembles, in this paper we only employ these two methods for proof-of-concept purposes.

In forgeNet, a forest $\mathcal{F}$ is denoted as a collection of decision trees \begin{equation*} \mathcal{F}(\Theta) = \{\mathcal{T}_m(\Theta_m)\}, \ m=1,\dots, M, \end{equation*} where $M$ is the total number of trees in the forest and $\Theta=\{\Theta_1,\dots,\Theta_M\}$ represents the parameters, which include splitting variables and splitting values. In the feature graph extraction stage, $\mathcal{F}$ is fitted using the training data $\mathbf{X}_{train}$ and training labels $\mathbf{y}_{train}$, where $\mathbf{X}_{train}\in \mathcal{R}^{n_{train}\times p}$ and $\mathbf{y}_{train}\in \mathcal{R}^{n_{train}}$. After fitting the forest, we obtain $M$ decision trees, each of which contains a subset of features and their directed connections according to the tree splitting. At the same time, a binary tree can be viewed as a special case of a graph with directed edges. Hence, we can construct a set of graphs \begin{equation*} \mathcal{G} = \{G_m(V_m, E_m)\}, \ m=1,\dots, M, \end{equation*} where $V_m$ and $E_m$ are the collections of vertices and edges in $G_m$, respectively. Next, by merging all graphs in $\mathcal{G}$, the aggregated feature graph \begin{equation*} \mathbf{G}(V, E) = \bigcup_{m=1}^M G_m(V_m, E_m) \end{equation*} is obtained, where $V = \bigcup_{m=1}^M V_m$ and $E = \bigcup_{m=1}^M E_m$. In the form of its adjacency matrix, $\mathbf{G}$ is the feature graph to be embedded into the second stage of the forgeNet.

Note that, regardless of which tree-based ensemble method we use, it is likely that not all predictors in the original feature space enter the forest model. A feature is included in $\mathbf{G}$ if and only if it is used at least once by the forest to split samples. As a result, the original feature space is reduced after the feature extraction. Denoting the number of vertices of $\mathbf{G}$ as $|V|$, we have $|V|<p$, and the input data matrix for the second stage is thus $\tilde{\mathbf{X}}_{train}\in \mathcal{R}^{n_{train}\times |V|}$. The columns in $\tilde{\mathbf{X}}_{train}$ correspond to the selected features in the original data $\mathbf{X}_{train}\in \mathcal{R}^{n_{train}\times p}$, and the order of the columns does not matter.

The resulting feature graph $\mathbf{G}$ of the feature extraction is a directed network, which differs from the one used in the original GEDFN. In \cite{kong2018graph}, the adjacency matrix $A$ in Eq. \ref{g-layer} represents an undirected feature graph. In the case of forgeNet, the adjacency matrix is naturally generalized to the directed version, and replacing $A$ in Eq. \ref{g-layer} with an asymmetric adjacency matrix does not affect the model construction and training. A visualization of the entire forgeNet architecture is shown in Fig. \ref{architecture}. After fitting forgeNet with the training data, only the reduced input $\tilde{\mathbf{X}}_{test}$ and the testing labels $\mathbf{y}_{test}$ are required for testing the prediction results, as $\tilde{\mathbf{X}}_{test}$ can be directly fed into the downstream neural nets together with the feature graph constructed from the forest. \begin{figure} \centering \includegraphics[scale=0.4]{forgeNet_architecture.jpg}\\ \caption{Illustration of the forgeNet model. Notations are consistent with those in the text.}\label{architecture} \end{figure}
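As a rough sketch of this extraction step (ours, not the authors' released code), the snippet below builds the vertex set $V$ and the directed edge set $E$ from a fitted scikit-learn random forest by walking each tree's internal arrays; an analogous walk applies to GBM trees. An edge is recorded whenever the splitting feature of a node points to the splitting feature of one of its direct children; self-loops are omitted here as an assumption of the sketch.

\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def forest_feature_graph(forest):
    # Merge per-tree directed feature graphs into G(V, E). A feature
    # enters V if it is used at least once to split; an edge
    # (f_parent -> f_child) is added when f_child splits a direct
    # child of a node split on f_parent.
    vertices, edges = set(), set()
    for est in forest.estimators_:
        tree = est.tree_
        for node in range(tree.node_count):
            f_parent = tree.feature[node]
            if f_parent < 0:               # leaf node: no split feature
                continue
            vertices.add(f_parent)
            children = (tree.children_left[node], tree.children_right[node])
            for child in children:
                f_child = tree.feature[child]
                if f_child >= 0 and f_child != f_parent:
                    edges.add((f_parent, f_child))
    return sorted(vertices), edges

def adjacency(vertices, edges):
    # Asymmetric |V| x |V| adjacency matrix over the reduced feature set.
    idx = {v: i for i, v in enumerate(vertices)}
    A = np.zeros((len(vertices), len(vertices)))
    for u, v in edges:
        A[idx[u], idx[v]] = 1.0
    return A

# Hypothetical usage:
# rf = RandomForestClassifier(n_estimators=1000).fit(X_train, y_train)
# V, E = forest_feature_graph(rf)
# A = adjacency(V, E)             # embedded into the downstream network
# X_tilde_train = X_train[:, V]   # reduced input matrix
\end{verbatim}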
\subsection{Evaluation of feature importance} The selection of predictors that significantly contribute to the prediction is another major aspect of gene expression data analysis, as such predictors can reveal underlying biological mechanisms. Thus, in forgeNet we introduce a feature importance evaluation mechanism, which is closely related to the Graph Connection Weights (GCW) method proposed in \cite{kong2018graph} for the original GEDFN model. However, since the feature graph used in forgeNet has a different property from that in GEDFN, where the feature graph is given, certain modifications of GCW are needed. The main idea of GCW is that the contribution of a specific predictor is directly reflected by the magnitude of all the weights that are directly associated with the corresponding hidden neuron in the graph-embedded layer (the first hidden layer). In forgeNet, since the connection between the input layer and the first hidden layer is no longer symmetric due to the directed feature graph structure, to evaluate the importance of a given feature we examine both the hidden neurons in the first hidden layer and the nodes in the input layer. The importance score is thereby calculated as the summation of the absolute values of the weights that are directly associated with the feature node itself and its corresponding hidden neuron in the graph-embedded layer: \begin{equation*}\label{importance_score} s_j=\sum_{u=1}^{p}|w_{ju}^{(in)}\mathcal{I}(A_{ju}=1)|+\sum_{v=1}^{p}|w_{vj}^{(in)}\mathcal{I}(A_{vj}=1)|+\sum_{m=1}^{h_1}|w_{jm}^{(1)}|,\quad j=1,\dots,p, \end{equation*} where $s_j$ is the importance score for feature $j$, $w^{(in)}$ denotes weights between the input and first hidden layers, and $w^{(1)}$ denotes weights between the first hidden layer and the second hidden layer. The score consists of three parts: the first two terms summarize the importance of a feature according to the directed edge connections in the feature graph $\mathbf{G}$; the third term summarizes the contribution of the feature according to the connection with the second hidden layer $\mathbf{Z}_{2}$. Note that the input data $\mathbf{X}$ are required to be Z-score transformed (the original value minus the mean across all samples and then divided by the standard deviation), ensuring all variables are of the same scale so that the magnitudes of the weights are comparable. Once the forgeNet is trained, the importance scores for all the variables can be calculated using the trained weights.
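Under the same hypothetical naming as in the earlier sketches, the modified GCW score above can be computed from the trained weight matrices in a few lines; here \texttt{W\_in} is the $|V|\times|V|$ input weight matrix, \texttt{W\_1} the weights into the second hidden layer, and \texttt{A} the directed adjacency matrix from the extraction step.

\begin{verbatim}
import numpy as np

def gcw_importance(W_in, W_1, A):
    # Modified Graph Connection Weights score: for feature j, sum the
    # absolute masked weights over outgoing edges (row j of A) and
    # incoming edges (column j of A), plus the absolute weights from
    # hidden neuron j to the second hidden layer.
    W = np.abs(W_in)
    outgoing = (W * A).sum(axis=1)      # sum_u |w_ju| I(A_ju = 1)
    incoming = (W * A).sum(axis=0)      # sum_v |w_vj| I(A_vj = 1)
    to_next = np.abs(W_1).sum(axis=1)   # sum_m |w_jm^(1)|
    return outgoing + incoming + to_next
\end{verbatim}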
\section{Implementation} \label{app} The method is available in Python at \url{https://github.com/yunchuankong/forgeNet}. We employ the Scikit-learn package \cite{scikit-learn} for the implementation of RF, the Xgboost package \cite{Chen:2016:XST:2939672.2939785} for GBM, and the Tensorflow library \cite{tensorflow2015-whitepaper} for deep neural networks. For the choice of activation functions of the neural nets, the rectified linear unit (ReLU) \cite{nair2010rectified} is employed. This non-linear activation has an advantage over the sigmoid function and the hyperbolic tangent function as it avoids the vanishing gradient problem \cite{hochreiter2001gradient} during model training. The entire neural net part of forgeNets is trained using the Adam optimizer \cite{DBLP:journals/corr/KingmaB14}, a state-of-the-art variant of the popular stochastic gradient descent algorithm. Also, we use the mini-batch training strategy, by which the optimizer loops over randomly divided small proportions of the training samples in each iteration. Details about the Adam optimizer and the mini-batch strategy in deep learning can be found in \cite{goodfellow2016deep,DBLP:journals/corr/KingmaB14}.

The performance of a deep neural network model is associated with many hyper-parameters, including the number of hidden layers, the number of hidden neurons in each layer, the dropout proportion in training, the learning rate and the batch size. As the hyper-parameters are not of primary interest in our research, in the simulation and real data experiments we simply tune them using grid search in a feasible parameter space. Also, since our experiments contain a number of datasets, it is not feasible to fine-tune models for each dataset. Instead, we tune hyper-parameters using some preliminary synthetic datasets, and apply the resulting set of parameters to all experimental data. For the simulation experiments, the number of trees in our forgeNets is 1000 and the number of hidden layers of the neural net is three, with $p$ (graph-embedded layer), 64 and 16 hidden neurons, respectively. For the real data analyses, we have 2500 trees in the forest part since the feature space is much larger, and the neural net structure is the same as in the simulation.

\section{Simulation experiments} \label{se} The goal of the simulation experiments is to mimic disease outcome classification using gene expression data with $n\ll p$. Effective features are sparse and potentially correlated through an underlying unknown structure. Several benchmark methods are evaluated in addition to the new forgeNet model for comparison purposes. Through simulation, we intend to investigate whether the forgeNet model is able to outperform other classifiers without knowing the underlying structure of the features.

\subsection{Synthetic data generation} \label{sdg} We follow a procedure similar to that described in \cite{kong2018graph}. For a given number of features $p$, the preferential attachment algorithm (BA model) \cite{barabasi1999emergence} is employed to generate a scale-free network as the underlying true feature graph. Defining the distance between two features in the network as the shortest path between them, we calculate the $p\times p$ matrix $D$ recording pairwise distances among features. Next, the distance matrix is transformed into a covariance matrix $\Sigma$ by letting \begin{equation*} \Sigma_{ij}=0.6^{D_{ij}}, i,j=1,\dots,p. \end{equation*} After obtaining the covariance matrix between features, we generate $n$ multivariate Normal samples as the data matrix $\mathbf{X}=(\mathbf{x}_1,\dots,\mathbf{x}_n)^T$, i.e. \begin{equation*} \mathbf{x}_i\sim \mathcal{N}(\mathbf{0},\Sigma), i=1,\dots\,n, \end{equation*} where $n\ll p$ for imitating gene expression data. To generate outcome variables, we first select a subset of features to be ``true'' predictors. Among vertices with relatively high degrees (``hub nodes'') in the feature graph, part of them are randomly selected as ``cores'', and a proportion of the neighboring vertices of the cores are also selected. Denoting the number of true predictors as $p_0$, we uniformly sample a set of parameters $\mathbf{\beta}=(\beta _1,\dots,\beta _{p_0})^T$ and an intercept $\beta_0$ from a small range, say $(-0.15, 0.15)$. Finally, the outcome variable $\mathbf{y}$ is generated through a procedure similar to the generalized linear model framework \begin{equation*} y_i=\mathcal{I}\{g(\beta_0 + (\mathbf{x_i}^{(true)})^T\mathbf{\beta})>t\},\quad i=1,\dots\,n, \end{equation*} where $\mathbf{x_i}^{(true)}\in \mathcal{R}^{p_0}$ is the sub-vector of $\mathbf{x_i}$ and $t$ is a threshold.
For the transformation function $g(\cdot)$, we consider a weighted sum of a hyperbolic tangent and a quadratic function \begin{equation*} g(x) = 0.7\phi(tanh(x))+0.3\phi(x^2). \end{equation*} The reason for using this $g(\cdot)$ function is that the transformation is non-monotone, which brings more challenges for classification. The function $\phi(\cdot)$ is the min-max transformation scaling the input to $[0,1]$, i.e., the original value minus the sample minimum, divided by the difference between the sample maximum and the sample minimum. Following the above data generation scheme, we simulate a set of synthetic datasets with $p=5,000$ features and $n=400$ samples. Since in gene expression data the true signals for a certain prediction task are sparse ($p_0\ll p$), we choose $p_0=15, 30, 45, 60$ and $75$ as the numbers of true predictors, corresponding to $1$ to $5$ cores selected among all hub nodes in the feature graph.
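The sketch below (our condensed illustration, with the dimensions shrunk for readability; the threshold $t$, set here to the sample median, and the single-core selection are assumptions on our part) walks through the full generation scheme with networkx and NumPy.

\begin{verbatim}
import numpy as np
import networkx as nx

def simulate(p=200, n=100, p0=15, seed=1):
    # BA feature graph -> covariance 0.6^D -> multivariate normal X
    # -> non-monotone transform g -> thresholded binary outcome y.
    rng = np.random.default_rng(seed)
    G = nx.barabasi_albert_graph(p, m=1, seed=seed)
    D = dict(nx.all_pairs_shortest_path_length(G))
    Sigma = np.array([[0.6 ** D[i][j] for j in range(p)]
                      for i in range(p)])
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)

    # "True" predictors: one hub core plus part of its neighbours.
    core = max(G.degree, key=lambda kv: kv[1])[0]
    true_idx = ([core] + list(G.neighbors(core)))[:p0]

    beta0 = rng.uniform(-0.15, 0.15)
    beta = rng.uniform(-0.15, 0.15, size=len(true_idx))
    eta = beta0 + X[:, true_idx] @ beta

    minmax = lambda v: (v - v.min()) / (v.max() - v.min())
    g = 0.7 * minmax(np.tanh(eta)) + 0.3 * minmax(eta ** 2)
    y = (g > np.median(g)).astype(int)   # threshold t: median (assumed)
    return X, y, true_idx

X, y, true_idx = simulate()
\end{verbatim}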
\subsection{Evaluation of simulation experiments} We compare our method with several benchmark models. First, since the true feature graphs are known for the simulated data, we are able to test the original GEDFN model with correctly specified feature graphs. At the same time, we also test GEDFN with mis-specified feature graphs by randomly generating Erd\H{o}s-R\'enyi random graphs \cite{erdos1959random}, which have a different graph topology from the true scale-free networks. Also, since forgeNet inherently fits a tree-based ensemble classifier, it is natural to compare the performance of a forgeNet with its forest part alone. We choose two representative tree methods, RF and GBM, for the experiments, and correspondingly test two versions of forgeNets: forgeNet(RF) and forgeNet(GBM). Finally, the logistic regression classifier with lasso (LRL) \cite{tibshirani1996regression} is also added as a representative of linear machines.

For each of the data generation settings, ten independent datasets are generated. For each dataset, we randomly split the samples into training and testing sets at a ratio of 4:1. All models are fitted using the training dataset and then used to predict the testing dataset. To evaluate classification results, areas under Receiver Operating Characteristic curves (ROC-AUC) are calculated using the predicted class probabilities and the labels of the testing set. The final testing result for a simulation case is then given by the average testing ROC-AUC across the ten datasets.

As for feature selection, all the methods except LRL provide relative feature importance scores; LRL does not rank features but directly gives the selected feature subset. Knowing the true predictors for the simulated data, we could use the binary true predictor labels to evaluate the accuracy of feature selection. However, in preliminary numerical experiments it was observed that, although we fix the number of true features in each case, neighboring features of true predictors in the feature graph are also informative for classification even if they are not in the true feature set. This is because these neighboring features have a relatively high correlation with the selected true predictors ($0.6$ according to Section \ref{sdg}). Therefore, when evaluating the results of feature selection, it is more appropriate to investigate a set of ``relevant'' features including those neighboring features, rather than the ``true'' feature set only.

The average numbers of relevant features are 208.8, 460.4, 615.4, 717.8, and 864.7, respectively, corresponding to the five cases of true features $p_0=15, 30, 45, 60$ and $75$. Since the relevant feature sets are still small compared to the entire feature space ($p=5000$), the AUC of the precision-recall curve is a more appropriate metric here. We thus compare feature selection results using binary labels of relevant features for all methods providing feature scores. As for LRL, for each dataset we compare the recall values of our methods and LRL given the precision value of LRL. That is, the precision of LRL helps locate points on the precision-recall curves of forgeNets, and the corresponding recall values are used for comparison.

\subsection{Simulation results} \label{sr} Fig. \ref{simulation_figures}(a) shows the results of the classification accuracy comparison. With an increasing number of true predictors, all of the methods performed better as there were more signals in the entire feature space. From the figure, the two versions of forgeNets, forgeNet(RF) and forgeNet(GBM), significantly improved the classification performance of their forest counterparts, i.e., RF and GBM. Also, the forgeNets achieved classification accuracy similar to that of GEDFN, which benefited from the use of the true feature graphs, and forgeNet(RF) was the only method that outperformed GEDFN. When GEDFN was given mis-specified feature graphs (GEDFN\_mis), its classification ability was weakened, with AUC values similar to those of LRL. In summary, in terms of prediction, forgeNets beat all the classic machine learning methods compared here (RF, GBM, LRL), achieved similar or even better accuracy compared to GEDFN using the true feature graphs, and significantly outperformed GEDFN once its feature graphs were mis-specified.

Feature selection results can be seen in Fig. \ref{simulation_figures}(b) and (c). Comparing the precision-recall AUCs in Fig. \ref{simulation_figures}(b), it can be observed that GEDFN using the true feature graph was the best method for feature importance ranking, yet again this outstanding performance was undermined by mis-specified feature graphs. The results of forgeNets were significantly better than GEDFN\_mis, and were consistent with their forest counterparts. As the training of the neural networks in forgeNets largely relies on the feature graphs given by the forests, it is not surprising that forgeNets achieve feature selection results similar to their forest counterparts. In Fig. \ref{simulation_figures}(c), both forgeNet(RF) and forgeNet(GBM) were able to achieve higher recall values than LRL. In summary, in terms of feature selection, forgeNets outperformed the traditional lasso method and performed consistently with their forest counterparts. Although not as good as GEDFN with true feature graphs, forgeNets produced significantly better feature selection than GEDFN using mis-specified feature graphs. Finally, we observe that the choice of the forest in forgeNets matters, and among the two versions in our experiments, forgeNet(RF) was the more powerful model. \begin{figure} \centering \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{roc_auc.pdf}\\ \centering{(a)} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{pr_auc.pdf}\\ \centering{(b)} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{recall_plot.pdf}\\ \centering{(c)} \end{minipage} \caption{Comparison of classification and feature selection for the simulation study.
(a) AUC of ROC for classification; (b) AUC of precision-recall for feature selection; (c) recall plots given the fixed precision from LRL. Error bars represent the estimated mean quantities plus/minus the estimated standard errors.}\label{simulation_figures} \end{figure}

The simulation study showed that forgeNet is a powerful classifier with reasonably good feature selection ability. From the experimental results, one can conclude that the novelty of forgeNets is that, by borrowing the neural net architecture of the original GEDFN, they utilize feature information more effectively in classification tasks than regular tree-based ensemble methods.

\section{Real data applications} \label{rda} \subsection{Datasets} We applied forgeNets to the Cancer Genome Atlas (TCGA) breast cancer (BRCA) RNA-seq dataset \cite{koboldt2012comprehensive}. The dataset consists of a gene expression matrix with 20155 genes and 1097 cancer patients, as well as clinical data including survival information. The classification task is to predict the three-year survival outcome. We excluded patients with missing or censored survival times for which the three-year survival outcome could not be determined. Genes with more than 10\% zero values were also screened out. As a result, the final dataset contains a total of $p=16027$ genes and $n=506$ patients, with 86\% positive cases. For each gene, its expression value was Z-score transformed. Using the BRCA data, we again tested the two versions of forgeNets together with RF, GBM, and LRL. The classification was conducted using a five-fold stratified cross-validation process, and the final prediction AUC for each method was computed by averaging the five validation results.

\subsection{Results} Table \ref{BRCA_res} summarizes the classification results. From the table, forgeNets again outperformed their forest counterpart models and LRL. Therefore, the real data application led to a conclusion similar to that of Section \ref{se}, namely that forgeNets bring a significant improvement for classification. \begin{table} \centering \normalsize \caption{Classification results for BRCA data} \begin{tabular}{c|ccccc} \hline Methods & forgeNet(RF) & RF & forgeNet(GBM) & GBM & LRL\tabularnewline \hline Mean AUC & 0.742 & 0.672 & 0.716 & 0.691 & 0.689 \tabularnewline \hline s.d. & 0.066 & 0.048 & 0.100 & 0.022 & 0.084\tabularnewline \hline \end{tabular} \label{BRCA_res} \end{table}

Feature selection was also conducted for the BRCA data. We obtained ranked gene importance lists by averaging importance scores across the five cross-validation results for all methods except LRL. For LRL, the intersection (456 genes) of the five selected feature sets was used as the final selected features. We chose the top 500 ranked genes from each list so that the numbers are of a magnitude similar to the number of genes selected by LRL. Functional analysis of all final gene lists was conducted by the Gene Ontology (GO) enrichment test using the GOstats package \cite{pmid17098774}. We limited the analysis to GO biological processes containing 10-500 genes and used a p-value cutoff of 0.005. After manual removal of highly overlapping GO terms, the top 3 GO terms containing the largest numbers of selected genes are shown in Table \ref{GO_BRCA}.
\begin{table}[H] \centering \small \caption{Top 3 GO biological processes for each method, after manual removal of redundant GO terms.}\label{GO_BRCA} \begin{tabular}{lllll} \hline \textbf{ID} & \textbf{Term} & \textbf{P\_value} & \textbf{Count} & \textbf{Size}\tabularnewline \hline \hline forgeNet(RF) & & & & \tabularnewline \hline GO:0031647 & regulation of protein stability & 0.00123 & 17 & 229\tabularnewline GO:0090502 & RNA phosphodiester bond hydrolysis, endonucleolytic & 0.00369 & 7 & 62\tabularnewline GO:1901998 & toxin transport & 0.00499 & 5 & 35\tabularnewline \hline RF & & & & \tabularnewline \hline GO:2000679 & positive regulation of transcription regulatory region DNA binding & 0.00255 & 4 & 19\tabularnewline GO:0010172 & embryonic body morphogenesis & 0.00313 & 3 & 10\tabularnewline GO:0090042 & tubulin deacetylation & 0.0042 & 3 & 11\tabularnewline \hline forgeNet(GBM) & & & & \tabularnewline \hline GO:0001676 & long-chain fatty acid metabolic process & 0.00138 & 9 & 84\tabularnewline GO:0032890 & regulation of organic acid transport & 0.00155 & 6 & 40\tabularnewline GO:0046470 & phosphatidylcholine metabolic process & 0.00449 & 7 & 65\tabularnewline \hline GBM & & & & \tabularnewline \hline GO:0006633 & fatty acid biosynthetic process & 0.000454 & 12 & 121\tabularnewline GO:0030520 & intracellular estrogen receptor signaling pathway & 0.000643 & 7 & 47\tabularnewline GO:0010763 & positive regulation of fibroblast migration & 0.00322 & 3 & 10\tabularnewline \hline LRL & & & & \tabularnewline \hline GO:0051047 & positive regulation of secretion & 0.000609 & 20 & 317\tabularnewline GO:0006090 & pyruvate metabolic process & 0.000911 & 9 & 90\tabularnewline GO:0019359 & nicotinamide nucleotide biosynthetic process & 0.00204 & 8 & 82\tabularnewline \hline \end{tabular} \end{table}

The top GO term selected by forgeNet(RF) was regulation of protein stability. It has been found that estrogen receptor (ER) alpha has increased abundance and activity in breast cancer. One of the mechanisms facilitating this change is the protection of ER from degradation by the ubiquitin-proteasome system \cite{pmid27561704}. Another critical protein, HER2 (human epidermal growth factor receptor 2), has also been found to have increased stability and activity in some breast cancer tissues through the formation of the HER2-heat-shock protein 27 (HSP27) complex \cite{pmid18834540}. The protein stability mechanism has not been previously linked to the survival outcome of breast cancer. The second GO term found by forgeNet(RF), RNA phosphodiester bond hydrolysis, endonucleolytic, is part of rRNA and tRNA processing. It plays a critical role in the protein synthesis of cancer cells. The third term, toxin transport, is specific to breast cancer. It is suggested that increased toxin presence in the mammary tissue is a predisposing factor for breast cancer \cite{pmid12706546,Quezada_2014}. The forgeNet(GBM) and GBM results both point to fatty acid metabolism, which is known to be dysregulated in breast cancer \cite{pmid28412757}. GBM also selected the estrogen receptor signaling pathway, which is critically important in breast cancer development. The GO terms selected by LRL include positive regulation of secretion, which covers lactation, in addition to metabolic processes. \section{Conclusion} \label{Conclusion} We presented forgeNet, which uses tree-based ensemble methods to extract feature connectivity information, and uses GEDFN for graph-based predictive model building.
The new method was able to achieve sparse connections for neural nets without seeking external information, i.e., known feature graphs. It works well in the ``$n\ll p$'' situation. Simulation experiments showed forgeNets' higher classification accuracy relative to existing methods, and a TCGA RNA-seq dataset demonstrated the utility of forgeNets in both classification and the selection of biologically interpretable predictors. \section*{Acknowledgements} \label{Acknowledgements} This work was partially supported by NIH grant R01GM124061. \bibliographystyle{splncs03}
\section{Introduction} Let $\bm{X}\sim N_p(\bm{\theta},\bm{\Sigma})$ where $p\geq 3$, $\bm{\theta}=(\theta_1,\dots,\theta_p)^{\mkern-1.5mu\mathsf{T}} $ and $\bm{\Sigma}=\mathrm{diag}(\sigma^2_1,\dots,\sigma^2_p)$. Let us assume \begin{equation}\label{sigma_descending} \sigma_1^2> \sigma^2_2 > \dots > \sigma^2_p. \end{equation} We are interested in the estimation of $\bm{\theta}$ with respect to the ordinary squared error loss function \begin{equation}\label{ordinary_squared_error_loss} L(\bm{\delta},\bm{\theta})=\|\bm{\delta}-\bm{\theta}\|^2, \end{equation} where the risk of an estimator $\bm{\delta}(\bm{X})$ is $ R(\bm{\delta},\bm{\theta})= E\left[ L(\bm{\delta},\bm{\theta})\right]$. The MLE $\bm{X}$, with constant risk $\sum \sigma^2_i$, is shown to be extended Bayes and hence minimax for any $p$ and any $\bm{\Sigma}$. In the homoscedastic case $ \sigma_1^2= \dots = \sigma^2_p$, \cite{James-Stein-1961} showed that the shrinkage estimator \begin{equation}\label{JS-original} \left(1-\frac{c}{\bm{X}^{\mkern-1.5mu\mathsf{T}} \bm{\Sigma}^{-1}\bm{X}}\right)\bm{X}\text{ for }c\in\left(0,2(p-2)\right) \end{equation} dominates the MLE $\bm{X}$ for $p\geq 3$.

There is some literature discussing the minimax properties of shrinkage estimators under heteroscedasticity. \cite{Brown-1975} showed that the James-Stein estimator \eqref{JS-original} is not necessarily minimax when the variances are not equal. Specifically, it is not minimax for any $c\in\left(0,2(p-2)\right)$ when $2\sigma^2_1>\sum_{i=1}^p\sigma^2_i$. \cite{Berger-1976} showed that \begin{equation}\label{JS-variant-1} \left(\bm{I}-\bm{\Sigma}^{-1}\frac{c}{\bm{X}^{\mkern-1.5mu\mathsf{T}} \bm{\Sigma}^{-2}\bm{X}}\right)\bm{X}\text{ for }c\in\left(0,2(p-2)\right) \end{equation} is minimax for $p\geq 3$ and any $\bm{\Sigma}$. However, \cite{Casella-1980} argued that the James-Stein estimator \eqref{JS-variant-1} may not be desirable even if it is minimax. Ordinary minimax estimators, as in \eqref{JS-variant-1}, typically shrink most on the coordinates with smaller variances. From \citeapos{Casella-1980} viewpoint, one of the most natural James-Stein variants is \begin{equation}\label{JS-variant-2} \left(\bm{I}-\bm{\Sigma}\frac{c}{\|\bm{X}\|^2}\right)\bm{X}\text{ for }c>0, \end{equation} which we are going to rescue by providing some minimax properties related to a Bayesian viewpoint.

In many applications, $\theta_i$ are thought to follow some exchangeable prior distribution $\pi$. It is then natural to consider the compound risk function, which is the Bayes risk with respect to the prior $\pi$, \begin{align}\label{eq:Bayes_risk_0} \bar{R}(\bm{\delta},\pi)=\int_{\mathbb{R}^p} R(\bm{\delta},\bm{\theta})\pi(\mathrm{d} \bm{\theta}). \end{align} \cite{Efron-Morris-1971, Efron-Morris-1972-biometrika, Efron-Morris-1972-jasa, Efron-Morris-1973-jasa} addressed this problem from both the Bayes and empirical Bayes perspectives. In particular, they considered a prior distribution $\bm{\theta}\sim N_p(\bm{0},\tau\bm{I}_p)$ with $\tau \in (0,\infty)$, and used the term ``ensemble risk'' for the compound risk.
By introducing a set of ensemble risks \begin{equation}\label{eq:Bayes_risk} \bar{R}(\bm{\delta},\tau)=\int_{\mathbb{R}^p} R(\bm{\delta},\bm{\theta})\frac{1}{(2\pi\tau)^{p/2}} \exp\left(-\frac{\|\bm{\theta}\|^2}{2\tau}\right)\mathrm{d} \bm{\theta}, \end{equation} we can define ensemble minimaxity with respect to a set of priors \begin{equation}\label{P_star} \mathcal{P}_\star =\{N_p(\bm{0},\tau \bm{I}_p):\tau\in(0,\infty)\}, \end{equation} that is, an estimator $\bm{\delta}$ is said to be ensemble minimax with respect to $\mathcal{P}_\star$ if \begin{equation}\label{em_P_*} \sup_{\tau\in(0,\infty)}\bar{R}(\bm{\delta},\tau)= \inf_{\bm{\delta}'}\sup_{\tau\in(0,\infty)}\bar{R}(\bm{\delta}',\tau). \end{equation} As a matter of fact, the second author, in his unpublished manuscript \cite{Brown-ensemble-2011}, has already introduced the concept of ensemble minimaxity. In this article, we follow the spirit of that work but propose a simpler and clearer approach for establishing the ensemble minimaxity of estimators.

Our article is organized as follows. In Section \ref{sec:em}, we elaborate on the definition of ensemble minimaxity and explain \citeapos{Casella-1980} viewpoint on the contradiction between minimaxity and well-conditioning. In Section \ref{sec:main}, we show the ensemble minimaxity of various shrinkage estimators, including a variant of the James-Stein estimator \begin{equation}\label{intro.eq:nice_js} \left(\bm{I}-\bm{\Sigma}\frac{p-2}{(p-2)\sigma^2_1+\|\bm{X}\|^2}\right)\bm{X} \end{equation} as well as the generalized Bayes estimator with respect to the hierarchical prior \begin{equation}\label{eq:gharmonic_intro} \bm{\theta}\,|\,\lambda \sim N_p(\bm{0},(\sigma^2_1/\lambda)\bm{I}-\bm{\Sigma}), \ \pi(\lambda) \sim \lambda^{-2}I_{(0,1)}(\lambda) \end{equation} which is a generalization of the harmonic prior $\|\bm{\theta}\|^{2-p}$ to the heteroscedastic case.

\section{Minimaxity, Ensemble Minimaxity and Casella's viewpoint} \label{sec:em} If the prior $\pi(\bm{\theta})$ were known, the resulting posterior mean $E[\bm{\theta}\,|\, \bm{x}]$ would then be the optimal estimate under the sum of the squared error loss. However, it is typically not feasible to exactly specify the prior. One approach to avoid excessive dependence on the choice of prior is to consider a set of priors $\mathcal{P}$ on $\Theta$ and study the properties of estimators based on the corresponding set of ensemble risks. As in classical decision theory, there rarely exists an estimator that achieves the minimum ensemble risk uniformly for all $\pi\in\mathcal{P}$. A more realistic goal, as pursued in this paper, is to study the ensemble minimaxity of James-Stein type estimators. Recall that with the ordinary risk $R(\bm{\delta},\bm{\theta})$, $\bm{\delta}$ is said to be minimax if \begin{align} \sup_{\bm{\theta}\in\Theta}R(\bm{\delta},\bm{\theta}) =\inf_{\bm{\delta}'}\sup_{\bm{\theta}\in\Theta}R(\bm{\delta}',\bm{\theta}). \end{align} Similarly, for the case of ensemble risk we have the following definition. Note that the Bayes risk of $\bm{\delta}$ under the prior $\pi$ is given by \eqref{eq:Bayes_risk_0}. The estimator $\bm{\delta}$ is said to be ensemble minimax with respect to $\mathcal{P}$ if \begin{align} \sup_{\pi\in\mathcal{P}}\bar{R}(\bm{\delta},\pi)= \inf_{\bm{\delta}'}\sup_{\pi\in\mathcal{P}}\bar{R}(\bm{\delta}',\pi). \end{align} The motivation for the above definitions comes from the use of the empirical Bayes method in simultaneous inference.
\cite{Efron-Morris-1972-jasa} derived the James-Stein estimator through the parametric empirical Bayes model with $\bm{\theta}\sim N_p(\bm{0},\tau \bm{I}_p)$. Note that in such an empirical Bayes model, $\tau$ is the unknown non-random parameter. Given the family $\mathcal{P}_\star =\{N_p(\bm{0},\tau \bm{I}_p):\tau\in(0,\infty)\}$, the Bayes risk is a function of $\tau$ as follows, \begin{equation}\label{eq:Bayes_risk_1} \bar{R}(\bm{\delta},\tau)=\int_{\mathbb{R}^p} R(\bm{\delta},\bm{\theta})\frac{1}{(2\pi\tau)^{p/2}} \exp\left(-\frac{\|\bm{\theta}\|^2}{2\tau}\right)\mathrm{d} \bm{\theta}. \end{equation} Hence, with $ \bar{R}(\bm{\delta},\tau)$, the estimator $\bm{\delta}$ is said to be ensemble minimax with respect to $\mathcal{P}_\star$ if \begin{equation}\label{em_P_*_1} \sup_{\tau\in(0,\infty)}\bar{R}(\bm{\delta},\tau)= \inf_{\bm{\delta}'}\sup_{\tau\in(0,\infty)}\bar{R}(\bm{\delta}',\tau), \end{equation} which may be seen as the counterpart of ordinary minimaxity in the empirical Bayes model. Clearly, the usual estimator $\bm{X}$ has constant risk and hence constant Bayes risk, so $\bm{X}$ is ensemble minimax. Then the ensemble minimaxity of $\bm{\delta}$ follows if \begin{align*} \bar{R}(\bm{\delta},\tau)\leq \sum_{i=1}^p\sigma^2_i, \ \forall \tau\in (0,\infty). \end{align*} \begin{remark} Note that ensemble minimaxity can also be interpreted as a particular case of Gamma minimaxity studied in the context of robust Bayes analysis by \cite{Good-1952, Berger_L-1979}. However, in such studies, a ``large'' set consisting of many diffuse priors is usually included in the analysis. Since this is quite different from our formulation of the problem, we use the term ensemble minimaxity throughout our paper, following the Efron and Morris papers cited above. \end{remark} \medskip A class of shrinkage estimators which we consider in this paper is given by \begin{equation}\label{phiphiphi} \bm{\delta}_{\phi}= \left(\bm{I} -\bm{G} \frac{\phi(z)}{z}\right)\bm{x}, \ \mbox{ for }z=\bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{G}\bm{\Sigma}^{-1}\bm{x}=\sum \frac{g_ix_i^2}{\sigma^2_i}, \end{equation} where $\bm{G}=\mbox{diag}(g_1,\dots,g_p)$ with \begin{equation*} 0<g_i\leq 1, \ \forall i. \end{equation*} \cite{Berger-Srinivasan-1978} showed, in their Corollary 2.7, that, given a positive-definite $\bm{C}$ and a non-singular $\bm{B}$, a necessary condition for an estimator of the form \begin{align*} \left(\bm{I} -\bm{B}\frac{\phi(\bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{C}\bm{x})}{\bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{C}\bm{x}}\right)\bm{x} \end{align*} to be admissible is $\bm{B}= k\bm{\Sigma}\bm{C}$ for some constant $k$, a condition which is satisfied by estimators in the class \eqref{phiphiphi}. A version of \citeapos{Baranchik-1964} sufficient condition for ordinary minimaxity is given in Appendix \ref{sec:ordinary}; for a given $\bm{G}$ which satisfies \begin{align*} h(\bm{\Sigma},\bm{G})=2\left(\frac{\sum g_i\sigma^2_i}{\max (g_i\sigma^2_i)}-2\right)>0, \end{align*} $\bm{\delta}_{\phi}$ given by \eqref{phiphiphi} is ordinary minimax if \begin{equation}\label{suffi.condi.ordi.minimax} \phi(\cdot) \text{ is non-decreasing and }0\leq \phi \leq h(\bm{\Sigma},\bm{G}). \end{equation} \cite{Berger-1976} showed that, for any given $\bm{\Sigma}$, \begin{align*} \max_{\bm{G}}h(\bm{\Sigma},\bm{G})=2(p-2),\quad \argmax_{\bm{G}}h(\bm{\Sigma},\bm{G})=\sigma^2_p\bm{\Sigma}^{-1}=\mathrm{diag}\left(\frac{\sigma^2_p}{\sigma_1^2}, \dots, \frac{\sigma^2_p}{\sigma_{p-1}^2},1\right) \end{align*} which seems to be the right choice of $\bm{G}$.
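Before turning to the conditioning viewpoint, a small numerical sketch (our illustration; the variances are hypothetical) makes the role of $h(\bm{\Sigma},\bm{G})$ concrete: Berger's choice attains the maximal value $2(p-2)$, while a descending choice $g_1>\dots>g_p$ can drive $h$ negative once one variance dominates.

\begin{verbatim}
import numpy as np

def h(sigma2, g):
    # h(Sigma, G) = 2 * (sum_i g_i sigma_i^2 / max_i g_i sigma_i^2 - 2)
    gs2 = g * sigma2
    return 2.0 * (gs2.sum() / gs2.max() - 2.0)

sigma2 = np.array([6.0, 2.0, 1.0, 1.0])  # 2 sigma_1^2 = 12 > sum = 10

print(h(sigma2, sigma2[-1] / sigma2))    # Berger's G: 2(p - 2) = 4.0
print(h(sigma2, np.array([1.0, 0.8, 0.6, 0.4])))  # descending g: about -1.13
\end{verbatim}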
However, from the ``conditioning'' viewpoint of \cite{Casella-1980}, which advocates more shrinkage on higher variance estimates, the descending order \begin{equation}\label{eq:ascend} g_1>\dots >g_p \end{equation} is desirable, whereas $ \bm{G}=\sigma^2_p\bm{\Sigma}^{-1}$ corresponds to the ascending order $g_1<\dots <g_p$ under $\bm{\Sigma}$ given by \eqref{sigma_descending}. As \cite{Casella-1980} pointed out, ordinary minimaxity cannot be enjoyed together with well-conditioning given by \eqref{eq:ascend} when \begin{equation*} h(\bm{\Sigma},c\bm{I})\leq 0 \text{ or equivalently }\sum\sigma^2_i\leq 2\sigma^2_1 \end{equation*} for some $0<c\leq 1$. In fact, when $ h(\bm{\Sigma},c\bm{I})\leq 0$ and $ c=g_1>\dots >g_p$, we have \begin{align*} c\sigma^2_1=g_1\sigma^2_1, \ c\sigma^2_2>g_2\sigma^2_2, \ \dots, \ c\sigma^2_p>g_p\sigma^2_p \end{align*} and hence $ h(\bm{\Sigma},\bm{G})<0$ follows. The motivation of \cite{Casella-1980,Casella-1985} seems to be to provide a better treatment of this case. Actually, \cite{Brown-1975} pointed out essentially the same phenomenon from a slightly different viewpoint. Ensemble minimaxity, based on the ensemble risk given by \eqref{eq:Bayes_risk}, provides a way of saving shrinkage estimators with well-conditioning, estimators which are not necessarily ordinary minimax. \section{Ensemble minimaxity} \label{sec:main} \subsection{A general theorem} \label{subsec:general} We have the following theorem on the ensemble minimaxity of $ \bm{\delta}_{\phi}$ with general $\bm{G}$, though we will eventually focus on $ \bm{\delta}_{\phi}$ with $\bm{G}$ having the descending order $g_1>\dots >g_p$ as in \eqref{eq:ascend}. \begin{thm} Assume $\phi(z)$ is non-negative, non-decreasing and concave. Also, $\phi(z)/z$ is assumed to be non-increasing. Then \begin{equation*} \bm{\delta}_{\phi}= \left(\bm{I} -\bm{G} \frac{\phi(z)}{z}\right)\bm{x}, \ \mbox{ for }z=\bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{G}\bm{\Sigma}^{-1}\bm{x}=\sum \frac{g_ix_i^2}{\sigma^2_i} \end{equation*} is ensemble minimax if \begin{equation}\label{AAA} \phi(p\min_ig_i(1+\tau/\sigma^2_i)) \leq 2(p-2) \frac{\min_ig_i(1+\tau/\sigma^2_i)}{\max_ig_i(1+\tau/\sigma^2_i)}, \quad \forall \tau\in(0,\infty). \end{equation} \end{thm} \begin{proof} Recall, for $i=1,\dots,p$, \begin{align*} x_i \,|\, \theta_i \sim N(\theta_i,\sigma^2_i),\text{ and }\theta_i\sim N(0,\tau). \end{align*} Then the posterior and marginal are given by \begin{align*} \theta_i\,|\, x_i\sim N\left(\frac{\tau}{\tau+\sigma^2_i}x_i,\frac{\tau\sigma^2_i}{\tau+\sigma^2_i}\right) \text{ and }x_i\sim N(0,\tau+\sigma^2_i), \end{align*} respectively, where $ \theta_1\,|\, x_1,\dots,\theta_p\,|\, x_p$ are mutually independent and $x_1,\dots,x_p$ are mutually independent.
Then the Bayes risk is given by \begin{align*} \bar{R}(\bm{\delta}_{\phi},\tau)&= \sum_{i=1}^p E_{\bm{\theta}} E_{\bm{x}\,|\, \bm{\theta}} \left[\left\{\left(1-g_i\frac{\phi(z)}{z}\right)x_i-\theta_i\right\}^2\right] \\ &= \sum_{i=1}^p E_{\bm{x}} E_{\bm{\theta}\,|\, \bm{x}} \left[\left\{\left(1-g_i\frac{\phi(z)}{z}\right)x_i-\theta_i\right\}^2\right] \\ &= \sum_{i=1}^p E_{\bm{x}} E_{\bm{\theta}\,|\, \bm{x}} \left[\left\{\left(1-g_i\frac{\phi(z)}{z}\right)x_i- E[\theta_i\,|\, x_i]+E[\theta_i\,|\, x_i]-\theta_i\right\}^2\right] \\ &= \sum_{i=1}^p E_{\bm{x}} \left[\left\{\left(1-g_i\frac{\phi(z)}{z}\right)x_i- E[\theta_i\,|\, x_i]\right\}^2\right] +\sum_{i=1}^p\mathrm{Var}(\theta_i\,|\, x_i) \\ &= \sum_{i=1}^p E_{\bm{x}} \left[\left(\frac{\sigma^2_i}{\tau+\sigma^2_i}x_i - g_i\frac{\phi(z)}{z}x_i\right)^2\right] +\sum_{i=1}^p\frac{\tau\sigma^2_i}{\tau+\sigma^2_i}. \end{align*} Since the first term on the r.h.s.~of the above equality can be rewritten as \begin{align*} & \sum_{i=1}^p E_{\bm{x}} \left[\left(\frac{\sigma^2_i}{\tau+\sigma^2_i}x_i - g_i\frac{\phi(z)}{z}x_i\right)^2\right] \\ &= \sum_{i=1}^p \left(\frac{\sigma^2_i}{\tau+\sigma^2_i}\right)^2E_{\bm{x}}[x_i^2] -2E_{\bm{x}} \left[\sum_{i=1}^p\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right]+ E_{\bm{x}} \left[\sum_{i=1}^pg^2_ix_i^2\frac{\phi^2(z)}{z^2}\right] \\ &=\sum_{i=1}^p\frac{\sigma^4_i}{\tau+\sigma^2_i} -2E_{\bm{x}} \left[\sum_{i=1}^p\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right]+ E_{\bm{x}} \left[\sum_{i=1}^pg^2_ix_i^2\frac{\phi^2(z)}{z^2}\right], \end{align*} we have \begin{equation}\label{eq:bayes_risk_diff} \bar{R}(\bm{\delta}_{\phi},\tau)-\sum \sigma^2_i = -2E_{\bm{x}} \left[\sum_{i=1}^p\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right]+ E_{\bm{x}} \left[\sum_{i=1}^pg^2_ix_i^2\frac{\phi^2(z)}{z^2}\right]. \end{equation} Let \begin{align*} w_i=\frac{x_i^2}{\sigma^2_i+\tau},\ w=\sum_{i=1}^p w_i \ \text{ and } \ t_i=\frac{w_i}{w}\text{ for }i=1,\dots,p. \end{align*} Then \begin{gather*} w\sim\chi^2_p, \quad \bm{t}=(t_1,\dots,t_p)^{\mkern-1.5mu\mathsf{T}} \sim \mathrm{Dirichlet}(1/2,\dots,1/2), \end{gather*} and $w$ and $\bm{t}$ are mutually independent. With this notation, we have \begin{align*} x_i^2=wt_i(\sigma^2_i+\tau)\text{ and }z= \bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{G}\bm{\Sigma}^{-1}\bm{x}= \sum_{i=1}^p \frac{g_ix_i^2}{\sigma^2_i}=w\sum_{i=1}^p t_ig_i\left(1+\frac{\tau}{\sigma^2_i}\right) \end{align*} and hence \begin{align*} E_{\bm{x}} \left[\sum g^2_ix_i^2\frac{\phi^2(z)}{z^2}\right] &=E_{w,\bm{t}} \left[\frac{\sum t_ig^2_i(\sigma^2_i+\tau)} {\sum t_ig_i(1+\tau/\sigma^2_i)} \frac{\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))^2}{w\sum t_ig_i(1+\tau/\sigma^2_i)} \right] \\ &=E_{\bm{t}} \left[\frac{\sum t_ig^2_i(\sigma^2_i+\tau)} {\sum t_ig_i(1+\tau/\sigma^2_i)} E_{w\,|\, \bm{t}}\left[\frac{\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))^2}{w\sum t_ig_i(1+\tau/\sigma^2_i)} \right] \right].
\end{align*} Since $\phi(w)/w$ is non-increasing and $\phi(w)$ is non-decreasing, by the correlation inequality, we have \begin{align*} & E_{w\,|\, \bm{t}}\left[ \frac{\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))^2}{w\sum t_ig_i(1+\tau/\sigma^2_i)}\right] \\ &\leq E_{w\,|\, \bm{t}}\left[\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))\right] E_{w\,|\, \bm{t}}\left[\frac{\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))}{w\sum t_ig_i(1+\tau/\sigma^2_i)}\right] \\ &\leq E_{w\,|\, \bm{t}}\left[\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))\right] E_{w}\left[ \frac{\phi(w\min g_i(1+\tau/\sigma^2_i))}{w \min g_i(1+\tau/\sigma^2_i)} \right], \end{align*} and hence \begin{equation}\label{inequ.proof} \begin{split} & E_{\bm{x}} \left[\sum g^2_ix_i^2\frac{\phi^2(z)}{z^2}\right] \leq E_{w}\left[ \frac{\phi(w\min g_i(1+\tau/\sigma^2_i))}{w \min g_i(1+\tau/\sigma^2_i)} \right] \\ &\qquad \qquad \qquad \times E_{\bm{t}} \left[\frac{\sum t_ig^2_i(\sigma^2_i+\tau)} {\sum t_ig_i(1+\tau/\sigma^2_i)} E_{w\,|\, \bm{t}}\left[\phi(w\sum t_ig_i(1+\tau/\sigma^2_i)) \right] \right]. \end{split} \end{equation} In the first part of the r.h.s.~of the inequality \eqref{inequ.proof}, we have \begin{equation}\label{eq:first} \begin{split} E_{w}\left[ \frac{\phi(w\min g_i(1+\tau/\sigma^2_i))}{w} \right] & \leq E_w[1/w]E_w\left[\phi(w\min_i g_i(1+\tau/\sigma^2_i))\right] \\%\text{ (correlation inequality)}\\ &\leq \frac{\phi(E_w\left[w\right]\min_i g_i(1+\tau/\sigma^2_i))}{p-2} \\%\text{ (Jensen's inequality)}\\ &= \frac{\phi(p\min_i g_i(1+\tau/\sigma^2_i))}{p-2}, \end{split} \end{equation} where the first and second inequality follow from the correlation inequality and Jensen's inequality, respectively. In the second part of the r.h.s.~of the inequality \eqref{inequ.proof}, by the inequality \begin{align*} \sum t_ig^2_i(\sigma^2_i+\tau)\leq \max_i g_i(1+\tau/\sigma_i^2) \sum t_ig_i\sigma^2_i, \end{align*} we have \begin{equation}\label{eq:second} \begin{split} & E_{\bm{t}} \left[\frac{\sum t_ig^2_i(\sigma^2_i+\tau)} {\sum t_ig_i(1+\tau/\sigma^2_i)} E_{w\,|\, \bm{t}}\left[\phi(w\sum t_ig_i(1+\tau/\sigma^2_i))\right]\right] \\ & = E_{w,\bm{t}}\left[\frac{\sum t_ig^2_i(\sigma^2_i+\tau)} {\sum t_ig_i(1+\tau/\sigma^2_i)} \phi(w\sum t_ig_i(1+\tau/\sigma^2_i))\right]\\ & \leq \max_i g_i(1+\tau/\sigma_i^2) E_{w,\bm{t}}\left[\frac{\sum t_ig_i\sigma^2_i} {\sum t_ig_i(1+\tau/\sigma^2_i)} \phi(w\sum t_ig_i(1+\tau/\sigma^2_i))\right] \\ &=\max_i g_i(1+\tau/\sigma_i^2) E_{\bm{x}} \left[\sum\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right]. \end{split} \end{equation} By \eqref{inequ.proof}, \eqref{eq:first} and \eqref{eq:second}, we have \begin{equation}\label{eq:-1} \begin{split} &E_{\bm{x}} \left[\sum g^2_ix_i^2\frac{\phi^2(z)}{z^2}\right] \\ &\leq \frac{\phi(p\min_i g_i(1+\tau/\sigma^2_i))}{p-2} \frac{\max_i g_i(1+\tau/\sigma^2_i)}{\min_i g_i(1+\tau/\sigma^2_i)} E_{\bm{x}} \left[\sum\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right], \end{split} \end{equation} and, by \eqref{eq:bayes_risk_diff} and \eqref{eq:-1}, \begin{align*} & \bar{R}(\bm{\delta}_{\phi},\tau)-\sum \sigma^2_i \\ &\leq \left(\frac{\phi(p\min_i g_i(1+\tau/\sigma^2_i))}{p-2} \frac{\max_i g_i(1+\tau/\sigma^2_i)}{\min_i g_i(1+\tau/\sigma^2_i)}-2\right) E_{\bm{x}} \left[\sum\frac{\sigma^2_ig_ix_i^2}{\tau+\sigma^2_i} \frac{\phi(z)}{z}\right], \end{align*} which guarantees $ \bar{R}(\bm{\delta}_{\phi},\tau)-\sum \sigma^2_i \leq 0$ for all $\tau\in(0,\infty)$ under the condition \eqref{AAA}. 
\end{proof} Given $\bm{\Sigma}$, the choice $\bm{G}=\bm{\Sigma}/\sigma^2_1$, with descending order $g_1>\dots>g_p$, is one of the most natural choices of $\bm{G}$ from \citeapos{Casella-1980} viewpoint. In this case, we have \begin{gather*} \frac{\min_ig_i(1+\tau/\sigma^2_i)}{\max_ig_i(1+\tau/\sigma^2_i)} =\frac{\min_i \{\sigma_i^2+\tau\}} {\max_i\{\sigma_i^2+\tau\}} =\frac{\sigma_p^2+\tau}{\sigma_1^2+\tau}, \\ p\min_ig_i(1+\tau/\sigma^2_i)=\frac{p}{\sigma^2_1}\min_i(\sigma^2_i+\tau)=p\frac{\sigma^2_p+\tau} {\sigma^2_1}, \end{gather*} and hence we have the following corollary. \begin{corollary}\label{cor:1} Assume that $\phi(z)$ is non-negative, non-decreasing and concave, and that $\phi(z)/z$ is non-increasing. Then \begin{align*} \bm{\delta}_\phi=\left(\bm{I}-\bm{\Sigma}\frac{\phi(\|\bm{x}\|^2/\sigma^2_1)}{\|\bm{x}\|^2}\right)\bm{x} \end{align*} is ensemble minimax if \begin{equation}\label{simple} \phi(p(\sigma^2_p+\tau)/\sigma^2_1) \leq 2(p-2) \frac{\sigma^2_p+\tau}{\sigma^2_1+\tau}, \quad \forall \tau\in(0,\infty). \end{equation} \end{corollary} \subsection{An ensemble minimax James-Stein variant} \label{subsec:js} As an example of Corollary \ref{cor:1}, we consider \begin{align}\label{eq:form_stein} \phi(z)=\frac{c_1z}{c_2+z} \end{align} for $c_1>0$ and $c_2\geq 0$, which is motivated by \cite{Stein-1956} and \cite{James-Stein-1961}. Under $\bm{\Sigma}=\bm{I}_p$, \cite{Stein-1956} suggested that there exist estimators dominating the usual estimator $\bm{x}$ among the class of estimators $\bm{\delta}_\phi$ with $\phi$ given by \eqref{eq:form_stein} for small $c_1$ and large $c_2$. Following \cite{Stein-1956}, \cite{James-Stein-1961} showed that $ \bm{\delta}_\phi $ with $0<c_1<2(p-2)$ and $ c_2=0 $ is ordinary minimax. The choice $ c_2=0 $ is, however, not good since, by Corollary \ref{cor:1}, $c_1$ then cannot be larger than $2(p-2)\sigma^2_p/\sigma^2_1$. With positive $c_2$, $c_1$ can be much larger, as we now show. Note that $\phi(z)$ given by \eqref{eq:form_stein} is non-negative, increasing and concave and that $ \phi(z)/z$ is decreasing. Then the sufficient condition in \eqref{simple} is \begin{align*} \frac{c_1p(\sigma^2_p+\tau)/\sigma^2_1}{c_2+p(\sigma^2_p+\tau)/\sigma^2_1} \leq 2(p-2) \frac{\sigma^2_p+\tau}{\sigma^2_1+\tau}\quad \forall \tau\in(0,\infty), \end{align*} which is equivalent to \begin{align*} 2(p-2)\left\{\sigma^2_1c_2+p(\sigma^2_p+\tau)\right\} -c_1p(\sigma^2_1+\tau)\geq 0 \quad \forall \tau\in(0,\infty) \end{align*} or \begin{align*} p\tau\left\{2(p-2)-c_1\right\}+2(p-2)\sigma^2_1\left\{c_2-p\left(\frac{c_1}{2(p-2)}-\frac{\sigma^2_p}{\sigma^2_1}\right)\right\}\geq 0 \quad \forall \tau\in(0,\infty). \end{align*} Hence we have the following result. \begin{thm}\label{cor:js} \begin{enumerate} \item \label{cor:js.1} When \begin{equation}\label{eq:cor:js.1} 0<c_1\leq 2(p-2) \text{ and }c_2\geq \max\left(0, p\left(\frac{c_1}{2(p-2)}-\frac{\sigma^2_p}{\sigma^2_1}\right)\right), \end{equation} the shrinkage estimator \begin{align*} \left(\bm{I}-\bm{\Sigma}\frac{c_1}{c_2\sigma^2_1+\|\bm{x}\|^2}\right)\bm{x} \end{align*} is ensemble minimax. \item \label{cor:js.2} It is ordinary minimax if \begin{equation*} 2\left(\sum \sigma_i^4/\sigma_1^4-2\right)\geq c_1. \end{equation*} \end{enumerate} \end{thm} Part \ref{cor:js.2} above follows from Theorem \ref{thm:ordinary_minimax}.
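For concreteness, a minimal Python sketch (ours, with hypothetical names) of the estimator in Part \ref{cor:js.1} together with a check of the condition \eqref{eq:cor:js.1}:
\begin{verbatim}
# Sketch of the ensemble minimax James-Stein variant
# delta(x) = (I - Sigma * c1/(c2*s1^2 + ||x||^2)) x,
# plus a check of the condition on (c1, c2) from Part 1.
import numpy as np

def js_variant(x, sigma2, c1, c2):
    return (1.0 - sigma2 * c1 / (c2 * sigma2[0] + x @ x)) * x

def condition_ok(sigma2, c1, c2):
    p = len(sigma2)
    return 0 < c1 <= 2 * (p - 2) and c2 >= max(
        0.0, p * (c1 / (2 * (p - 2)) - sigma2[-1] / sigma2[0]))

p = 10
sigma2 = 1.5 ** np.arange(p - 1, -1, -1.0)        # (1.5^9, ..., 1.5, 1)
print(condition_ok(sigma2, c1=2 * (p - 2), c2=p))  # True
\end{verbatim}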
It seems to us that one of the most interesting estimators with ensemble minimaxity from Part \ref{cor:js.1} is \begin{equation}\label{eq:nice_js} \left(\bm{I}-\bm{\Sigma}\frac{p-2}{(p-2)\sigma^2_1+\|\bm{x}\|^2}\right)\bm{x} \end{equation} with the choice $c_1=c_2=p-2$ satisfying \eqref{eq:cor:js.1}. It is clear that the $i$-th shrinkage factor \begin{align*} 1-\frac{(p-2)\sigma^2_i}{(p-2)\sigma^2_1+\|\bm{x}\|^2} \end{align*} is nonnegative for any $\bm{x}$ and any $\bm{\Sigma}$, which is a desirable property. \subsection{A generalized Bayes ensemble minimax estimator} \label{sec:Bayes} In this subsection, we provide a generalized Bayes ensemble minimax estimator. Following \cite{Strawderman-1971}, \cite{Berger-1976} and \cite{Maru-Straw-2005}, we consider the generalized harmonic prior \begin{equation}\label{eq:gharmonic} \bm{\theta}\,|\, \lambda \sim N_p(\bm{0},\lambda^{-1}\bm{\Sigma}\bm{G}^{-1}-\bm{\Sigma}), \ \pi(\lambda) \sim \lambda^{-2}I_{(0,1)}(\lambda) \end{equation} where $\bm{G} =\mbox{diag}(g_1,\dots,g_p)$ satisfies $0<g_i \leq 1$ for all $i$. Note that for $\bm{\Sigma}=\bm{G}=\bm{I}_p$, the density of $\bm{\theta}$ is proportional to $\|\bm{\theta}\|^{2-p}$, since $ \lambda^{-1}\bm{\Sigma}\bm{G}^{-1}-\bm{\Sigma}=\{(1-\lambda)/\lambda\}\bm{I}_p$ and \begin{align*} & \frac{1}{(2\pi)^{p/2}} \int_0^1\left(\frac{\lambda}{1-\lambda}\right)^{p/2}\exp\left(-\frac{\lambda\|\bm{\theta}\|^2}{2(1-\lambda)}\right) \lambda^{-2}\mathrm{d} \lambda \\ &=\frac{1}{(2\pi)^{p/2}} \int_0^\infty g^{p/2-2}\exp\left(-g\|\bm{\theta}\|^2/2\right)\mathrm{d} g \\ &=\frac{\Gamma(p/2-1)2^{p/2-1}}{(2\pi)^{p/2}}\|\bm{\theta}\|^{2-p}. \end{align*} The prior $\pi(\bm{\theta})=\|\bm{\theta}\|^{2-p}$ is called the harmonic prior and was originally investigated by \cite{Baranchik-1964} and \cite{Stein-1974}. \cite{Berger-1980} and \cite{Berger-Strawderman-1996} recommended the use of the prior \eqref{eq:gharmonic} mainly because it is on the boundary of admissibility. Following the approach of \cite{Strawderman-1971}, the generalized Bayes estimator with respect to this prior is given by \begin{equation} \bm{\delta}_{*}= \left(\bm{I}-\bm{G} \frac{\phi_*(z)}{z}\right)\bm{x}, \ \mbox{ for }z=\bm{x}^{\mkern-1.5mu\mathsf{T}} \bm{G}\bm{\Sigma}^{-1}\bm{x} \end{equation} with \begin{equation*} \phi_*(z)=z\frac{\int_0^1 \lambda^{p/2-1}\exp(-z\lambda/2)\mathrm{d}\lambda } {\int_0^1 \lambda^{p/2-2}\exp(-z\lambda/2)\mathrm{d}\lambda }, \end{equation*} where $\phi_*(z)$ satisfies the following properties: \begin{enumerate}[label= H\arabic*] \item\label{Bayes_phi_1} $\phi_*(z)$ is increasing in $z$. \item\label{Bayes_phi_2} $\phi_*(z)$ is concave. \item \label{Bayes_phi_3} $\lim_{z\to\infty}\phi_*(z)=p-2$. \item\label{Bayes_phi_4} $\phi_*(z)/z$ is decreasing in $z$. \item \label{Bayes_phi_5} The derivative of $\phi_*(z)$ at $z=0$ is $(p-2)/p$. \end{enumerate} Under the choice $\bm{G}=\bm{\Sigma}/\sigma_1^2$, and noting that $\phi_*$ satisfies the conditions of Corollary \ref{cor:1}, we have the following result. \begin{thm}\label{final.thm} \begin{enumerate} \item \label{final.thm.1} The estimator $\bm{\delta}_{*}$ is ensemble minimax. \item \label{final.thm.2} The estimator $\bm{\delta}_{*}$ is ordinary minimax when \begin{equation*} 2\left(\sum \sigma_i^4/\sigma_1^4-2\right)\geq p-2. \end{equation*} \item \label{final.thm.3} The estimator $\bm{\delta}_{*}$ is admissible in the conventional sense. \end{enumerate} \end{thm} \begin{proof}\mbox{} [Part \ref{final.thm.1}] Recall that the sufficient condition for ensemble minimaxity is given by Corollary \ref{cor:1}.
By \ref{Bayes_phi_1}--\ref{Bayes_phi_5}, we have only to check \eqref{simple} in Corollary \ref{cor:1}. For $\tau\geq \max(0,\sigma^2_1-2\sigma^2_p)$, we have \begin{align*} 2(p-2)\frac{\sigma_p^2+\tau}{\sigma_1^2+\tau}\geq p-2. \end{align*} By the properties \ref{Bayes_phi_1} and \ref{Bayes_phi_3}, \begin{align*} \phi_*(p(\sigma^2_p+\tau)/\sigma^2_1) \leq p-2 \end{align*} for all $\tau\in(0,\infty)$. Hence for $\tau\geq \max(0,\sigma^2_1-2\sigma^2_p)$, it follows that \begin{align*} \phi_*(p(\sigma^2_p+\tau)/\sigma^2_1)\leq 2(p-2)\frac{\sigma_p^2+\tau}{\sigma_1^2+\tau}. \end{align*} So it suffices to show \begin{align*} \phi_*(p(\sigma^2_p+\tau)/\sigma^2_1)\leq 2(p-2)\frac{\sigma_p^2+\tau}{\sigma_1^2+\tau} \end{align*} when $\sigma^2_1-2\sigma^2_p>0 $ and $ 0<\tau < \sigma^2_1-2\sigma^2_p$. By the properties \ref{Bayes_phi_2} and \ref{Bayes_phi_5}, we have $ \phi_*(z)\leq \{(p-2)/p\}z$ for all $z\geq 0$. Then \begin{align*} 2(p-2) \frac{\sigma^2_p+\tau}{\sigma^2_1+\tau} -\phi_*(p(\sigma^2_p+\tau)/\sigma^2_1) &\geq 2(p-2) \frac{\sigma^2_p+\tau}{\sigma^2_1+\tau} -\frac{p-2}{p}\frac{p(\sigma^2_p+\tau)}{\sigma^2_1} \\ &=(p-2)(\sigma^2_p+\tau)\left(\frac{2}{\sigma^2_1+\tau}-\frac{1}{\sigma^2_1}\right) \\ &=\frac{(p-2)(\sigma^2_p+\tau)}{\sigma^2_1(\sigma^2_1+\tau)} \left(\sigma^2_1-\tau\right) \\ &\geq\frac{(p-2)(\sigma^2_p+\tau)}{\sigma^2_1(\sigma^2_1+\tau)}2\sigma^2_p \\ &\geq 0, \end{align*} which completes the proof. [Part \ref{final.thm.2}] It follows from Theorem \ref{thm:ordinary_minimax} in Appendix \ref{sec:ordinary}. [Part \ref{final.thm.3}] It follows from Theorem 6.4.2 of \cite{Brown-1971}. \end{proof} \subsection{A numerical experiment} Let $p=10$ and \begin{align*} \bm{\Sigma}=\mathrm{diag}(a^9, a^8,\dots, a, 1) \end{align*} for $ a=1.01, 1.05, 1.25, 1.5$; the corresponding values of $a^9$ are approximately $1.09$, $1.55$, $7.45$, $38.4$. We investigate the numerical performance of two ensemble minimax estimators of the form \begin{align*} \bm{\delta}_\phi=\left(\bm{I}-\bm{\Sigma}\frac{\phi(\|\bm{x}\|^2/\sigma^2_1)}{\|\bm{x}\|^2}\right)\bm{x}, \end{align*} where one is the generalized Bayes estimator (GB) with \begin{equation*} \phi_*(z)=z\frac{\int_0^1 \lambda^{p/2-1}\exp(-z\lambda/2)\mathrm{d}\lambda } {\int_0^1 \lambda^{p/2-2}\exp(-z\lambda/2)\mathrm{d}\lambda } \end{equation*} and the other is the James-Stein variant (JS) with \begin{equation*} \phi_{\mathrm{JS}}(z)=\frac{(p-2)z}{p-2+z}. \end{equation*} As in Part \ref{cor:js.2} of Theorem \ref{cor:js} and Part \ref{final.thm.2} of Theorem \ref{final.thm}, a sufficient condition for both estimators to be ordinary minimax is given by \begin{equation}\label{suff.minimax.0} 2\left(\sum_{i=1}^p \sigma_i^4/\sigma_1^4-2\right)=2\left(\sum_{i=1}^p a^{2(i-10)}-2\right)\geq p-2, \end{equation} where equality is attained at $a\approx 1.066$. Hence the inequality \eqref{suff.minimax.0} is satisfied for $a=1.01, 1.05$ but not for $a=1.25, 1.5$. Table \ref{table:final} provides the relative ordinary risk difference given by \begin{align*} 1-R(\bm{\delta}_\phi,\bm{\theta})/\mathrm{tr}\bm{\Sigma} \end{align*} at \begin{align} \bm{\theta} = m\{\mathrm{tr}\bm{\Sigma}\}^{1/2} \frac{\bm{1}_{10}}{\sqrt{10}}= m\left\{\sum\nolimits_{i=1}^{10} a^{i-1}\right\}^{1/2}\frac{\bm{1}_{10}}{\sqrt{10}} \end{align} for $m=0,2, 20,40,60,80,100$. For both estimators, we see that, for larger $m$ and $a=1.25, 1.5$, the differences are negative, which implies that these two estimators are not ordinary minimax for $a=1.25, 1.5$.
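The threshold value $a\approx 1.066$ can be checked directly; a minimal sketch, assuming SciPy is available:
\begin{verbatim}
# Solve 2*(sum_{k=0}^{9} a^(-2k) - 2) = p - 2 = 8 for p = 10.
from scipy.optimize import brentq

f = lambda a: 2 * (sum(a ** (-2 * k) for k in range(10)) - 2) - 8
print(brentq(f, 1.001, 2.0))   # approximately 1.066
\end{verbatim}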
Table \ref{table:final_2} provides the relative Bayes risk difference given by \begin{align*} 1- \bar{R}(\bm{\delta}_\phi,\tau)/\mathrm{tr}\bm{\Sigma} \end{align*} for $\tau=1,5,20,40,60,80,100$. We see that, even for $a=1.25, 1.5$, the differences are all positive, which supports the ensemble minimaxity of the two estimators. In summary, these tables support the theory presented in Theorems \ref{cor:js} and \ref{final.thm}. \begin{table} \setlength{\tabcolsep}{2pt} \caption{Ordinary Risk Difference} \begin{center} \begin{tabular}{cc|ccccccc} & $a\backslash m$ & 0 &2& 20 & 40 & 60 & 80 & 100 \\ \midrule GB & $1.01$ & $0.79$ & $0.14$ & $1.7\!\times\! 10^{-3}$ & $4.8\!\times\! 10^{-4}$ & \quad $2.5\!\times\! 10^{-4}$ & \quad$1.7\!\times\! 10^{-4}$ & \quad$1.3\!\times\! 10^{-4}$ \\ & $1.05$ & $0.75$ & $0.14$ & $1.7\!\times\! 10^{-3}$ & $4.3\!\times\! 10^{-4}$ & \quad $2.0\!\times\! 10^{-4}$ & \quad$1.2\!\times\! 10^{-4}$ & \quad$8.0\!\times\! 10^{-5}$ \\ & $1.25$ & $0.63$ & $0.19$& $1.9\!\times\! 10^{-3}$ & $2.5\!\times\! 10^{-4}$ & $-5.6\!\times\! 10^{-5}$ & $-1.7\!\times\! 10^{-4}$ & $-2.2\!\times\! 10^{-4}$ \\ & $1.5$ & $0.63$ & $0.27$ & $2.7\!\times\! 10^{-3}$ & $1.6\!\times\! 10^{-4}$ & $-3.0\!\times\! 10^{-4}$ & $-4.6\!\times\! 10^{-4}$ & $-5.4\!\times\! 10^{-4}$ \\ JS & $1.01$ & $0.80$ & $0.14$& $1.7\!\times\! 10^{-3}$ & $4.8\!\times\! 10^{-4}$ & \quad $2.5\!\times\! 10^{-4}$ & \quad$1.7\!\times\! 10^{-4}$ & \quad$1.3\!\times\! 10^{-4}$ \\ & $1.05$ & $0.79$ & $0.14 $& $1.7\!\times\! 10^{-3}$ & $4.3\!\times\! 10^{-4}$ & \quad $2.0\!\times\! 10^{-4}$ & \quad$1.2\!\times\! 10^{-4}$ & \quad$8.0\!\times\! 10^{-5}$ \\ & $1.25$ & $0.72$ & $ 0.19$ & $1.9\!\times\! 10^{-3}$ & $2.5\!\times\! 10^{-4}$ & $-5.6\!\times\! 10^{-5}$ & $-1.7\!\times\! 10^{-4}$ & $-2.2\!\times\! 10^{-4}$ \\ & $1.5$ & $0.71$ & $0.25$& $2.7\!\times\! 10^{-3}$ & $1.6\!\times\! 10^{-4}$ & $-3.0\!\times\! 10^{-4}$ & $-4.6\!\times\! 10^{-4}$ & $-5.4\!\times\! 10^{-4}$ \\ \end{tabular} \end{center} \label{table:final} \end{table}% \begin{table} \setlength{\tabcolsep}{2pt} \caption{Bayes Risk Difference} \begin{center} \begin{tabular}{cc|ccccccc} & $a\backslash \tau$ & 1 & 5 & 20 & 40 & 60 & 80 & 100 \\ \midrule GB & $1.01$ & $0.429$ & $0.139$& $0.039$ & $0.020$ & $0.013$ & $0.010$ & $0.008$ \\ & $1.05$ & $0.374$ & $0.144$ &$0.042$ & $0.021$ & $0.015$ & $0.011$ & $0.008$ \\ & $1.25$ & $0.105$ & $0.082$ &$0.038$ & $0.021$ & $0.014$ & $0.011$ & $0.009$ \\ & $1.5$ & $0.023$ & $0.022$ &$0.019$ & $0.014$ & $0.012$ & $0.010$ & $0.008$ \\ JS& $1.01$ & $0.406$& $0.137$& $0.039$ & $0.020$ & $0.014$ & $0.010$ & $0.008$ \\ & $1.05$ & $0.393$& $0.143$& $0.042$ & $0.022$ & $0.015$ & $0.011$ & $0.009$ \\ & $1.25$ & $0.122$& $0.079$&$0.034$ & $0.020$ & $0.014$ & $0.011$ & $0.009$ \\ & $1.5$ & $0.028$& $0.025$ &$0.018$ & $0.013$ & $0.010$ & $0.008$ & $0.007$ \\ \end{tabular} \end{center} \label{table:final_2} \end{table}%
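For readers who wish to reproduce the flavor of these tables, the following sketch (ours; assuming NumPy and SciPy) evaluates $\phi_*$ in closed form and Monte Carlo estimates the relative Bayes risk difference. The closed form follows from the substitution $u=z\lambda/2$, which turns both integrals in $\phi_*$ into lower incomplete gamma functions:
\begin{verbatim}
# phi_*(z) = 2*gamma(p/2, z/2)/gamma(p/2-1, z/2) in terms of lower
# incomplete gamma functions; the limits z -> 0 and z -> infinity
# recover properties H5 and H3.  Illustrative sketch only.
import numpy as np
from scipy.special import gammainc, gamma as G

p = 10

def phi_star(z):
    return 2.0 * (gammainc(p / 2, z / 2) * G(p / 2)) / (
        gammainc(p / 2 - 1, z / 2) * G(p / 2 - 1))

def phi_js(z):
    return (p - 2) * z / (p - 2 + z)

def bayes_risk_diff(phi, a, tau, n=200_000, seed=1):
    """Monte Carlo estimate of 1 - Rbar(delta_phi, tau)/tr(Sigma)."""
    rng = np.random.default_rng(seed)
    sigma2 = a ** np.arange(p - 1, -1, -1.0)    # diag(a^9, ..., a, 1)
    theta = np.sqrt(tau) * rng.standard_normal((n, p))
    x = theta + np.sqrt(sigma2) * rng.standard_normal((n, p))
    s = (x ** 2).sum(1)                         # ||x||^2
    est = (1.0 - (phi(s / sigma2[0]) / s)[:, None] * sigma2) * x
    return 1.0 - ((est - theta) ** 2).sum(1).mean() / sigma2.sum()

print(bayes_risk_diff(phi_star, a=1.5, tau=1.0))  # about 0.02, cf. table
print(bayes_risk_diff(phi_js, a=1.5, tau=1.0))    # about 0.03, cf. table
\end{verbatim}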
\section*{Acknowledgments} The authors thank O. Dial, B. Halperin, V. Manucharyan, and J. Sau for helpful discussions. This work is supported by the Center for Integrated Quantum Materials (CIQM) under NSF award 1231319 (LSL and OS) and the U.S. DOE Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under award DE-SC0001819 (PJH, MTA, AY). Nanofabrication was performed at the Harvard Center for Nanoscale Systems (CNS), a member of the National Nanotechnology Infrastructure Network (NNIN) supported by NSF award ECS-0335765. AA was supported by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW). ICF was supported by the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC Project MUNATOP, the US-Israel Binational Science Foundation, and the Minerva Foundation. \section*{Figure legends} \textbf{Fig. 1.}${}\quad$\textbf{`Fiber-optic' modes and spatially resolved current imaging in a graphene Josephson junction}. \textbf{(A, B)} Guided edge modes induced by an intrinsic band bending near the crystal boundary, for single-layer and bilayer graphene (schematic). Mode frequencies positioned outside the Dirac continuum ensure mode decoupling from the bulk states. Guided modes exist for any edge potential, no matter how weak. In a single layer, the mode velocity changes sign as the potential strength increases, see Eq.(5). In a bilayer, the modes occur in pairs [\textit{green and red curves}: dispersion for positive and negative potential strength, respectively]. \textbf{(C)} The guided modes are manifested through peaks in the density of current-carrying states at the crystal boundaries, prominent near charge neutrality (\textit{red}: $n=0.05\times10^{11}$ cm$^{-2}$; \textit{blue}: $n=2.5\times10^{11}$ cm$^{-2}$). \textbf{(D)} Schematics of superconducting interferometry in a graphene Josephson junction, which is used to image the spatial structure of current-carrying states. A flux is threaded through the junction area to produce interference patterns, as current bias $V_{sd}$ is applied through the superconducting electrodes and the voltage drop across the device is recorded. Carrier density $n$ is tuned by a gate voltage $V_{\rm b}$. \textbf{(E, F)} The recorded interference pattern is of a single-slit Fraunhofer type at high carrier density, turning into a SQUID-like interference near neutrality (colorscale is $dV/dI\,(\Omega)$ for device $BL1$). \textbf{(G, H)} Current flow, extracted from the interference data using Fourier techniques, is uniform at high carrier density and peaks at the crystal edges for carrier density close to neutrality. \\ \textbf{Fig. 2.}${} \quad$\textbf{Gate-tunable evolution of edge and bulk current-carrying states in graphene}. \textbf{(A)} Edge-dominated SQUID-like interference pattern at neutrality in device $ML1$ ($n=2.38\times 10^9$ cm$^{-2}$; colorscale is $dV/dI (\Omega)$). \textbf{(B, C)} Real-space image of current flow confined to the boundaries over a range of densities near neutrality, shown alongside the raw interference data (corresponding to the white box in (D)). \textbf{(D)} A real-space map of current flow as a function of electron concentration reveals coexistence of edge and bulk modes at intermediate densities. \textbf{(E)} Conventional Fraunhofer pattern for uniform current flow at high electron density ($n=7\times 10^{11}$ cm$^{-2}$).
\textbf{(F)} Comparison of current amplitudes along the edge (red) and bulk (blue) from the plot in panel (C). Current flow is edge-dominated near neutrality. Note that the minima for both contributions coincide in $n$, indicating that a positional edge/bulk density offset is not present.\\ \textbf{Fig. 3.}${} \quad$\textbf{Boundary currents in bilayer graphene in the presence of broken crystal inversion symmetry}. \textbf{(A)} Spatially resolved supercurrent map in device $BL2$, in a normalized plot of $J(x)/J_{max}(x)$. Edge-dominated transport occurs near charge neutrality, while an increasing bulk contribution is tuned with carrier concentration. \textbf{(B)} Comparison of current amplitudes along the edge (red) and through the bulk (blue) from panel (A). Enhanced edge currents are prominent at neutrality, whereas a uniformly distributed flow is recovered at high densities. The normal state conductance $G(e^2/h)$ {\it vs.} carrier density is also shown (black). \textbf{(C)} Measurement schematic for superconducting interferometry in a dual-gated bilayer graphene Josephson junction. A dual-gated device consists of a bilayer graphene flake on hBN with a suspended top gate, where application of voltages $V_{\rm t}$ and $V_{\rm b}$ on the top and back gates enables independent control of the transverse electric field $E$ and carrier density $n$. \textbf{(D)} Resistance map as a function of $V_{\rm b}$ and $V_{\rm t}$ for bilayer $BL4$. Enhanced resistance at high $E$ fields indicates the emergence of a gate-tunable insulating state due to broken crystal inversion symmetry. \textbf{(E)} Spatially-resolved boundary currents as a function of $E$ field. The vertical axis is a trace along the red path labeled in (B). \textbf{(F)} Sequence of Fraunhofer measurements at various locations on the current map in panel (E).\\ \textbf{Fig. 4.}${} \quad$\textbf{`Fiber-optics' theoretical model of transport in graphene}. \textbf{(A)} Real-space maps of measured current flow $J(x)$ in bilayer device $BL3$ at fixed carrier densities on the hole side, showing edge currents near the Dirac point and a continuous evolution towards bulk flow. \textbf{(B)} Theoretical plot of the spatially resolved density of states in bilayer graphene at fixed carrier densities for the edge waveguide model. For the simulation, an effective delta function potential approximation is used with the best-fit value $\lambda = 0.5 $ eV$\cdot$nm (see SOM). The band mass of bilayer graphene is taken to be $0.04\,m_e$, where $m_e$ is the electron mass.\\ \newpage \textbf{Figure 1} \begin{figure}[!h] \includegraphics[width=160mm]{Fig1.png} \end{figure} \newpage \textbf{Figure 2} \begin{figure}[!h] \includegraphics[width=160mm]{Fig2.PNG} \end{figure} \newpage \textbf{Figure 3} \begin{figure}[!h] \includegraphics[width=160mm]{Fig3.PNG} \end{figure} \newpage \textbf{Figure 4} \begin{figure}[!h] \includegraphics[width=120mm]{Fig4.PNG} \end{figure} \textbf{Materials and Methods}\\ \underline{Modeling electronic guided modes}\\ A full model of supercurrent-carrying states in our system should account, in principle, for a number of microscopic effects. This includes, in particular, the microscopic details of transport through the NS interfaces, the realistic edge potential profile due to band bending near the graphene edge, as well as the effects of disorder. Since treating all these issues simultaneously and on equal footing makes such modeling a daunting task, we resort here to some simplifications.
First, we will completely ignore the effects of induced superconductivity, focusing on the normal metallic state of pure graphene. Second, we consider a clean system and account for disorder scattering perturbatively at the end. Third, since states in a clean system, being delocalized, are capable of carrying supercurrent, we will focus on evaluating the density of states (DOS), taking it to reflect the current-carrying capacity of the system. Of course, such an approach may be questioned for disordered systems in which some states are localized, and therefore can contribute to the DOS but not to the supercurrent. However, taking into account that in a clean system all states possess a roughly similar current-carrying capacity, we adopt this approximation on the merit of its simplicity.\\ Turning to the discussion of system geometry, we note two points. First, as discussed in the main text, the problem of guided states on a half-plane near the edge $x>x_0$ can be mapped onto a similar problem on a full plane by accounting for the states in valleys $K$ and $K'$ mixing at the edge. This mapping is particularly transparent for the armchair edge, where the boundary condition for the spinor wavefunctions in the two valleys is simply $\psi_{K}+\psi_{K'}=0$. In this case, one can see that the two-valley half-plane problem is mathematically equivalent to the problem posed on a full plane for particles in just one valley, provided the line potential for the latter problem is taken to be a sum of the original edge potential and its mirror-reflected double, $V(x>x_0)\,\to\,V(|x-x_0|)$. \\ Second, states with wavelengths larger than the edge potential width can be described by a delta function approximation. In this approximation, a realistic microscopic potential $V(x)$ is replaced by a delta-function pseudopotential $\tilde V(x)=\lambda\delta(x-x_0)$, where $\lambda=\int V(x')dx'$ and $x_0$ is the edge position. For a system of width $w$ with two parallel edges positioned at $x_0=\pm w/2$, we therefore arrive at the model \begin{equation} \label{model} V(x) = \lambda \delta(x+w/2)+\lambda \delta(x-w/2), \end{equation} with $-\infty<x<\infty$. Carriers in this system are described by the massless Dirac Hamiltonian \begin{equation} \label{hamiltonian} H =H_0+V(x),\quad H_0=v\sigma_1 p_x+v\sigma_2 p_y \end{equation} with $v\approx 10^6{\rm \,m/s}$ the carrier velocity and $\sigma_{1,2}$ the pseudospin Pauli matrices. As stated above, we will use the spatially-resolved DOS for the problem (\ref{hamiltonian}) as a measure of the current-carrying capacity of the system. As justification, we note that an electron system carrying normal electric current can be understood in terms of changes in the occupancy of the states near the Fermi energy. As a result, the spatially-resolved current density will vary in the same manner as the DOS \begin{equation} N(\mu,\vec r) = \frac{dn(\vec r)}{d\mu} ,\quad n(\vec r)=\langle \psi^\dagger(\vec r)\psi(\vec r)\rangle . \end{equation} Here $n$ is the total carrier density and $\mu$ is the chemical potential. Below we evaluate the DOS as a function of position and energy, focusing on the characteristic features due to the guided modes.
\\ Taking into account that typical wavelength values of relevant electronic states, $\lambda \sim 10$ nm, are much smaller than the distance between edges, $w\sim \,1{\rm \mu m}$, we can represent the DOS in the form \begin{equation} \label{full_density} N(\mu,x) = N_0(\mu)+N_1(\mu,x-w/2)+N_1(\mu,x+w/2) \end{equation} where $N_0$ is the DOS of a uniform infinite system, \begin{equation} \label{n_bulk} N_0(\varepsilon) = \frac{|\varepsilon|}{2\pi \hbar^2v^2} \end{equation} and $N_1$ is the contribution to the DOS from a single delta-function line potential, placed at $x=0$. Below we derive the expression \begin{equation} \label{n_edge} N_1(\varepsilon,x) = \frac{4\lambda}{\pi \hbar v}\,\Im\int\frac{dp}{2\pi}\frac{p^2 e^{-2\kappa_{\varepsilon,p}|x|/\hbar}}{\kappa_{\varepsilon,p} \bigl[ 4\lambda\varepsilon+(4-\lambda^2)\hbar v\kappa_{\varepsilon,p}\bigr]} ,\quad \kappa_{\varepsilon,p} = \sqrt{p^2-(\varepsilon/\hbar v)^2} \end{equation} where the energy $\varepsilon$ is taken to have an infinitesimal positive imaginary part. In the final result for the DOS, $\varepsilon$ must be replaced by the chemical potential, $\varepsilon = \mu$. The spatial dependence described by Eq.\eqref{n_edge} is shown in Fig.~\ref{peaks}.\\ In our model, which is essentially non-interacting, the effects of screening can be included {\it ad hoc} by treating the potential strength in Eq.\eqref{model} as a function of carrier density. Since the latter is parameterized by $\mu$, we will use a simple model \begin{equation} \label{eq:screening} \lambda\rightarrow\lambda'=\frac{\lambda}{1+(|\mu|/\mu_0)^\alpha} \end{equation} where the parameter $\mu_0$ depends on microscopic details. Comparing to the data indicates that a reasonably good fit can be achieved for $\alpha \approx 2$.\\ Modeling results are presented in Fig.1(c) of the main text for energies corresponding to carrier densities $n = 0.05\cdot 10^{11} \text{cm}^{-2}$ (red curve) and $n = 2.5\cdot10^{11} \text{cm}^{-2}$ (blue curve), where we evaluated $n$ accounting for the spin and valley degeneracy in graphene. The potential strength is chosen to be $\lambda = - 1.5\,\hbar v \approx 1$ eV$\cdot$nm and the screening parameter value is $\mu_0 = 0.2\sqrt{\pi\hbar^2v^2n_0} \approx 7\,\text{meV}$, where $n_0 = 10^{11} \text{cm}^{-2}$ is the corresponding scale for density. \\ The simulation for graphene bilayer, which was used to generate Fig.1(b) and Fig. 4(b) of the main text, was carried out using an effective delta function potential approximation, as above. The Green's function expressed through the T-matrix was used to obtain the mode dispersion and DOS in a manner similar to our treatment of modes in a single layer. For the delta function strength we used the best-fit value $\lambda = 0.5 $ eV$\cdot$nm (and no screening). \underline{Microscopic derivation}\\ Here we consider long-wavelength modes for a potential line positioned at $x=0$. This problem is described by the Hamiltonian \eqref{hamiltonian} with $V(x) = \lambda \delta(x)$. For this problem we construct the Green's function, which takes full account of scattering by the potential. As is well known, the discrete spectrum of the system (in our case, the guided modes) can be conveniently expressed through the poles of the electron Green's function. Likewise, the spatially-resolved DOS is expressed as the Green's function trace.
The Green's function, in turn, can be straightforwardly evaluated using Dyson's equation and the T-matrix representation: \begin{equation}\label{eq:G} G = G_0+G_0 V G_0+G_0 V G_0 V G_0 + \dots =G_0+G_0 T G_0 \end{equation} where $G_0= (i\epsilon-H_0)^{-1}$. Assuming that the phase and amplitude of the electron wavefunction are given by a continuous function of $x$, we can express the quantity $T$ as \begin{equation}\label{eq:T_general} T(\epsilon, p_y) =\lambda\Bigl(1-\lambda\int\frac{dp_x}{2\pi\hbar}G_0(\vec p)\Bigr)^{-1} \end{equation} The continuity assumption should in practice be relaxed by a weaker assumption accounting for the phase jump of the wavefunction across the delta function potential at $x=0$ (see main text). Here, however, we will proceed with Eq.\eqref{eq:T_general} on the merit of its simplicity. Evaluating the integral in Eq.\eqref{eq:T_general} gives \begin{equation}\label{eq:T} T(\epsilon, p_y) = \lambda\Bigl(1+\frac{\lambda }{2\hbar v}\left( i\tilde\epsilon+\sigma_1\tilde p\right)\Bigr)^{-1} \end{equation} where we defined \begin{equation} \tilde\epsilon=\frac{\epsilon}{\sqrt{\epsilon^2+\hbar^2v^2p_y^2}}\qquad \tilde p=\frac{\hbar v p_y}{\sqrt{\epsilon^2+\hbar^2v^2p_y^2}} \end{equation} Here $\varepsilon$ is the Matsubara frequency, with a suitable analytic continuation $i\varepsilon\to\varepsilon+i0$ to be performed at the end.\\ The T-matrix poles give the guided mode dispersion \begin{equation}\label{eq:dispersion} \epsilon=\pm \hbar u|p_y|,\quad u=v\frac{4 \hbar^2v^2-\lambda^2}{4 \hbar^2v^2+\lambda^2} \end{equation} where the sign is given by $\pm=\text{sign}\,\lambda$. Since $|u|<v$, the energies $\epsilon=\pm \hbar u |p_y|$ are positioned, for each $p_y$ value, outside the Dirac continuum of the bulk states. This expression behaves in a qualitatively similar way to the exact dispersion derived in the main text, Eq.(1) [see Fig.1(a) of the main text]. The guided modes described by Eq.\eqref{eq:dispersion} are quasi-1D states that propagate as plane waves in the $y$ direction along the $x=0$ line and decay exponentially as evanescent waves in the transverse direction.\\ The spatially-resolved DOS can be evaluated as \begin{equation}\label{eq:n(E)} N(\epsilon,\vec r)=-\frac1{\pi}{\rm Im}\,{\rm Tr\,} G(\epsilon, \vec r,\vec r')_{\vec r=\vec r'} \end{equation} where the energy variable is analytically continued from positive imaginary to real values via $i\epsilon\to\epsilon+i0$ and the trace is taken over pseudospin variables. To proceed with our calculation, we will need the Green's function evaluated in a mixed position-momentum representation \begin{align} &G_0(\epsilon,p_y,x)=\int \frac{dp_x}{2\pi}e^{ip_x x}G_0(\epsilon,\vec p) \\\nonumber &= \frac{-i\tilde\epsilon-\sigma_2 \tilde p-i\sigma_1\text{sign}(x) }{2\hbar v} \exp\bigl(-\kappa(i\varepsilon) |x|/\hbar\bigr) \end{align} where $\kappa(i\varepsilon)=\sqrt{(\epsilon/\hbar v)^2+p_y^2}$. The trace of the equal-point Green's function in Eq.(\ref{eq:n(E)}) can then be evaluated from Eq.\eqref{eq:G} with the help of Eq.\eqref{eq:T}: \begin{equation}\label{eq:TrG} {\rm Tr\,} G(\epsilon,x'=x)=\sum_{p_y}\Biggl( \frac{\tilde\epsilon}{i\hbar v} +\frac{4\lambda \tilde p^2 e^{-2\kappa |x|/\hbar}}{\hbar v\left[\left( 2+i\lambda\tilde\epsilon\right)^2-\lambda^2\tilde p^2\right]}\Biggr) \end{equation} where the two terms represent the contributions of $G_0$ and $G_0TG_0$, respectively. \\ As a warmup, we consider the first term of \eqref{eq:TrG}.
Introducing a UV cutoff $p_0 = \varepsilon_0/\hbar v$, we evaluate the sum over $p_y$ as \begin{equation} \int_{-p_0}^{p_0}\frac{dp_y}{2\pi\hbar}\frac{\epsilon}{\sqrt{\epsilon^2+\hbar^2v^2p_y^2}}=\frac{\epsilon}{\pi \hbar v}\ln\frac{\epsilon_0}{\epsilon} . \end{equation} Performing the analytic continuation $\epsilon\to \delta-i\epsilon$, we arrive at \begin{equation} N_0(\varepsilon) = - \frac{\epsilon}{\pi^2 \hbar^2v^2}\Im\ln\frac{\epsilon_0}{\delta-i\epsilon} \end{equation} where $\delta=+0$. Taking the imaginary part, we obtain the expression in Eq.\eqref{n_bulk}.\\ Next, we proceed to evaluate the second term in Eq.\eqref{eq:TrG}. Performing the same analytic continuation, we arrive at the result in Eq.\eqref{n_edge}. The expression in Eq.\eqref{n_edge} can be conveniently analyzed by dividing the integral into two parts, taken over the domains $|p_y|>\varepsilon/\hbar v$ and $|p_y|<\varepsilon/\hbar v$, respectively. The latter contribution is particularly simple because it is governed by the pole \eqref{eq:dispersion} and can be easily evaluated, giving \begin{equation} \label{guided} N_{\rm g.w.}(\varepsilon,x)=\frac{2\epsilon\lambda }{\hbar^2vu(4-\lambda^2)} e^{-2 \sqrt{(v/u)^2-1} |x| |\epsilon|/\hbar v} \end{equation} This contribution is solely due to the guided edge mode. As illustrated in Fig.~\ref{peaks}, this term dominates the peak structure in the DOS for guided waves. \\ We used the full expression in Eq.\eqref{n_edge} to produce the spatially-resolved DOS curves shown in Fig.1(c) of the main text. In doing so, we accounted for screening, as described in Eq.\eqref{eq:screening}. Because of screening, the peak structure is more prominent at low chemical potential, and is suppressed relative to the bulk DOS at high chemical potential values.\\ \underline{The effect of disorder}\\ Here we estimate the disorder scattering rate $\gamma(k)$ for guided modes [see Eq.(1) in the main text and the accompanying discussion]. We will model edge roughness by a fluctuating delta function strength, treating the fluctuations as Gaussian white noise: \begin{equation}\label{eq:V+dV} V(x,y)=(\lambda +\delta\lambda(y))\,\delta(x) ,\quad \langle \delta\lambda(y)\delta\lambda(y')\rangle =\alpha\delta(y-y') . \end{equation} Writing the Green's function as a series in the potential $V+\delta V$, Eq.(\ref{eq:V+dV}), we have \begin{equation} G=G_0+G_0(V+\delta V)G_0+G_0(V+\delta V)G_0(V+\delta V)G_0+... \end{equation} Averaging the Green's function over disorder, we only need to account for the pair correlators $\langle \delta\lambda(y)\delta\lambda(y')\rangle$. In the non-crossing approximation, we express the disorder-averaged Green's function through a suitable self-energy \begin{equation} \langle G\rangle = G_0+ G_0(V+\Sigma)G_0+ G_0(V+\Sigma)G_0(V+\Sigma)G_0+... \end{equation} where \begin{equation}\label{eq:Sigma} \Sigma(\epsilon)=\alpha \int \frac{dp_x}{2\pi} G(\epsilon,p_y,x,x')_{x=x'=0} \end{equation} The quantity (\ref{eq:Sigma}) is complex-valued, with the imaginary part expressed through the density of states at $x=0$ as \begin{equation} {\rm Im}\,\Sigma(\epsilon)=-\pi \alpha N(\epsilon)_{x=0} \end{equation} The disorder scattering rate for the guided waves can now be found from the dispersion relation obtained from the T-matrix pole, Eq.~(\ref{eq:T_general}), which is corrected by the presence of $\Sigma$ as follows \begin{equation} \label{eq:dispersion_scatter} 1+(\lambda+\Sigma(i\epsilon))\frac{i\tilde\epsilon+\sigma_1\tilde p}{2\hbar v}=0 .
\end{equation} Here we continue to use the Matsubara notation, as in Eqs.(\ref{eq:T_general}),(\ref{eq:T}). Since the density of states scales linearly with energy, $N(\epsilon)\sim |\epsilon|$, we can solve Eq.\eqref{eq:dispersion_scatter} in the long-wavelength limit treating $\Sigma(i\epsilon)$ as a perturbation. Writing $\epsilon=\epsilon_0(p_y)+\delta\epsilon$, where $\epsilon_0=u|p_y|$ is the solution for $\Sigma=0$, we linearize in $\delta\epsilon$ to obtain \begin{equation} \delta\varepsilon = -\frac1\lambda\Bigl(1-\frac{u^2}{v^2}\Bigr)\Sigma(i\epsilon_0)|p_y| \end{equation} After analytic continuation, we obtain \begin{equation} \gamma(p_y) = \frac{\pi\alpha}{|\lambda|}\Bigl(1-\frac{u^2}{v^2}\Bigr)|p_y|N\bigl(u|p_y|\bigr)_{x=0} \end{equation} Accounting for the linear scaling $N\sim|\epsilon|$, we find that the damping rate scales as the square of $p_y$, \begin{equation} \gamma(p_y) =\frac{\lambda}{\hbar^2 v (4-\lambda^2)}p_y^2 \end{equation} at small $p_y$. A similar dependence, albeit with a different prefactor, is found at large $p_y$. From this we conclude that the modes are undamped over lengthscales $\sim\lambda^2/\xi$, where $\lambda$ is the wavelength and $\xi$ is a disorder lengthscale. Taking realistic values $\lambda\approx 10-100\,{\rm nm}$ and $\xi\approx 0.1\,{\rm nm}$, we obtain an estimate for the guided mode mean free path in the $1-10\,{\rm \mu m}$ range. These large values can be traced to the weak confinement of the waves at small $p_y$. The weak confinement results in the mode wavefunction being positioned mostly outside the confining potential, which reduces the impact of scattering. The mean free path rapidly grows with wavelength, in direct analogy with guided optical waves in weakly guiding fiber designs, where weak confinement is employed to achieve exceptionally long mean free paths. \\ \underline{Josephson junctions: Device overview}\\ We analyze five graphene Josephson junctions on hBN with widths ranging from $W=800$--$1200$ nm and lengths ranging from $L=250$--$350$ nm (see Fig. 1d for a labeled device schematic). Details on the individual sample geometries are listed in Table S1. The small $L/W$ aspect ratios place these devices in the narrow junction limit, where the critical current $I_c$ can be approximated as a phase dependent summation over many parallel 1D current channels (Equation 2 in the main text). Electrical measurements are conducted using standard lock-in techniques in a Leiden Cryogenics Model Minikelvin 126-TOF dilution refrigerator with a base temperature of 10 mK, well below the critical temperature of Al. \\ Using a dry transfer method, graphene/hBN stacks are sequentially deposited on a 300 nm thermally grown SiO$_2$ layer, which covers a doped silicon substrate functioning as a global back gate. Graphene flakes are etched to the desired geometry using a 950 PMMA A4 polymer mask ($\sim 200$ nm thick; spun at 4000 rpm) followed by an RIE O$_2$ plasma etch. Titanium/aluminum (Ti/Al) superconducting electrodes are defined on selected flakes using electron beam (ebeam) lithography on a 950 PMMA A4 resist mask, followed by thermal evaporation and liftoff in acetone. For the titanium adhesion layer, we evaporate 10 nm at a rate of 0.3 Angstrom/s. This is followed by evaporation of a 70 nm aluminum layer at a rate of 0.5 Angstrom/s at pressures in the low to mid $10^{-7}$ Torr range.
For dual-gated bilayers, suspended top gates are fabricated using a standard PMMA/MMA/PMMA trilayer resist method which leaves a 200 nm air gap between the top gate and the graphene. After using ebeam lithography to define the gates, which employs position-dependent dosage, Cr/Au (3/425 nm) gates are deposited using thermal evaporation and liftoff in acetone. To remove processing residues and enhance quality, devices were current annealed in vacuum at dilution refrigerator temperatures. We note that edge currents were detected both in current-annealed and in intrinsically high quality non-annealed devices; typically, the appearance of edge currents coincided with the occurrence of Fabry-Perot interference in the ballistic transport regime. All five graphene Josephson junctions exhibit similar transport behavior. Additional data sets are provided in the Supplementary Figures.\\ \underline{Fourier method for extraction of supercurrent density distribution}\\ In a magnetic field $B$, the critical current $I_c(B)$ through a Josephson junction equals the magnitude of the complex Fourier transform of the current density distribution $J(x)$: \begin{equation} I_c(B)=|\mathcal{I}_c(B)|=\left| \int_{-\infty}^{\infty} J(x) \exp(2\pi i(L+l_{Al})Bx/\Phi_0) dx \right| \end{equation} where $x$ is the dimension along the width of the superconducting contacts (labeled in Fig. 1d), $L$ is the distance between contacts, $l_{Al}$ is the magnetic penetration length (due to a finite London penetration depth in the superconductor and flux focusing), and $\Phi_0=h/2e$ is the flux quantum. Relevant in the narrow junction limit, where current is only a function of one coordinate, this expression provides a simple and concise description of our system. We employ Fourier techniques introduced by Dynes and Fulton to extract the real space current density distribution from the magnetic interference pattern $I_c(B)$. By expressing the current density as $J(x)=J_{s}(x)+J_{a}(x)$, where $J_{s}(x)$ and $J_{a}(x)$ are the symmetric and antisymmetric components, the complex critical current can be rewritten as: \begin{equation} \mathcal{I}_c(B)= \int_{-\infty}^{\infty} J_s(x)\cos(2\pi (L+l_{Al})Bx/\Phi_0) dx + i\int_{-\infty}^{\infty} J_a(x)\sin(2\pi (L+l_{Al})Bx/\Phi_0) dx \end{equation} We calculate the symmetric component of the distribution, the relevant quantity for analyzing edge versus bulk behavior, since the antisymmetric component goes to zero in the middle of the sample. For symmetric solutions, $\mathcal{I}_c(B)$ is purely real. To reconstruct $\mathcal{I}_c(B)$ from the measured critical current, the sign of $I_c(B)$ is reversed for alternating lobes of the Fraunhofer interference patterns. The extracted supercurrent distribution is expressed as an inverse Fourier transform: \begin{equation} J_s(x)\approx\int_{-\infty}^{\infty} \mathcal{I}_c(B) \exp(2\pi i(L+l_{Al})Bx/\Phi_0) dB \end{equation} Because $I_c(B)$ is only nonzero over a rectangular window dictated by the finite scan range $B_{min}<B<B_{max}$, the distribution extracted numerically is given by the convolution of $J(x)$ with the sinc function. To reduce artifacts due to the convolution, we employ a raised cosine filter to taper the window at the endpoints of the scan.
Explicitly, \begin{equation} J_s(x)\approx\int_{B_{min}}^{B_{max}} \mathcal{I}_c(B) \cos^n(\pi B /2 L_B) \exp(2\pi i(L+l_{Al})Bx/\Phi_0) dB \end{equation} where $n=0.5$--$1$ and $L_B=(B_{max}-B_{min})/2$ is the magnetic field range of the scan (a minimal numerical sketch of this windowed inversion is given at the end of this supplement).\\ \underline{Gaussian fits to extract edge state widths}\\ To extract a length scale for the width of the edge currents near the Dirac point, we fit the experimental supercurrent density distribution $J_c(x)$ to the Gaussian function \begin{equation} J_c^G(x)= b\left( \exp\left( \frac{-(x-a)^2}{c} \right) +\exp\left( \frac{-(x+a)^2}{c} \right) \right) \end{equation} where $a$ determines the spatial peak offset, $b$ the peak height, and $c$ the peak width. For the data in Fig. 1H, the fit parameters are $a=0.515$, $b=8.8$, and $c=0.017$. The effective edge current width, given by the Gaussian full width at half maximum $x_{FWHM}=2\sqrt{c\cdot\ln2}$, is 220 nm.\\ \underline{Edge versus bulk amplitudes}\\ To more quantitatively assess the evolution of edge and bulk currents with electronic carrier density $n$, we plot line cuts of the individual contributions (see Fig. 2f and 3b). These are given by: \begin{equation} J_c^{edge}(n)=\sum_{x_i=-x_W}^{-x_W+\epsilon_1}\frac{J_c(x_i,n)}{N_1} \quad \mathrm{and} \quad J_c^{bulk}(n)=\sum_{x_i=-\epsilon_2}^{\epsilon_2}\frac{J_c(x_i,n)}{N_2} \end{equation} for a graphene flake whose full width spans from $-x_W$ to $x_W$. $J_c^{edge}(n)$ is the spatially averaged current amplitude over a small window of width $\epsilon_1$ from the edge of the flake. Similarly, $J_c^{bulk}(n)$ is the spatially averaged current amplitude over a strip of width $2\epsilon_2$ around the center of the flake. $N_1=\epsilon_1/x_{step}$ and $N_2=\epsilon_2/x_{step}$, where $x_{step}$ is the distance between data points (determined by the magnetic field range of the scan). For example, for the plots in Fig. 2F, $x_W=405$ nm, $\epsilon_1= 29$ nm, and $\epsilon_2=87$ nm.\\ Based on the edge versus bulk current profiles, one may infer whether edge doping is the dominant cause of edge currents in our devices. In the presence of edge doping, the edge versus bulk contributions should be reversed for opposite polarities of bulk carriers (for example, edge dominated behavior at high densities on the electron side and bulk dominated behavior at high densities on the hole side), which is not consistent with the data. Bulk-dominated or flat distributions appear fairly consistently at both high electron and high hole doping. As a second test, one can track the edge versus bulk contributions through the Dirac point to detect an offset in gate voltage between the charge neutrality point at the edge versus in the bulk. We did not detect a positional density offset substantial enough to account for the large edge currents in these devices (Fig. 2F).\\ \underline{Bayesian method for extraction of supercurrent density distribution}\\ The critical current as a function of the magnetic field, $I_c(B)$, is related to the current density through the junction, $J_c(x)$, as \begin{equation}\label{eq:ij} I_c(B) = \int_{-\frac{W}{2}}^{\frac{W}{2}} dx\,J_c(x) \exp \left( 2\pi ix LB/\Phi_0 \right), \end{equation} with $L$ and $W$ the length and width of the junction, and $\Phi_0=h/2e$ the superconducting flux quantum.\\ In the measured $|I_c(B)|$ all information about its complex phase is lost, so the problem of determining the current density does not have a unique solution.
Using the method of Dynes and Fulton (DF), a unique solution can be found under the assumption of a symmetric current distribution, $J_c(x)=J_c(-x)$. In practice, however, disorder and inhomogeneities in the junction will lead to asymmetric current densities. Additionally, since experiments are performed over a finite range of magnetic fields, there is a cutoff in the current density resolution. Neither this finite resolution nor the experimental uncertainties are taken into account in the DF method, meaning it can only provide a qualitative estimate of $J_c(x)$.\\ To gain a more quantitative understanding, we instead ask which current densities $J_c(x)$ produce the same critical current $I_c(B)$. We answer this question by performing Bayesian inference to obtain the posterior distribution of the current density, given the measured critical current. In our case, Bayes' rule reads: \begin{equation}\label{eq:bayes} P ( J_c ; |I_c| ) = \frac{ P( |I_c| ;J_c ) P (J_c )}{ P (|I_c| )}. \end{equation} Here, $ P ( J_c ; |I_c| )$ is the posterior distribution of the current density, the quantity we want to calculate, while $ P (J_c )$ is its prior distribution. The likelihood function $ P( |I_c| ;J_c )$ indicates the compatibility of the measured critical current with a given current density: \begin{equation} P( |I_c| ;J_c ) = \exp \left[ -\frac{(|I_c| - |I_c^f|)^2 }{2\varepsilon^2} \right], \end{equation} where $I_c^f$ is the current obtained from $J_c$ by using Eq.~\eqref{eq:ij}, $I_c$ is the measured current, and $\varepsilon$ is the measurement error. The factor $P (|I_c| )$ is the same for all current densities, meaning it does not enter into determining their relative probabilities.\\ The experimental current profiles are extracted from scans of the differential resistance as a function of DC current bias and magnetic field, $dV/dI(I_{\rm DC}, B)$. Within the same scan, for some field values $dV/dI$ has a clear maximum, while for others it monotonically increases towards its normal state value. We extract the critical current as the value $I_{\rm DC}$ at which the differential resistance is $x \times {\rm max}\,dV/dI$, choosing a value of $x\lesssim 1$. This selects points close to the maxima at field values where they are well defined, and points close to where the differential resistance reaches its normal state value otherwise. The uncertainty is obtained in the same fashion, by choosing a slightly smaller cutoff.\\ We maximize the likelihood function using a Monte Carlo sampling algorithm \cite{pymc}. To achieve high resolution of the current density without a significant increase in the dimensionality of the sampling space, we expand $J_c(x)$ as \begin{equation}\label{eq:jc} J_c(x) = \sum_{n=0}^{N} A_n \cos(2\pi n x/L) \end{equation} and enforce $J_c(x)>0$ for all $x$. The $A_n$ coefficients determine the shape of the distribution, which in Eq.~\eqref{eq:jc} is assumed to be symmetric, $J_c(x)=J_c(-x)$. Using an asymmetric form would typically lead to a critical current which shows node lifting -- the minima of $I_c(B)$ have nonzero values. While this feature is present in the measured critical current, it can be accounted for by factors other than an asymmetric current distribution \cite{Heida1998}, such as relatively small aspect ratios ($\sim$5) and a non-sinusoidal current-phase relationship arising from a large junction transparency.
Using a symmetric $J_c$ avoids this ambiguity, and has the additional advantage of providing a more direct comparison between our method and that of Dynes and Fulton.\\ The likelihood function is maximized by allowing the $A_n$ coefficients to vary at each Monte Carlo step. As $N$ is increased, the posterior distribution of the current density widens, an indication of over-fitting. This increase in uncertainty serves as a criterion for choosing $N$, which for the typical dataset is between 4 and 8. The priors of $A_n$ are set to the uniform distribution on $[-\max(I_c), \max(I_c)]$.\\ An example of our method is shown in Fig.~\ref{fig:jcic}, using $N=5$. The current density is peaked at the edges of the sample, a feature also recovered in the DF approach. The corresponding critical current is in good agreement with the measured one, with the exception of the regions close to the nodes. Fig.~\ref{fig:jcic} indicates that the supercurrent through the junction flows mainly along its edges. As a further test of the edge state contribution, we modify the functional form of the current density in Eq.~\eqref{eq:jc} to explicitly allow for edge states. We add delta functions to the current density at the edges of the sample, $J_c(x) \rightarrow J_c(x) + d_L \delta(x+W/2) + d_R \delta (x-W/2)$, and estimate the contribution of edge states as the ratio of $d_L+d_R$ to the total current density $J_c^{\rm tot}$. As the carrier density approaches zero, a significant fraction of the supercurrent is carried by the edge states, with $(d_L+d_R)/J_c^{\rm tot} \simeq 0.45$ (see Fig.~\ref{fig:deltas}). \newpage \textbf{Supplementary Figures}\\ \begin{figure*}[h!] \begin{minipage}[h!]{0.49\linewidth} \center{\includegraphics[width=0.9\linewidth]{02.png}} \end{minipage} \hfill \begin{minipage}[h!]{0.49\linewidth} \center{\includegraphics[width=0.9\linewidth]{01.png}} \end{minipage} \caption{} \label{peaks} \end{figure*} \textbf{Fig. S1}. The excess contribution to the spatially-resolved DOS near a line delta function potential, $\Delta N(\epsilon,x)=N(\epsilon,x)-N_0(\epsilon)$, vs. distance from the delta function. Subtracted is the bulk contribution $N_0$ given in Eq.(S5). The left panel shows the full excess contribution obtained from Eq.(S6); the right panel shows the contribution solely due to the guided modes, Eq.(S18). The two contributions are nearly identical, confirming that the peak in the DOS can serve as a telltale of the guided modes. Parameter values used: $\lambda=-1.5\hbar v$, energies $\epsilon=\epsilon_0$, $0.5\epsilon_0$, $0.1\epsilon_0$, where $\epsilon_0=\pi\hbar \sqrt{\pi n_0}$, $n_0 = 10^{11}\,\text{cm}^{-2}$ (higher peaks correspond to higher energy $\epsilon$ values). \newpage \begin{center} \begin{figure}[t] \includegraphics[width=0.8\textwidth]{Fig2supp.PNG} \caption{} \end{figure} \end{center} \textbf{Fig. S2}. \textbf{(A)} Edge-dominated SQUID-like interference pattern at neutrality in device $ML1$ ($n=2.38\times 10^9$ cm$^{-2}$; colorscale is $dV/dI (\Omega)$). From Fig. 2A in the main text. \textbf{(B)} Real-space image of current flow confined to the boundaries, from the data in part (A). \textbf{(C)} Conventional Fraunhofer pattern for uniform current flow at high electron density ($n=7\times 10^{11}$ cm$^{-2}$). From Fig. 2E in the main text. \textbf{(D)} Real-space image of current flow, from the data in part (C). \newpage \begin{center} \begin{figure}[t] \includegraphics[width=1.0\textwidth]{Data1.png} \caption{} \end{figure} \end{center} \textbf{Fig. S3}.
An example of our method is shown in Fig.~\ref{fig:jcic}, using $N=5$. The current density is peaked at the edges of the sample, a feature also recovered in the DF approach. The corresponding critical current is in good agreement with the measured one, with the exception of the regions close to the nodes. Fig.~\ref{fig:jcic} indicates that the supercurrent through the junction flows mainly along its edges. As a further test of the edge state contribution, we modify the functional form of the current density in Eq.~\eqref{eq:jc} to explicitly allow for edge states. We add delta functions to the current density at the edges of the sample, $J_c(x) \rightarrow J_c(x) + d_L \delta(x+W/2) + d_R \delta (x-W/2)$, and estimate the contribution of edge states as the ratio of $d_L+d_R$ to the total current density $J_c^{\rm tot}$. As the carrier density approaches zero, a significant fraction of the supercurrent is carried by the edge states, with $(d_L+d_R)/J_c^{\rm tot} \simeq 0.45$ (see Fig.~\ref{fig:deltas}). \newpage \textbf{Supplementary Figures}\\ \begin{figure*}[h!] \begin{minipage}[h!]{0.49\linewidth} \center{\includegraphics[width=0.9\linewidth]{02.png}} \end{minipage} \hfill \begin{minipage}[h!]{0.49\linewidth} \center{\includegraphics[width=0.9\linewidth]{01.png}} \end{minipage} \caption{} \label{peaks} \end{figure*} \textbf{Fig. S1}. The excess contribution to the spatially-resolved DOS near a line delta function potential, $\Delta N(\epsilon,x)=N(\epsilon,x)-N_0(\epsilon)$, vs. distance from the delta function. Subtracted is the bulk contribution $N_0$ given in Eq.~(S5). The left panel shows the full excess contribution obtained from Eq.~(S6); the right panel shows the contribution solely due to the guided modes, Eq.~(S18). The two contributions are nearly identical, confirming that the peak in the DOS can serve as a telltale of the guided modes. Parameter values used: $\lambda=-1.5\hbar v$, energies $\epsilon=\epsilon_0$, $0.5\epsilon_0$, $0.1\epsilon_0$, where $\epsilon_0=\pi\hbar \sqrt{\pi n_0}$, $n_0 = 10^{11}\,\text{cm}^{-2}$ (higher peaks correspond to higher energy $\epsilon$ values). \newpage \begin{center} \begin{figure}[t] \includegraphics[width=0.8\textwidth]{Fig2supp.PNG} \caption{} \end{figure} \end{center} \textbf{Fig. S2}. \textbf{(A)} Edge-dominated SQUID-like interference pattern at neutrality in device $ML1$ ($n=2.38\times 10^9$ cm$^{-2}$; colorscale is $dV/dI (\Omega)$). From Fig. 2A in main text. \textbf{(B)} Real-space image of current flow confined to the boundaries, from data in part (A). \textbf{(C)} Conventional Fraunhofer pattern for uniform current flow at high electron density ($n=7\times 10^{11}$ cm$^{-2}$). From Fig. 2E in main text. \textbf{(D)} Real-space image of uniform current flow across the junction, from data in part (C). \newpage \begin{center} \begin{figure}[t] \includegraphics[width=1.0\textwidth]{Data1.png} \caption{} \end{figure} \end{center} \textbf{Fig. S3}. \textbf{(A)} Sequence of Fraunhofer measurements in bilayer device $BL3$ for the current maps in panels (B) and (C), shown in plots of $dV/dI (\Omega)$ as a function of magnetic field $B$ (mT) and current bias $I_{DC}$ (nA). \textbf{(B)} Real-space image of current flow $J(x)$ as a function of carrier density on the hole side, showing edge currents near the Dirac point and a continuous evolution of bulk flow. \textbf{(C)} Individual line cuts of $J(x)$ plotted from (B). This is the data set in Fig. 4A, plotted with a properly scaled vertical axis (supercurrent density, nA/$\mu$m). \newpage \begin{center} \begin{figure*}[htb] \includegraphics[width=0.44\textwidth]{b15_5_ix_sym.pdf} \includegraphics[width=0.44\textwidth]{b15_5_ib_sym.pdf} \caption{\label{fig:jcic}} \end{figure*} \end{center} \textbf{Fig. S4}. Bayesian estimation of the supercurrent distribution. Posterior distribution of the current density near the Dirac point in device \textit{BL3} (left panel), and the corresponding critical current (right panel). The values of $I_c$ obtained from the posterior distribution (orange) are in good agreement with the measured critical current (blue). \newpage \begin{center} \begin{figure}[t] \includegraphics[width=0.44\textwidth]{deltas.pdf} \caption{\label{fig:deltas}} \end{figure} \end{center} \textbf{Fig. S5}. Ratio of the supercurrent carried by the edge states as a function of carrier density in device \textit{BL3} over the density range $n\sim -1$ to $-2.9 \times 10^{11}$ cm$^{-2}$. Each scan corresponds to a Fraunhofer pattern, with Fig.~\ref{fig:jcic} showing the 8$^{\rm th}$ scan. (Increasing scan number corresponds to decreasing carrier density.) \newpage \textbf{Supplementary Tables}\\ \begin{table}[h] \begin{tabular}{|c|c|c|c|c|} \hline Device & L (nm) & W (nm) & Aspect ratio, L/W & Contact width (nm)\\ \hline \hline BL1 & 250 & 1200 & 0.208 & 400\\ ML1 & 300 & 1200 & 0.25 & 300 \\ BL2 & 300 & 800 & 0.375 & 400 \\ BL3 & 350 & 1200 & 0.292 & 600\\ BL4 & 250 & 900 & 0.278 & 400\\ \hline \end{tabular} \end{table} \textbf{Table S1}. List of device dimensions for the graphene Josephson junctions studied in this work. $L$ and $W$ refer to junction length and width, respectively, as labeled in Fig. 1d of the main text. Contact width refers to the size of the superconducting Ti/Al electrodes in the direction perpendicular to $W$. BLx and MLx refer to bilayer and monolayer graphene devices, respectively. \newpage
\section*{INTRODUCTION} \label{sec:introduction} Polynomials in $d$ noncommuting indeterminates can naturally be evaluated on $d$-tuples of square matrices of any size. The resulting function is graded (tuples of $n\times n$ matrices are mapped to $n\times n$ matrices) and preserves direct sums and similarities. Along with polynomials, noncommutative rational functions and power series, the convergence of which has been studied for example in \cite{book}, \cite{pop1}, \cite{pop2}, serve as prototypical examples of a more general class of functions called \emph{noncommutative functions}. The theory of noncommutative functions finds its origin in the 1973 work of J. L. Taylor \cite{jtaylor}, who studied the functional calculus of noncommuting operators. Roughly speaking, noncommutative functions are to polynomials in noncommuting variables as holomorphic functions from complex analysis are to polynomials in commuting variables. Noncommutative functions are classically defined on domains sitting inside of a graded space of $d$-tuples of square matrices which is closed under direct sums. These matrices are usually over the complex numbers, but much of the theory works for matrices over a general module over a commutative ring. See the book by D. S. Kaliuzhnyi-Verbovetskyi and V. Vinnikov \cite{book} for a comprehensive, foundational treatment in this generality. In the complex case, for example, this means that a (matricial) noncommutative function is defined on a domain $D\subset M^d:=\bigsqcup_{n=1}^{\infty} M_n^d,$ where $M_n$ is the space of $n \times n$ complex matrices, and $D$ is assumed to be open in the Euclidean topology at each level and closed under direct sums: $x\in D$ at level $n$ and $y\in D$ at level $m$ implies $x\oplus y \in D$ at level $n+m$. A noncommutative function on $D$ is a graded function $f:D \rightarrow M^r$ which preserves direct sums and similarities: $x,y \in D$ and $s$ invertible with $s^{-1}xs \in D$ implies $f(x\oplus y)=f(x)\oplus f(y)$ and $f(s^{-1}xs)=s^{-1}f(x)s.$ In this note, we consider noncommutative functions defined on a domain $\Omega \subset B(\mathcal{H})^d$ for an infinite dimensional separable Hilbert space $\mathcal{H}.$ Noncommutative polynomials, rational functions, and power series may again be naturally evaluated at operator tuples in a suitable domain inside of $B(\mathcal{H})^d.$ With this point of view of a noncommutative function, we are no longer considering a space of infinitely many disjoint levels, but instead are working with a \emph{complete} space. This should be seen as a type of \emph{completion} of the classical matricial noncommutative setting. In this operatorial setting, noncommutative functions are still defined to be direct sum-preserving, but since the domain is no longer graded, we need to make identifications of $\mathcal{H}$ with countable direct sums of $\mathcal{H}$ via unitary equivalence. The precise definitions and further discussion will be given in Section \ref{sec: prelim}. Many foundational properties and formulas from the matricial theory, such as those found in the work of Helton, Klep, and McCullough \cite{HKM}, have analogues in this setting. We give their formulations and proofs in Section \ref{sec: foundational}. For example, a standard derivative formula now takes the form \[f\left(s^{-1}\begin{bmatrix} x & h \\ 0 & x \end{bmatrix}s\right)=s^{-1}\begin{bmatrix} f(x) & Df(x)[h] \\ 0 & f(x) \end{bmatrix}s,\] where $s:\mathcal{H} \rightarrow \mathcal{H} \oplus\mathcal{H}$ is linear and invertible.
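For a concrete instance of this formula (a standard computation, recorded here for orientation), take $f(x)=x^2$ and suppress the identification $s$: squaring the upper triangular block matrix gives \[\begin{bmatrix} x & h \\ 0 & x \end{bmatrix}^2=\begin{bmatrix} x^2 & xh+hx \\ 0 & x^2 \end{bmatrix},\] so that $Df(x)[h]=xh+hx$, the expected noncommutative differential of the square.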
In the interest of clarity, when dealing with noncommutative functions on operator domains inside of $B(\mathcal{H})^d,$ we will use the abbreviation NC, and use the lower case nc for the matricial noncommutative setting. Agler and \mc Carthy proposed a definition similar to ours for NC functions on operator domains in \cite{amop} and gave a set of equivalent conditions for when such functions are approximable by polynomials and have a realization formula on a polynomial polyhedron. Our main results are (global) inverse and implicit function theorems in the operatorial noncommutative setting. It is here that completeness and the structure and topologies of $B(\mathcal{H})$ play a key role. The inverse function theorem of J. E. Pascoe \cite{james} gave necessary and sufficient conditions for a matricial nc function to be invertible in terms of injectivity of its derivative map at all points. We prove similar results for operator NC functions. In \cite{akv15}, quite general matricial nc results on the inverse and implicit function theorems are obtained in the setting of operator spaces and nilpotent matrices. In that paper, the authors exploit the existence of a natural ``uniformly-open'' topology and consider nc functions that are locally bounded in this topology and also have a completely bounded and invertible derivative. As we are not working with functions on graded domains in this note, such a topology is unavailable to us in our study of operator NC functions. In further contrast to the work in \cite{akv15} and other articles on noncommutative inversion, we give a sufficient condition guaranteeing the invertibility of the derivative map of an NC function at all points in a connected domain. Indeed, Theorem \ref{bbthm} states that for an NC function $f$ on a connected domain in $B(\mathcal{H})^d$, if the derivative $Df$ satisfies a noncommutative bounded below condition (see Definition \ref{ncbbp}) and we assume the existence of just one point $a$ in the domain such that $Df(a)$ is invertible, then we may conclude the invertibility of $Df(x)$ for \emph{every} $x$ in the domain. This result provides the basis for the inverse and implicit function theorems \ref{inverse} and \ref{implicit}. Finally, we end this note by considering operator NC functions that are continuous (in a precise sense detailed in Section \ref{sec: strongshift}) in the \emph{strong operator topology}. In fact, this allows us to further weaken the assumptions on the derivative maps. There does not appear to be much in the literature on the connection between noncommutative inversion results and continuity in the strong operator topology, but it is reasonable to impose this extra continuity condition on NC functions since the examples of interest in most applications (such as polynomials, rational functions, etc.) are strongly continuous on appropriately defined norm-bounded sets. In Section \ref{sec: strongshift}, ideas from noncommutative dilation theory are used to prove certain convergence and compactness-like results in the strong operator topology that interact well with noncommutative function theory. It is proved as a consequence of these results, in Theorem \ref{strongbb} and Corollary \ref{strongbbcor}, that injective strongly continuous NC functions, on suitable domains, have everywhere bounded below derivative. Therefore, in the operator setting, and especially in the case of strong operator continuity, we are able to obtain global inversion-type theorems with minor hypotheses on the derivative.
\section{PRELIMINARIES} \label{sec: prelim} In this section, we elaborate on our general setting and provide definitions and examples of our main objects of study: NC operator domains and functions. Operator noncommutative functions are to be defined on domains sitting inside of $B(\mathcal{H})^d$, where $\mathcal{H}$ is an infinite dimensional separable Hilbert space over $\mathbb{C}$ and $B(\mathcal{H})$ is the Banach space of bounded linear operators on $\mathcal{H}$ equipped with the operator norm. Elements of $B(\mathcal{H})^d$ will sometimes be written as $d$-tuples with superscripts such as $(x^1,\ldots, x^d).$ We equip $B(\mathcal{H})^d$ with the maximum norm $$\Vert x\Vert := \max\{\Vert x^1\Vert, \ldots, \Vert x^d\Vert \},$$ which induces the product topology on $B(\mathcal{H})^d$ with respect to the norm topology on $B(\mathcal{H})$ and turns $B(\mathcal{H})^d$ into a complex Banach space. The direct sum of $l$ copies of the Hilbert space $\mathcal{H},$ for $l\in \mathbb{N} \cup \{\infty\},$ will be denoted $\mathcal{H}^{(l)}$. Direct sums of operators will often be written as a diagonal matrix: if $x_1, x_2, \ldots$ is a finite or countably infinite sequence of operators in $B(\mathcal{H})$ of length $l\in \mathbb{N} \cup \{\infty\},$ we will write the direct sum operator $\bigoplus_{i=1}^l x_i$ as the diagonal matrix $$\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}: \mathcal{H}^{(l)} \rightarrow \mathcal{H}^{(l)}.$$ Operations on $B(\mathcal{H})^d$ are defined component-wise: for $L\in B(\mathcal{H})$ and $x\in B(\mathcal{H})^d,$ define $$L(x^1,\ldots, x^d):=(Lx^1,\ldots, Lx^d) \,\,\,\, \text{and} \,\,\,\, (x^1,\ldots, x^d)L:=(x^1L,\ldots, x^dL).$$ Similarly, if $s: \mathcal{H} \rightarrow \mathcal{H}^{(l)}$ is an invertible linear map and $z\in B(\mathcal{H}^{(l)})^d$, we define $$s^{-1}zs:=(s^{-1}z^1s,\ldots,s^{-1}z^ds).$$ Direct sums of operator tuples are also defined component-wise. If $x_1, x_2, \ldots$ is a finite or countably infinite sequence of elements of $B(\mathcal{H})^d$ of length $l\in \mathbb{N} \cup \{\infty\},$ we define their direct sum to be the element of $B(\mathcal{H}^{(l)})^d$ given by $$\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}:= \left(\begin{bmatrix} x_{1}^1 & & \\ & x_{2}^1 & \\ & & \ddots \end{bmatrix},\ldots, \begin{bmatrix} x_{1}^d & & \\ & x_{2}^d & \\ & & \ddots \end{bmatrix}\right).$$ Expressions such as $$\begin{bmatrix} x & y \\ z & w \end{bmatrix}$$ for $x,y,z,w \in B(\mathcal{H})^d$ are similarly defined. We say a subset $\Omega$ of $B(\mathcal{H})^d$ is \emph{unitarily invariant} if whenever $x\in \Omega$ and $u \in B(\mathcal{H})$ is a unitary operator, then $u^*xu \in \Omega.$ In what follows, the interior of a set is with respect to the norm topology on $B(\mathcal{H})^d$. \begin{definition} \label{ncd} A set $\Omega\subset B(\mathcal{H})^d$ is called an \emph{NC domain} if there exists a sequence $\{\Omega_k\}_{k=1}^{\infty}$ of subsets of $\Omega$ with the following properties: \begin{enumerate} \item $\Omega_k \subset \text{int}\,\Omega_{k+1}$ for all $k$ and $\Omega=\bigcup_{k=1}^{\infty} \Omega_k$. \item Each $\Omega_k$ is bounded and unitarily invariant.
\item Each $\Omega_k$ is closed under countable direct sums: If $x_n$ is a sequence in $\Omega_k$ of length $l\in \mathbb{N}\cup \{\infty\}$, then there exists a unitary $u:\mathcal{H}\rightarrow \mathcal{H}^{(l)}$ such that \begin{align} \label{exhaustds} u^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}u \in \Omega_k.\end{align} \end{enumerate} \end{definition} NC domains are open subsets in the norm topology of $B(\mathcal{H})^d.$ Note that by unitary invariance of each level $\Omega_k,$ given a finite or countably infinite sequence $x_n$ in $\Omega_k$ of length $l,$ as soon as (\ref{exhaustds}) holds for some unitary $u:\mathcal{H}\rightarrow \mathcal{H}^{(l)},$ it will in fact hold for \emph{all} unitaries $v:\mathcal{H}\rightarrow \mathcal{H}^{(l)}$ by considering $u^{-1}v$. A large supply of examples of operator NC domains can be given as follows. Let $\delta$ be an $I\times J$ matrix of polynomials in $d$ noncommuting variables (i.e. a matrix whose entries are elements of the free associative algebra $\mathbb{C}\langle x^1,\ldots, x^d \rangle$). Define $$B_{\delta}:=\{x\in B(\mathcal{H})^d : \Vert \delta(x)\Vert<1 \},$$ where the norm is taken in $B(\mathcal{H}^{(J)},\mathcal{H}^{(I)}).$ Important concrete examples take this form for particular choices of $\delta$. For example, the noncommutative polydisk $\{x\in B(\mathcal{H})^d: \Vert x\Vert <1\}$ in $B(\mathcal{H})^d$ may be realized as a $B_{\delta}$ for the diagonal matrix $$\delta(x^1,\ldots, x^d)= \begin{bmatrix} x^1 & & \\ & \ddots & \\ & & x^d \end{bmatrix}.$$ The noncommutative operatorial ball $$\{x\in B(\mathcal{H})^d : \Vert x^1(x^1)^* +\dots + x^d(x^d)^* \Vert^{1/2} <1 \}$$ is a $B_{\delta}$ for the row matrix $\delta(x)=[x^1 \cdots \, x^d]$. To see that any $B_{\delta}$ is in fact an NC domain according to Definition \ref{ncd}, one may take the exhausting sequence to be \begin{align}\label{bdelta} \Omega_k=\{x\in B(\mathcal{H})^d : \Vert \delta(x)\Vert\leq 1-1/k\}\cap \{x \in B(\mathcal{H})^d : \Vert x\Vert \leq k\}.\end{align} It is immediately checked that $\{\Omega_k\}$ has all of the required properties.
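For instance, the closure under countable direct sums can be seen as follows (we record this routine check for orientation): since the entries of $\delta$ are polynomials, $\delta$ respects direct sums (up to unitary permutations of coordinates) and satisfies $\delta(u^{-1}zu)=u^{-1}\delta(z)u$ for amplifications of the unitary $u$, so for a sequence $x_n$ in $\Omega_k$ and any unitary $u:\mathcal{H}\rightarrow \mathcal{H}^{(l)}$, \[ \left\Vert \delta\left(u^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}u\right)\right\Vert =\sup_n \Vert \delta(x_n)\Vert \leq 1-1/k \qquad \text{and} \qquad \left\Vert u^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}u\right\Vert =\sup_n \Vert x_n\Vert \leq k, \] so the conjugated direct sum again lies in $\Omega_k$.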
Another example of an NC domain is the set of invertible elements of $B(\mathcal{H}),$ where one may use the exhausting sequence $\Omega_k=\{x \in B(\mathcal{H}): \Vert x\Vert \leq k, \Vert x^{-1}\Vert \leq k\}.$ Let us make a few remarks about Definition \ref{ncd}. Our notion of operator NC domain using exhausting sequences is a way to reasonably think of the (open) domains as being closed under countably infinite direct sums while still providing a sufficiently large class of examples. Even for bounded domains, it will rarely be the case that one may take an arbitrary sequence in the domain and conclude that its direct sum (conjugated by a suitable unitary) will remain in the domain. Indeed, this fails even for the open unit ball in $d=1$: consider the sequence $(1-1/n)1_{\mathcal{H}}$, whose direct sum has norm exactly $1$. When we restrict to sequences contained in a fixed level of an exhaustion as in Definition \ref{ncd}, however, it is a much less stringent requirement for (\ref{exhaustds}) to hold. We want to consider functions which act appropriately on NC domains. Namely, we make the following definition of an operator NC function. \begin{definition} \label{ncf} Let $\Omega \subset B(\mathcal{H})^d$ be an NC domain. We say a function $f:\Omega \rightarrow B(\mathcal{H})^r$ is an \emph{NC function} if it preserves direct sums in the sense that whenever $x,y \in \Omega$ and whenever $s:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ is a bounded invertible linear map with $$s^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}s \in \Omega,$$ then $$f\left(s^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}s\right)=s^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}s.$$ \end{definition} As a consequence of Lemma \ref{L}, proved in Section \ref{sec: foundational}, NC functions on operator domains preserve intertwinings, just as in the matricial nc theory: if $L\in B(\mathcal{H})$ and $Lx=yL,$ then $Lf(x)=f(y)L.$ From this observation, it follows that whenever $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ is NC and $\{\Omega_k\}$ is an exhausting sequence of $\Omega$ as in Definition \ref{ncd}, then each $f(\Omega_k)$ is norm-bounded. In particular, NC functions are automatically locally bounded. To see this, take a sequence $x_n$ in a fixed $\Omega_k$. There is a unitary $u:\mathcal{H} \rightarrow \mathcal{H}^{(\infty)}$ such that $x:=u^{-1}[\bigoplus x_n ] u \in \Omega_k.$ Define $\Gamma_n :\mathcal{H}^{(\infty)}\rightarrow \mathcal{H}$ to be projection onto the $n$th component and let $L_n:=\Gamma_nu.$ By definition of $L_n,$ we have $L_nx=x_nL_n$ for all $n,$ and since $f$ preserves intertwinings, we then have $L_nf(x)=f(x_n)L_n.$ Since each $L_n$ is a coisometry ($L_nL_n^*=1_{\mathcal{H}}$), this relation gives $f(x_n)=L_nf(x)L_n^*,$ and hence $\Vert f(x_n)\Vert \leq \Vert f(x)\Vert$ for all $n,$ so the $f(x_n)$ are uniformly bounded. Similar reasoning lets us conclude that operator NC functions actually preserve \emph{countable} direct sums: if $x_n$ is a sequence in $\Omega$ of length $l \in \mathbb{N} \cup \{\infty\}$ and $s:\mathcal{H} \rightarrow \mathcal{H}^{(l)}$ is linear and invertible with $$s^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}s \in \Omega,$$ then $f(x_n)$ is uniformly bounded and we have $$f\left(s^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}s\right)=s^{-1}\begin{bmatrix} f(x_{1}) & & \\ & f(x_{2}) & \\ & & \ddots \end{bmatrix}s.$$ If we write $f:\Omega\rightarrow B(\mathcal{H})^r$ as $f=(f^1,\ldots, f^r)$, where each $f^j: \Omega \rightarrow B(\mathcal{H})$, then it follows from the definitions that $f$ is an NC function if and only if each $f^j$ is an NC function. As discussed in the introduction, any polynomial in $d$ noncommuting variables is an NC function when defined on any NC domain $\Omega \subset B(\mathcal{H})^d$. Furthermore, rational functions and noncommutative power series, on appropriately defined NC domains, provide us with a sizable class of prototypical examples of NC functions. For a simple, explicit such example, consider the rational function \begin{align} \label{example}f(x,y):=(1-xy)^{-1}=\sum_{n=0}^{\infty} (xy)^n\end{align} defined on the unit bidisk $\Omega=\{(x,y): \Vert x\Vert<1, \Vert y\Vert<1\}$ in $B(\mathcal{H})^2$. We verify here through a direct calculation that this function is in fact NC.
Let $(x^1,x^2)$ and $(y^1,y^2)$ be points in $\Omega$ and suppose $$s^{-1}\begin{bmatrix} (x^1,x^2) & 0 \\ 0 & (y^1,y^2) \end{bmatrix}s \in \Omega.$$ Then, \begin{align*} f\left(s^{-1}\begin{bmatrix} (x^1,x^2) & 0 \\ 0 & (y^1,y^2) \end{bmatrix}s\right)&= \sum_{n=0}^{\infty} \left(s^{-1}\begin{bmatrix} x^1 & 0 \\ 0 & y^1 \end{bmatrix}\begin{bmatrix} x^2 & 0\\ 0 & y^2 \end{bmatrix}s\right)^n \\ &= s^{-1}\sum_{n=0}^{\infty}\begin{bmatrix} (x^1x^2)^n & 0 \\ 0 & (y^1y^2)^n \end{bmatrix}s \\ &= s^{-1}\begin{bmatrix} f(x^1,x^2) & 0 \\ 0 & f(y^1,y^2) \end{bmatrix}s, \end{align*} as claimed. We conclude this section with terminology that will be used in the statements of the inverse and implicit function theorems. Recall that an operator $T\in B(X)$, where $X$ is a Banach space, is bounded below if there is a constant $C>0$ such that $\Vert Tx\Vert\geq C\Vert x\Vert$ for all $x \in X.$ \begin{definition} \label{ncbbp} Let $\Omega \subset B(\mathcal{H})^d$ be an NC domain. A map $\Psi: \Omega \rightarrow B(B(\mathcal{H})^r)$ is said to have the \emph{NC bounded below property} if whenever $\{x_n\}_{n=1}^{\infty}$ is a bounded sequence in $\Omega$ such that $\Psi(x_n)$ is bounded below for every $n,$ and whenever $u:\mathcal{H} \rightarrow \mathcal{H}^{(\infty)}$ is unitary such that $$z:=u^{-1}\begin{bmatrix} x_{1} & & \\ & x_{2} & \\ & & \ddots \end{bmatrix}u \in \Omega,$$ then $\Psi(z)$ is bounded below. \end{definition} We note that in the notation of Definition \ref{ncbbp}, the assumed bound below for $\Psi(x_n)$ is allowed to depend on $n.$ When $\Psi$ arises naturally from an operator NC function, for example when $\Psi$ is the derivative map of an NC function, the argument in the proof of Theorem \ref{bbthm} shows that $\Psi$ being bounded below when evaluated at the direct sum of such a sequence $x_n$ implies a \emph{uniform} bound below for the sequence $\Psi(x_n).$ As a result, we show that this property characterizes global invertibility of the derivative of NC functions on connected NC domains. The NC bounded below property, when imposed on the derivative map, may be thought of as an operatorial analogue of injectivity of the derivative. \section{MAIN RESULTS} \label{sec:main} In this section, we list the main results of the paper; for the detailed proofs, see Section \ref{sec: proofs}. The notion of connectedness is always with respect to the norm topology on $B(\mathcal{H})^d$. Operator NC functions will be shown to automatically be Fr\'{e}chet differentiable, and the notation $Df(x)$ denotes the derivative mapping $B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ of $f$ at the point $x$ in the domain of $f$. We denote by $Df$ the map $x\mapsto Df(x).$ Our primary objective is to prove an inverse function theorem for NC functions defined on operator domains, as described in Section \ref{sec: prelim}. Therefore, we begin by studying the derivative of such functions and ask when the derivative maps $Df(x)$ are invertible for \emph{every} $x$ in the domain of $f.$ Theorem \ref{bbthm} below provides an answer to this question on connected NC domains. \begin{theorem} \label{bbthm} Let $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^d$ be an NC function and suppose $\Omega$ is connected.
If $Df$ has the NC bounded below property, and there exists a point $a\in \Omega$ such that $Df(a)$ is invertible, then $Df(x)$ is invertible for every $x\in \Omega.$ \end{theorem} With this result giving a sufficient condition for the invertibility of the derivative map of an NC function at \emph{all} points, we arrive at an operatorial NC inverse function theorem. Theorem \ref{bbthm} justifies the NC bounded below property as a substitute for injectivity in the general operatorial setting. The hypotheses for the inverse function theorem are the same as in Theorem \ref{bbthm}. \begin{theorem} \label{inverse} \emph{(Inverse Function Theorem)} Let $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^d$ be an NC function and suppose $\Omega$ is connected. If $Df$ has the NC bounded below property, and there exists a point $a\in \Omega$ such that $Df(a)$ is invertible, then $f(\Omega)$ is an NC domain and $f^{-1}: f(\Omega) \rightarrow \Omega$ exists and is an NC function. \end{theorem} As one might expect, the inverse function theorem, Theorem \ref{inverse}, gives rise to an operatorial implicit function theorem under the hypothesis that an augmented derivative map satisfies the NC bounded below property. The notation $Z_f$ denotes the zero set of the function $f$. In the implicit function theorem, we write, for notational convenience, $(h^{d-r+1},\ldots, h^d)$ for elements of $B(\mathcal{H})^r.$ \begin{theorem} \label{implicit} \emph{(Implicit Function Theorem)} Let $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ be NC, where $1\leq r\leq d-1$, and $\Omega$ is connected. Suppose the map $\Psi:\Omega \rightarrow B(B(\mathcal{H})^r)$ defined by $$\Psi(x)(h^{d-r+1},\ldots, h^d)=Df(x)[0,\ldots,0,h^{d-r+1},\ldots, h^d]$$ has the NC bounded below property, and there exists a point $a \in \Omega$ such that $\Psi(a)$ is invertible. Then, there exists $V\subset B(\mathcal{H})^{d-r}$ an NC domain and $\phi:V\rightarrow B(\mathcal{H})^r$ an NC function such that $$Z_f=\{(y,\phi(y)): y \in V\}.$$ Furthermore, $V$ is given by the projection onto the first $d-r$ coordinates of the zero set $Z_f.$ \end{theorem} Theorem \ref{implicit} is an operatorial analogue of Agler and \mc Carthy's implicit function theorem (Theorem 6.1 in \cite{amif16}) for the fine matricial nc topology. In the operatorial setting, we require a slightly stronger assumption than merely injectivity of the maps $\Psi(x)$, which is the assumption for the implicit function theorem in \cite{amif16}. For further emphasis, the parametrizing function $\phi$ in Theorem \ref{implicit}, being itself operator NC, is \emph{infinite} direct sum-preserving. It is important to note that the conclusions of Theorems \ref{inverse} and \ref{implicit} are \emph{global}, a phenomenon that is rare outside of the noncommutative setting. As mentioned in the introduction, results similar to Theorems \ref{inverse} and \ref{implicit} are obtained in \cite{akv15} for a quite general matricial nc setting with local invertibility conclusions and hypotheses of analyticity in the ``uniformly-open'' topology and a completely bounded and invertible derivative map with completely bounded inverse. In the operator NC setting, we note once more that the notion of a uniformly-open topology is no longer available, so we instead make extensive use of the completeness of $B(\mathcal{H})$ and its various topologies.
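As a toy illustration of Theorem \ref{implicit} (an example supplied here for orientation only), take $d=2$, $r=1$, $\Omega=B(\mathcal{H})^2$ (an NC domain, exhausted by the closed balls of radius $k$), and the NC polynomial $f(x^1,x^2)=x^2-(x^1)^2$. Then \[ Df(x)[h^1,h^2]=h^2-x^1h^1-h^1x^1, \qquad \Psi(x)h^2=Df(x)[0,h^2]=h^2, \] so $\Psi(x)$ is the identity on $B(\mathcal{H})$ at every point and the hypotheses hold trivially. The theorem then yields $V=B(\mathcal{H})$ and $\phi(y)=y^2$, and indeed $Z_f=\{(y,y^2): y\in B(\mathcal{H})\}$, as one checks directly.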
It is reasonable to ask if additional structure imposed on the NC functions in the strong operator topology (SOT) allows us to weaken our assumptions on their derivatives. If we assume the NC operator domain is exhausted by certain SOT-closed sets, and impose strong continuity on the NC function, we arrive at the following, rather surprising theorem. In particular, it is valid for maps whose components are polynomials and rational functions, as these are SOT continuous on appropriate norm-bounded sets. \begin{theorem} \label{strongbb} Suppose $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ is a strong NC function. If $Df(x)$ is injective for every $x\in \Omega$, then $Df(x)$ is bounded below for every $x\in \Omega.$ \end{theorem} We note that Theorem \ref{strongbb} does not require the NC domain to be connected in any topology. On the other hand, injective strong NC functions on norm-connected domains are especially nice: \begin{corollary} \label{strongbbcor} Let $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^d$ be an injective strong NC function. If $\Omega$ is connected and there exists a point $a\in \Omega$ such that $Df(a)$ is surjective, then $Df(x)$ is invertible for every $x\in \Omega.$ \end{corollary} Results such as Theorem \ref{strongbb} and Corollary \ref{strongbbcor} suggest it may be natural to have some structure in the strong operator topology built into the definitions of NC domain and function. However, Theorems \ref{bbthm}, \ref{inverse}, and \ref{implicit}, along with the foundations found in Section \ref{sec: foundational}, require no such hypotheses. As such, there is merit to also studying a more general theory. Therefore, we maintain a distinction throughout this note. See Section \ref{sec: strongshift} for more details on the precise definition of \emph{strong NC function} and the construction of what we call \emph{shift forms}. Reminiscent of noncommutative dilation theory, these shift forms have nice SOT convergence properties (Lemma \ref{sotcompact}) that are suited well for applications to strong NC functions. \section{FOUNDATIONAL PROPERTIES}\label{sec: foundational} The aim of this section is to collect basic properties and formulas for NC functions defined on operator domains. Our first lemma is an operatorial version of a fundamental formula for noncommutative functions. In \cite{HKM}, Helton, Klep, and McCullough proved a similar formula for matricial nc functions. In this and other related formulas to follow, the presence of unitaries or some invertible linear map $s$ in the statements is necessary as we need a way of identifying $\mathcal{H}$ with some $\mathcal{H}^{(l)}.$ Several results in this section have analogues in the classical matricial nc theory. However, we present precise statements and complete proofs here, adhering to the formalisms introduced in Section \ref{sec: prelim}. \begin{lemma} \label{L} Let $f:\Omega \subset B(\mathcal{H})^d\rightarrow B(\mathcal{H})^r$ be an NC function and let $L\in B(\mathcal{H})$. 
If $x,y \in \Omega$ and $s:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ is any invertible linear map such that $$s^{-1}\begin{bmatrix} x & Ly-xL \\ 0 & y{} \end{bmatrix}s \in \Omega,$$ then $$f\left(s^{-1}\begin{bmatrix} x & Ly-xL \\ 0 & y \end{bmatrix}s\right)=s^{-1}\begin{bmatrix} f(x) & Lf(y)-f(x)L \\ 0 & f(y) \end{bmatrix}s.$$ \end{lemma} \begin{proof} Define $\sigma:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ to be the invertible map $\sigma:=\begin{bmatrix} 1 & -L \\ 0 & 1 \end{bmatrix}s.$ Then a computation shows $$\sigma^{-1} \begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}\sigma=s^{-1}\begin{bmatrix} 1 & L \\ 0 & 1 \end{bmatrix}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}\begin{bmatrix} 1 & -L \\ 0 & 1 \end{bmatrix}s = s^{-1}\begin{bmatrix} x & Ly-xL \\ 0 & y \end{bmatrix}s \in \Omega.$$ Since $f$ is NC, we have \begin{align*} f\left(s^{-1}\begin{bmatrix} x & Ly-xL \\ 0 & y \end{bmatrix}s\right) &= f\left(\sigma^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}\sigma\right) \\ &= \sigma^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}\sigma \\ &= s^{-1}\begin{bmatrix} f(x) & Lf(y)-f(x)L \\ 0 & f(y) \end{bmatrix}s, \end{align*} which completes the proof. \end{proof} As noted previously, it immediately follows from Lemma \ref{L} that operator NC functions preserve intertwinings. Recall that if $X$ and $Y$ are Banach spaces and $U\subset X$ is open, then a function $g:U\rightarrow Y$ is said to be \emph{G\^{a}teaux differentiable} if for all $x\in U$ and all $h\in X$, the limit $$Dg(x)[h]:=\lim_{t\rightarrow 0} \frac{g(x+th)-g(x)}{t}$$ exists. It is a well-known general fact (see \cite{taylor}) that over complex scalars, a norm-continuous and G\^{a}teaux differentiable function is automatically \emph{Fr\'{e}chet} differentiable, and the two derivatives must then coincide. In particular, $Dg(x):X\rightarrow Y$ is then a bounded linear map for each $x\in U$. \begin{lemma} \label{difflemma} An NC function is norm-continuous and G\^{a}teaux differentiable, and therefore is Fr\'{e}chet differentiable. \end{lemma} \begin{proof} We begin by showing that if $f:\Omega \subset B(\mathcal{H})^d\rightarrow B(\mathcal{H})^r$ is NC, then $f$ is norm-continuous. Fix $x \in \Omega$ and $\varepsilon>0,$ and let $\{\Omega_k\}$ be an exhausting sequence for $\Omega$ as in Definition \ref{ncd}. Say $x\in \Omega_k,$ so there is $u:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ unitary such that $$z:=u^{-1}\begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix}u\in \Omega_k.$$ Then there is some $r>0$ such that the balls centered at $x$ and $z$ with radius $r$ are contained in $\Omega_{k+1}.$ By the discussion immediately following Definition \ref{ncf}, there is $M>0$ such that $\Vert f\Vert < M$ on $\Omega_{k+1}$. Now, set $\delta:=\min\{\frac{r\varepsilon}{2M}, r/2\}$ and let $\Vert y-x\Vert <\delta.$ Then \begin{align*} \left \Vert u^{-1}\begin{bmatrix} x & \frac{M}{\varepsilon}(y-x) \\ 0 & y \end{bmatrix}u - z\right\Vert &= \left\Vert \begin{bmatrix} 0 & \frac{M}{\varepsilon}(y-x) \\ 0 & y-x \end{bmatrix}\right\Vert \\ &\leq M/\varepsilon \Vert y-x\Vert +\Vert y-x\Vert \\ &< r, \end{align*} so we have, by Lemma \ref{L}, $$\left\Vert \begin{bmatrix} f(x) & \frac{M}{\varepsilon}(f(y)-f(x)) \\ 0 & f(y) \end{bmatrix}\right\Vert= \left\Vert f\left(u^{-1}\begin{bmatrix} x & \frac{M}{\varepsilon}(y-x) \\ 0 & y \end{bmatrix}u\right)\right\Vert<M.$$ It then follows that $\Vert f(y)-f(x)\Vert<\varepsilon.$ Next, we show $f$ is G\^{a}teaux differentiable. 
Fix $x\in \Omega$ and $h\in B(\mathcal{H})^d.$ There is $k\geq 1,$ $u:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ unitary, and $\varepsilon>0$ small so that $x\in \Omega_k$ and $$u^{-1}\begin{bmatrix} x & \varepsilon h \\ 0 & x \end{bmatrix}u \in \Omega_k.$$ Then for all $t\neq 0$ with small enough modulus, $$\Omega_{k+1}\ni u^{-1}\begin{bmatrix} x+th & \varepsilon h \\ 0 & x \end{bmatrix}u = u^{-1}\begin{bmatrix} x+th & \frac{\varepsilon}{t}(x+th-x) \\ 0 & x \end{bmatrix}u,$$ so by Lemma \ref{L} again, \begin{align}\label{diffquo} f\left(u^{-1}\begin{bmatrix} x+th & \varepsilon h \\ 0 & x \end{bmatrix}u\right)=u^{-1}\begin{bmatrix} f(x+th) & \frac{\varepsilon}{t}(f(x+th)-f(x)) \\ 0 & f(x) \end{bmatrix}u.\end{align} By continuity of $f$, as $t\rightarrow 0$, the limit on the left-hand side of (\ref{diffquo}) exists, and therefore so does that of the 1-2 entry of the matrix on the right-hand side of (\ref{diffquo}), thus proving $f$ is G\^{a}teaux differentiable. Since $f$ is also continuous, the discussion immediately preceding this proof implies $f$ is Fr\'{e}chet differentiable. \end{proof} Moreover, the second part of the above proof also provides the following derivative formula for operator NC functions. It is reminiscent of a formula obtained in \cite{HKM}, and will be an irreplaceable tool for us moving forward. \begin{proposition} \label{derprop} Let $f:\Omega \subset B(\mathcal{H})^d\rightarrow B(\mathcal{H})^r$ be an NC function. Suppose $x\in \Omega$, $h\in B(\mathcal{H})^d,$ and $s:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ is any invertible linear map such that $$s^{-1}\begin{bmatrix} x & h \\ 0 & x \end{bmatrix}s \in \Omega.$$ Then, \begin{align} \label{derform} f\left(s^{-1}\begin{bmatrix} x & h \\ 0 & x \end{bmatrix}s\right)=s^{-1}\begin{bmatrix} f(x) & Df(x)[h] \\ 0 & f(x) \end{bmatrix}s.\end{align} \end{proposition} A common scenario where we can apply Proposition \ref{derprop} is as follows. Suppose $x\in \Omega$ and $s=u$ is a given unitary. Then by closure under direct sums and unitary invariance, $u^{-1}(x\oplus x)u$ is an element of $\Omega$ (for the \emph{given} unitary $u$) and the conclusion of Proposition \ref{derprop} holds for all $h \in B(\mathcal{H})^d$ with sufficiently small norm. The next theorem is an operatorial analogue of J. E. Pascoe's inverse function theorem \cite{james} for matricial nc functions. It is a first step towards a bona fide inverse function theorem for operator NC functions. We remark that, in contrast to the finite dimensional case, it is possible for a linear map $B(\mathcal{H})^d\rightarrow B(\mathcal{H})^r$ to be injective even if $d>r.$ Therefore, this theorem has content even when $d\neq r,$ and so we state it in this generality. \begin{theorem}\label{injthm} An NC function $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ is injective if and only if $Df(x):B(\mathcal{H})^d\rightarrow B(\mathcal{H})^r$ is injective for every $x\in \Omega.$ \end{theorem} \begin{proof} Suppose first $f$ is injective and let $x\in \Omega$. Assume that $Df(x)[h]=0$. There is $u$ unitary and $\varepsilon>0$ small enough so that $u^{-1}\begin{bmatrix} x & \varepsilon h \\ 0 & x \end{bmatrix}u \in \Omega$. Formula (\ref{derform}) then yields \begin{align*} f\left(u^{-1}\begin{bmatrix} x & \varepsilon h \\ 0 & x \end{bmatrix}u\right) &= u^{-1}\begin{bmatrix} f(x) & Df(x)[\varepsilon h] \\ 0 & f(x) \end{bmatrix}u \\ &= f\left(u^{-1}\begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix}u\right).
\end{align*} By injectivity of $f,$ it must hold that $$u^{-1}\begin{bmatrix} x & \varepsilon h \\ 0 & x \end{bmatrix}u=u^{-1}\begin{bmatrix} x & 0 \\ 0 & x \end{bmatrix}u,$$ which implies $h=0.$ Thus, $Df(x)$ has trivial kernel. To prove the converse, suppose $x,y \in \Omega$ and $f(x)=f(y).$ There are unitaries $u,v: \mathcal{H} \rightarrow \mathcal{H}^{(2)}$ and $\varepsilon>0$ such that $v^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}v \in \Omega$ and $$z:=u^{-1}\begin{bmatrix} v^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}v & v^{-1}\begin{bmatrix} 0 & \varepsilon(x-y) \\ 0 & 0 \end{bmatrix}v \\ 0 & v^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}v \end{bmatrix}u \in \Omega.$$ First, by Proposition \ref{derprop}, and because $f$ preserves direct sums, we know \begin{align}\label{bigmatrix} f(z)= u^{-1}\begin{bmatrix} v^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}v & Df\left(v^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}v\right)\left[v^{-1}\begin{bmatrix} 0 & \varepsilon(x-y) \\ 0 & 0 \end{bmatrix}v\right] \\ 0 & v^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}v \end{bmatrix}u. \end{align} On the other hand, a calculation shows that if we define $w:\mathcal{H} \rightarrow \mathcal{H}^{(4)}$ by $w:=(v\oplus v)u$ and $s:\mathcal{H} \rightarrow \mathcal{H}^{(4)}$ by $$s:=\begin{bmatrix} 1 & 0 & 0 & \varepsilon 1 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}w,$$ then $z$ may be rewritten as $$z=s^{-1}\begin{bmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & x & 0 \\ 0 & 0 & 0 & y \end{bmatrix}s.$$ Therefore, as $f$ is NC, we have \begin{align*} f(z)&= f\left(s^{-1}\begin{bmatrix} x & 0 & 0 & 0 \\ 0 & y & 0 & 0 \\ 0 & 0 & x & 0 \\ 0 & 0 & 0 & y \end{bmatrix}s\right) \\ &= s^{-1}\begin{bmatrix} f(x) & 0 & 0 & 0 \\ 0 & f(y) & 0 & 0 \\ 0 & 0 & f(x) & 0 \\ 0 & 0 & 0 & f(y) \end{bmatrix}s \\ &= w^{-1}\begin{bmatrix} f(x) & 0 & 0 & \varepsilon(f(x)-f(y)) \\ 0 & f(y) & 0 & 0 \\ 0 & 0 & f(x) & 0 \\ 0 & 0 & 0 & f(y) \end{bmatrix}w \\ &= w^{-1}\begin{bmatrix} f(x) & 0 & 0 & 0 \\ 0 & f(y) & 0 & 0 \\ 0 & 0 & f(x) & 0 \\ 0 & 0 & 0 & f(y) \end{bmatrix}w \\ &= u^{-1} \begin{bmatrix} v^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}v & 0 \\ 0 & v^{-1}\begin{bmatrix} f(x) & 0 \\ 0 & f(y) \end{bmatrix}v \end{bmatrix}u. \end{align*} Comparing this to equation (\ref{bigmatrix}) implies \[Df\left(v^{-1}\begin{bmatrix} x & 0 \\ 0 & y \end{bmatrix}v\right)\left[v^{-1}\begin{bmatrix} 0 & \varepsilon(x-y) \\ 0 & 0 \end{bmatrix}v\right]=0\] in $B(\mathcal{H})^r.$ By the assumption of the derivative being injective at all points, \[v^{-1}\begin{bmatrix} 0 & \varepsilon(x-y) \\ 0 & 0 \end{bmatrix}v=0,\] and we conclude $x=y$ as desired. \end{proof} Other results on this type of ``lack of dimensionality'' were observed by Cushing, Pascoe, and Tully-Doyle in \cite{cushingpascoe}. Theorem \ref{injthm} already provides a stark contrast between classical function theory and the noncommutative theory; examples abound of functions with globally invertible derivative that fail to be injective. We now recall the definition of the Hessian of a G\^{a}teaux differentiable function and later prove an analogous formula to Proposition \ref{derprop} for the Hessian of an NC operator function. The formula is of similar flavor to one derived by Agler and \mc Carthy in \cite{amif16} for matricial nc functions. \begin{definition} Let $X$ and $Y$ be Banach spaces and $U\subset X$ be open.
For a G\^{a}teaux differentiable function $g:U\rightarrow Y$, we define the \emph{Hessian} of $g$ at the point $x\in U$ to be \begin{align}\label{hessquo} Hg(x)[h,k]:=\lim_{t\rightarrow 0}\frac{Dg(x+tk)[h]-Dg(x)[h]}{t},\end{align} whenever the limit exists for all $h,k \in X$. \end{definition} In the next lemma, we show that the derivative of an operator NC function is itself NC, that the Hessian exists for NC functions, and that the Hessian is again NC. As an application of these facts, we give a simple, calculus-based proof using boundedness of the Hessian that an operator NC function must, in particular, be of class $C^1.$ \begin{lemma}\label{hessianexists} Suppose $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ is an NC function. \begin{enumerate} \item The derivative map $\phi:\Omega \times B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ given by $$\phi(x,h):=Df(x)[h]$$ is an NC function. \item The Hessian $Hf(x)$ exists at all $x\in \Omega$ and the map $\Omega \times B(\mathcal{H})^{2d} \rightarrow B(\mathcal{H})^r$ given by $(x,h,k)\mapsto Hf(x)[h,k]$ is an NC function. Furthermore, $$Hf(x)[h,k]=D\phi(x,h)[k,0].$$ \item $f$ is $C^1.$ \end{enumerate} \end{lemma} \begin{proof} (i) Let $\{\Omega_k\}$ be an exhaustion of $\Omega$ as in the definition of NC domain. A natural candidate for an NC exhausting sequence for $\Omega \times B(\mathcal{H})^d$ is $$W_k:=\Omega_k \times \{h\in B(\mathcal{H})^d : \Vert h\Vert \leq k\}.$$ Indeed, the requirements of Definition \ref{ncd} are readily seen, so $\Omega \times B(\mathcal{H})^d$ is an NC domain. We now show $\phi$ is an NC function. This is a simple matter of using the definition of the derivative. Let $(x_1,h_1)$ and $(x_2,h_2)$ be in $\Omega \times B(\mathcal{H})^d$ and let $s:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ be invertible such that $$(X,H):=s^{-1}\begin{bmatrix} (x_1,h_1) & 0 \\ 0 & (x_2,h_2) \end{bmatrix}s \in \Omega \times B(\mathcal{H})^d.$$ Since $f$ is NC, \begin{align*} \phi(X,H) &= \lim_{t\rightarrow 0} \frac{1}{t}\left\{f\left(s^{-1}\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}s +ts^{-1}\begin{bmatrix} h_1 & 0 \\ 0 & h_2 \end{bmatrix}s\right)-f\left(s^{-1}\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}s\right)\right\} \\ &= \lim_{t\rightarrow 0} s^{-1}\begin{bmatrix} \frac{f(x_1+th_1)-f(x_1)}{t} & 0 \\ 0 & \frac{f(x_2+th_2)-f(x_2)}{t} \end{bmatrix}s \\ &= s^{-1}\begin{bmatrix} \phi(x_1,h_1) & 0 \\ 0 & \phi(x_2,h_2) \end{bmatrix}s, \end{align*} which proves part (i). (ii) Since $\phi$ is NC on its domain, we apply Lemma \ref{difflemma} to conclude $\phi$ is G\^{a}teaux differentiable. Unraveling the definitions therefore shows that the Hessian $Hf(x)$ exists for all $x \in \Omega,$ and the equality $Hf(x)[h,k]=D\phi(x,h)[k,0]$ must hold. Applying the result in part (i) to the NC function $\phi$ shows the map $\Omega \times B(\mathcal{H})^{3d}\rightarrow B(\mathcal{H})^r$ given by $(x,h,k,k')\mapsto D\phi(x,h)[k,k']$ is NC. Therefore, the Hessian map $(x,h,k)\mapsto Hf(x)[h,k]=D\phi(x,h)[k,0]$ must also be NC on $\Omega \times B(\mathcal{H})^{2d}$. (iii) By part (ii), it in particular holds that for every $x\in \Omega$, there is a norm ball $B$ about $x$ and $M>0$ such that $\Vert Hf(y)[h,k] \Vert \leq M \Vert h\Vert \Vert k\Vert$ for all $y \in B$ and all $h,k \in B(\mathcal{H})^d.$ Fix $x\in \Omega$. Choose a ball $B$ about $x$ and $M>0$ as above.
Then for $y\in B$ and $h\in B(\mathcal{H})^d,$ the map $t \mapsto Hf(x+t(y-x))[h,y-x]$ is continuous on the interval $[0,1]$ by part (ii), so we may estimate \begin{align*} \Vert Df(y)[h]-Df(x)[h]\Vert &= \left\Vert \int_{0}^1 \frac{d}{dt} Df(x+t(y-x))[h]dt\right\Vert \\ &= \left\Vert \int_{0}^1 Hf(x+t(y-x))[h,y-x]dt\right\Vert \\ &\leq \int_0^1 \Vert Hf(x+t(y-x))[h,y-x]\Vert dt \\ &\leq M\Vert h\Vert \Vert y-x\Vert. \end{align*} By definition of the operator norm, it then holds that $$\Vert Df(y)-Df(x)\Vert \leq M\Vert y-x\Vert$$ for $y\in B.$ \end{proof} Finally, we have the aforementioned formula for the Hessian of operatorial NC functions: \begin{proposition}\label{hessprop} Let $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ be NC. Suppose $x\in \Omega$ and $u,v:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ are unitaries. Then for all $h,k\in B(\mathcal{H})^d$ of sufficiently small norm, \begin{align}\begin{split} \label{hess} &f\left(v^{-1}\begin{bmatrix} u^{-1}\begin{bmatrix} x & k \\ 0 & x \end{bmatrix}u & u^{-1}\begin{bmatrix} h & 0 \\ 0 & h \end{bmatrix}u \\ 0 & u^{-1}\begin{bmatrix} x & k \\ 0 & x \end{bmatrix}u \end{bmatrix}v\right) \\ &= w^{-1}\begin{bmatrix} f(x) & Df(x)[k] & Df(x)[h] & Hf(x)[h,k] \\ 0 & f(x) & 0 & Df(x)[h] \\ 0 & 0 & f(x) & Df(x)[k] \\ 0 & 0 & 0 & f(x) \end{bmatrix}w,\end{split}\end{align} where we set $w:=(u\oplus u)v$. \end{proposition} \begin{proof} For ease of reading, let us write \[X:=u^{-1}\begin{bmatrix} x & k \\ 0 & x \end{bmatrix}u \hspace{20pt} {\rm and} \hspace{20pt} H:=u^{-1}\begin{bmatrix} h & 0 \\ 0 & h \end{bmatrix}u.\] By closure under direct sums and unitary invariance, $X\in \Omega$ for $\Vert k\Vert$ sufficiently small, and $$v^{-1}\begin{bmatrix} X & H \\ 0 & X \end{bmatrix}v \in \Omega$$ for $\Vert h\Vert$ sufficiently small. We may then compute, by letting $\phi$ be the derivative as in Lemma \ref{hessianexists}, $$\phi(X,H)=u^{-1}\begin{bmatrix} \phi(x,h) & D\phi(x,h)[k,0] \\ 0 & \phi(x,h) \end{bmatrix}u=u^{-1}\begin{bmatrix} Df(x)[h] & Hf(x)[h,k] \\ 0 & Df(x)[h] \end{bmatrix}u.$$ The left-hand side of (\ref{hess}) is then equal to \begin{align*} f&\left(v^{-1}\begin{bmatrix} X & H \\ 0 & X \end{bmatrix}v\right) = v^{-1}\begin{bmatrix} f(X) & Df(X)(H)\\ 0 & f(X) \end{bmatrix}v \\ &=v^{-1}\begin{bmatrix} f(X) & u^{-1}\begin{bmatrix} Df(x)[h] & Hf(x)[h,k] \\ 0 & Df(x)[h] \end{bmatrix}u \\ 0 & f(X) \end{bmatrix}v \\ &= v^{-1}\begin{bmatrix} u^{-1}\begin{bmatrix} f(x) & Df(x)[k] \\ 0 & f(x) \end{bmatrix}u & u^{-1}\begin{bmatrix} Df(x)[h] & Hf(x)[h,k] \\ 0 & Df(x)[h] \end{bmatrix}u \\ 0 & u^{-1}\begin{bmatrix} f(x) & Df(x)[k] \\ 0 & f(x) \end{bmatrix}u \end{bmatrix}v, \end{align*} which is equal to the right-hand side of (\ref{hess}). \end{proof} We note that it is possible to derive similar, albeit increasingly complicated formulas for higher order derivatives of NC functions, but we will be content with doing so only for the first derivative and the Hessian, as this is sufficient for our purposes and it illustrates the general principles behind derivative formulas of NC functions on operatorial domains. 
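Continuing the example $f(x)=x^2$ from the introduction (again a routine check, included for orientation): since $Df(x)[h]=xh+hx$, \[ Hf(x)[h,k]=\lim_{t\rightarrow 0}\frac{(x+tk)h+h(x+tk)-xh-hx}{t}=kh+hk, \] which agrees with the $1$-$4$ entry of the square of the $4\times 4$ matrix $\begin{bmatrix} x & k & h & 0 \\ 0 & x & 0 & h \\ 0 & 0 & x & k \\ 0 & 0 & 0 & x \end{bmatrix}$, in accordance with formula (\ref{hess}).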
\section{STRONG NC FUNCTIONS AND THE SHIFT FORM} \label{sec: strongshift} As discussed briefly in the introduction and Section \ref{sec:main}, we want to impose additional requirements of SOT-closedness of each level in an exhaustion of an NC domain, and that of SOT continuity of NC operator functions in order to relax the instances of the hypothesis of the derivative satisfying the NC bounded below property to merely being \emph{injective} at all points. In practice, checking such boundedness below may be difficult in certain cases, but injectivity will typically be more readily verified. For $\varepsilon>0$, we call the set $\{x\in B(\mathcal{H})^d : \text{dist}\,(x,U)<\varepsilon\},$ of points in $B(\mathcal{H})^d$ with distance less than $\varepsilon$ from the set $U,$ the $\varepsilon$\emph{-neighborhood} of $U.$ \begin{definition}\label{sncd} We say $\Omega \subset B(\mathcal{H})^d$ is a \emph{strong NC domain} if there exists an exhausting sequence $\{\Omega_k\}_{k=1}^{\infty}$ of $\Omega$ as in Definition \ref{ncd}, with the additional requirements that \begin{enumerate} \item Each $\Omega_k$ is closed in the strong operator topology. \item For each $k$ there is $\varepsilon_k>0$ such that $\Omega_{k+1}$ contains the $\varepsilon_k$-neighborhood of $\Omega_k.$ \end{enumerate} \end{definition} \begin{definition} \label{sncf} Let $\Omega\subset B(\mathcal{H})^d$. A function $f:\Omega \rightarrow B(\mathcal{H})^r$ is called a \emph{strong NC function} if \begin{enumerate} \item There exists an exhausting sequence $\{\Omega_k\}_{k=1}^{\infty}$ of $\Omega$ as in Definition \ref{sncd} such that each restriction $f|_{\Omega_k}$ is continuous in the strong operator topology. \item $f$ is an NC function. \end{enumerate} \end{definition} Since the strong operator topology is metrizable on norm-bounded subsets of $B(\mathcal{H})^d$ when $\mathcal{H}$ is separable, the continuity condition (i) in Definition \ref{sncf} is equivalent to the following sequential criterion: for every $k,$ whenever $x_n$ is a sequence in $\Omega_k$ with $x_n\rightarrow x$ in SOT, we have $f(x_n)\rightarrow f(x)$ in SOT. Similarly, the condition of each $\Omega_k$ being SOT-closed in Definition \ref{sncd} is equivalent to a sequential characterization. We remark further about Definitions \ref{sncd} and \ref{sncf}. Any $B_{\delta},$ as described in Section \ref{sec: prelim}, is a strong NC domain since the exhaustion given in (\ref{bdelta}) satisfies the additional requirements of Definition \ref{sncd}. Indeed, such a $\delta$ is Lipschitz on bounded sets and multiplication is strongly continuous on bounded sets. Moreover, as noncommutative polynomials and rational functions (such as the example in (\ref{example}) on the bidisk) are strongly continuous on appropriate norm-bounded sets, in practice these additional requirements seem rather mild and natural. Secondly, condition (ii) in Definition \ref{sncd} is just a technical strengthening of the condition $\Omega_k \subset \text{int}\,\Omega_{k+1}$ (which we have been using so far), and it ensures that the derivative of a strong NC function is also a strong NC function. Indeed, if $f:\Omega \subset B(\mathcal{H})^d \rightarrow B(\mathcal{H})^r$ is a strong NC function, say with exhausting sequence $\Omega_k$ as in Definition \ref{sncf}, taking the obvious exhaustion of $\Omega \times B(\mathcal{H})^d$ shows it is a strong NC domain.
Furthermore, for every $k,$ whenever $x_n$ is a sequence in $\Omega_k$ with $x_n\rightarrow x$ in SOT and whenever $h_n\rightarrow h$ in SOT, we have $Df(x_n)[h_n]\rightarrow Df(x)[h]$ in SOT. To see this, fix $k$ and note that by closure under direct sums and unitary invariance of $\Omega_k,$ there is a unitary $u:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ such that $$u^{-1}\begin{bmatrix} x_n & 0 \\ 0 & x_n \end{bmatrix}u \in \Omega_k$$ for all $n$. As the strongly convergent sequence $h_n$ is bounded, condition (ii) in Definition \ref{sncd} implies there is $\varepsilon>0$ (independent of $n$) such that $$u^{-1}\begin{bmatrix} x_n & \varepsilon h_n \\ 0 & x_n \end{bmatrix}u \in \Omega_{k+1}$$ for all $n.$ Therefore, by Proposition \ref{derprop} and because $f|_{\Omega_{k+1}}$ is strongly continuous, \begin{align*} u^{-1}\begin{bmatrix} f(x) & \varepsilon Df(x)[h] \\ 0 & f(x) \end{bmatrix}u &= f\left(u^{-1}\begin{bmatrix} x & \varepsilon h \\ 0 & x \end{bmatrix}u\right) \\ &= \lim_{n\rightarrow \infty} f\left(u^{-1}\begin{bmatrix} x_n & \varepsilon h_n \\ 0 & x_n \end{bmatrix}u\right) \\ &= \lim_{n\rightarrow \infty} u^{-1}\begin{bmatrix} f(x_n) & \varepsilon Df(x_n)[h_n] \\ 0 & f(x_n) \end{bmatrix}u,\end{align*} where all limits are in the strong operator topology. Therefore, we conclude $Df(x_n)[h_n]\rightarrow Df(x)[h]$ in SOT. In order to prove non-trivial results such as Theorem \ref{strongbb} for strong NC functions, we turn our attention to the notion of ``shift forms''. The following construction is motivated by the dilation theory introduced by A. Frazho \cite{FRAZHO2}, \cite{FRAZHO} and G. Popescu \cite{pop3}, \cite{pop4} and was privately communicated to the author by J. E. Pascoe. Similar ideas in a different setting were utilized in \cite{passerpascoe}. The separability of the underlying Hilbert space will now be used extensively. Throughout this section, we fix a countable orthonormal basis $\{e_1, e_2, \ldots\}$ for $\mathcal{H}.$ Given a $d$-tuple $X\in B(\mathcal{H})^d,$ the idea is to find a unitary operator in $B(\mathcal{H})$ which provides a basis for $\mathcal{H}$ on which the coordinates of $X$ essentially act as shifts. Let $M$ be the shift operator $Me_k=e_{k+1}$. For the sake of brevity, we write $(X,M)$ for the $(d+1)$-tuple $(X^1,\ldots,X^d,M).$ We will denote the complex vector space of polynomials in $(d+1)$ noncommuting variables of degree less than or equal to $k$ by $\mathcal{P}(k,d)$ and write $\alpha(k,d)$ for its dimension (explicitly, $\alpha(k,d)=\sum_{j=0}^{k}(d+1)^{j}$, the number of words of length at most $k$ in $d+1$ letters). Begin by defining a nested sequence of subspaces of $\mathcal{H}:$ \[V_k^X:= \{p(X,M)e_1 : p \in \mathcal{P}(k,d)\}, \] for $k\geq 0.$ We record the following properties of the $V_k^X:$ \begin{enumerate} \item For each $k\geq 0,$ we have $e_1,\ldots, e_{k+1}\in V_k^X$. In particular, $$\mathcal{H}=\overline{\bigcup_{k=0}^{\infty} V_k^X}.$$ \item The inclusion $X^i V_k^X \subset V_{k+1}^X$ holds for all $i=1,\ldots,d$ and $k\geq 0.$ \item The $V_k^X$ form a strictly increasing sequence.
\item The inequality $$\dim V_k^X \leq \alpha(k,d)$$ holds for all $k\geq 0,$ independent of the choice of $d$-tuple $X.$ \end{enumerate} Properties (i), (iii), and (iv) above imply that there exists a unitary operator $u\in B(\mathcal{H})$, depending on $d$ and $X,$ but not $k$, such that \begin{align} \label{sh1} u\,(\text{span}\, \{e_1,\ldots,e_k, e_{k+1}\})\subset V_k^X \end{align} and \begin{align} \label{sh2} u^*(V_k^X)\subset \text{span}\, \{e_1,\ldots, e_{\alpha(k,d)}\} \end{align} hold for every $k\geq 0.$ For a unitary $u$ satisfying (\ref{sh1}) and (\ref{sh2}), we call the $d$-tuple \begin{align*}\widetilde{X}:=u^*Xu\end{align*} a \emph{shift form} of $X.$ We note that there may well be more than one such unitary for a given $X,$ but for our purposes, the existence of at least one is sufficient. Moreover, the results proved in the present section are independent of the choice of shift form; all that is required are the four properties listed above. This construction allows us to prove an SOT compactness-like theorem for bounded subsets of $B(\mathcal{H})^d$. It is well-known that the unit ball of $B(\mathcal{H})$ is not SOT (sequentially) compact when $\mathcal{H}$ is infinite dimensional, but we prove in Lemma \ref{sotcompact} that for any bounded sequence in $B(\mathcal{H})^d,$ there is a subsequence along which its sequence of shift forms converges in SOT. More precisely, in fact, given a bounded sequence $X_n \in B(\mathcal{H})^d,$ and given any sequence of unitaries $u_n$ such that $u_n^*X_nu_n$ is a shift form of $X_n$ for each $n,$ there is a subsequence along which $u_n^*X_nu_n$ converges in SOT. This statement lends itself nicely to applications with strong NC functions since they preserve conjugations by unitary operators and are strongly continuous when restricted to certain unitarily invariant sets. Moreover, we have sufficient norm control over the shift forms so that, after conjugating by further unitaries if necessary, the subsequential limit will have large norm if the original sequence is bounded away from zero (see part (ii) of Lemma \ref{sotcompact}). Lemma \ref{sizeshift} below is a technical ingredient used in this note only in the proof of part (ii) of Lemma \ref{sotcompact}, but it is an interesting property of shift forms in its own right. \begin{lemma}\label{sizeshift} If $X\in B(\mathcal{H})^d$ and $k\geq 1$, then by letting $P_k$ denote the projection onto the subspace spanned by the first $k$ basis vectors $e_1,\ldots, e_k,$ we have \begin{align} \label{shiftinequality} \Vert P_k X^iP_k\Vert \leq \Vert P_{\alpha(k,d)} \widetilde{X}^iP_{\alpha(k,d)} \Vert \end{align} for each $i=1,\ldots, d$ and choice of shift form $\widetilde{X}$ of $X.$ \end{lemma} \begin{remark}\rm The proof shows, in fact, that the norm inequality (\ref{shiftinequality}) can be refined slightly. For example, under the hypotheses of Lemma \ref{sizeshift}, it holds that \begin{align*} \Vert X^iP_k\Vert \leq \Vert P_{\alpha(k,d)} \widetilde{X}^iP_{\alpha(k-1,d)} \Vert. \end{align*} Since we do not require this inequality moving forward, we opt for the more visually symmetric (\ref{shiftinequality}). \end{remark} \begin{proof} Write $\widetilde{X}=u^*Xu$ for a unitary $u$ satisfying (\ref{sh1}) and (\ref{sh2}). Let $y \in \text{span}\,\{e_1,\ldots, e_k\}$ with $\Vert y\Vert \leq 1.$ Since $\text{span}\,\{e_1,\ldots, e_k\} \subset V_{k-1}^X,$ we know by (\ref{sh2}) that the containment $u^*y \in \text{span}\, \{e_1,\ldots, e_{\alpha(k-1,d)}\}$ holds.
Furthermore, this implies $X^iy\in V_k^X$, and so $u^*X^iy \in \text{span}\, \{e_1,\ldots, e_{\alpha(k,d)}\}.$ Therefore we may estimate \begin{align*} \Vert P_kX^iP_k y\Vert &\leq \Vert X^i y\Vert = \Vert u^*X^iy\Vert \\ &=\Vert P_{\alpha(k,d)}u^*X^iy\Vert =\Vert P_{\alpha(k,d)}[u^*X^iu]u^*y\Vert \\ &= \Vert P_{\alpha(k,d)}[u^*X^iu]P_{\alpha(k,d)}u^*y\Vert \\ &\leq \Vert P_{\alpha(k,d)} \widetilde{X}^iP_{\alpha(k,d)} \Vert.\end{align*} Taking supremum over such $y$ finishes the proof. \end{proof} Part (ii) of Lemma \ref{sotcompact} will be used in the proof of Theorem \ref{strongbb}. In the notation of this lemma, we need $H'\neq 0$ to ensure it is not in the kernel of any injective derivative map of a strong NC function. \begin{lemma}\label{sotcompact} The following two convergence properties hold. \begin{enumerate} \item Let $X_n$ be a bounded sequence in $B(\mathcal{H})^d.$ For any sequence of shift forms $\widetilde{X_n}$ of $X_n$, there is a subsequence along which $\widetilde{X_n}$ converges in SOT. In particular, given a bounded sequence $X_n$ in $B(\mathcal{H})^d$, there exists a sequence of unitaries $U_n$ such that $U_n^*X_nU_n$ converges in SOT along a subsequence. \item Suppose $X \in B(\mathcal{H})^d$ and $H_n\in B(\mathcal{H})^d$ with $\Vert H_n\Vert =1$ for all $n.$ Then there exist unitaries $W_n \in B(\mathcal{H}),$ a point $(X',H')\in B(\mathcal{H})^{2d}$ with $H'\neq 0$, and a subsequence along which $$W_n^*(X,H_n)W_n \rightarrow (X',H')$$ in SOT. \end{enumerate} \end{lemma} \begin{proof} (i) For each $n$, let $\widetilde{X_n}=u_n^*X_nu_n$ be any shift form of $X_n$. For every $n$, $i=1,\ldots, d$, and $k\geq 1,$ properties (\ref{sh1}) and (\ref{sh2}) imply \begin{align*} \widetilde{X_n}^i (\text{span}\, \{e_1,\ldots,e_k\}) &= u_n^*X_n^iu_n (\text{span}\, \{e_1,\ldots,e_k\}) \\ &\subset u_n^*X_n^i (V_{k-1}^{X_n}) \\ &\subset u_n^* (V_k^{X_n})\\ &\subset \text{span}\, \{e_1,\ldots,e_{\alpha(k,d)}\}. \end{align*} Therefore, for every $i=1,\ldots, d$ and $k\geq 1,$ the sequence $\{\widetilde{X_n}^i e_k\}_{n=1}^{\infty}$ is bounded and contained in a finite dimensional subspace. By a diagonalization argument, we may then find a subsequence $n_j$ so that $\widetilde{X_{n_j}}^i e_k$ converges for every $i=1,\ldots, d$ and $k\geq 1.$ By boundedness again, this implies $\widetilde{X_{n_j}}^i$ converges SOT for every $i=1,\ldots, d.$ (ii) By passing to a subsequence if necessary, we may assume there is $i\in \{1,\ldots, d\}$ such that $\Vert H_n^i \Vert=1$ for all $n.$ We again denote by $P_k$ the projection onto the subspace spanned by the first $k$ basis vectors $e_1,\ldots, e_k.$ First note that if $T \in B(\mathcal{H})$ has operator norm equal to 1, then for every $\varepsilon>0$ small there exists a unitary $W \in B(\mathcal{H})$ such that $1-\varepsilon \leq \Vert P_2 W^*TWP_2\Vert.$ This can be seen by choosing a unit vector $v$ which approximates the norm of $T$, and then defining a unitary which maps $e_1$ to $v$, and $e_2$ to a suitable linear combination $av+bTv$. Applying this to $H_n^i$ for each $n,$ we can find unitaries $Q_n \in B(\mathcal{H})$ so that $$1-\frac{1}{n} \leq \Vert P_2 Q_n^*H_n^iQ_nP_2\Vert.$$ Since $X_n:=Q_n^*(X,H_n)Q_n$ is a bounded sequence in $B(\mathcal{H})^{2d},$ by part (i) there is a sequence of unitaries $U_n$ and a subsequence $n_j$ along which $\widetilde{X_n}=U_n^*X_nU_n$ converges in SOT, say to $(X',H') \in B(\mathcal{H})^{2d}$. Define $W_n:=Q_nU_n$. 
Combining this and (\ref{shiftinequality}) with $k=2$ and $2d$ variables, we estimate \begin{align*} 1-\frac{1}{j} &\leq \Vert P_2 Q_{n_j}^*H_{n_j}^iQ_{n_j} P_2\Vert \\ &= \Vert P_2 X_{n_j}^{d+i} P_2\Vert \\ &\leq \Vert P_{\alpha(2,2d)} \widetilde{X_{n_j}}^{d+i} P_{\alpha(2,2d)} \Vert \\ &= \Vert P_{\alpha(2,2d)} W_{n_j}^*H_{n_j}^iW_{n_j} P_{\alpha(2,2d)} \Vert . \end{align*} Since strong convergence implies norm convergence on finite dimensional spaces, taking the limit as $j\rightarrow \infty$ in the above estimate implies $$1\leq \Vert P_{\alpha(2,2d)} (H')^i P_{\alpha(2,2d)} \Vert \leq \Vert H'\Vert.$$ Therefore $H'\neq 0,$ which concludes the proof. \end{proof} \begin{remark}\rm The proof of part (i) of Lemma \ref{sotcompact} shows further that we can choose a subsequence along which $u_n^*X_nu_n$ and $u_n^*$ both converge in SOT. This is often a useful property of the unitaries defining the shift forms since the SOT limit of $u_n^*$ is necessarily an isometry. These observations can be helpful in the study of convexity in the operator setting. \end{remark} \section{PROOFS OF MAIN RESULTS} \label{sec: proofs} We now provide detailed proofs of the main results listed in Section \ref{sec:main}. To begin, we need a general result on linear maps between Banach spaces. As the author could not find a suitable source in the literature, we include its statement and simple proof below for convenience. $B(X,Y)$ denotes the bounded linear maps between the Banach spaces $X$ and $Y$, with the operator norm. \begin{lemma}\label{banach} Let $X$ and $Y$ be Banach spaces and fix $\alpha>0$. The set of maps in $B(X,Y)$ that are surjective and bounded below by $\alpha$ is norm-closed. \end{lemma} \begin{proof} Let $T_n$ be a sequence of such linear maps converging to $T$. As $\alpha\Vert x\Vert \leq \Vert T_nx\Vert$ holds for all $n$ and $x\in X,$ we see that $\alpha\Vert x\Vert \leq \Vert Tx\Vert$ for all $x\in X,$ so $T$ is bounded below by $\alpha.$ Now we show $T$ must also be surjective. The uniform bound below on the $T_n$ implies the sequence of inverses $T_n^{-1}$ is uniformly bounded in operator norm by $1/\alpha.$ Thus, the estimate $$\Vert T_n^{-1}-T_m^{-1}\Vert \leq \Vert T_m^{-1}\Vert \Vert T_m-T_n\Vert \Vert T_n^{-1}\Vert \leq 1/\alpha^2 \Vert T_m-T_n\Vert$$ shows $T_n^{-1}$ is a convergent sequence in $B(Y,X)$. Since $T_nT_n^{-1}=1_Y$ for all $n,$ we immediately see that $T$ is surjective. \end{proof} We reiterate that the notion of connectedness in what follows is with respect to the norm topology on $B(\mathcal{H})^d$. It is suggested that the reader recall Definition \ref{ncbbp} of the NC bounded below property. \begin{proof}[Proof of Theorem \ref{bbthm}] By hypothesis, the set \[U:=\{x\in \Omega : Df(x) \,\,\text{is invertible}\}\] is non-empty. Since invertible maps form a norm-open set in $B(B(\mathcal{H})^d),$ the continuity of the map $x\mapsto Df(x)$ implies $U$ is open in norm. As $\Omega$ is connected, it suffices to show $U$ is also closed in $\Omega.$ To that end, take a sequence $x_n$ in $U$ converging to $x \in \Omega$. We claim there is a uniform $\alpha>0$ such that each $Df(x_n)$ is bounded below by $\alpha.$ To see this, take an exhaustion $\{\Omega_k\}$ of $\Omega$ as in Definition \ref{ncd}. 
Since $x_n\rightarrow x \in \Omega,$ and since the exhaustion satisfies $\Omega_k \subset \text{int}\,\Omega_{k+1}$, there is $k$ large enough so that all the $x_n$ lie in $\Omega_k.$ Since $\Omega_k$ is closed under countably infinite direct sums, there is a unitary $u:\mathcal{H} \rightarrow \mathcal{H}^{(\infty)}$ such that $$z:=u^{-1}\begin{bmatrix} x_1 & & \\ & x_2 & \\ & & \ddots \end{bmatrix}u \in \Omega_k.$$ By the hypothesis of $Df$ satisfying the NC bounded below property, $Df(z)$ is bounded below, say by $\alpha>0.$ Now fix $n$ and let $h \in B(\mathcal{H})^d$ be arbitrary. Let $h_n$ denote the diagonal matrix with $h$ in the $n$th diagonal entry and $0$ elsewhere. Since \begin{align} \label{derivdirect} Df(z)[u^{-1}h_nu]=u^{-1}\begin{bmatrix} 0 & & & & \\ & \ddots & & & \\ & & Df(x_n)[h] & & \\ & & & 0 & \\ & & & & \ddots \end{bmatrix}u \end{align} holds by Lemma \ref{hessianexists} (i), we may take norms in (\ref{derivdirect}) to get $$\Vert Df(x_n)[h]\Vert = \Vert Df(z)[u^{-1}h_nu] \Vert \geq \alpha \Vert u^{-1}h_nu\Vert= \alpha \Vert h\Vert.$$ This implies each $Df(x_n)$ is bounded below by $\alpha.$ Again, since $f$ is $C^1$, we have $Df(x_n)\rightarrow Df(x)$ in norm, so Lemma \ref{banach} implies $Df(x)$ is invertible. Thus $x\in U$ and $U$ is closed in $\Omega$. \end{proof} With this sufficient condition for global invertibility of the derivative of an NC function now obtained, we can prove our inverse function theorem: \begin{proof}[Proof of Theorem \ref{inverse}] By Theorem \ref{bbthm}, $Df(x)$ is an invertible linear mapping $B(\mathcal{H})^d\rightarrow B(\mathcal{H})^d$ for every $x\in \Omega.$ Theorem \ref{injthm} tells us that $f$ is then injective on $\Omega,$ so $f^{-1}$ exists as a map $f(\Omega)\rightarrow \Omega.$ We must show $f(\Omega)$ and $f^{-1}$ are both NC. In fact, we claim that if we take an exhausting sequence $\{\Omega_k\}$ for $\Omega$, then the sequence of images $\{f(\Omega_k)\}$ is an exhaustion for $f(\Omega)$. First, we show $f(\Omega)$ is an NC domain. All required properties in Definition \ref{ncd} of the sequence $f(\Omega_k)$ are immediate from the corresponding properties of $\Omega_k$ and the fact that $f$ is NC, except possibly the containment $f(\Omega_k)\subset \text{int}\, f(\Omega_{k+1}).$ But since $f$ is $C^1,$ the classical inverse function theorem for Banach spaces (see \cite{banachbook} for a reference) implies $f$ is an open map because each $Df(x)$ is invertible. Hence, $$f(\Omega_k)\subset f(\text{int}\, \Omega_{k+1}) =\text{int}\, f(\text{int}\, \Omega_{k+1}) \subset \text{int}\, f(\Omega_{k+1}).$$ Finally, we show $f^{-1}$ is an NC function. Let $f(x_1)$ and $f(x_2)$ be in $f(\Omega)$ and let $s:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ be invertible with \begin{align} \label{btrick} s^{-1} \begin{bmatrix} f(x_1) & 0 \\ 0 & f(x_2) \end{bmatrix}s \in f(\Omega).\end{align} It suffices to show $w:=s^{-1}\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}s$ lies in $\Omega,$ since we may then apply $f$ and use the fact that $f$ preserves direct sums to get $$f^{-1}\left(s^{-1} \begin{bmatrix} f(x_1) & 0 \\ 0 & f(x_2) \end{bmatrix}s\right)=s^{-1} \begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}s.$$ This then shows $f^{-1}$ preserves direct sums. Note that the membership $w\in \Omega$ does not immediately follow since $s$ is not necessarily unitary.
To that end, call the expression in (\ref{btrick}) $f(z)$ for a unique $z \in \Omega.$ We know there is a unitary $u:\mathcal{H} \rightarrow \mathcal{H}^{(2)}$ such that $$x:=u^{-1}\begin{bmatrix} x_1 & 0 \\ 0 & x_2 \end{bmatrix}u \in \Omega.$$ Since $f$ is NC, if we define $L:=s^{-1}u \in B(\mathcal{H}),$ then $$f(x)=u^{-1}\begin{bmatrix} f(x_1) & 0 \\ 0 & f(x_2) \end{bmatrix}u = L^{-1}f(z)L.$$ We claim that $z=LxL^{-1},$ which proves $w \in \Omega,$ since $LxL^{-1}=w.$ There is a unitary $v:\mathcal{H}\rightarrow \mathcal{H}^{(2)}$ such that $v^{-1}\begin{bmatrix} z & 0 \\ 0 & x \end{bmatrix}v \in \Omega$. For sufficiently small $\varepsilon>0,$ apply Lemma \ref{L}: \begin{align*}f\left(v^{-1}\begin{bmatrix} z & \varepsilon(Lx-zL) \\ 0 & x \end{bmatrix}v\right)&=v^{-1}\begin{bmatrix} f(z) & \varepsilon(Lf(x)-f(z)L) \\ 0 & f(x) \end{bmatrix}v \\ &= v^{-1}\begin{bmatrix} f(z) & 0 \\ 0 & f(x) \end{bmatrix}v \\ &= f\left(v^{-1}\begin{bmatrix} z & 0 \\ 0 & x \end{bmatrix}v\right).\end{align*} It now follows from the injectivity of $f$ that $Lx=zL,$ as desired. \end{proof} Recall from Section \ref{sec:main} that the notation $Z_f$ denotes the zero set of the function $f$. We now prove the implicit function theorem for NC operator functions by using Theorem \ref{inverse} applied to an appropriate auxiliary function. The derivative map of this function will be shown to also have the NC bounded below property under the hypotheses of Theorem \ref{implicit}. \begin{proof}[Proof of Theorem \ref{implicit}] Consider the NC function $F:\Omega \rightarrow B(\mathcal{H})^d$ given by the formula $F(x)=(x^1,\ldots, x^{d-r},f(x)).$ We claim that $F$ satisfies the hypotheses of Theorem \ref{inverse}. The derivative of $F$ is computed as \begin{align}\label{implicitdiff} DF(x)[h]=(h^1,\ldots,h^{d-r},Df(x)[h]). \end{align} We first show that $DF(a)$ is invertible in $B(B(\mathcal{H})^d)$, where $a\in \Omega$ is the point such that $\Psi(a)$ is assumed to be invertible. Let $(v,w)\in B(\mathcal{H})^{d-r}\times B(\mathcal{H})^r$ be arbitrary. By hypothesis, there is $(h^{d-r+1},\ldots, h^d)\in B(\mathcal{H})^r$ such that $$Df(a)[0,\ldots,0,h^{d-r+1},\ldots, h^d]=w-Df(a)[v,0,\ldots,0].$$ Linearity of the derivative and (\ref{implicitdiff}) then give \begin{align*} DF(a)[v,h^{d-r+1},\ldots, h^d] &= (v,Df(a)[v,h^{d-r+1},\ldots, h^d]) \\ &=(v,Df(a)[v,0,\ldots,0]+Df(a)[0,\ldots,0,h^{d-r+1},\ldots, h^d]) \\ &= (v,w), \end{align*} so $DF(a)$ is surjective. As $DF(a)$ is clearly injective when $\Psi(a)$ is, we conclude that $DF(a)$ is invertible. We now show that $DF$ has the NC bounded below property by showing, for $x\in \Omega$, that $DF(x)$ is bounded below if and only if $\Psi(x)$ is bounded below. It is immediate to see that $\Psi(x)$ is bounded below if $DF(x)$ is, so we prove the converse. Fix $x\in \Omega$ such that $\Psi(x)$ is bounded below.
Then there is $\varepsilon>0,$ depending only on $x,$ such that $$\Vert Df(x)[0,\ldots,0,h^{d-r+1},\ldots, h^d]\Vert \geq \varepsilon \max \{\Vert h^{d-r+1}\Vert, \ldots, \Vert h^d\Vert\}$$ for all $(h^{d-r+1},\ldots, h^d)\in B(\mathcal{H})^r.$ Therefore we may estimate \begin{align*} \Vert Df(x)[h^1,\ldots,h^d]\Vert &= \Vert Df(x)[h^1,\ldots,h^{d-r},0,\ldots,0] \\ &\hspace{90pt}+Df(x)[0,\ldots,0,h^{d-r+1},\ldots,h^d]\Vert \\ &\geq \Vert Df(x)[0,\ldots,0,h^{d-r+1},\ldots,h^d]\Vert \\ &\hspace{90pt}-\Vert Df(x)[h^1,\ldots,h^{d-r},0,\ldots,0]\Vert \\ &\geq \varepsilon \max\{\Vert h^{d-r+1}\Vert, \ldots, \Vert h^d\Vert\} \\ &\hspace{90pt}-\Vert Df(x)\Vert \max \{\Vert h^1\Vert,\ldots,\Vert h^{d-r}\Vert\}. \end{align*} This combined with taking norms in (\ref{implicitdiff}) gives us \begin{align*}\Vert DF(x)[h^1,\ldots, h^d]\Vert&=\max \{\Vert h^1\Vert,\ldots,\Vert h^{d-r}\Vert,\Vert Df(x)[h^1,\ldots,h^d]\Vert\} \\ &\geq \frac{\varepsilon}{\varepsilon+\Vert Df(x)\Vert +1} \max \{\Vert h^1\Vert,\ldots,\Vert h^d\Vert\}, \end{align*} so $DF(x)$ is bounded below. (For the last inequality, write $c=\frac{\varepsilon}{\varepsilon+\Vert Df(x)\Vert+1}$. If $\max \{\Vert h^1\Vert,\ldots,\Vert h^{d-r}\Vert\}\geq c\max\{\Vert h^1\Vert,\ldots,\Vert h^d\Vert\}$ there is nothing to prove; otherwise the overall maximum is attained among the last $r$ coordinates, and the previous estimate gives $\Vert Df(x)[h]\Vert\geq (\varepsilon-c\Vert Df(x)\Vert)\max\{\Vert h^1\Vert,\ldots,\Vert h^d\Vert\}$, where $\varepsilon-c\Vert Df(x)\Vert = \frac{\varepsilon(\varepsilon+1)}{\varepsilon+\Vert Df(x)\Vert+1}\geq c$.) Therefore, by Theorem \ref{inverse}, we know $F^{-1}: F(\Omega)\rightarrow \Omega$ is NC. We may write $F^{-1}$ in terms of its coordinates, say $F^{-1}=(G^1,\ldots, G^d).$ Let $V$ be the projection onto the first $d-r$ coordinates of the zero set $Z_f.$ Thus, $V$ can explicitly be written as the set of $y\in B(\mathcal{H})^{d-r}$ such that there exists $z\in B(\mathcal{H})^r$ with $(y,z)\in \Omega$ and $f(y,z)=0.$ Then $V$ is seen to be an NC domain with exhaustion $\{V_k\}$, where $V_k$ is defined to be the set of $y\in B(\mathcal{H})^{d-r}$ such that there exists $z\in B(\mathcal{H})^r$ with $(y,z)\in \Omega_k$ and $f(y,z)=0.$ (The containment $V_k\subset \text{int} \, V_{k+1}$ follows since $F$ is an open map by Theorem \ref{bbthm} and the classical Banach space inverse function theorem.) Now define $\phi:V\rightarrow B(\mathcal{H})^r$ by $$\phi(y):=(G^{d-r+1}(y,0),\ldots, G^d(y,0)).$$ It is immediate to check that $\phi$ is an NC function. Let $y\in V.$ From the definitions, \begin{align*} (y,0)&= F(G^1(y,0),\ldots, G^{d-r}(y,0),\phi(y)) \\ &= (G^1(y,0),\ldots, G^{d-r}(y,0),f(F^{-1}(y,0))). \end{align*} Therefore $y=(G^1(y,0),\ldots, G^{d-r}(y,0))$ and $f(F^{-1}(y,0))=0$, so $(y,\phi(y))\in Z_f.$ Conversely, let $x=(y,z)\in Z_f,$ where $y \in B(\mathcal{H})^{d-r}$ and $z\in B(\mathcal{H})^r$. Then $y\in V$ and $F(x)=(y,f(x))=(y,0).$ Thus, $(y,z)=F^{-1}(y,0)$, which implies $z=\phi(y).$ This establishes the desired parametrization of $Z_f.$ \end{proof} Now we come to the proofs of the main results concerning \emph{strong} NC functions. Theorem \ref{strongbb} and its corollary are our primary applications of the shift form construction from Section \ref{sec: strongshift}. The reader may want to review that section before proceeding with the following proof. \begin{proof}[Proof of Theorem \ref{strongbb}] Suppose there is $x \in \Omega$ such that $Df(x)$ is not bounded below. Then we can find a sequence $h_n\in B(\mathcal{H})^d$ of unit vectors such that \[\Vert Df(x)[h_n]\Vert \rightarrow 0.\] Let $\{\Omega_k\}$ be an exhaustion for $\Omega$ as in Definition \ref{sncf} and say the point $x$ lies in $\Omega_k.$ By Lemma \ref{sotcompact} (ii), there are unitaries $v_n$ and a point $(x',h')\in B(\mathcal{H})^{2d}$ with $h'\neq 0$ such that $$v_n^*(x,h_n)v_n\rightarrow (x',h')$$ in SOT along a subsequence $n_j$.
Since $\Omega_k$ is unitarily invariant and SOT-closed, it follows that $v_{n_j}^*xv_{n_j} \in \Omega_k$ for every $j$ and that $x'\in \Omega_k.$ Therefore, by the discussion following Definition \ref{sncf} on the SOT continuity of the derivative of a strong NC function, we have $$v_{n_j}^*Df(x)[h_{n_j}]v_{n_j}=Df(v_{n_j}^*xv_{n_j})[v_{n_j}^*h_{n_j}v_{n_j}] \rightarrow Df(x')[h']$$ in SOT. But by the choice of $h_n$ and because the $v_n$ are unitary, we also have $$v_{n_j}^*Df(x)[h_{n_j}]v_{n_j}\rightarrow 0$$ in norm. Therefore, $Df(x')[h']=0,$ contradicting the hypothesis of injectivity of $Df(x').$ \end{proof} \begin{proof} [Proof of Corollary \ref{strongbbcor}] Apply Theorem \ref{injthm} to conclude that each $Df(x)$ is injective. Then by Theorem \ref{strongbb}, each $Df(x)$ is in fact bounded below since $f$ is strong NC. Theorem \ref{bbthm} then implies the desired conclusion. \end{proof} \bigskip \emph{Acknowledgements.} This work was partially supported by the National Science Foundation Grant DMS 1565243.
\section*{Acknowledgment} We would like to thank M. Sasai, S. Takada and G. Chikenji for critical reading of the manuscript and valuable comments. The present work is partially supported by the IT-program of the Ministry of Education, Culture, Sports, Science and Technology, the 21st Century COE program ``Towards a new basic science: depth and synthesis'', a Grant-in-Aid for Scientific Research (C) (17540383) from the Japan Society for the Promotion of Science, and JST-CREST.
\section{Introduction} Multi-vehicle driving on the road is a complex interactive game problem \cite{xiaoming2010study,li2006cooperative}. It requires each vehicle to decide its driving strategy in real time while taking the surrounding vehicles into account. Therefore, it is necessary to design a driving control model which can make decisions interactively. With the rapid development of artificial intelligence and V2X technologies, autonomous driving has gradually become possible. Autonomous vehicles driving on the road also face game problems \cite{dreves2018generalized}. In traditional planning and decision-making methods, the trajectories of the surrounding vehicles are predicted first and then fed as obstacles into the planning and decision-making module \cite{grigorescu2020survey}. This two-stage method does not consider the influence of interactions between the ego car and the surrounding cars. To deal with this limitation, we incorporate game theory into the decision-making model so as to capture the mutual influence of the ego car and the surrounding cars. \begin{figure}[t] \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{material/multi_driving.png}} \caption{\textbf{Three vehicles drive at a non-signalized intersection}. They have individual objective functions $J_i$, which involve their destinations, reference trajectories, expected speeds, acceptable safe distances, et al. The motion of the vehicles is controlled by acceleration and steering angle. This multi-player decision-making scenario with individual objective functions is a non-zero-sum game problem.} \label{intersection_scenario} \end{center} \end{figure} Several scholars have conducted in-depth research on driving games, but some problems remain. In 2015, game theory was introduced into intersection control for the first time \cite{elhenawy2015intersection}; traffic efficiency was found to improve by 49\% compared to using stop signs. \cite{yang2016cooperative} and \cite{cheng2018speed} use Pareto optimality, a globally optimal state, as the optimal driving control strategy. However, vehicles tend to be non-cooperative in the real world, so it is unrealistic to reach the globally optimal state. To address this limitation, \cite{cheng2019vehicle} used the Nash equilibrium strategy as the optimal driving strategy to achieve individual optimality. However, they assume that vehicles are controlled by acceleration alone, without steering, which forces the vehicles to travel on fixed trajectories regardless of any trajectory offset. Moreover, the above methods tend to define the range of acceleration as a discrete finite set, for example $\{0, \pm1, \pm2\}\ \rm{m/s^2}$. In reality, acceleration should be a continuous variable. There are some works that partially address these problems. For example, \cite{fridovich2020iterative} uses an iterative quadratic method to solve a general-sum driving game. However, it requires the system dynamics to be linearizable, which is unrealistic in complex driving games with highly non-linear vehicle dynamics. \cite{cleac2019algames} proposes ALGAMES, an augmented Lagrangian game-theoretic solver for driving games, which uses MPC (Model Predictive Control) to solve a constrained game problem with optimization methods. However, when the number of cars becomes large, optimization-based methods take a long time to find the optimal solution, which rules out real-time decision-making and control.
To overcome the issues mentioned above, we adopt the 3-DOF (Degree of Freedom) vehicle dynamic model \cite{ge2021numerically} rather than the vehicle kinematic model, to achieve joint control of acceleration and steering angle; for realism, both control variables are defined in continuous space. To achieve better driving performance, we consider a long-term cost based on future tracking performance, efficiency, safety and comfort indexes. Multi-vehicle driving on the road is then modeled as a non-zero-sum game problem. To obtain high-precision control actions, we directly solve the original nonlinear problem rather than linearizing it like \cite{fridovich2020iterative}. Finally, to solve the game problem in real-time applications, we adopt the model-based reinforcement learning method ADP (Approximate Dynamic Programming) to learn a neural network driving control policy. \textbf{Contributions. } We summarize our key contributions as follows: \begin{itemize} \item We propose a non-zero-sum game framework for modeling multi-vehicle driving. This framework accounts for the interactions between cars, improving driving intelligence. \item We provide an effective way to solve the Nash equilibrium driving strategy based on reinforcement learning. Experience replay and self-play tricks are adopted for scaling up the number of agents. \item We provide experiments showing that our algorithm completes trajectory tracking, speed tracking and collision avoidance tasks. Vehicles learn overtaking and passing behaviors without prior knowledge. \item We release our code to facilitate reproducibility and future research \footnote{{\rm https://github.com/jerry99s/NonZeroSum\_Driving}}. \end{itemize} \section{Background} \subsection{Game Theory} Game theory is an effective method to study the interactive control of multiple agents. It studies how individuals obtain their best strategies while facing conflicts \cite{von1947theory}. It has been applied to several driving control problems, such as lane changing control \cite{wang2015game,talebpour2015modeling} and driver-automation shared control \cite{na2014game,flad2014steering}. According to the cooperative and competitive relationships between participants, game problems can be divided into zero-sum games, non-zero-sum games, and completely cooperative games. The participants in a zero-sum game try to maximize or minimize the same objective function, while they have individual objective functions in a non-zero-sum game. For example, AlphaGo \cite{silver2016mastering} models Go as a zero-sum game, while Pluribus \cite{brown2019superhuman} models Texas hold'em poker as a non-zero-sum game. \subsection{Nash Equilibrium} \textbf{Definition 2.1} \cite{nash1950equilibrium} Let $u_i$ and $J_i(u_1,...,u_n)$ be the action and objective function of participant $i$. The \textit{Nash equilibrium} point $(u_1^*,...,u_n^*)$ satisfies: \[J_i(u_1^*,...,u_i^*,...,u_n^*) \leq J_i(u_1^*,...,u_i,...,u_n^*), \forall u_i \forall i\] \begin{figure}[H] \begin{center} \centerline{\includegraphics[width=0.9\columnwidth]{material/nash_equilibrium.pdf}} \caption{\textbf{Nash equilibrium for a two-player zero-sum game.} Two players have the same objective function $J(u_1,u_2)$ while player 1 tries to minimize it and player 2 tries to maximize it. They reach the Nash equilibrium at $(u_1^*, u_2^*)$.} \end{center} \end{figure} Nash equilibrium is an important concept in game theory.
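To make Definition 2.1 concrete, consider the following small illustration (ours, not part of the original development): a brute-force check of the Nash condition on a two-player game with two actions per player, where the cost tables $J_1$ and $J_2$ are arbitrary toy numbers and each player minimizes their own cost.

\begin{verbatim}
# Illustration only: brute-force Nash check (Definition 2.1)
# on a tiny bimatrix game with arbitrary toy cost tables.
import itertools

J1 = [[3, 1],   # J1[u1][u2]: cost of player 1
      [2, 4]]
J2 = [[2, 3],   # J2[u1][u2]: cost of player 2
      [1, 4]]

def is_nash(u1, u2):
    # No unilateral deviation lowers the deviating player's own cost.
    best1 = all(J1[u1][u2] <= J1[v][u2] for v in range(2))
    best2 = all(J2[u1][u2] <= J2[u1][v] for v in range(2))
    return best1 and best2

print([(u1, u2) for u1, u2 in itertools.product(range(2), repeat=2)
       if is_nash(u1, u2)])    # -> [(1, 0)]
\end{verbatim}

Here $(u_1,u_2)=(1,0)$ is the unique pure equilibrium: by deviating alone, player 1 would raise its cost from 2 to 3 and player 2 from 1 to 4. The driving games studied below impose the same condition with continuous action spaces.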
A Nash equilibrium describes the equilibrium state of a non-cooperative game. At the Nash equilibrium point, no participant can obtain a lower objective value by changing only his own action. Therefore, the Nash equilibrium action $u^*$ is the optimal action for each individual, provided all players are completely rational. This motivates us to find the Nash equilibrium strategy, which gives the Nash equilibrium action at every state, to improve each individual's intelligence. So, in the non-zero-sum multi-vehicle driving game described in Figure \ref{intersection_scenario}, we aim to find the Nash equilibrium driving strategies. \subsection{Approximate Dynamic Programming} Reinforcement learning imitates the human learning process of trial, error and evaluation. The learning process is shown in Figure \ref{RL_learning}. \begin{figure}[H] \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{material/reinforcement_learning.pdf}} \caption{\textbf{Reinforcement learning}: The agent perceives the current state and takes an action to interact with the environment. The environment updates the state according to the action and gives reward or punishment feedback for action evaluation.} \label{RL_learning} \end{center} \end{figure} According to whether the learning method uses an environment dynamic model, reinforcement learning can be divided into model-based and model-free categories. ADP (Approximate Dynamic Programming) is a model-based reinforcement learning method \cite{lewis2009reinforcement}. ADP learns an actor network $\pi(s;\theta):\mathbb{R}^{|S|}\rightarrow\mathbb{R}^{|U|}$ to calculate the optimal action, and a critic network $V(s;\phi):\mathbb{R}^{|S|}\rightarrow\mathbb{R}$ to evaluate the current state $s$ under policy $\pi$. $\theta$ and $\phi$ represent the network weights of the actor and critic. $|S|$ and $|U|$ represent the dimensions of the state and action, respectively. Here we use $U$ rather than $A$ to represent the action space and $u$ rather than $a$ to represent an action, which matches the notation of game theory and avoids a symbol clash with the vehicle acceleration. The value of the critic network equals the sum of future rewards: \vskip -0.1in \begin{align} V(s_t;\phi) = \sum_{i=0}^\infty r(s_{t+i},u_{t+i}) \label{ADP_value} \end{align} where $s_t$ is the state at time step $t$, and $r$ is the reward function. The weights $\phi$ of the critic network $V(s_t;\phi)$ are updated according to the Bellman equation: \vskip -0.1in \begin{align} V(s_t;\phi) = r(s_{t},u_{t}) + V(s_{t+1};\phi) \label{bellman_eq} \end{align} and the weights $\theta$ of the actor network are updated based on the reward and the current critic network: \vskip -0.1in \begin{align} \pi(s_t;\theta) = \mathop{\arg \min}\limits_u \{r(s_{t},u) + V(f(s_t,u);\phi) \} \label{pi_update} \end{align} where $f:\mathbb{R}^{|S|}\times\mathbb{R}^{|U|}\rightarrow\mathbb{R}^{|S|}$ is the environment dynamic model. The environment dynamic model provides the next state $s_{t+1}$ given the state and action of the current time step. \section{Methods} \subsection{Modeling of Non-zero-sum Game} \label{modeling} To model the vehicle motion more realistically, we adopt the vehicle dynamic model rather than the kinematic model. The vehicle dynamic model is widely used in several well-known autonomous driving platforms, such as NVIDIA DRIVE Sim \cite{nvndia_sim} and Baidu Apollo \cite{fan2018baidu}.
The dynamic model makes it possible to jointly control acceleration and steering angle in continuous space. We use the 3-DOF vehicle dynamic model derived by the backward Euler method \cite{ge2021numerically} to obtain a stable simulation even when the longitudinal vehicle speed is low. \begin{figure}[H] \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{material/dynamic.png}} \caption{\textbf{The 3-DOF vehicle dynamic model.} $\delta$ is the front wheel steering angle. $a$ is the longitudinal acceleration. $v_x, v_y$ are the longitudinal speed and lateral speed. $\omega$ is the yaw rate. $l_f$ is the distance between the center of gravity and the front axle. $l_r$ is the distance between the center of gravity and the rear axle. $d$ is the trajectory offset from the vehicle center to the reference trajectory. $\alpha$ is the heading angle error between the vehicle heading angle and the tangent line of the reference trajectory.} \label{vehicle_dynamic} \end{center} \end{figure} We use an intersection scenario as an example, where each vehicle tracks its reference trajectory until it has gone through the intersection. The scenario is shown in Figure \ref{intersection_scenario}. The vehicles' motions obey the dynamic model and they take individual actions according to different objective functions. The objective functions include trajectory tracking, traffic efficiency, safety and comfort performance indexes. This situation can be modeled as an MDP (Markov Decision Process). We give the definitions of the state, action, dynamics and reward of this MDP as follows. \textbf{State. } The state of vehicle $i$ is defined as $x_i$ \[x_i = [X_i, Y_i, d_i, \alpha_i, v_{x,i}, v_{y,i}, \omega_i]^\top,\ i\in\{1,\cdots,n\}\] where $(X_i, Y_i)$ is the global location of vehicle $i$ and $n$ is the total number of vehicles. The other variables are declared in Figure \ref{vehicle_dynamic}. The system state of all vehicles is defined as $s$: \[s = [x_1,\cdots,x_n]^\top\] It is the combination of all vehicle states. We omit the time index $t$ for simplicity. \textbf{Action. } The action of vehicle $i$ is defined as $u_i$ \[u_i = [a_i, \delta_i]^\top\] where $a_i$, $\delta_i$ are the longitudinal acceleration and steering angle of vehicle $i$, respectively. \textbf{Dynamics. } With the state and action defined, when only the steering angle is controlled, the discrete-time state transition can be written as the affine nonlinear equation (\ref{affine_dynamic}): \vskip -0.1in \begin{align} s_{t+1} = f(s_t) + \sum_{i=1}^n g_i(s_t)u_i(s_t) \label{affine_dynamic} \end{align} where the maps $f(s_t)$ and $g_i(s_t)$ follow from the vehicle dynamic model; their exact forms are listed in Appendix \ref{dynamic_formula}. Furthermore, when we control acceleration and steering angle together, the discrete-time state transition can be written as the non-affine nonlinear equation (\ref{non-affine_dynamic}): \vskip -0.1in \begin{align} s_{t+1} = f(s_t, u_1,\cdots,u_n) \label{non-affine_dynamic} \end{align} Due to the good properties of affine nonlinear systems, we use equation (\ref{affine_dynamic}) rather than (\ref{non-affine_dynamic}) in the derivations of Section \ref{solving_ADP}, where we also give a convergence illustration of solving the non-zero-sum game by ADP for the affine system.
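As a small illustration of the interface implied by equation (\ref{affine_dynamic}) — our own framing, with the concrete $f$ and $g_i$ deliberately left as injected placeholders, since their exact forms live in Appendix \ref{dynamic_formula} — the affine transition can be wrapped as follows.

\begin{verbatim}
# Sketch of the affine transition s_{t+1} = f(s_t) + sum_i g_i(s_t) u_i.
# f and g_i are placeholders; the real ones come from the 3-DOF model.
import numpy as np

def make_affine_step(f, g_list):
    """f(s) -> R^{|S|};  g_list[i](s) -> R^{|S| x |U|} for vehicle i."""
    def step(s, u_list):
        s_next = f(s)
        for g_i, u_i in zip(g_list, u_list):
            s_next = s_next + g_i(s) @ u_i
        return s_next
    return step

# Toy linear stand-ins for two vehicles (illustration only):
rng = np.random.default_rng(0)
s_dim, u_dim, n = 14, 1, 2            # steering-only control, n = 2
A = np.eye(s_dim) + 0.01 * rng.standard_normal((s_dim, s_dim))
Gs = [rng.standard_normal((s_dim, u_dim)) for _ in range(n)]
step = make_affine_step(lambda s: A @ s, [lambda s, G=G: G for G in Gs])
s1 = step(rng.standard_normal(s_dim), [np.zeros(u_dim)] * n)
\end{verbatim}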
According to the experiments in Section \ref{experiments}, however, our algorithm also converges when using the non-affine system and reaches better performance than with the affine system. \textbf{Reward. } The reward $r_i$ of vehicle $i$ is defined as: \vskip -0.1in \begin{align} r_i(s, u_1,&\cdots,u_n) = \bm{\textcolor{MediumVioletRed}{c_{d} \cdot d_{i}^{2} + c_{\theta} \cdot \alpha_{i}^{2}}} \nonumber\\ &+ \bm{\textcolor{DodgerBlue}{c_{v} \cdot\left(v_{x, i}-v_{ref, i}\right)^{2}}} \nonumber\\ &+ \bm{\textcolor{OliveDrab}{c_{safe} \cdot \max \left\{0,\ 5^{2}-\left(\min_{j \neq i} {\rm dis}(i,j)\right)^2\right\}}} \nonumber\\ &+ \bm{\textcolor{DarkOrange}{c_{\delta} \cdot \delta_{i}^{2}+c_{a} \cdot a_{i}^{2}}} \label{reward} \end{align} where $v_{ref, i}$ is the reference longitudinal speed of vehicle $i$ and ${\rm dis}(i,j)$ is the Euclidean distance between vehicles $i$ and $j$, so that the safety term penalizes proximity to the nearest surrounding vehicle. The $c_*$ are hyper-parameter constants. The other variables were already declared in this section. The {red} part of equation (\ref{reward}) indicates \bm{\textcolor{MediumVioletRed}{trajectory tracking performance}}. The {blue} part indicates \bm{\textcolor{DodgerBlue}{traffic efficiency}}. The {green} part indicates \bm{\textcolor{OliveDrab}{safety performance}}. The {orange} part indicates \bm{\textcolor{DarkOrange}{comfort performance}}. All parts except the safety index are quadratic. There is a convergence guarantee for solving the non-zero-sum game by ADP when the reward is quadratic, as we illustrate in Section \ref{solving_ADP}. According to the experiments in Section \ref{experiments}, our algorithm also converges when the safety index is not quadratic. \subsection{Solving Coupled HJB Equation} \label{solving_ADP} According to equation (\ref{ADP_value}), the value function is the sum of future rewards. By Bellman's principle of optimality, the optimal value function of vehicle $i$ should be \vskip -0.1in \begin{align} V_{i}^{*}(s_t)=\min_{u_{i}}\left\{ r_{i}\left(s_t, u_{1}, \ldots, u_n\right)+V_{i}^{*}(s_{t+1}) \right\} \label{optimal_value} \end{align} The next state $s_{t+1}$ is derived from the environment dynamic model. We assume that the value function in equation (\ref{optimal_value}) is continuous and differentiable, that the environment dynamic model is affine, and that the reward is quadratic. Then \vskip -0.1in \begin{align} \frac{\partial V_{i}^{*}(s_t)}{\partial u_{i}} = 2 R_{ii} u_{i}+g_{i}^\top(s_t) \frac{\partial V_{i}^{*}(s_{t+1})}{\partial s_{t+1}} \label{dev_value} \end{align} where $R_{ii}$ is the diagonal matrix containing the weights of the control terms in the quadratic reward function, and $g_i$ is the matrix of equation (\ref{affine_dynamic}). According to the stationarity (first-order optimality) condition, we obtain a necessary condition for the Nash equilibrium: \vskip -0.1in \begin{align} \frac{\partial V_{i}^{*}(s_t)}{\partial u_{i}}=0,\ \forall i \label{optimal_condition} \end{align} Substituting equation (\ref{optimal_condition}) into equation (\ref{dev_value}) gives \vskip -0.1in \begin{align} u_{i}^{*}=-\frac{1}{2} R_{ii}^{-1} g_{i}^\top(s_t) \nabla V_{i}^{*}(s_{t+1}),\ \forall i \label{control_law} \end{align} where $\nabla V_{i}^{*}(s_{t+1})=\frac{\partial V_{i}^{*}(s_{t+1})}{\partial s_{t+1}}$.
Substituting equation (\ref{control_law}) into equation (\ref{optimal_value}) gives the coupled HJB (Hamilton-Jacobi-Bellman) equations \vskip -0.1in \begin{align} \label{HJB_eq} 0&=Q_{i}(s_t)+V_{i}^{*}\left(s_{t+1}\right)-V_{i}^{*}(s_t) \\ &+\frac{1}{4} \sum_{j=1}^{n}\left(\nabla V_{j}^{*}\left(s_{t+1}\right)\right)^\top g_{j}(s_t) R_{jj}^{-1} g_{j}^\top(s_t) \nabla V_{j}^{*}\left(s_{t+1}\right), \forall i \nonumber \end{align} where $Q_i(s_t)$ collects the state-dependent (non-control) part of the quadratic reward $r_i$. Note that there are $n$ equations in (\ref{HJB_eq}) and each equation contains the $n$ value functions $V_1^*, \ldots, V_n^*$; this is why they are called \textit{coupled} HJB equations. In equation (\ref{HJB_eq}), the $n$ value functions $V_{i}^{*}$ are the only unknown quantities. In simple tasks, we can directly solve equation (\ref{HJB_eq}) to get $V_{i}^{*}$ in closed form. The Nash equilibrium control strategy can then be obtained by substituting $V_{i}^{*}$ into equation (\ref{control_law}). However, due to the severe nonlinearity of the multi-vehicle driving problem, equation (\ref{HJB_eq}) can only be solved by numerical methods. For nonlinear non-zero-sum games with known dynamics, ADP can be used to solve it \cite{vamvoudakis2011multi,zhang2016discrete,zhang2012near,li2013adaptive}. Convergence guarantees are provided in \cite{vamvoudakis2011multi,zhang2016discrete} when the dynamic system is affine and the reward function is quadratic. Notably, these works are all verified on simple tasks, e.g., systems with only a 2- or 3-dimensional state, and some use polynomials as the function approximator because the system dynamics are straightforward. In our scenario, we extend to a high-dimensional system with a state of at least 14 dimensions and use deep neural networks as the function approximator. To increase the training stability and convergence speed, we also adopt \textit{Experience Replay} \cite{mnih2013playing}. The resulting method is described in Algorithm \ref{alg_1}. The interactive flow chart is shown in Figure \ref{alg_1_flow}. \begin{algorithm}[h] \caption{Solve Nash Equilibrium by ADP} \label{alg_1} \begin{algorithmic} \STATE {\bfseries Initialize:} $V_i(s;\phi_i)$ and $\pi_i(s;\theta_i),\ \forall i$ \REPEAT \STATE Use $\pi_i(s;\theta_i)$ to run an episode and store the state trajectory ($s_0,\ldots,s_T$) into the replay buffer. \STATE Sample a batch of $s_t$ from the replay buffer. \STATE Calculate actions, rewards and next states: \vskip -0.25in \begin{align*} u_i &= \pi_i(s_t;\theta_i)\\ r_i &= r_i(s_{t},u_1,\ldots,u_n)\\ s_{t+1} &= f(s_{t}, u_1,\ldots,u_n) \end{align*} \STATE Calculate critic loss and actor loss according to equations (\ref{bellman_eq}) and (\ref{pi_update}): \vskip -0.25in \begin{align*} {\rm c\_loss}_{i} &= \left(r_i + V_i(s_{t+1};\phi_i) - V_i(s_t;\phi_i)\right)^2\\ {\rm a\_loss}_{i} &= r_i + V_i(s_{t+1};\phi_i) \end{align*} \STATE Update $V_i(s;\phi_i)$ and $\pi_i(s;\theta_i)$ by gradient descent: \vskip -0.2in \begin{align*} \phi_i &\leftarrow \phi_i - \eta \frac{\partial {\rm c\_loss}_i}{\partial \phi_i} \\ \theta_i &\leftarrow \theta_i - \eta \frac{\partial {\rm a\_loss}_i}{\partial \theta_i} \end{align*} \UNTIL{ $V_i(s;\phi_i)$ and $\pi_i(s;\theta_i)$ converge, $\forall i$.
} \end{algorithmic} \end{algorithm} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=0.8\columnwidth]{material/alg_1_flow.png}} \caption{\textbf{Interactive flow chart of non-zero-sum driving control.} To represent the time step and vehicle index at the same time when writing the reward $r$ and action $u$, we place the time step in the subscript and the vehicle index in the superscript.} \label{alg_1_flow} \end{center} \end{figure} \subsection{Training by Self-play} \label{self_play_design} Self-play is an important trick for training reinforcement learning agents. The agent is trained by regarding copies of itself as its competitors. In this way, both the intelligence and the training efficiency are improved. For example, AlphaGo \cite{silver2016mastering}, AlphaGo Zero \cite{silver2017mastering} and DouZero \cite{zha2021douzero} are all trained by self-play. In Algorithm \ref{alg_1}, there are $n$ critic networks $V_i(s;\phi_i)$ and $n$ actor networks $\pi_i(s;\theta_i)$ corresponding to the $n$ vehicles. When the number of vehicles $n$ grows, the training parameter space becomes too large. To overcome this issue, we design a parameter-sharing mechanism to decrease the number of parameters and make self-play training possible. Specifically, we first transform the state $s_t$ from the global observation view to a local observation view. When vehicle $i$ perceives the environment state, the surrounding vehicles' states are observed in vehicle $i$'s view. Let us define the relative state $o_j$ of surrounding vehicle $j$ in the observation view of vehicle $i$ as \[o_j = [rx_j, ry_j, r\alpha_j, v_{x,j}]^\top\] where $(rx_j, ry_j)$ are the relative coordinates in the view of vehicle $i$, $r\alpha_j$ is the relative heading angle, and $v_{x,j}$ is the longitudinal speed of vehicle $j$. The variables are explained in Figure \ref{relative_state}. \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=\columnwidth]{material/relative_state.png}} \caption{\textbf{Relative state} of surrounding vehicle $j$ in the observation view of vehicle $i$.} \label{relative_state} \end{center} \end{figure} Then, the overall state in the view of vehicle $i$ is defined as \[s_i = [d_i, \alpha_i, v_{x,i}, v_{y,i}, \omega_i, o_1^\top, \ldots, o_{n-1}^\top]^\top\] It contains the dynamic state of vehicle $i$ and the relative states of all surrounding vehicles. In this way, the observed state of each vehicle has a unified form, which enables parameter sharing: a single critic network $V(s;\phi)$ with parameters $\phi$ and a single actor network $\pi(s;\theta)$ with parameters $\theta$ serve all vehicles. We also suppose that the hyper-parameters in the reward function are the same for every vehicle. After that, Algorithm \ref{alg_1} changes from mutual-play to self-play, as shown in Algorithm \ref{alg_2}. The interactive flow chart is shown in Figure \ref{alg_2_flow}. \begin{algorithm}[H] \caption{Self-play ADP for Solving Nash Equilibrium} \label{alg_2} \begin{algorithmic} \STATE {\bfseries Initialize:} $V(s;\phi)$ and $\pi(s;\theta)$ \REPEAT \STATE Use $\pi(\,\cdot\,;\theta)$ for every vehicle to run an episode and store the state trajectory ($s_0,\ldots,s_T$) into the replay buffer. \STATE Sample a batch of $s_t$ from the replay buffer.
\STATE Calculate actions, rewards and next states, where $s_t^i$ denotes the unified-form state observed by vehicle $i$ (computed from $s_t$): \vskip -0.25in \begin{align*} u_i &= \pi(s_t^i;\theta),\ \forall i\\ r_i &= r_i(s_{t},u_1,\ldots,u_n),\ \forall i\\ s_{t+1} &= f(s_{t}, u_1,\ldots,u_n) \end{align*} \STATE Calculate the critic loss and actor loss according to equations (\ref{bellman_eq}) and (\ref{pi_update}), summed over vehicles: \vskip -0.25in \begin{align*} {\rm c\_loss} &= \sum_i\left(r_i + V(s_{t+1}^i;\phi) - V(s_t^i;\phi)\right)^2\\ {\rm a\_loss} &= \sum_i\left(r_i + V(s_{t+1}^i;\phi)\right) \end{align*} \STATE Update $V(s;\phi)$ and $\pi(s;\theta)$ by gradient descent: \vskip -0.2in \begin{align*} \phi &\leftarrow \phi - \eta \frac{\partial {\rm c\_loss}}{\partial \phi} \\ \theta &\leftarrow \theta - \eta \frac{\partial {\rm a\_loss}}{\partial \theta} \end{align*} \UNTIL{ $V(s;\phi)$ and $\pi(s;\theta)$ converge. } \end{algorithmic} \end{algorithm} \begin{figure}[h] \begin{center} \centerline{\includegraphics[width=\columnwidth]{material/alg_2_flow.png}} \caption{\textbf{Interactive flow chart of non-zero-sum driving control when using self-play training.} To represent the time step and vehicle index at the same time when writing the reward $r$ and action $u$, we place the time step in the subscript and the vehicle index in the superscript.} \label{alg_2_flow} \end{center} \end{figure} \section{Experiments} \label{experiments} Intersections are known as high-collision-risk scenes: about 61\% of side collisions happen there \cite{accidents}. There are more interactions between vehicles if the intersection has no traffic lights, which increases the driving risk. Therefore, we choose an intersection without traffic lights as the standard scene for our experiments. We describe three experiments in this section. The first experiment achieves intersection passing via lateral control. The second achieves intersection passing via both lateral and longitudinal control. We demonstrate a two-vehicle scenario in the first and second experiments. In the third experiment, four vehicles go through the intersection, which verifies the ability to scale up thanks to self-play training. The same neural networks are used in all three experiments; their structure is shown in Appendix \ref{net_structure}. \begin{figure*}[b] \begin{center} \subfigure[Vehicles trajectory]{\includegraphics[width=0.675\columnwidth]{material/exp1_traj.png}} \subfigure[Distance between vehicles]{\includegraphics[width=0.67\columnwidth]{material/exp1_dis.png}} \subfigure[Reward curve]{\includegraphics[width=0.67\columnwidth]{material/exp1_utility.png}} \vskip -0.1in \caption{\textbf{Lateral control result}.} \label{exp1} \end{center} \end{figure*} \textbf{Lateral Control. } In the first experiment, we describe a two-vehicle driving scenario at a non-signalized intersection. The vehicles can only control the steering angle to track their reference trajectories and avoid collisions. The longitudinal speeds are set to a constant $5\ \rm{m/s}$. The non-signalized intersection is formed by two reference trajectories marked as green dashed lines, as shown in Figure \ref{exp1}{\rm (a)}. The vehicles aim to track the reference trajectories to pass the intersection. The two reference trajectories meet at the point $(0,0)$. The first vehicle drives from left to right; its real trajectory is marked in blue. The second vehicle drives from bottom to top; its real trajectory is marked in orange. At the beginning, vehicle 1 and vehicle 2 are located at $(-100,-10)$ and $(-5,-100)$, which means they are $10\ \rm m$ and $5\ \rm m$ away from their reference trajectories, respectively.
When the simulation starts, the two vehicles successfully track their reference trajectories by controlling the steering angle. When they approach the center of the intersection, both of them try to be the first one to pass. However, because vehicle 1 would arrive slightly later than vehicle 2, vehicle 1 can only turn the steering wheel right and give way so that the intersection is passed as soon as possible. Figure \ref{exp1}{\rm (b)} shows that the nearest distance between vehicles 1 and 2 is $7.5\ \rm m$, larger than the accepted safety distance of $5\ \rm m$ set in equation (\ref{reward}). Figure \ref{exp1}{\rm (c)} shows that the reward of vehicle 1 is lower than that of vehicle 2 when they meet at the center of the intersection, because vehicle 1 gives way. \textbf{Lateral and Longitudinal Control. } In the second experiment, we describe a two-vehicle driving scenario at a non-signalized intersection. The vehicles control both steering angle and acceleration to track their reference trajectories, track their reference speeds and avoid collisions. In Figure \ref{exp2}{\rm (a)}, the reference trajectories are marked as green dashed lines, which meet at the point $(0,0)$. Vehicle 1, in blue, drives from left to right. Vehicle 2, in orange, drives from bottom to top. At the beginning, vehicle 1 and vehicle 2 are located at $(-100,5)$ and $(10,-100)$, which means they are $5\ \rm m$ and $10\ \rm m$ away from their reference trajectories, respectively. As shown in Figure \ref{exp2}{\rm (b)}, their initial longitudinal speeds are $5.5\ \rm{m/s}$ and $4.5\ \rm{m/s}$, respectively. When the simulation starts, the two vehicles successfully track their reference trajectories and reference speeds by controlling steering angle and acceleration, as shown in Figures \ref{exp2}{\rm (a)} and \ref{exp2}{\rm (b)}. When they approach the intersection, both of them try to be the first one to pass. However, because vehicle 2 would arrive slightly later than vehicle 1, vehicle 2 can only turn left and decelerate to give way, while vehicle 1 turns left and accelerates to pass first. Figure \ref{exp2}{\rm (c)} shows that the nearest distance between them is $5.2\ \rm m$, larger than the accepted safety distance of $5\ \rm m$ set in equation (\ref{reward}). Comparing Figure \ref{exp1}{\rm (a)} and Figure \ref{exp2}{\rm (a)}, the trajectory offsets at the intersection center decrease from about $10\ \rm m$ to about $3\ \rm m$ once longitudinal control is included. This is because a vehicle can then avoid collision not only by steering but also by adjusting its acceleration. Therefore, the lateral and longitudinal control model is more reasonable than the lateral-only control model, even though we have no convergence proof for the non-affine case.
\begin{figure*}[t] \begin{center} \subfigure[Vehicles trajectory]{\includegraphics[width=0.67\columnwidth]{material/exp2_traj.png}} \subfigure[Longitudinal speed]{\includegraphics[width=0.67\columnwidth]{material/exp2_speed.png}} \subfigure[Distance between vehicles]{\includegraphics[width=0.67\columnwidth]{material/exp2_dis.png}} \vskip -0.1in \caption{\textbf{Lateral and longitudinal control result}.} \label{exp2} \end{center} \end{figure*} \begin{figure*}[htbp] \begin{center} \subfigure[Vehicles trajectory]{\includegraphics[width=0.678\columnwidth]{material/exp3_traj.png}} \subfigure[Longitudinal speed]{\includegraphics[width=0.67\columnwidth]{material/exp3_speed.png}} \subfigure[Reward curve]{\includegraphics[width=0.67\columnwidth]{material/exp3_utility.png}} \vskip -0.1in \caption{\textbf{Self-play training result}.} \label{exp3} \end{center} \end{figure*} \textbf{Self-play Training. } In the third experiment, we describe a four-vehicle driving scenario at a non-signalized intersection with lateral and longitudinal control. The parameter-sharing mechanism is adopted and Algorithm \ref{alg_2} is used to train the agents. This experiment shows the effectiveness of the self-play training designed in Section \ref{self_play_design}. As shown in Figure \ref{exp3}{\rm (a)}, the initial trajectory offsets of the four vehicles range from $0\ \rm m$ to $25\ \rm m$, and their initial speeds range from $0\ \rm{m/s}$ to $7\ \rm{m/s}$, as shown in Figure \ref{exp3}{\rm (b)}. Figure \ref{exp3}{\rm (c)} shows the rewards of the vehicles during the simulation. When the simulation starts, all of the vehicles successfully track their reference trajectories and reference speeds by controlling steering angle and acceleration. When they approach the intersection center, they exhibit overtaking and passing behaviors, as seen in Figure \ref{exp3}{\rm (a)}. This implies the agents have learned to overtake and pass by themselves, even though we set no traffic rules for overtaking or passing. The simulation video can be seen here \footnote{{\rm https://github.com/jerry99s/NonZeroSum\_Driving}}. \section{Conclusion} Multi-vehicle driving on the road is a complex interactive game problem. In this paper, we construct the multi-vehicle driving scenario as a non-zero-sum game and propose a novel game control framework, which considers prediction, decision and control as a whole. The mutual influence of interactions between vehicles is captured in this framework because decisions are made by Nash equilibrium strategies. We provide an effective way to solve the Nash equilibrium driving strategies based on model-based reinforcement learning, and we introduce experience replay and self-play tricks for scaling up to larger numbers of agents. Experiments show that the Nash equilibrium driving strategies trained by our algorithm complete the tracking and collision-avoidance tasks successfully. Furthermore, the vehicles learn to overtake and pass without prior knowledge. In future work, we will consider more scenarios besides intersections. The hyper-parameters of the reward function should also adapt to surrounding cars with different driving styles.
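As a closing illustration, the following compressed sketch shows one way Algorithm \ref{alg_2} translates into code. It is our own minimal rendering, not the released implementation: the network sizes, learning rates, state layout, reward coefficients and, in particular, the toy stand-in for the 3-DOF dynamics are all assumptions made here for self-containedness; only the loss shapes follow equations (\ref{bellman_eq}), (\ref{pi_update}) and (\ref{reward}).

\begin{verbatim}
# Minimal self-play ADP sketch (cf. Algorithm 2); PyTorch assumed.
import torch
import torch.nn as nn

S_DIM, U_DIM = 9, 2          # assumed unified-form state / action dims
V_REF = 5.0                  # reference longitudinal speed (m/s)
C = dict(d=1.0, th=1.0, v=0.5, safe=10.0, delta=0.1, a=0.1)  # assumed

actor  = nn.Sequential(nn.Linear(S_DIM, 64), nn.Tanh(), nn.Linear(64, U_DIM))
critic = nn.Sequential(nn.Linear(S_DIM, 64), nn.Tanh(), nn.Linear(64, 1))
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)

B = 0.1 * torch.randn(U_DIM, S_DIM)    # fixed toy input map

def dynamics(s, u):
    # Stand-in for the differentiable 3-DOF transition f(s_t, u).
    return s + 0.05 * torch.tanh(s) + u @ B

def reward(s, u):
    # Reward of Section 3.1: tracking + efficiency + safety + comfort.
    # Assumed layout: [d, alpha, v_x, v_y, omega, rx, ry, ...].
    d, alpha, v_x = s[:, 0], s[:, 1], s[:, 2]
    dist = torch.linalg.norm(s[:, 5:7], dim=-1)  # distance to neighbour
    a, delta = u[:, 0], u[:, 1]
    return (C['d'] * d**2 + C['th'] * alpha**2
            + C['v'] * (v_x - V_REF)**2
            + C['safe'] * torch.clamp(5.0**2 - dist**2, min=0.0)
            + C['delta'] * delta**2 + C['a'] * a**2)

buffer = torch.randn(256, S_DIM)   # stand-in for roll-out states

for it in range(1000):
    s = buffer[torch.randint(0, 256, (32,))]     # sample a batch
    u = actor(s)
    s_next = dynamics(s, u)
    r = reward(s, u)
    # Critic: squared Bellman residual.
    c_loss = (r.detach() + critic(s_next.detach()).squeeze(-1)
              - critic(s).squeeze(-1)).pow(2).mean()
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    # Actor: minimize r + V(s_next).
    a_loss = (r + critic(s_next).squeeze(-1)).mean()
    opt_a.zero_grad(); a_loss.backward(); opt_a.step()
\end{verbatim}

In the full algorithm, the buffer is refilled by episodes rolled out with the current shared policy, with one unified-form state per vehicle, so a single actor--critic pair effectively trains against copies of itself.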
\section{Introduction} At the workshop on combinatorial games at BIRS 2011, Aviezri Fraenkel posed the following intriguing problem: find nice (short/simple) rules for a 2-player combinatorial game for which the $\mathcal{P}$-positions are obtained from a pair of complementary Beatty sequences \cite{Be}. We begin by solving this problem by defining a class of heap games, dubbed Bi-Chromatic Nim, or just Chromatic Nim; later, in Section~\ref{S:4}, we explain some background to the problem. In Section~\ref{S:3}, we solve a similar game on arithmetic progressions. In Section~\ref{S:5}, we discuss the general environment for Chromatic Nim on two heaps. Finally, in Section~\ref{S:6}, we study the famous evil numbers, also known as the indexes of the 0s in the Thue-Morse sequence. \section{Bi-chromatic Nim finds a game for your Complementary Beatty solution}\label{S:2} Let $S$ denote a subset of the positive integers. We let the $i$th token in a stack be \emph{red} if $i\in S$, and otherwise the $i$th token is \emph{green}. We play a take-away game on $k\geqslant 0$ copies of such stacks of various finite sizes. Classical Nim rules are always allowed; any number of tokens can be removed from precisely one of the stacks. In addition, if no heap size belongs to $S$, then \emph{the position is green} and any move is legal; in particular, it is then allowed to lower all stacks to 0. Another way to identify a green position is to look at the stacks from above: if you see only green tokens, then the position is green. Two players alternate moving and a player who cannot move loses. Note that if $S$ is the set of positive integers, then the game is $k$-pile Nim (because all tokens are red). If $S$ is the empty set, then the game is 1-pile Nim, independent of $k$, because all tokens are green (for a reader who likes to compute so-called Grundy values of impartial games, in this special case the Grundy value is obviously the total number of tokens). We call this game $S$-Chromatic Nim. Let $\beta > 2$ be irrational and let $S=\{\lfloor \beta n\rfloor \}$, for $n$ running over the positive integers. Then the red tokens are determined by: the $i$th token is red if and only if there is an $n$ such that $i=\lfloor \beta n\rfloor $. We play on two stacks and, since the coloring is determined by the number $\beta$, we call the game (2-stack) $\beta $-Chromatic Nim. \begin{figure}[h] \psset{xunit=1cm} \psset{yunit=0.8cm} \begin{center} \begin{pspicture}(3.2, 2) \multirput(-.47,-.3 )(2, 0){1}{\begin{pspicture}(12pt,3pt)\psframe[linewidth=1 pt,framearc=.7,fillstyle=solid, fillcolor=brown](3.95,0.2)\end{pspicture}} \multirput(0,0)(0,0.3){4}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} \multirput(2,0)(0,0.3){2}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} \multiput(-0.2,0.08)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multiput(1.8,0.08)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \end{pspicture} \caption{The 2-stack $\phi^2$-Chromatic Nim position $(4, 2)$, where $\phi = \frac{1+\sqrt{5}}{2}$.}\label{F:1} \end{center} \end{figure} For an example, see Figure~\ref{F:1}. Since the position is not green, only Nim-type moves are possible.
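The example can be checked mechanically. The following brute-force search is our own illustration, not part of the formal development; the green-position rule is read here as ``move to any strictly smaller pair of heap sizes''. Run for $\beta=\phi^2$, it recovers the $\mathcal{P}$-positions promised by Theorem~\ref{two} below and, in particular, settles the position $(4,2)$ of Figure~\ref{F:1}.

\begin{verbatim}
# Illustration: brute-force P-positions of 2-stack beta-Chromatic Nim
# for beta = phi^2.
from functools import lru_cache
from math import floor, sqrt

BETA = (3 + sqrt(5)) / 2                        # phi^2
RED = {floor(BETA * n) for n in range(1, 50)}   # red token heights

def moves(a, b):
    opts = {(x, b) for x in range(a)} | {(a, y) for y in range(b)}
    if a not in RED and b not in RED:           # green position
        opts |= {(x, y) for x in range(a + 1) for y in range(b + 1)}
        opts.discard((a, b))
    return opts

@lru_cache(maxsize=None)
def is_P(a, b):
    # A position is P iff every option is an N-position.
    return all(not is_P(*m) for m in moves(a, b))

print([(a, b) for a in range(8) for b in range(a, 8) if is_P(a, b)])
# -> [(0, 0), (1, 2), (3, 5), (4, 7)], the pairs
#    (floor(phi*n), floor(phi^2*n))
\end{verbatim}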
Indeed, the unique winning move from $(4,2)$ is to remove three tokens from the leftmost heap. The easiest way to identify a winning move is to make sure that precisely one heap is green and the other is red. Then, in each heap, count the tokens of the same color as that heap's top token. If these two counts are equal, you have found your winning move. This idea generalizes, as we show in Section~\ref{S:5}. \begin{theorem}\label{two} Let $\beta>2$ be irrational. Then a position $(x,y)$ of 2-stack $\beta$-Chromatic Nim is a previous player winning position if and only if $(x,y)=(\lfloor \alpha n \rfloor, \lfloor\beta n\rfloor)$ or $(\lfloor \beta n \rfloor, \lfloor\alpha n\rfloor)$, for some $n\in \mathbb{N}$, and where \begin{align}\label{Beatty} \frac{1}{\alpha} + \frac{1}{\beta} = 1. \end{align} \end{theorem} \begin{proof} Recall Beatty's theorem \cite{Be}: if (\ref{Beatty}) is satisfied, then the sequences $(\lfloor \alpha n\rfloor )$ and $(\lfloor \beta n\rfloor )$ are complementary: for $n>0$, each positive integer occurs in precisely one of the sequences, and only once. Suppose first that $x=\lfloor \alpha m \rfloor$ and $y = \lfloor\beta m\rfloor$, for some $m\in \mathbb{N}$. If $m=0$, we are done, so suppose that $m>0$. Observe that no position of this form is green, since one of the coordinates is $\lfloor \beta m\rfloor$, a red stack height. Hence it suffices to show that there is no Nim option of the same form. Note that (\ref{Beatty}) together with $\beta >2$ implies that $\alpha >1$. Hence decreasing just one of the heaps cannot give a position of the same form; this follows since, by Beatty's theorem, the sequences $(\lfloor \alpha n\rfloor )$ and $(\lfloor \beta n\rfloor )$ are complementary, so the unchanged coordinate determines $m$, and hence the whole pair, uniquely. Suppose next that the pair $(x, y)$ is not of the given form. If the smaller heap is empty, then the current player removes all tokens from the other heap as well, which solves this case. Otherwise there are positive integers $m\geqslant n$ such that either\\ \noindent{\sc Case 1:} $x=\lfloor \alpha m \rfloor, y= \lfloor\beta n\rfloor$, $m>n$\\ \noindent{\sc Case 2:} $x=\lfloor \alpha m \rfloor, y=\lfloor\alpha n\rfloor$, $n>0$\\ \noindent{\sc Case 3:} $x=\lfloor \alpha n \rfloor, y=\lfloor\beta m\rfloor$, $m>n$\\ \noindent{\sc Case 4:} $x=\lfloor \beta m \rfloor, y=\lfloor\beta n\rfloor$, $n>0$.\\ Notice that none of these positions is of the form in the theorem: the first and third since the sequences are strictly increasing, and the second and fourth by complementarity. Hence, our task is to find, in each case, a legal move to a position of the form in the theorem. The position $(x,y)$ given by the second case is green and so it is an $\mathcal{N}$-position. This follows from the complementarity of the sequences $(\lfloor\alpha i\rfloor)$ and $(\lfloor\beta i\rfloor)$: since $x = \lfloor \alpha m \rfloor$, there is no integer $i$ such that $\lfloor \beta i\rfloor = x$, and similarly for $y$. For Case 1, it is clear that the current player can lower the $x$-stack to $\lfloor \alpha n \rfloor$. For the third case, since $m>n$ and $\beta>2$, we get $\lfloor \beta m\rfloor>\lfloor \beta n\rfloor$, so the desired Nim move is to lower the $y$-stack, reaching the position $(\lfloor \alpha n \rfloor, \lfloor\beta n\rfloor)$.
The fourth case is similar, but the lowering is on the $x$-stack, motivated by $\lfloor \beta m \rfloor\geqslant \lfloor\beta n\rfloor >\lfloor \alpha n\rfloor $, which follows since (\ref{Beatty}) gives $1<\alpha<2<\beta $ and by $n>0$. (The latter inequality excludes the terminal position $(x,y) = (0,0)$, which of course is also of the form $(x,y) = (\lfloor \alpha n \rfloor, \lfloor\beta n\rfloor)$, for some $n\in \mathbb{N}$.) \end{proof} \section{Games with arithmetic progression solutions}\label{S:3} Next, let us study a generalization of the game rules of $\beta$-Chromatic Nim to $\beta$ an integer, that is, let $\beta\geqslant 2$ be an integer and let $S = \{\beta n\mid n\in \mathbb{Z}^+ \}$. We have the following, perhaps not so surprising, result in view of Theorem~\ref{two}. The solution will still consist of complementary sequences, and indeed we have now shifted focus back to the more standard task in CGT: finding a solution for a given game. \begin{theorem}\label{one} Let $\beta\geqslant 2$ be an integer. Then a position $(x,y)$ of 2-pile $\beta$-Chromatic Nim, with $x\leqslant y$, is a previous player winning position if and only if $(x,y)=(0,0)$ or \begin{align}\label{xy} (x, y) &= ( \beta n+t, (\beta-1)(\beta n +t) + t)\\ &= (\beta n + t, \beta ((\beta -1)n+t)), \label{xy2} \end{align} for some $n\in \mathbb{N}$ and some $t\in\{1,\ldots ,\beta -1\}$. \end{theorem} For instance, when $\beta=2$ (so $t=1$), the theorem says that the nonzero $\mathcal{P}$-positions are precisely the pairs of consecutive integers $(2n+1, 2n+2)$ and their reverses. \begin{proof} Note first that, by the definition of $t$, the quantity $(\beta -1)n+t$ takes on each positive integer exactly once. Therefore the $y$-coordinates take on precisely the positive multiples of $\beta$ and, for the same reason, the $x$-coordinates take on precisely the complement of this set. Let $\mathcal{P}'$ denote the position $(0,0)$ together with all positions $(x,y)$ and $(y,x)$ where $x$ and $y$ are defined by (\ref{xy}). We begin by showing that no option of a position in $\mathcal{P}'$ is in $\mathcal{P}'$. By definition, with notation as in (\ref{xy2}), each $x$ is green and each $y$ is red, so from a position in $\mathcal{P}'$ only Nim-type moves are possible. Such a move changes exactly one coordinate, and since either coordinate of a pair in $\mathcal{P}'$ determines $n$ and $t$, and hence the entire pair, uniquely, no option lies in $\mathcal{P}'$. Let us next prove that from each position $(x, y)\not\in \mathcal{P}'$ there is a move to a position in $\mathcal{P}'$. If no heap size is a positive multiple of $\beta$, then the position is green and a move to $(0,0)$ is possible. So, by symmetry, we may assume that $y=\beta((\beta -1)n+t)$ for the unique $n\in\mathbb{N}$ and $t\in \{1,\ldots,\beta-1\}$ determined by $y$; since $(x,y)\notin\mathcal{P}'$ we have $x\neq \beta n+t$. If $x=0$, removing the whole $y$-stack gives $(0,0)$. Otherwise one of the following three cases occurs.\\ \noindent{\sc Case 1:} $x>\beta n+t$. Lowering the $x$-stack to $\beta n+t$ gives a position in $\mathcal{P}'$.\\ \noindent{\sc Case 2:} $x<\beta n+t$ and $x=\beta m+t'$ is not a multiple of $\beta$, where $m\in\mathbb{N}$ and $t'\in\{1,\ldots ,\beta-1\}$. The inequality $\beta m+t'<\beta n+t$ forces $(\beta -1)m+t'<(\beta-1)n+t$: it gives $m\leqslant n$; if $m=n$ then $t'<t$, and if $m<n$ then $(\beta-1)m+t'\leqslant (\beta-1)(m+1)\leqslant (\beta-1)n<(\beta-1)n+t$. Hence $y>\beta((\beta-1)m+t')$, and lowering the $y$-stack to $\beta((\beta-1)m+t')$ gives a position in $\mathcal{P}'$.\\ \noindent{\sc Case 3:} $x<\beta n+t$ and $x=\beta((\beta -1)m+t')$ is a multiple of $\beta$, where $m\in\mathbb{N}$ and $t'\in\{1,\ldots ,\beta -1\}$. Since $\beta\geqslant 2$, we get $\beta m+t'<\beta((\beta-1)m+t')=x<\beta n+t\leqslant y$, so lowering the $y$-stack to $\beta m+t'$ gives the $\mathcal{P}'$-position $(\beta((\beta-1)m+t'), \beta m+t')$. \end{proof} \section{Discussion of the origin of the problematic solution}\label{S:4} Typically, the rules of combinatorial games are short and easy to learn, but not always. One should be able to learn the rules of a game without a degree in mathematics. The distinction we are speaking of is Play-games versus Math-games. We contribute Play-game rules to an original Math-game problem. The new element is the coloring of the tokens. This takes care of uncountably many problems (disguised as one problem).
We already know that there is a countably infinite family of Play-game rules for $\mathcal{P}$-positions of distinct complementary Beatty sequences \cite{F82} and that family has been expanded via continued fractions in \cite{DR, LW}. But Wythoff Nim \cite{W} is the origin of this type of question. Nim on two heaps provides a nearly trivial mimicking winning strategy, and the $\mathcal{P}$-positions are all positions with equal heap sizes. By adjoining these Nim $\mathcal{P}$-positions as moves in a new game, Wythoff discovered that the new $\mathcal{P}$-positions are described by half-lines whose slopes are the golden ratio and its inverse. In fact, the game in Figure~\ref{F:1} is $\mathcal{P}$-equivalent to Wythoff Nim. Now, the challenge of finding game rules for \emph{any} complementary pair of Beatty sequences was posed in \cite{DR} and resolved in \cite{LHF}. There was a proviso to the problem posed in \cite{DR}: the game rules must be \emph{invariant}, a new notion attached to an old description of vector subtraction games from \cite{G}. Many game rules are non-invariant (perhaps the most famous of them all is Fibonacci Nim), but the classical ones (Subtraction games, Nim, Wythoff Nim) are invariant in the sense that a move does not depend on the position from which it is made (apart from the empty-heap condition). The $\star$-operator defined in \cite{LHF} produces invariant games, but only with complexity exponential in the logarithm of the heap sizes. So the proviso that game rules be invariant is nice in one way, but on the other hand, the obtained games cannot easily be played by human beings. The exponential complexity is reduced to polynomial in \cite{FL}, where invariance is relaxed to 2-invariance, a special restricted family of variant games; although those game rules are polynomial in the succinct input size, they remain Math-games. Here we study simple game rules: true Play-games. They have a somewhat surprising solution (although not as surprising as the solution Wythoff originally discovered in his variation of the game of Nim). \section{Other Chromatic Nim games}\label{S:5} Since Chromatic Nim sufficed to solve the problem posed at the BIRS 2011 workshop on combinatorial games, we became interested in what properties the game might have in a somewhat more general setting. Let us discuss a natural generalization of the games and sequences from Section~\ref{S:2}, still on just two stacks. Given a set $S$, let us call our game $S$-Chromatic Nim. The following lemma will allow us to resolve any 2-stack game for sequences with a surplus of green tokens, without further knowledge of the sequence. Let us regard the set $S$ as an increasing sequence of integers $S=\{s_i\}_{i>0}$ and let $\overline S=\{\bar s_i\}_{i>0}$ denote the unique increasing sequence \emph{complementary} to $S$ on the positive integers (that is, each positive integer belongs to precisely one of the sequences). Then $S$ is \emph{green-dominated} if, for all $i$, $\bar s_i < s_i$, and $S$ is \emph{red-dominated} if, for all $i$, $\bar s_i > s_i$. See also \cite{L2014} for a similar construction. Hence it is clear that a sequence cannot be both red-dominated and green-dominated, and of course, `most' increasing sequences are neither. Note that, for example, $\beta$-Chromatic Nim from Section~\ref{S:2} is green-dominated, since $\beta>2$. We have the following lemma for any green-dominated game $S$-Chromatic Nim. \begin{lemma}\label{lem:1} Suppose that $S$-Chromatic Nim is green-dominated. 
Let $$\{(a_i,b_i), (b_i,a_i) \mid i\in \mathbb{N} \}$$ denote its set of $\mathcal{P}$-positions, where for all $i\geqslant 0$, $a_i\leqslant b_i$. Then, for all $i>0$, the $i$th green token from below is the $a_i$th token and the $i$th red token from below is the $b_i$th token. Therefore, for all $i>0$, $a_i<b_i$; in particular each nonterminal $\mathcal{P}$-position pairs a green stack with a red one, which implies that all nonterminal monochromatic positions are $\mathcal{N}$-positions. \end{lemma} \begin{proof} First of all, it is clear that if both heaps are green, then there is a move to $(0,0)$, so we assume first that one of the heaps is red and the other green. If, in addition, the red heap contains the same number of red tokens as the number of green tokens in the green heap, then there is no Nim-type move to a position of the same form. But these are the only legal moves, so the ``$\mathcal{P}$ cannot go to $\mathcal{P}$'' property is satisfied. Now, if the number of red tokens in the red heap is different from the number of green tokens in the green heap, then we have to find a candidate $\mathcal{P}$-position to move to. If there are more red tokens in the red heap than there are green tokens in the green heap, then there is a Nim-type move on the red heap that equalizes the numbers; if instead the green heap contains more green tokens than the red heap contains red ones, then a Nim-type move on the green heap equalizes the numbers in the same way. Hence, only the case of two red heaps remains to be considered. One of the heaps contains no more red tokens than the other. Then, because of the green-dominated property, the other heap contains at least as many green tokens as the number of red ones in the former. Hence a Nim-type move suffices to reduce the taller red heap to a green heap with as many green tokens as the number of red ones in the previously smaller red heap. For the base case: since the game is green-dominated, when the first red token appears there is a green token below it, so the above argument applies. \end{proof} How do you play to win a green-dominated game? Let us summarize the proof of Lemma~\ref{lem:1}. \begin{prop}\label{prop:easyN} If the heaps have different colors and the red heap has $r$ red tokens and the green heap has $g$ green tokens, with $r=g$, then there is no winning move for the current player. Otherwise the current player should remove $r-g$ red tokens from the red heap if $r>g$ and $g-r$ green tokens from the green heap if $g>r$, in either case keeping the color of the changed heap the same. If both heaps are green you move to $(0,0)$. If both heaps are red and $r$ is the number of red tokens in the smaller heap, then play in the larger (or equally sized) heap so that it becomes green with $r$ green tokens. \end{prop} \begin{proof} This is a direct consequence of Lemma \ref{lem:1}. Notice again that the move in the last sentence is always possible, because of the green-dominated property. \end{proof} For $\beta$-Chromatic Nim with a rational $\beta >2$, we know a winning strategy via Proposition~\ref{prop:easyN}, but do not yet have a complete characterization extending Theorems~\ref{two} and \ref{one}. When $S$ is red-dominated, then Lemma~\ref{lem:1} and Proposition~\ref{prop:easyN} are no longer true; this is easy to see, because the base case fails. Let $\beta=3/2$. Then, for $\beta$-Chromatic Nim, $S=\{1,3,4,6,7,9,\ldots\}$. This sequence is not green-dominated since, for example, the stack with one token has one red token and no green one. It is red-dominated because $\bar S=\{2,5,8,11,\ldots\}$, and so $s_i<\bar s_i$ for all $i$. In fact, the smallest non-terminal $\mathcal{P}$-position is the red position $(1,1)$, so the conclusions of Lemma~\ref{lem:1} and Proposition~\ref{prop:easyN} are false. 
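Proposition~\ref{prop:easyN} translates directly into a procedure for finding a winning move in any green-dominated game. The following Python sketch is our own rendering of the strategy: the helper \texttt{in\_S(h)} (our naming) encodes whether the height $h$ belongs to $S$, so that a stack of height $h$ is red precisely when \texttt{in\_S(h)} holds; an empty stack is treated as green, consistent with the terminal position $(0,0)$.
\begin{verbatim}
def winning_move(x, y, in_S):
    # Strategy of Proposition [prop:easyN] for a green-dominated game.
    # Returns a winning option from (x, y), or None for a P-position.
    def reds(h):               # number of red tokens at levels 1..h
        return sum(1 for j in range(1, h + 1) if in_S(j))
    def greens(h):             # number of green tokens at levels 1..h
        return h - reds(h)
    def red_level(i):          # height of the i-th red token from below
        h = 0
        while reds(h) < i:
            h += 1
        return h
    def green_level(i):        # height of the i-th green token from below
        h = 0
        while greens(h) < i:
            h += 1
        return h

    red_x = x > 0 and in_S(x)
    red_y = y > 0 and in_S(y)
    if not red_x and not red_y:            # both stacks green
        return None if (x, y) == (0, 0) else (0, 0)
    if red_x != red_y:                     # one red and one green stack
        red_h, green_h = (x, y) if red_x else (y, x)
        r, g = reds(red_h), greens(green_h)
        if r == g:
            return None                    # P-position: no winning move
        new = (red_level(g), green_h) if r > g else (red_h, green_level(r))
        return new if red_x else (new[1], new[0])
    r = reds(min(x, y))                    # both stacks red: make the
    new_h = green_level(r)                 # taller one green with r greens
    return (new_h, y) if x >= y else (x, new_h)
\end{verbatim}
For instance, for $\beta=5/2$ (a rational $\beta>2$, hence a green-dominated game not covered by Theorems~\ref{two} and \ref{one}), the call \texttt{winning\_move(7, 10, in\_S)} with $S=\{\lfloor 5n/2\rfloor \mid n>0\}$ returns the option $(7,4)$, a mixed position with three red tokens in the red stack and three green tokens in the green stack.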
Let us list the first few $\mathcal{P}$-positions of this game, to obtain intuition for the next result on red-dominated sequences. In Figure~\ref{F:2}, we see the first three $\mathcal{P}$-positions of $\frac{3}{2}$-Chromatic Nim, and the pictures illustrate how colors can be either both red or mixed. But there is a simple explanation for this behavior. We use the following definition. \begin{definition}\label{def:1} The game $S$-Chromatic Nim is \emph{locally green-dominated} (lgd) at level $d\in S$, if there is some positive integer $k\in \bar S$, such that the $k$-shifted heaps (with the lower $k-1$ tokens removed) induce a local green-dominated game on $d$ tokens, colored according to (a re-indexed) $S\setminus\{0,\ldots , k-1\}$. The local green-domination is maximal if the game is not lgd at level $d+1$. \end{definition} In a sense the red-dominated games behave like Nim, on the red tokens, but any local green-dominance has to be compensated for; we can use a local variation of Lemma~\ref{lem:1} to compute the $\mathcal{P}$-positions, until the position is not lgd any longer, at which point the old Nim-strategy will reappear (see the third picture). \begin{figure}[ht!] \psset{xunit=.8cm} \psset{yunit=0.65cm} \begin{pspicture}(3.2, 2) \rput(2,0){ \multirput(-.47,-.3 )(2, 0){1}{\begin{pspicture}(12pt,3pt)\psframe[linewidth=1 pt,framearc=.7,fillstyle=solid, fillcolor=brown](3.95,0.2)\end{pspicture}} \multirput(0,0)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multirput(2,0)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} } \rput(6.5,0){ \multirput(-.47,-.3 )(2, 0){1}{\begin{pspicture}(12pt,3pt)\psframe[linewidth=1 pt,framearc=.7,fillstyle=solid, fillcolor=brown](3.95,0.2)\end{pspicture}} \multirput(0,0)(0,0.3){3}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multirput(2,0)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multirput(0,.3)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} \multirput(2,.3)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} } \rput(11,0){ \multirput(-.47,-.3 )(2, 0){1}{\begin{pspicture}(12pt,3pt)\psframe[linewidth=1 pt,framearc=.7,fillstyle=solid, fillcolor=brown](3.95,0.2)\end{pspicture}} \multirput(0,0)(0,0.3){4}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multirput(2,0)(0,0.3){4}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.7,fillstyle=solid, fillcolor=red](1,0.3)\end{pspicture}} \multirput(0,.3)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} \multirput(2,.3)(0,0.3){1}{\begin{pspicture}(12pt,9pt)\psframe[linewidth=1pt,framearc=.8,fillstyle=solid, fillcolor=myGreen](1,0.3)\end{pspicture}} } \end{pspicture} \caption{The first three $\mathcal{P}$-positions of $\frac{3}{2}$-Chromatic Nim.}\label{F:2} \end{figure} \begin{prop}\label{prop:red} Consider a red-dominated game $S$-Chromatic Nim and a positive integer $d$. Then $(d,d)$ is a $\mathcal{P}$-position if and only if the game is not locally green-dominated at level $d$. 
Otherwise, we consider the locally green-dominated game beginning at some minimal level $k$, that is, we apply Lemma~\ref{lem:1} to compute the $\mathcal{P}$-positions with level $k$ exchanged for level $1$ (a green token). This computation stops at a level where the lgd is maximal. \end{prop} \begin{remark} In Figure~\ref{F:2}, we obtain the first non-zero $\mathcal{P}$-position as $(1,1)$. (Positions with only red tokens behave as in Nim.) The second $\mathcal{P}$-position is of the second type (with $k=2$), because the colors of the heaps are mixed. The way to compute the $\mathcal{P}$-positions in an lgd game is to apply the algorithm for green-dominated games, but here starting with the green layer at level 2 rather than the first, red, layer. Now, already at level~3 ($d=3$) the lgd becomes maximal, so in fact the green-dominated algorithm terminates in just one step. (Notice here that Definition~\ref{def:1} is satisfied with $k=2$ and $d=3$.) \end{remark} \begin{proof} By Definition~\ref{def:1}, the stack-sizes can be partitioned into two classes. Class~1 is the set of all non-lgd's and Class~2 is the set of all (maximal) lgd's. The latter contains the discrete intervals of the form $\{k,\ldots , d\}$ as in Definition~\ref{def:1}. Since we are discussing red-dominated games, the base case is as in the leftmost picture in Figure~\ref{F:2}: level 1 is red. Now, there is a smallest green token, at, say, level $k$. It will be the first Class 2 token. Since the first red token above this green one exists (by red-domination), say at level $d>k$, the first Class 2 $\mathcal{P}$-position will be $(k,d)$. The Class 2 $\mathcal{P}$-positions will have to continue as outlined in Lemma~\ref{lem:1} until perhaps the level-$k$ adjusted green-dominating property fails. (It does not have to fail: the set $S$ can still be red-dominated, because the initial layer(s) of red tokens can compensate for, say, a locally green-dominated periodic behavior.) If it fails, then there is a maximal (red) token, say at level $d>k$, for which lgd holds. Then $(d+1, d+1)$ is a $\mathcal{P}$-position. This follows from the fact that the red token at level $d+1$ cannot be paired with a green token below, because, by Lemma~\ref{lem:1}, those have already been matched with lower red tokens. Now the proof follows by induction, since the continuation from level $d+1$ onwards is the same as restarting from level 1, with $d+1$ exchanged for 1. (Because only Nim-type moves are allowed, by induction, no lower $\mathcal{P}$-positions can be reached in a single move, and the special move rule for green positions can just as well be used to move to $(d+1, d+1)$, which is $\mathcal{P}$ by induction.) \end{proof} Now it is easy to combine Proposition~\ref{prop:easyN} with the proof of Proposition~\ref{prop:red} to find your winning move. The key is to identify the non-lgd sequences to know the excess of red tokens that needs to be subtracted in order to use the green-dominated result correctly. But this is easy by the recursive argument. We note that this general winning strategy requires a bottom-up approach and is therefore slow compared to the results in Sections~\ref{S:2} and \ref{S:3}. \section{More fractal rules and strategies}\label{S:6} We say that a non-negative integer is \emph{evil} if it has an even number of ones in its binary expansion; otherwise it is called \emph{odious}. In this section, we let $S$ be the set of evil integers and call the resulting game \emph{evil-Chromatic Nim}. 
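In code, this coloring is a single parity test on the binary digit sum; the small Python sketch below (function names are ours) lists the first few red stack heights.
\begin{verbatim}
def odious(n):            # green: odd number of binary 1's
    return bin(n).count("1") % 2 == 1

def evil(n):              # red: even number of binary 1's
    return not odious(n)

print([n for n in range(1, 20) if evil(n)])
# -> [3, 5, 6, 9, 10, 12, 15, 17, 18]
\end{verbatim}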
Thus, in this game, the evil integers are red and the odious integers are green. The following observation is not strictly needed in the proofs to come, but we prove it anyway. \begin{note}\label{evil:4} {There cannot be more than two consecutive odious numbers (evil numbers) in any sequence of consecutive integers.} \end{note} \begin{proof} Suppose $n, {n+1}, {n+2}$ are three consecutive odious numbers. Note that $n$ has to be odd: otherwise ${n+1}$ would differ from $n$ only in its last binary digit and could not also be odious. Because of the constraints on $n$, $n$ has to look like $$\begin{array}{rrrrr} (i) &{n = \underbrace{11\cdots1}_{2i+1 \ 1\text{'s}}} &\text{or} &{(ii)} &{n = \underbrace{1\ast \cdots\ast}_{2i \ 1\text{'s}} \circled{0}\underbrace{11 \cdots 1}_{2j+1 \ 1\text{'s}}} \\ \end{array}$$ where $(i)$ consists of an odd number of consecutive 1's and $(ii)$ begins from the left with a 1 followed by 1's and 0's, with an even number of 1's to the left of the circled 0 (a $*$ indicates a 0 or a 1). Then $$\begin{array}{rrrrr} (i) &{ {n+1} = \mathbf{1}\underbrace{00\cdots0}_{2i+1 \ 0\text{'s}}} &\text{or} &{(ii)} &{{n+1} = \underbrace{1\ast \cdots\ast}_{2i \ 1\text{'s}} \circled{1}\underbrace{00 \cdots 0}_{2j +1 \ 0\text{'s}}.} \end{array}$$ But then, in either case, ${n+2}$ will contain an even number of 1's and thus be evil. The case for evil numbers is similar. \end{proof} \begin{definition} Let $k$ be a nonnegative integer and let $U \subseteq [k] \cup \{0\} = \{0, 1, \dots, k\}$ be a subset consisting of consecutive integers. Then the \textbf{pseudo-chromatic number} of $U$, denoted $\tau(U)$, is defined to be the difference $$(\# \text{ of green numbers in } U)-(\# \text{ of red numbers in } U).$$ In the special case where $U = [k] \cup \{0\}$, we write $\tau(k)$ for $\tau(U)$. \end{definition} \begin{lemma}\label{evil:1} For any nonnegative integer $n$, $-1 \leq \tau(n) \leq 1$. \end{lemma} \begin{proof} The proof will proceed by induction on $n$. For our base cases, we consider the integers 0 through 7. $$\begin{array}{c|c|c|r} \textbf{Integer} &\textbf{Binary Representation} &\textbf{Green/Red} &\tau \\ \hline 0 &000 &\text{red} &-1\\ 1 &001 &\text{green} &0 \\ 2 &010 &\text{green} &1 \\ 3 &011 &\text{red} &0 \\ 4 &100 &\text{green} &1 \\ 5 &101 &\text{red} &0 \\ 6 &110 &\text{red} &-1 \\ 7 &111 &\text{green} &0 \\ \end{array}$$ Longer lists (of length $2^j$) can be built recursively as follows. \begin{displaymath} \xymatrix{A &{\begin{array}{c|r} {} &\tau \\ \hline \mathbf{00}00 & {-1} \\ \mathbf{00}01 &{0} \\ \mathbf{00}10 & {1} \\ \mathbf{00}11 & {0} \end{array}} \ar@/^4pc/[ddd] \\ B &{\begin{array}{c|r} \mathbf{01}00 &1\\ \mathbf{01}01&0 \\ \mathbf{01}10 &-1 \\ \mathbf{01}11 &0\end{array}} \ar@(dl,dr)[d] \\ C& {\begin{array}{c|r} \mathbf{10}00 &1\\ \mathbf{10}01&0 \\ \mathbf{10}10 &-1 \\ \mathbf{10}11 &0\end{array}} \\ D& {\begin{array}{c|r} \mathbf{11}00 & {-1} \\ \mathbf{11}01 &{0} \\ \mathbf{11}10 & {1} \\ \mathbf{11}11 & {0} \end{array}} } \end{displaymath} Notice that the above figure is constructed by prefixing the block of the binary numbers $00, 01, 10,$ and $11$ with $00$, $01$, $10$, or $11$, respectively. Further, observe that blocks $A$, $B$, $C$, and $D$ each have pseudo-chromatic number equal to $0$. Hence, $\tau(15) = 0$. Moreover, notice that the maps illustrated in the figure from $A$ to $D$ and from $B$ to $C$ preserve the evil/odious quality of each integer and its respective pseudo-chromatic number. Next assume that our result holds for all $j$ such that $1 \leq j < n$. 
Find $m >0$ so that $2^{m-1} \leq n \leq 2^m-1$. Using the recursive construction described above, we construct the list below of length $2^m$. \begin{displaymath} \xymatrix{A &{\begin{array}{c|r} {} &\tau \\ \hline \mathbf{00}00 \cdots 0 &-1 \\ \vdots &{\vdots} \\ \mathbf{00}11\cdots 1 &0 \end{array}} \ar@/^5pc/[ddd] \\ B &{\begin{array}{c|r} \mathbf{01}00 \cdots 0 &1\\ \vdots &\vdots \\ \mathbf{01}11\cdots 1 & 0\end{array}} \ar@(dl,dr)[d] \\ C& {\begin{array}{c|r} \mathbf{10}00 \cdots 0 &1 \\ \vdots &\vdots\\ \mathbf{10}11\cdots 1&0 \end{array}} \\ D& {\begin{array}{c|r} \mathbf{11}00 \cdots 0 &-1 \\ \vdots &\vdots \\ \mathbf{11}11\cdots 1 &0 \end{array}}} \end{displaymath} Note that $n$ is either in block $C$ or block $D$. By induction and the recursive construction of the list, the desired result holds for $n$. \end{proof} \begin{definition} Given a positive integer $k$, we define the \textbf{chromatic number} of the set $[k] = \{1, 2, \dots, k\}$, denoted by $\chi(k)$, by $\chi(k) = \tau(k)+1$; equivalently, $\chi(k)$ is the number of green minus the number of red integers in $[k]$. \end{definition} The next lemma refines Lemma~\ref{evil:1}. \begin{lemma}\label{evil:2} If $k$ is odious and even, then $\chi(k) =2$. If $k$ is evil and even, then $\chi(k) = 0$. Finally, $\chi(k) = 1$ for every positive odd number $k$. \end{lemma} \begin{proof} Suppose that $k$ has binary expansion $$k = 1\underbrace{00\cdots0}_{z_1\geq0 \ 0\text{'s}}1\underbrace{00\cdots0}_{z_2\geq0 \ 0\text{'s}}10 \cdots 0 1\underbrace{00\cdots0}_{z_{j-1}\geq0 \ 0\text{'s}}1\underbrace{00\cdots0}_{z_{j}\geq0 \ 0\text{'s}},$$ where $k$ has $j$ ones in positions $i_1, i_2, \dots, i_j$ (reading from left-to-right) and the $z_{\ell}$'s give the number of zeros after each one. For the purposes of this proof, we let $k(i_{\ell})$ denote the number in binary notation derived from $k$ by changing the $i_{\ell}$-th one and any nonzero bit associated to a power of 2 less than it to a zero. For example, if $k = 10011001$, then $i_1 = 7, i_2 = 4, i_3 = 3, i_4 = 0$, $z_1 = 2, z_2 = 0, z_3 = 2, z_4=0$, and $k(i_3) = k(3) = 10010000$ (note that the one counted by $i_1$ is associated with $2^7$). Next, for a given positive integer $k$ and an associated $i_{\ell}$, we define $$L(k, i_{\ell}) = k(i_{\ell}) + \left\{\left(\sum_{r=0}^{i_{\ell}-1} c_r 2^r\right)_2 \ : \ c_r = 0 \ \text{or}\ c_r = 1 \right\}$$ if $\ell > 1$ and $$L(k, i_{1}) = \left\{\left(\sum_{r=0}^{i_{1}-1} c_r 2^r\right)_2 \ : \ c_r = 0 \ \text{or}\ c_r = 1 \right\}\setminus \{0\}.$$ For example, if $k = 10011001$, then $$L(k, i_{3}) = 10010000 + \{111,110,101,100,011,010,001,000 \}.$$ We are now ready to proceed with the proof. In it, we also write $\chi(U)$ for the difference $(\# \text{ of green numbers in } U)-(\# \text{ of red numbers in } U)$ of a set $U$ of consecutive integers, and we use the fact that $[k]$ is the disjoint union of the sets $L(k, i_{\ell})$, $1\leq \ell \leq j$, and $\{k\}$. We consider two cases. \noindent{\sc Case 1:} $k$ is odd. If $k$ is odious, then $\chi(L(k,i_j)) = -1$, since $L(k,i_j)$ consists of exactly one evil number. Also observe that $\chi(L(k,i_1)) = 1$, since $L(k,i_1) = \{1, \ldots , 2^{i_1}-1\}.$ However, $\chi(L(k,i_{\ell})) = 0$ for all $1 < \ell < j$, since any aligned block $\{M 2^q, \ldots , M 2^q + 2^q - 1\}$ with $q>0$ satisfies $\chi = 0$ (compare the blocks in the proof of Lemma~\ref{evil:1}). Thus, $\chi(k) = 1 +1 - 1 = 1$, as desired. If $k$ is evil, then $\chi(L(k,i_j)) = 1$, since $L(k,i_j)$ consists of exactly one odious number. Also observe that $\chi(L(k,i_1)) = 1$, since $L(k,i_1) = \{1, \ldots , 2^{i_1}-1\}.$ However, $\chi(L(k,i_{\ell})) = 0$ for all $1 < \ell < j$, as above. Thus, $\chi(k) = -1 +1 + 1 = 1$, as desired. \bigskip \noindent{\sc Case 2:} $k$ is even. If $k$ is odious, then $\chi(L(k,i_{\ell})) = 0$ for all $1 < \ell \leq j$, as above. 
As in the case above, we have $\chi(L(k,i_1)) = 1$, since $L(k,i_1) = \{1, \ldots , 2^{i_1}-1\}.$ Hence, $\chi(k) = 1 + 1 = 2$, as desired. If $k$ is evil, then $\chi(L(k,i_{\ell})) = 0$ for all $1 < \ell \leq j$, as above. Again, we have $\chi(L(k,i_1)) = 1$, since $L(k,i_1) = \{1, \ldots , 2^{i_1}-1\}.$ Hence, $\chi(k) = -1 + 1 = 0$, as desired. \end{proof} Recall that the `$\mathrm{mex}$' of a set of nonnegative integers is defined to be the minimum excluded element, that is, the least nonnegative integer not contained in the set. For example, if $S = \{0,1,2,7,9,13 \}$, then $\mathrm{mex}(S) = 3$. Using the mex rule and Lemma~\ref{lem:1}, we characterize the $\mathcal{P}$-positions of evil-Chromatic Nim in the next theorem. \begin{theorem}\label{evil:3} The set of $\mathcal{P}$-positions of evil-Chromatic Nim $\{(a_i, b_i) \ | \ i \geq 0\}$ can be computed recursively as follows. The only terminal $\mathcal{P}$-position is $(a_0,b_0) = (0,0)$. Otherwise, $a_n = {\rm mex}\{a_i, b_i \ | \ i < n\}$ and $b_n$ is the smallest evil number larger than $b_{n-1}$, that is, the $n$th positive evil number; in particular $a_n < b_n$. \end{theorem} \begin{proof} The proof of this result follows from Lemma~\ref{lem:1} and Lemma~\ref{evil:1} above. \end{proof} Given the interesting number theory surrounding evil and odious numbers, we are able to refine the last result quite substantially. We say that a non-negative integer is \emph{vile} if its binary expansion ends in an even number of zeros; otherwise it is called \emph{dopey}. \begin{theorem}\label{evil:5} The set of $\mathcal{P}$-positions of evil-Chromatic Nim $\{(a_i, b_i) \ | \ i \geq 0\}$ are given by $(a_0,b_0) = (0,0)$, and, for $n>0$, by $$b_n = \left\{ \begin{array}{ll} 2n &\text{if } n \text{ is evil} \\ 2n+1 &\text{if } n \text{ is odious} \end{array} \right.$$ and $$a_n = \left\{ \begin{array}{ll} b_n-1 &\text{if } n \text{ is evil and dopey}\\ b_n-2 &\text{if } n \text{ is vile} \\ b_n-3 &\text{if } n \text{ is odious and dopey} \end{array} \right.$$ \end{theorem} \begin{proof} The proof will proceed by induction on $n$. \bigskip \noindent{\sc Case 1:} $n$ is \textbf{evil} and \textbf{dopey}. We claim that $n-1$ must be evil and vile. If not, then $n-1$ is evil and dopey, odious and dopey, or odious and vile. In either of the first two cases, $n-1$ has binary expansion $$\underbrace{1** \cdots *1}_{k \ 1\text{'s}} \underbrace{00\cdots 0}_{2m+1\ 0\text{'s}},$$ where $k$ is even if $n-1$ is evil and $k$ is odd if $n-1$ is odious (where $m \geq 0$ and a $\ast$ denotes a 0 or a 1). But this implies that $n = (n-1) + 1$ is either evil and vile or odious and vile, a contradiction. Next suppose that $n-1$ is odious and vile. Then $n-1$ has binary expansion $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}, &{} &(ii) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k+1 \ 1\text{'s}}, &\text{or} &{(iv)} & \underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. \end{array}$$ Since $n$ is dopey, cases $(i)$ and $(iv)$ are not possible, and since $n$ is evil, cases $(ii)$ and $(iii)$ are not possible. Hence, $n-1$ is evil and vile. Then $n-1$ has binary expansion $$\begin{array}{rrrrrrr} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}}, &(ii) &\underbrace{11\cdots1}_{2k \ 1\text{'s}}, &\text{or} &{(iii)} & \underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. 
\end{array}$$ Since $n$ is dopey, cases $(ii)$ and $(iii)$ are not possible. In case $(i)$, $$b_{n-1} = 2(n-1) = (\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}}0)_2,$$ by induction. Adding the next two consecutive terms after $b_{n-1}$ and using Lemma~\ref{evil:2} we observe that $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline b_{n-1} &0 &b_{n-1} &=&2(n-1)\\ a_{i} &1 &{} \\ b_n &0 &b_n &= &b_{n-1} + 2 &= &2n \end{array}$$ Based on the chromatic numbers shown above, we must have $a_n = a_{i} = b_n-1$. Therefore, if $n$ is evil and dopey, then $b_n = 2n$ and $a_n = b_n - 1$. \bigskip \bigskip \noindent{\sc Case 2:} $n$ is \textbf{odious} and \textbf{dopey}. We will show that $n-1$ must be odious and vile. If $n-1$ is evil and dopey, then the binary expansion of $n-1$ looks like $$ \underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m+1 \ 0\text{'s}}.$$ But, if this was true, then $n$ would be vile. Thus $n-1$ is not evil and dopey. Next we consider what happens if $n-1$ was evil and vile. Then $n-1$ would have binary expansion $$\begin{array}{rrrrrrr} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}}, &(ii) &\underbrace{11\cdots1}_{2k \ 1\text{'s}}, &\text{or} &{(iii)} & \underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. \end{array}$$ Case $(i)$ is not possible since $n$ is odious. Further, cases $(ii)$ and $(iii)$ are not possible because $n$ is dopey. Now, if $n-1$ is odious and dopey, then $n-1$ would have binary expansion $$ \underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m+1 \ 0\text{'s}}.$$ If this was so, then $n$ would be evil and vile, a contradiction. By process of elimination, $n-1$ must be odious and vile. Then $n-1$ has binary expansion $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}, &{} &(ii) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k+1 \ 1\text{'s}}, &\text{or} &{(iv)} & \underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. \end{array}$$ Since $n$ is dopey, cases $(i)$ and $(iv)$ are not possible. In case $(ii)$, $$b_{n-1} = 2(n-1)+1 = (\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}}1)_2,$$ and in case $(iii)$, $$b_{n-1} = 2(n-1)+1 = (\underbrace{11\cdots1}_{2k+1 \ 1\text{'s}}1)_2,$$ both by induction. Adding the term before $b_{n-1}$ and the two after it, we see by Lemma~\ref{evil:2} that $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline a_{i-1} &2 &{}\\ b_{n-1} &1 &b_{n-1} &=&2(n-1)+1\\ a_{i} &2 &{} \\ b_n &1 &b_n &= &b_{n-1} + 2 &= &2n+1 \end{array}$$ Based on the chromatic numbers shown above, we must have $a_n = a_{i-1} = b_n-3$. Therefore, if $n$ is odious and dopey, then $b_n = 2n+1$ and $a_n = b_n - 3$. \bigskip \bigskip \noindent{\sc Case 3:} $n$ is \textbf{evil} and \textbf{vile}. We will show that $n-1$ is either odious and dopey or odious and vile. To this end, if $n-1$ was evil and dopey, then its binary expansion would look like $$\underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}.$$ Since $n$ is evil, this is not possible. 
If $n-1$ was evil and vile, then its binary expansion would look like $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}, &{} &(ii) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k \ 1\text{'s}}, &\text{or} &{(iv)} & \underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. \end{array}$$ Cases $(i), (iii)$, and $(iv)$ are not possible since $n$ is evil and case $(ii)$ is not possible as $n$ is vile. Thus, $n-1$ is either odious and dopey or odious and vile. If $n-1$ is odious and dopey, then its binary expansion looks like $$\underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}.$$ Then, by induction, $$b_{n-1} = 2(n-1) + 1 = (\underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}1)_2.$$ Adding the term before $b_{n-1}$ and the one after it, we see by Lemma~\ref{evil:2} that $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline a_i &2 &{}\\ b_{n-1} &1 &b_{n-1} &=&2(n-1)+1\\ b_n &0 &b_n &= &b_{n-1} + 1 &= &2n \end{array}$$ Based on the chromatic numbers shown above, we must have $a_n = a_i = b_n-2$. Now if $n-1$ is odious and vile, then its binary expansion looks like $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}},&{} &(ii) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k+1 \ 1\text{'s}}, &\text{or} &{(iv)} & \underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m >0 \ 0\text{'s}}. \end{array}$$ Because $n$ is vile, cases $(ii)$ and $(iii)$ are not possible. By induction $$b_{n-1} = 2(n-1)+1 = (\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}1)_2$$ in case $(i)$ and $$b_{n-1} = 2(n-1)+1 = (\underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m >0 \ 0\text{'s}}1)_2$$ in case $(iv)$. In either case we again have $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline a_i &2 &{}\\ b_{n-1} &1 &b_{n-1} &=&2(n-1)+1\\ b_n &0 &b_n &= &b_{n-1} + 1 &= &2n \end{array}$$ Thus, if $n$ is evil and vile, then $b_n = 2n$ and $a_n = b_n - 2$. \bigskip \bigskip \noindent{\sc Case 4:} $n$ is \textbf{odious} and \textbf{vile}. We will show that $n-1$ is either evil and dopey or evil and vile. If $n-1$ was odious and dopey, then its binary expansion would look like $$\underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}.$$ Since $n$ is odious, this is not possible. If $n-1$ was odious and vile, then its binary expansion would look like $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}, &{} &(ii) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k+1 \ 1\text{'s}}, &\text{or} &{(iv)} &\underbrace{1** \cdots *1}_{2k+1 \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m >0 \ 0\text{'s}}.\end{array}$$ Because $n$ is odious, cases $(i)$ and $(iv)$ are not possible and since $n$ is vile, cases $(ii)$ and $(iii)$ are not possible. Thus, $n-1$ is either evil and dopey or evil and vile. 
If $n-1$ is evil and dopey, then it has binary expansion $$\underbrace{1**\cdots *1}_{2k \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}.$$ Thus, by induction, $$b_{n-1} = 2(n-1) = (\underbrace{1**\cdots *1}_{2k \ 1\text{'s}}\underbrace{00\cdots0}_{2m+1 \ 0\text{'s}}0)_2.$$ Adding the three consecutive terms after $b_{n-1}$ and using Lemma~\ref{evil:2} we have $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline b_{n-1} &0 &b_{n-1} &=&2(n-1)\\ a_{i-1} &1 \\ a_{i} &2 &{} \\ b_n &1 &b_n &= &b_{n-1} + 3 &= &2n + 1 \end{array}$$ Based on the chromatic numbers shown, we must have $a_n = a_{i-1} = b_n-2$. If $n-1$ is evil and vile, then it has binary expansion $$\begin{array}{lllll} (i) &\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}},&{} &(ii) &\underbrace{1** \cdots *}_{2k+1 \ 1\text{'s}}0\underbrace{11\cdots1}_{2m+1 \ 1\text{'s}},\\ {}&{}&{}&{}&{}\\ (iii) &\underbrace{11\cdots1}_{2k \ 1\text{'s}}, &\text{or} &{(iv)} & \underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m>0 \ 0\text{'s}}. \end{array}$$ Since $n$ is odious, $(ii)$ is not possible. By induction $$b_{n-1} = 2(n-1) = (\underbrace{1** \cdots *}_{2k \ 1\text{'s}}0\underbrace{11\cdots1}_{2m \ 1\text{'s}}0)_2$$ in case $(i)$, $$b_{n-1} = 2(n-1) = (\underbrace{11\cdots1}_{2k \ 1\text{'s}}0)_2$$ in case $(iii)$, and $$b_{n-1} = 2(n-1) = (\underbrace{1** \cdots *1}_{2k \ 1\text{'s}}\underbrace{00 \cdots 0}_{2m >0 \ 0\text{'s}}0)_2$$ in case $(iv)$. No matter what the case, we again have $$\begin{array}{l|l|crlll} {} &\chi &{} \\ \hline b_{n-1} &0 &b_{n-1} &=&2(n-1)\\ a_{i-1} &1 \\ a_{i} &2 &{} \\ b_n &1 &b_n &= &b_{n-1} + 3 &= &2n + 1 \end{array}$$ Thus $a_n = a_{i-1} = b_n-2$. Hence, if $n$ is odious and vile, then $b_n = 2n+1$ and $a_n = b_n-2$. \end{proof} \begin{example} Find the $17509^{17509}$th $\mathcal{P}$-position of evil-Chromatic Nim ($17509$ is the $2015$th prime). \end{example} \noindent With the help of the computer algebra system \emph{Mathematica} we know that $q=17509^{17509}$ is evil and vile. Hence, Theorem~\ref{evil:5} tells us that $$(a_q,b_q) = (2q-2, 2q).$$ \hfill$\Diamond$
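The closed form of Theorem~\ref{evil:5} is easy to check against the mex recursion of Theorem~\ref{evil:3}. The Python sketch below is our own illustration (function names are ours); it confirms that the two descriptions agree on the first few hundred $\mathcal{P}$-positions.
\begin{verbatim}
def evil(n):                      # red: even number of binary 1's
    return bin(n).count("1") % 2 == 0

def closed_form(n):
    # (a_n, b_n) of Theorem [evil:5]; n > 0 is dopey when its binary
    # expansion ends in an odd number of zeros, and vile otherwise.
    if n == 0:
        return (0, 0)
    dopey = ((n & -n).bit_length() - 1) % 2 == 1
    b = 2 * n if evil(n) else 2 * n + 1
    if not dopey:
        a = b - 2                 # n vile
    elif evil(n):
        a = b - 1                 # n evil and dopey
    else:
        a = b - 3                 # n odious and dopey
    return (a, b)

def recursion(count):
    # Theorem [evil:3]: a_n is the mex of all earlier coordinates and
    # b_n the smallest evil number that has not already occurred.
    pairs, used = [(0, 0)], {0}
    for _ in range(count):
        a = 0
        while a in used:
            a += 1
        b = a + 1
        while b in used or not evil(b):
            b += 1
        pairs.append((a, b))
        used |= {a, b}
    return pairs

assert [closed_form(n) for n in range(301)] == recursion(300)
print(recursion(5))  # [(0, 0), (1, 3), (2, 5), (4, 6), (7, 9), (8, 10)]
\end{verbatim}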
\section{Introduction} \label{sec:introduction} In current scenarios of planet formation, a small planetary body grows into a full-sized planet by accreting smaller bodies that might vary in size from pebbles to planetesimals \citep{Ormel2010, Lambrechts2012, Johansen2017, Bitsch2018, Bitsch2019, Liu2019, Bruegger2020}. The basic idea is that a planetary body encounters smaller pebbles and planetesimals on its orbit, which are then gravitationally attracted by the growing planet and are added to its solid mass. Depending on their size and the local gas pressure, the small bodies experience gas drag. Very small grains or dust couple to the gas perfectly. They follow streamlines around the planet and are not accreted. Pebble-sized objects are efficiently slowed down and eventually get the chance to settle onto the planet. Planetesimals are only slightly affected by the gas drag, so the accretion efficiency is reduced again, although a single planetesimal homing in on the planet can deliver more mass. Pebble and planetesimal accretion are two terms often encountered in this context in the literature, but they are only two extremes of the same story, each focusing on a specific size scale. Hybrid accretion scenarios with both size scales are also viable options \citep{Alibert2018, Bruegger2020}. We also consider the accretion of different-sized bodies here.\\ Due to gas drag, accretion is strongly dependent on the size of the incoming body. Therefore, a change in size during the accretion process will change the accretion outcome significantly. It has sometimes been considered that planetesimals on supersonic trajectories become ablated \citep{Valletta2019}. This changes their size, especially if they break up at a certain point, though this is considered more in the context of providing the atmosphere with heavier elements \citep{Venturini2020}.\\ On a more fundamental level, for weakly bound bodies moving through a protoplanetary disk, wind erosion occurs long before ablation has a chance to take over. All primordial bodies smaller than a large planetesimal are thought to consist of dust and therefore are weakly bound. First-generation small bodies are thought to form from small millimeter-sized dust aggregates \citep{Kruss2016}, which are only loosely put together as they are concentrated by various drag instabilities or the streaming instability \citep{Youdin2005, Johansen2015, Simon2016, Yang2017, Schreiber2018, Squire2018, Schneider2019}. This results in low tensile strength \citep{Skorov2012}. The stability of small weak bodies against wind erosion has been studied experimentally by \citet{Demirci2019} and \citet{Demirci2020}. They found that small bodies on a regular circular orbit in the inner protoplanetary disk are already affected by erosion, up to complete destruction. \citet{Schaffer2020} also recently used numerical simulations to study the conditions under which planetesimals are eroded by gas drag on eccentric orbits. To put this in context here, the head wind that a body encounters on a circular orbit is about $50\,\mathrm{m}\,\mathrm{s}^{-1}$, but if it is accelerated by a planet's gravity, as in the pebble accretion scenario, the head wind and therefore the surface shear stress increase. This makes pebbles and planetesimals that were stable on their orbits prone to wind erosion in planetary envelopes. In this work, we simulate the influence of wind erosion on pebble and planetesimal accretion. 
\section{Numerical method} \begin{figure} \centering \includegraphics[width=\columnwidth]{setup.pdf} \caption{Simulation geometry. The planet is fixed to the center of the rotating coordinate system. The initial positions of the pebbles are distributed on the starting lines. At the beginning of the simulation the pebbles have the same velocity as the gas at the same location. The paths of the pebbles evolve in time according to the equation of motion (Eq. \ref{eq:motionequation}).} \label{fig:setup} \end{figure} In this section we introduce the simulation geometry and the equations that are considered for the numerical simulations of different pebble accretion scenarios. Figure \ref{fig:setup} shows the rotating coordinate system, where the planet is fixed to the center. The planet is at a radial distance $a$ from the star in a minimum mass solar nebula \citep{Hayashi1981}. The gas density at the planet's location is \begin{equation} \rho_\mathrm{PPD}=\rho_0 \left( \frac{a}{1\,\mathrm{AU}} \right)^{-\frac{11}{4}}, \end{equation} with $\rho_0=1.4 \times 10^{-6}\,\mathrm{kg}\,\mathrm{m}^{-3}$ being the gas density at $1\,\mathrm{AU}$. If the planet has an atmosphere, we take the total gas density at a radial distance $r$ from the planet as \begin{equation} \label{eq:totalgaspressure} \rho_\mathrm{total}(r,a)=\rho_\mathrm{PPD}(a)+\rho_\mathrm{atmosphere}(r,a), \end{equation} with \begin{equation} \rho_\mathrm{atmosphere}(r,a)=\rho_\mathrm{surface} \cdot \exp \left( -\frac{r-R_\mathrm{planet}}{R_\mathrm{s} T(a)} g_\mathrm{planet} \right) \end{equation} following the barometric formula. It depends on the gas density $\rho_\mathrm{surface}$ and the gravitational acceleration $g_\mathrm{planet}$ at the planet's surface, and on the local temperature $T(a) = 280\,\mathrm{K}\left( \frac{a}{1\,\mathrm{AU}} \right)^{-\frac{1}{2}}$ \citep{Hayashi1981}. The specific gas constant of $\mathrm{H}_2$ is denoted by $R_\mathrm{s}$. At the radial distance $a$ the Kepler frequency $\Omega_0$ and Hill radius $r_\mathrm{Hill}$ of the planet are \begin{equation} \Omega_0(a)=\left(\frac{G m_\mathrm{star}}{a^3}\right)^{\frac{1}{2}} \quad \mathrm{and} \end{equation} \begin{equation} r_\mathrm{Hill}(a)=a \left(\frac{m_\mathrm{planet}}{3 m_\mathrm{star}} \right)^{\frac{1}{3}}, \end{equation} with the gravitational constant $G$, the stellar mass $m_\mathrm{star}$, and the planet mass $m_\mathrm{planet}$. The relative velocity between objects on circular Kepler orbits and the gas is only weakly dependent on the semi-major axis $a$. For the minimum mass solar nebula, it is on the order of $u_\mathrm{rel}=50\,\mathrm{m}\,\mathrm{s}^{-1}$ \citep{Weidenschilling1977}. In the rotating coordinate system the relative gas velocity is expressed as $\mathbf{u}_\mathrm{rel}+\frac{3}{2}y \Omega_0 \mathbf{e_x}$, the second term accounting for the Keplerian shear, that is, the increasing Kepler frequency $\Omega_0$ for decreasing $y$ (and vice versa) \citep{Ormel2010}. To take a non-moving planetary atmosphere into account, we weight its influence on the total relative gas velocity in this coordinate system with the local gas density \begin{equation} \label{eq:gasvelocity} \mathbf{u}_\mathrm{gas}(r,a)=\frac{\rho_\mathrm{PPD}(a)}{\rho_\mathrm{PPD}(a)+\rho_\mathrm{atmosphere}(r,a)}\left(\mathbf{u}_\mathrm{rel}+\frac{3}{2}y \Omega_0 \mathbf{e_x} \right). 
\end{equation} Far from the planet the velocity is $\mathbf{u}_\mathrm{rel}+\frac{3}{2}y \Omega_0 \mathbf{e_x}$, but near the planetary surface, where the atmospheric gas density $\rho_\mathrm{atmosphere}(r,a)$ becomes significant, the relative gas velocity converges to zero for dense atmospheres.\\ The equation of motion in a rotating system, where the planet is fixed to the center of the coordinate system, is \citep{Ormel2010} \begin{equation} \label{eq:motionequation} \ddot{\mathbf{r}}=\frac{\mathbf{u}_\mathrm{gas}(r,a)-\dot{\mathbf{r}}}{t_\mathrm{c}(r,a)}-G m_\mathrm{planet} \frac{\mathbf{r}}{r^3}-2 \Omega_0 \times \dot{\mathbf{r}}+ \Omega_0^2 \mathbf{r}. \end{equation} The equation of motion for the pebbles includes the gas drag, the gravitational interaction with the planet, and the Coriolis and centrifugal forces caused by the rotating system. The coupling time of the pebble to the gas is $t_\mathrm{c}$ and is described as \begin{equation} \label{eq:coulingtime} t_\mathrm{c}=\left\{\begin{array}{ll} f_\mathrm{C} \cdot \frac{2}{9} \cdot \frac{\rho_\mathrm{pebble} d_\mathrm{pebble}^2}{\eta} &\mathrm{ for }~\mathrm{Re}<0.1,\\ \\ \frac{f_\mathrm{C}}{C_\mathrm{d}} \cdot \frac{4}{3} \cdot \frac{\rho_\mathrm{pebble}}{\rho_\mathrm{total}} \frac{d_\mathrm{pebble}}{\Delta v} &\mathrm{ for }~\mathrm{Re}\geq 0.1.\\ \end{array} \right. \end{equation} Here, we distinguish between two cases, the Stokes regime $\mathrm{Re}<0.1$ and the quadratic regime $\mathrm{Re} \geq 0.1$. For small Reynolds numbers, $\mathrm{Re}=\rho \Delta v d_\mathrm{pebble}/\eta$ (with $\eta$ the dynamic viscosity of the gas), the drag force depends linearly on the relative velocity $\Delta v$ between the pebble and the gas, and the coupling time can be expressed independently of $\Delta v$. This changes for larger Reynolds numbers. Here, the drag force depends on $\Delta v^2$. The pebble diameter is $d_\mathrm{pebble}$ and $C_\mathrm{d}$ is the drag coefficient for a sphere, taken from \citet{Brown2003}, \begin{equation} C_\mathrm{d}(\mathrm{Re})=\frac{24}{\mathrm{Re}}\left(1+0.15\mathrm{Re}^{0.681}\right)+\frac{0.407}{1+8700\mathrm{Re}^{-1}}, \end{equation} which is valid for Reynolds numbers $\mathrm{Re}< 2\times 10^5$. To take molecular flow effects into account, both cases in Eq. \ref{eq:coulingtime} are scaled with the Cunningham correction \citep{Davies1945} \begin{equation} f_\mathrm{C}(\mathrm{Kn})=1+2 \mathrm{Kn} \left[1.257 + 0.4 \exp\left(- \frac{0.55}{\mathrm{Kn}} \right) \right]. \end{equation} The Cunningham correction is a function of the Knudsen number of the pebbles \begin{equation} \mathrm{Kn}= \frac{\lambda}{d_\mathrm{pebble}} =\frac{k_\mathrm{B}}{\sqrt{2} \pi d_\mathrm{mol}^2 R_\mathrm{s} \rho_\mathrm{total} d_\mathrm{pebble}}, \end{equation} which describes the ratio of the mean free path $\lambda$ of the gas molecules to the pebble diameter $d_\mathrm{pebble}$. Here, $k_\mathrm{B}$ is the Boltzmann constant and $d_\mathrm{mol}$ the molecular diameter. For large Knudsen numbers the coupling time recovers the Epstein regime.\\ To study the size evolution of small bodies, we numerically integrated the equation of motion (Eq. \ref{eq:motionequation}). A short code dedicated to this problem was written for the purpose. We used the fourth-order Runge-Kutta integration method with an adaptive time step, which ensures a velocity resolution of $v_\mathrm{res}=0.1\,\mathrm{m}\,\mathrm{s}^{-1}$. However, the time step never exceeds $0.3t_\mathrm{c}$. 
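For reference, the drag prescription above condenses to a few lines of code. The following Python sketch is our own illustration and not the code used for the simulations; apart from $\rho_0$ and the $50\,\mathrm{m}\,\mathrm{s}^{-1}$ head wind quoted in the text, the example values at the end (viscosity, molecular diameter, specific gas constant of $\mathrm{H}_2$, and pebble bulk density) are typical numbers that we assume for illustration only.
\begin{verbatim}
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def drag_coefficient(re):
    # Sphere drag coefficient of Brown & Lawler (2003), valid for Re < 2e5.
    return 24.0 / re * (1.0 + 0.15 * re ** 0.681) \
        + 0.407 / (1.0 + 8700.0 / re)

def cunningham(kn):
    # Cunningham correction (Davies 1945).
    return 1.0 + 2.0 * kn * (1.257 + 0.4 * math.exp(-0.55 / kn))

def coupling_time(d_pebble, dv, rho_gas, rho_pebble, eta, r_s, d_mol):
    # Coupling time t_c, switching between the Stokes regime (Re < 0.1)
    # and the quadratic regime, both scaled by the Cunningham correction.
    re = rho_gas * dv * d_pebble / eta
    kn = K_B / (math.sqrt(2.0) * math.pi * d_mol ** 2
                * r_s * rho_gas * d_pebble)
    f_c = cunningham(kn)
    if re < 0.1:
        return f_c * (2.0 / 9.0) * rho_pebble * d_pebble ** 2 / eta
    return (f_c / drag_coefficient(re)) * (4.0 / 3.0) \
        * (rho_pebble / rho_gas) * d_pebble / dv

# assumed illustrative values for H2 gas at 1 AU and a 2 m body
t_c = coupling_time(d_pebble=2.0, dv=50.0, rho_gas=1.4e-6,
                    rho_pebble=1000.0, eta=9e-6, r_s=4124.0, d_mol=2.7e-10)
\end{verbatim}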
At the beginning of the simulation runs the pebbles are distributed on the starting lines (see Fig. \ref{fig:setup}). These starting lines are located at $(+r_\mathrm{Hill},y,0)$ for $y \in [-b_0~ r_\mathrm{Hill},b~r_\mathrm{Hill}]$ and $(- b~r_\mathrm{Hill},y,0)$ for $y \in [- b ~r_\mathrm{Hill},- b_0~ r_\mathrm{Hill})$, with $b_0=2 u_\mathrm{rel}/\left(3 r_\mathrm{Hill} \Omega_0\right)$ and $b$ a constant, which must be chosen large enough that the entire accretion cross-section $\sigma$ is covered by the simulation. At the beginning of the simulation the pebbles have the same velocity as the surrounding gas (see Eq. \ref{eq:gasvelocity}).\\ We are interested in the influence of wind erosion on pebble accretion. Therefore, the considered pebbles in our simulation are porous clusters consisting of particles of a diameter of $d_\mathrm{particle}=10^{-3}\,\mathrm{m}$ (bouncing barrier size) \citep{Kruss2016,Kruss2017,Demirci2017}. For each time step in the simulation, the shear stress $\tau_\mathrm{wall}$ acting on the pebble is compared with the shear stress $\tau_\mathrm{erosion}$ for wind erosion. The wall shear stress $\tau_\mathrm{wall}=\eta u_\mathrm{gas}/\delta$ can be determined by approximating the boundary layer thickness $\delta=5\sqrt{\eta l/(\rho \Delta v)}$ around a sphere, with $l=d_\mathrm{pebble}/2$, and is expressed as \citep{Schlichting2006} \begin{equation} \tau_\mathrm{wall}=\frac{1}{5}\left(\frac{2 \rho~ \eta ~\Delta v^3}{d_\mathrm{pebble}} \right)^{\frac{1}{2}}. \end{equation} The shear stress $\tau_\mathrm{erosion}$, which is needed to erode a protoplanetary surface, was determined experimentally for a wide range of gas pressures and gravitational accelerations by \citet{Demirci2019} and \citet{Demirci2020}. It is \begin{equation} \tau_\mathrm{erosion}=\alpha f_\mathrm{C}(\beta^{-1} \mathrm{Kn})\left( \frac{\gamma_\mathrm{eff}}{d_\mathrm{particle}} + \frac{1}{9} \rho_\mathrm{pebble} g_\mathrm{pebble} d_\mathrm{particle} \right), \end{equation} where the first term describes the cohesion and the second term the effect of gravity. The gravity term only becomes important for larger objects, like planetesimals. The effective surface energy $\gamma_\mathrm{eff}$ \citep{Johnson1971} of the constituent particles is assumed to be about $10^{-4}\,\mathrm{N}\,\mathrm{m}^{-1}$ and $\beta=0.67$ is an empirical scaling factor for the Knudsen number in the Cunningham correction for $d_\mathrm{particle}$ \citep{Demirci2019,Demirci2020}. If the wall shear stress exceeds the erosion shear stress ($\tau_\mathrm{wall}>\tau_\mathrm{erosion}$), the radius of the pebble is decreased with a certain erosion rate $\epsilon=\Delta R_\mathrm{pebble}/\Delta t$. We take $\epsilon=5\times 10^{-4}\,\mathrm{m}\,\mathrm{s}^{-1}$, which was observed experimentally near the threshold shear stress by \citet{Demirci2019}. A pebble can thus be eroded by the gas drag - if the conditions are fulfilled - down to the size of a single particle, $d_\mathrm{particle}=10^{-3}\,\mathrm{m}$. \section{Results} \begin{table} \caption{Simulation parameters. 
For each simulation run the pebble size is varied between $10^{-3}$ and $10^{4}\,\mathrm{m}$ and the range for the initial $y$-position is kept large enough so that all accretion events are considered in the simulations.} \label{tab:simparameter} \centering \begin{tabular}{ccc} \hline \hline Planet radius & Semi-major axis& Gas pressure at\\ & & planetary surface\\ $R_\mathrm{planet}~[R_\mathrm{Earth}]$ & $a~[\rm AU]$ & $p_\mathrm{surface}~[\mathrm{Pa}]$\\ \hline \hline $0.1$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $0.2$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $0.3$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $0.5$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $1$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $2$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $3$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $5$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $10$ & $0.1, 0.2, 0.3, 0.5,$ & $0$\\ & $1,2,3$ & \\ \hline $1$ & $0.1, 0.2, 0.3, 0.5,$ & $10^3$\\ & $1,2,3$ & \\ \hline $1$ & $0.1, 0.2, 0.3, 0.5,$ & $10^5$\\ & $1,2,3$ & \\ \hline \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{paths_without_erosion.pdf} \includegraphics[width=\columnwidth]{paths_with_erosion.pdf} \caption{Trajectories of pebbles without wind erosion (top) and with wind erosion (bottom). The planet in the center has Earth's radius ($R_\mathrm{planet}=1 R_\mathrm{Earth}$) and is located at $a=1\,\mathrm{AU}$. The initial pebble radius is $R_\mathrm{pebble}=1\,\mathrm{m}$. The surface energy of the millimeter-sized particles that are constituents of the larger pebble is assumed to be $\gamma=10^{-4}\, \mathrm{N}\,\mathrm{m}^{-1}$. The diameter of the pebbles is indicated by the color of the trajectories (red: $2\,\mathrm{m}$, blue: $1\,\mathrm{mm}$). The rapid transition in the case of erosion occurs in the little white gap. For visualization reasons, we chose not to plot the sizes in between.} \label{fig:paths} \end{figure} We analyzed the outcome of pebble accretion for a wide range of simulation parameters (see Table \ref{tab:simparameter}). After each simulation run, a pebble is categorized as accreted by the planet ($r\leq R_\mathrm{planet}$) or not ($r\geq 1.5 r_\mathrm{init}$). As an example, Fig. \ref{fig:paths} shows the trajectories of $2\,\mathrm{m}$ sized pebbles - or, more generally, ``bodies'', but we ignore this subtlety here - accreted by an Earth-like planet at $1\,\mathrm{AU}$. The body consists of $1\,\mathrm{mm}$ sized particles. In the top figure wind erosion is deactivated. In the bottom figure wind erosion is enabled. The color of the lines indicates the current diameter of the body (red: $2\,\mathrm{m}$, blue: $1\,\mathrm{mm}$). If wind erosion is deactivated, all bodies in this example are accreted by the planet. When erosion due to gas drag is activated, the bodies are quickly eroded to the minimum size of $1\,\mathrm{mm}$ (see the rapid transition between red and blue in Fig. \ref{fig:paths}). With the transition in size - and thus in Stokes number $\mathrm{St}=t_\mathrm{c} \Omega_0$ - the bodies change their aerodynamic behavior and couple well to the gas. As they are deflected by the gas flow, the accretion efficiency decreases significantly, close to zero in this case. 
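The erosion criterion of the previous section is equally compact in code. The sketch below is again our own illustration: $\gamma_\mathrm{eff}$, $\beta$, and the erosion rate $\epsilon$ are the values quoted in the text, whereas the prefactor $\alpha$ of the erosion threshold is kept as an explicit parameter (its numerical value is not restated here, so $\alpha=1$ is only a placeholder), and $g_\mathrm{pebble}$ is approximated by the surface gravity of a homogeneous sphere.
\begin{verbatim}
import math

G = 6.674e-11    # gravitational constant [m^3 kg^-1 s^-2]
EPSILON = 5e-4   # erosion rate near the threshold [m/s]

def cunningham(kn):
    # Cunningham correction (Davies 1945).
    return 1.0 + 2.0 * kn * (1.257 + 0.4 * math.exp(-0.55 / kn))

def tau_wall(rho_gas, eta, dv, d_pebble):
    # Wall shear stress from the boundary-layer estimate in the text.
    return 0.2 * math.sqrt(2.0 * rho_gas * eta * dv ** 3 / d_pebble)

def tau_erosion(kn, rho_pebble, d_pebble, d_particle=1e-3,
                gamma_eff=1e-4, alpha=1.0, beta=0.67):
    # Erosion threshold: cohesion term plus gravity term.  alpha is a
    # placeholder prefactor; g_pebble is the surface gravity of a
    # homogeneous sphere (our approximation).
    g_pebble = (2.0 * math.pi / 3.0) * G * rho_pebble * d_pebble
    return alpha * cunningham(kn / beta) * (
        gamma_eff / d_particle
        + rho_pebble * g_pebble * d_particle / 9.0)

def erode(r_pebble, dt, t_wall, t_ero, d_particle=1e-3):
    # One erosion step: shrink at rate EPSILON while tau_wall exceeds
    # tau_erosion, never below the size of a single constituent particle.
    if t_wall > t_ero:
        return max(d_particle / 2.0, r_pebble - EPSILON * dt)
    return r_pebble
\end{verbatim}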
\begin{figure} \centering \includegraphics[width=0.95\columnwidth]{parameterplot_without_erosion.pdf} \includegraphics[width=0.95\columnwidth]{parameterplot_with_erosion.pdf} \caption{Pebble accretion outcome for a range of initial $y$-coordinates and initial pebble radii. Yellow areas in the plot mark parameters where pebbles are accreted by the planet, and blue marks parameters where no pebble accretion occurs. The top figure shows the case without wind erosion and the bottom figure the case with erosion. Obviously, wind erosion strongly decreases the parameter range for pebble accretion.} \label{fig:parameterplot} \end{figure} Figure \ref{fig:parameterplot} shows the accretion outcome for an Earth-sized planet at $1\,\mathrm{AU}$ for a wide range of sizes of in-falling bodies ($R_\mathrm{pebble} \in [10^{-3}\,\mathrm{m},10^{4}\,\mathrm{m}]$) and initial $y$-positions. The range for the initial $y$-coordinate is kept large enough that all accretion events are considered for the chosen pebble radius range. Here again, the top figure shows the case without wind erosion and the bottom figure the case with wind erosion. There is a clear decrease in pebble accretion if wind erosion is considered. To quantify the influence of wind erosion on pebble accretion, we compare the accretion cross-section diameter $d_\sigma$ as a function of the pebble radius for the case with erosion ($d_\sigma^\mathrm{e.}$) and without ($d_\sigma^\mathrm{n.e.}$). The cross-section diameter is determined by the width of the yellow area for each pebble radius. An example for an Earth-like planet at $1\,\mathrm{AU}$ is shown in Fig. \ref{fig:crosssectionplot}. The black data points represent the case without wind erosion. Small pebbles have a small accretion cross-section because, due to their small coupling times (small Stokes numbers), they mainly follow the gas flow and are not accreted by the planet. The accretion cross-section increases up to a body radius of approximately $10\,\mathrm{m}$ (intermediate Stokes numbers). Pebbles or planetesimals larger than $\sim 10\,\mathrm{m}$ show a rapid decrease in the cross-section size with increasing pebble radius. Objects larger than approximately $100\,\mathrm{m}$ have a constant cross-section diameter. At these large Stokes numbers, they no longer interact with the surrounding gas on relevant timescales. Scattering caused by the planet's gravity dominates. This is consistent with gravitational focusing, as indicated by the blue line \citep{Safronov1972}. The red data points represent the case with wind erosion. Large objects ($\geq 100\,\mathrm{m}$) are sufficiently stable against wind erosion, so the accretion cross-section diameter does not differ from the case without wind erosion. Small objects ($\leq 1\,\mathrm{cm}$) are coupled well to the gas, so they do not exceed the threshold shear stress for wind erosion. Due to their acceleration toward the planet, pebbles with radii between $10^{-2}$ and $10^2\,\mathrm{m}$ experience gas-drag-driven erosion. After the pebbles are eroded to a smaller size, the accretion cross-section diameter decreases significantly, due to the change in their aerodynamic behavior. \begin{figure} \centering \includegraphics[width=\columnwidth]{crosssectionplot.pdf} \caption{Accretion cross-section diameter $d_\sigma$ as a function of the pebble radius $R_\mathrm{pebble}$ for an Earth-sized planet at $1\,\mathrm{AU}$ with wind erosion (red) and without wind erosion (black). 
For large pebble radii, both cases converge to an accretion cross-section diameter of about $0.5 r_\mathrm{Hill}$, which is slightly below the theoretical value (blue line) for gravitational focusing \citep{Safronov1972}. We assume that the underestimation is caused by close encounters that spiral onto the planet on highly eccentric orbits but are registered as non-accreting events when they reach a large distance from the planet.} \label{fig:crosssectionplot} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{PA_crosssection_1Earthradius_legend.pdf} \caption{Accretion cross-section diameter $d_\sigma^\mathrm{e.}$ with erosion in relation to $d_\sigma^\mathrm{n.e.}$ without erosion as a function of the pebble radius $R_\mathrm{pebble}$ for an Earth-sized planet. The semi-major axis $a$ varies between $0.1\,\mathrm{AU}$ (violet) and $3\,\mathrm{AU}$ (red). For planets near the central star, a significant reduction in accretion efficiency for pebbles with $R_\mathrm{pebble} \lesssim 10\,\mathrm{m}$ can be observed with wind erosion. For a semi-major axis of $a \geq 3\,\mathrm{AU}$ there is no difference between the cases with and without erosion.} \label{fig:crosssectionfactorplot} \end{figure} To illustrate the difference between the cases without and with wind erosion, in Fig.~\ref{fig:crosssectionfactorplot} we plot the ratio of the accretion cross-section diameters $d_\sigma^\mathrm{e.}$ and $d_\sigma^\mathrm{n.e.}$ as a function of the pebble radius $R_\mathrm{pebble}$ for an Earth-like planet at different semi-major axes, $a \in [0.1 \, \mathrm{AU}, 3\,\mathrm{AU}]$. For planets near the central star, $a \leq 1\,\mathrm{AU}$, there is a significant reduction in pebble accretion efficiency with erosion for pebble radii between approximately $0.1$ and $10\, \mathrm{m}$. Below and above this range the ratio of $d_\sigma^\mathrm{e.}$ to $d_\sigma^\mathrm{n.e.}$ is 1. Wind erosion produces a significant dip in accretion efficiency within this size range. For an increasing semi-major axis, this dip is less pronounced. For $a\geq 3\,\mathrm{AU}$ the dip disappears completely. At these large distances wind erosion does not occur and therefore does not influence the pebble accretion outcome, due to the low gas density.\\ Figure \ref{fig:crossectionfactorplots} shows the ratio $d_\sigma^\mathrm{e.}/d_\sigma^\mathrm{n.e.}$ as a function of the pebble radius $R_\mathrm{pebble}$ for planets with sizes between $0.1$ and $10\,R_\mathrm{Earth}$. Semi-major axes between $0.1$ and $3\,\mathrm{AU}$ are simulated and no additional planetary atmosphere is considered (Eq. \ref{eq:totalgaspressure} consists only of the first term). For semi-major axes $a \gtrsim 3 \,\mathrm{AU}$, wind erosion does not affect the pebble accretion outcome. This can be seen by the red data points, which are equal to $d_\sigma^\mathrm{e.}/d_\sigma^\mathrm{n.e.}=1$. The dip caused by wind erosion is more pronounced at smaller semi-major axes $a$ and is wider the larger the planet. The threshold pebble radius, which we define as the maximum pebble radius of the wind erosion dip, increases with increasing planet radius. The data presented so far have not taken an additional planetary atmosphere into account. We consider that an Earth-sized planet might retain an additional planetary atmosphere. Therefore, we carried out two simulation runs with $10^3\,\mathrm{Pa}$ and $10^5\,\mathrm{Pa}$ gas pressure at the planet's surface and compared the results with those of the case without a planetary atmosphere. 
The thin atmosphere is similar to the Martian atmosphere, while the thicker one is comparable to Earth's atmosphere. Figure \ref{fig:crossectionfactorplotsatmosphere} compares the results for the different planetary atmospheres (left: no atmosphere, middle: $10^3\,\mathrm{Pa}$, right: $10^5\,\mathrm{Pa}$). For the probed parameter space ($R_\mathrm{pebble} \in [10^{-3}\,\mathrm{m},10^{4}\,\mathrm{m}]$ and $a \in [0.1\,\mathrm{AU},3\,\mathrm{AU}]$), we see no significant difference in the pebble accretion outcome. This indicates that wind erosion of pebbles and planetesimals that are accreted by the planet occurs at distances far away from the planet, where the planetary atmosphere has no additional influence on the local gas pressure (see Eq. \ref{eq:totalgaspressure}). This means that for future pebble accretion simulations, the local gas density in the protoplanetary disk is more important than the atmosphere of the planet. The reason is that the planetary atmosphere only becomes relevant at very small distances from the planetary surface, so pebbles and planetesimals eroded at these distances are accreted anyway. \begin{figure*} \centering \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_0,1Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_0,2Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_0,3Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_0,5Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_1Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_2Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_3Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_5Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_10Earthradius_legend.pdf} \end{minipage} \caption{\label{fig:crossectionfactorplots}Accretion cross-section diameter $d_\sigma^\mathrm{e.}$ with erosion in relation to $d_\sigma^\mathrm{n.e.}$ without erosion as a function of the pebble radius $R_\mathrm{pebble}$ for planets with sizes between $0.1$ and $10\,R_\mathrm{Earth}$. The semi-major axis $a$ varies between $0.1\,\mathrm{AU}$ (violet) and $3\,\mathrm{AU}$ (red). The planets do not have an additional planetary atmosphere. For a semi-major axis $a \gtrsim 3\,\mathrm{AU}$, wind erosion does not affect the pebble accretion outcome (see the yellow line at $d_\sigma^\mathrm{e.}/d_\sigma^\mathrm{n.e.}=1$). With increasing planet size $R_\mathrm{planet}$, the pebble radius range affected by wind erosion increases significantly; this can be seen from the shift in the threshold pebble radius around $10\,\mathrm{m}$. Larger planets also enlarge the region in which wind erosion influences the accretion of pebbles and planetesimals. Error bars (similar to Fig.
\ref{fig:crosssectionfactorplot}) are removed for better visualization.} \end{figure*} \begin{figure*} \centering \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_1Earthradius.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_1Earthradius_0,01Earthatmosphere.pdf} \end{minipage} \begin{minipage}[t]{0.32\textwidth} \includegraphics[width=\textwidth]{PA_crosssection_1Earthradius_Earthatmosphere_legend.pdf} \end{minipage} \caption{\label{fig:crossectionfactorplotsatmosphere}Accretion cross-section diameter $d_\sigma^\mathrm{e.}$ with erosion in relation to $d_\sigma^\mathrm{n.e.}$ without erosion as a function of the pebble radius $R_\mathrm{pebble}$ for an Earth-sized planet with no atmosphere (left), with a Mars-like atmosphere ($10^3\,\mathrm{Pa}$; middle), and with an Earth-like atmosphere ($10^5\,\mathrm{Pa}$; right). The accretion outcomes of these three cases do not differ significantly. This indicates that wind erosion of pebbles and planetesimals accreted by the planet occurs at distances far away from the planet, where the planetary atmosphere has no additional influence on the local gas pressure (see Eq. \ref{eq:totalgaspressure}). The semi-major axis $a$ varies between $0.1\,\mathrm{AU}$ (violet) and $3\,\mathrm{AU}$ (red). Error bars (similar to Fig. \ref{fig:crosssectionfactorplot}) are removed for better visualization.} \end{figure*} \section{Conclusion} We studied the influence of gas-drag-driven wind erosion on pebble and planetesimal accretion in numerical simulations. For bodies consisting of millimeter-sized particles, wind erosion can be a destructive process that leads to complete disintegration into the constituent particles. During pebble or planetesimal accretion, where the body can reach high velocities relative to the gas due to the gravitational acceleration of the planet, wind erosion is a major driver of the accretion outcome. For semi-major axes $a<3\,\mathrm{AU}$, we observe that wind erosion significantly decreases the accretion efficiency for pebbles smaller than $\sim 10 \, \mathrm{m}$. We also observe that it is not important for the accretion outcome whether the planet has an additional planetary atmosphere or not, at least for atmospheres up to the density of Earth's. The wind erosion dip in the accretion efficiency is characterized by the threshold pebble radius, which depends on the planetary radius $R_\mathrm{planet}$ and the semi-major axis $a$. \begin{acknowledgements} This project is funded by the DLR space administration with funds provided by the BMWi under grant 50 WM 1760. We thank Remo Burn for a constructive discussion of these processes. We appreciate the constructive review by the anonymous referee. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \input{introduction} \section{Materials and methods} \input{method} \section{Results} \input{results} \section{Discussion} \input{discussion} \section{Conclusion} \input{conclusion} \section*{Acknowledgments} This work was sponsored by Varian Medical Systems. The authors would like to thank Dr. Anand P. Santhanam for the access to GPU clusters. \bibliographystyle{ama} \subsection{Image data} Retrospective analysis was performed using CT and MR images from 20 prostate cancer patients (61 to 80 years old). The CTs were acquired on a 64-slice CT scanner (Sensation, Siemens Medical Solutions, Erlangen, Germany) using the following settings: 120 kVp, 400 mA, and 1.5 mm or 3 mm slice thickness, with in-plane spatial resolutions varying from 0.85 $\times$ 0.85 mm$^{2}$ to 1.27 $\times$ 1.27 mm$^{2}$. For each patient, an MR image was acquired on the same day as the CT with a non-contrast T1-weighted 2D turbo spin echo sequence (echo time: 12 ms or 13 ms, repetition time: 523 ms to 784 ms, flip angle: 150$^{\circ}$) on a 1.5 T MR scanner (Sonata, Siemens Healthcare, Erlangen, Germany). MR images had a slice thickness of 5 mm and in-plane spatial resolutions ranging from 0.71 $\times$ 0.71 mm$^{2}$ to 0.94 $\times$ 0.94 mm$^{2}$. Thirty slices covering the prostate region were extracted from the MR images and resampled to dimensions of 256 $\times$ 256 $\times$ 30. The final voxel size of the MR images varied from 1.25 $\times$ 1.11 $\times$ 5 mm$^{3}$ to 1.41 $\times$ 1.41 $\times$ 5 mm$^{3}$. \subsection{Preprocessing} Figure~\ref{fig1} outlines the image preprocessing and CNN model training workflows. N4 bias field correction\cite{RN30} and histogram-based normalization\cite{RN31} were performed on the MR images to minimize the inter-patient intensity variation. A body mask of each patient, which was used for restricting loss evaluation and sCT accuracy assessment, was generated from the bias-corrected MR image using Otsu's thresholding\cite{RN32} followed by opening and closing morphological operations. To account for organ movement and patient setup variations between CT and MR images, the CT was registered to the bias-corrected MR image using rigid and affine registrations, followed by a multi-resolution B-spline registration (Elastix\cite{RN33}). Each deformed CT (dCT) was resampled to match the MR image resolution. Each dCT was visually compared to its paired MR image to ensure that the images were properly registered. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=12.5cm]{figure1.pdf} \end{tabular} \end{center} \caption {\label{fig1} The overall workflow of sCT generation. (a) In the preprocessing stage, N4 bias correction was applied to the MRI to get the bias-corrected MRI (bc-MRI). The CT was then deformably registered to the bc-MRI to get the paired MRI-deformed CT (dCT). The body mask and normalized MRI (nMRI) were acquired from the bc-MRI for each patient. (b) In the training stage, the sCT was generated by feeding the nMRI into the CNN model. The loss was computed as the mean absolute error between the sCT and dCT within the body mask and then minimized by updating the variables of the CNN model using backpropagation and stochastic gradient descent.} \end{figure} \subsection{2D and 3D CNN models} The proposed 2D model was modified from SegNet{\cite{RN27}}, a state-of-the-art deep learning architecture for semantic segmentation, and extended to 3D.
2D MR slices and 3D MR volumes were fed into the corresponding CNN models, which were trained to output 2D sCT slices and 3D sCT volumes, respectively. Figure~\ref{fig2} shows the architecture of the 2D model. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[height=12.5cm]{figure2.pdf} \end{tabular} \end{center} \caption {\label{fig2} The overall 2D CNN model architecture. A slice from the normalized MRI is input into the model. Each blue box represents a set of feature maps whose dimensions and number are shown. Each orange arrow represents a convolutional (conv) layer followed by instance normalization and the activation function (rectified linear unit, ReLU). In the encoder network, a maxpool operation with a 2 $\times$ 2 window and a stride of 2, shown by green arrows, is applied to reduce the spatial resolution of feature maps, while the deconvolutional (deconv) layer followed by an instance normalization layer, shown by black arrows, is used to upsample feature maps. A residual shortcut, shown by gray arrows, is achieved by adding high-resolution feature maps in the encoder network to up-sampled feature maps in the decoder network. Finally, a conv layer consisting of 1 $\times$ 1 filters is used to generate a 2D sCT.} \end{figure} Like SegNet, the 2D model has encoder and decoder networks. The encoder network, consisting of 13 convolutional layers, is identical to the convolutional layers in the VGG16 model{\cite{RN29}}, except that filters in the first convolutional layer have a depth of 1 rather than 3, because of the scalar nature of MR and CT. Each encoding convolutional layer performed convolution of its input with a set of 3 $\times$ 3 trainable filters at a stride of 1. Zero padding was used to produce feature maps with the same resolution as the inputs. These feature maps were normalized using instance normalization\cite{RN34} to reduce internal covariate shifts and then transformed by the element-wise activation function $max(0,x)$, termed the Rectified Linear Unit (ReLU). The feature maps were downsampled by applying a maxpooling layer with a 2 $\times$ 2 window and a stride of 2. The sequence of several convolutional layers and max pooling layers acts to extract local and global features and increase translation invariance. The decoder network, consisting of a hierarchy of decoders, was used to upsample low-resolution feature maps and gradually reconstruct the sCT. Each decoding convolutional layer corresponded to an encoding convolutional layer, except for the final convolutional layer, which had a set of 1 $\times$ 1 learnable filters with a stride of 1. Three modifications to SegNet\cite{RN27} were made to develop the proposed 2D CNN model. First, the unpooling layers in the original SegNet\cite{RN27} were replaced with fractionally-strided convolutional layers (also known as deconvolutional layers). Unlike unpooling layers, which use memorized pooling indices from maxpooling layers to produce sparse high-resolution feature maps, fractionally-strided convolutional layers can be trained to produce dense high-resolution feature maps.\cite{RN35} Second, residual shortcuts, which element-wise add encoder feature maps to corresponding upsampled feature maps, were introduced for faster convergence. This was inspired by ResNet{\cite{RN36}}. Third, instance normalization\cite{RN34} was employed rather than batch normalization\cite{RN37} to deal with the small batch size.
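To make these three modifications concrete, the following minimal sketch shows a single encoder/decoder stage combining a fractionally-strided convolution, instance normalization, and a residual shortcut. It is our illustration only, not the study's code: the channel width is a placeholder, the full model has 13 encoding layers, and PyTorch is used here purely for brevity.

\begin{verbatim}
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # 3x3 convolution (stride 1, zero padding) -> instance norm -> ReLU
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True))

class EncoderDecoderStage(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.encode = conv_block(1, ch)        # MR slice has one channel
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        # Fractionally-strided ("deconv") layer: learned 2x upsampling
        self.up = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, kernel_size=2, stride=2),
            nn.InstanceNorm2d(ch))
        self.decode = conv_block(ch, ch)
        self.head = nn.Conv2d(ch, 1, kernel_size=1)  # 1x1 filters -> sCT

    def forward(self, x):
        skip = self.encode(x)         # high-resolution encoder features
        low = self.pool(skip)         # downsample by 2
        up = self.up(low)             # learned upsampling back to full size
        out = self.decode(up + skip)  # residual shortcut: element-wise add
        return self.head(out)

sct = EncoderDecoderStage()(torch.randn(1, 1, 256, 256))  # (1, 1, 256, 256)
\end{verbatim}

Swapping \texttt{nn.InstanceNorm2d} for batch normalization here would reintroduce the dependence on batch statistics that motivates the third modification above.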
The 3D model shared the same architecture as the 2D model, except that all 2D operations were replaced with their corresponding 3D counterparts. The filters in the convolutional layers and fractionally-strided convolutional layers had sets of weights and biases, which were trained by minimizing a loss function. The loss function was defined as the mean absolute error (MAE) between the sCT and deformed CT (dCT) within the body mask: \begin{equation} \label{eq1} loss = \frac{1}{N}\sum_{i=1}^{N}|sCT_{i}-dCT_{i}| \end{equation} where $N$ was the number of voxels inside the body masks of the MR images, and $sCT_{i}$ and $dCT_{i}$ represented the HU values of the $i^{th}$ voxel in the sCT and dCT, respectively. \subsection{Model optimization details} Both the 2D and 3D CNN models were implemented using Tensorflow\cite{RN38} packages. The Adam stochastic gradient descent method\cite{RN39} with default parameters, except for the learning rate that was set at 0.01, was used for minimizing the loss function (Equation~\ref{eq1}). At each iteration, a mini-batch of 2D images or 3D volumes was randomly selected from the training set. The batch size was limited by GPU memory. A mini-batch of 15 training slices was used to run the 2D model on an 8 GB NVIDIA GeForce GTX 1080 GPU. The 3D model was run on a 12 GB NVIDIA GeForce GTX Titan X GPU with a mini-batch of 1 training volume. The reduced batch size and large-memory GPU card were necessary for implementing the 3D model due to its greater memory consumption. On-the-fly data augmentation (random shift and rotation) was performed on each set of MR images, body masks, and dCTs to reduce overfitting. For both the 2D and 3D models, the random translation was up to 15 pixels in the x and y directions, and the random rotation angle in the x-y plane was confined within $\pm$5$^{\circ}$. Rotations with random angles within $\pm$2$^{\circ}$ in the x-z and y-z planes were applied to the 3D images. The 2D and 3D model weights were initialized using He initialization{\cite{RN40}}, and the biases were initialized to 0. \subsection{Model evaluation} Five-fold cross-validation was performed to evaluate model performance. The 20-patient cohort was randomly divided into five groups. Each time validation was performed, four groups were used as the training set to optimize the model. The optimized model was then used to generate sCTs of patients in the remaining group. For the 2D (3D) model, four groups of four patients provided 480 (16) training samples. Using the batch size of 15 (1), it took 32 (16) iterations to go over all samples in the training set for the 2D (3D) model, which was considered one epoch. CNN model accuracy was evaluated using the voxel-wise MAE between the sCT and dCT for three regions: 1) the whole body; 2) a soft tissue region generated by thresholding the dCT with a range of $[-100,150)$ HU; and 3) a bone region generated by thresholding the dCT at 150 HU, i.e., $[150,\infty)$ HU. CNN model accuracy was also evaluated by calculating the dice similarity coefficient (DSC), recall, and precision for the bone region. They were defined as: \begin{equation} \label{eq2} DSC = \frac{2(V_{sCT} \cap V_{dCT})}{V_{sCT} + V_{dCT}}, recall = \frac{V_{sCT} \cap V_{dCT}}{V_{dCT}}, precision = \frac{V_{sCT} \cap V_{dCT}}{V_{sCT}} \end{equation} where $V$ was the bone-region volume. Wilcoxon signed-rank tests were performed on the evaluation metrics to test the difference between the performance of the 2D and 3D models. $P < 0.05$ was considered statistically significant.
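As an illustration of how these evaluation metrics can be computed (a minimal numpy sketch using the thresholds stated above; the function and variable names are ours, not from the study's code):

\begin{verbatim}
import numpy as np

def evaluate(sct, dct, body_mask, bone_thresh=150.0):
    # Mean absolute error (HU) restricted to voxels inside the body mask
    mae_body = np.mean(np.abs(sct[body_mask] - dct[body_mask]))

    # Bone regions obtained by thresholding at 150 HU, i.e. [150, inf)
    bone_sct = sct >= bone_thresh
    bone_dct = dct >= bone_thresh
    overlap = np.logical_and(bone_sct, bone_dct).sum()

    dsc = 2.0 * overlap / (bone_sct.sum() + bone_dct.sum())
    recall = overlap / bone_dct.sum()
    precision = overlap / bone_sct.sum()
    return mae_body, dsc, recall, precision

# Toy example with random HU volumes of the resampled size 256 x 256 x 30
sct = np.random.uniform(-1000.0, 1500.0, (256, 256, 30))
dct = np.random.uniform(-1000.0, 1500.0, (256, 256, 30))
mask = np.ones(sct.shape, dtype=bool)
print(evaluate(sct, dct, mask))
\end{verbatim}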
\section{Introduction} \indent Stochastic gradient descent (SGD) and its variants such as adaptive moment estimation (Adam) \cite{kingma2014adam} have become popular optimization methods for training deep neural networks. These methods split a dataset into multiple batches and then optimize them sequentially by gradient descent in each epoch. SGD has two main advantages: not only is it simple to implement, it can also be applied in online settings where newly arriving training data are used to train models. However, while many researchers have provided solid theoretical guarantees on the convergence of SGD \cite{kingma2014adam,j.2018on,sutskever2013importance}, the assumptions of their proofs cannot be applied to problems involving deep neural networks, which are highly nonsmooth and nonconvex. Aside from the lack of theoretical guarantees, several additional drawbacks restrict the applications of SGD. It suffers from the gradient vanishing problem, meaning that the error signal diminishes as the gradient is backpropagated, which prevents the earlier layers of the network from benefiting from further training \cite{taylor2016training}; moreover, the gradient of the activation function is highly sensitive to the input (i.e., poor conditioning), so a small change in the input can lead to a dramatic change in the gradient.\\ \indent To tackle these intrinsic drawbacks of gradient descent optimization methods, alternating minimization methods have started to attract attention as a potential way to solve deep learning problems. Here, the loss function of a deep neural network is reformulated as a nested function associated with multiple linear and nonlinear transformations across multiple layers. This nested structure is then decomposed into a series of linear and nonlinear equality constraints by introducing auxiliary variables. The linear and nonlinear equality constraints generate multiple subproblems, which can be minimized alternately. Some recent alternating minimization methods have focused on applying the Alternating Direction Method of Multipliers (ADMM) \cite{taylor2016training} and Block Coordinate Descent (BCD) \cite{zeng2018global}, with empirical evaluations demonstrating good scalability in terms of the number of layers and high accuracy on the test sets, especially for neural networks that are very deep, thanks to parallelism \cite{taylor2016training}. These methods also avoid gradient vanishing problems and allow for non-differentiable activation functions such as those in binarized neural networks \cite{courbariaux2015binaryconnect}, as well as complex non-smooth regularizations and constraints, which are increasingly important for deep neural architectures required to satisfy practical requirements such as interpretability, energy efficiency, and cost awareness \cite{carreira2014distributed}. \\ \indent However, as an emerging domain, alternating minimization for deep model optimization suffers from a number of unsolved challenges, including: \textbf{1. The lack of a global convergence guarantee under mild and practical conditions.} Most existing alternating minimization methods for deep learning cannot provide convergence guarantees \cite{taylor2016training}; the few that do require conditions that are unrealistic or hard to satisfy in most practical applications \cite{lau2018proximal}.
This is because, in order to enable the alternating minimization methods to disentangle the nested functions into subproblems, additional equality constraints must be added to coordinate these subproblems. However, these equality constraints are inherently nonconvex due to the nonlinearity of the activation functions (e.g., sigmoids), making it prohibitively difficult to obtain a global minimum and convergence \cite{taylor2016training,lau2018proximal}. \textbf{2. Cubic time complexity in the size of the feature dimensions.} Existing methods are generally very time-consuming. This is because, in order to update the auxiliary variables, existing alternating minimization methods typically require matrix inversion, which has $O(d^3)$ time complexity, where $d$ is the dimension of features, and is thus prohibitively costly \cite{taylor2016training,boyd2011distributed}. \\ \indent In order to simultaneously address these technical problems, we propose a new formulation of the deep neural network problem, along with a novel Deep Learning Alternating Minimization (DLAM) algorithm. The proposed framework is highly generic and sufficiently flexible to be utilized in common fully-connected deep neural network models, as well as being easily extendable to other models such as convolutional neural networks \cite{krizhevsky2012imagenet} and recurrent neural networks \cite{mikolov2010recurrent}. Specifically, we, for the first time, transform the original deep neural network optimization problem into an inequality-constrained problem that can approximate the original one arbitrarily closely. This inequality-constraint-based transformation ensures the convexity of all subproblems, and hence global minima of the subproblems are easily obtained. The operation of matrix inversion is avoided by a quadratic approximation technique and a backtracking algorithm, which also speed up the convergence of the DLAM algorithm. Moreover, while existing methods typically require strict and complex conditions, such as Kurdyka-Łojasiewicz (KL) properties \cite{lau2018proximal}, to prove convergence, our proposed method requires simple and mild conditions to guarantee convergence and covers most of the commonly used loss functions and activation functions. Last but not least, the choice of hyperparameters has, in theory, no effect on the convergence of our DLAM algorithm. Our contributions in this paper include: \begin{itemize} \item We propose a novel formulation for deep neural network optimization. The deeply nested activation functions are disentangled into separate functions coordinated by inequality constraints that are inherently convex. \item We present a novel and efficient DLAM algorithm. A quadratic approximation technique and a backtracking algorithm are utilized to avoid matrix inversion, considerably reducing the computational cost to $O(d^2)$. Every subproblem has a closed-form solution, further boosting efficiency. \item We investigate several attractive convergence properties of the DLAM algorithm under mild conditions. The model assumptions are very mild, ensuring that most deep learning problems satisfy them. The new DLAM algorithm is guaranteed to converge to a critical point. \item We conduct experiments on benchmark datasets to validate our proposed DLAM algorithm. Experiments on two benchmark datasets show that the new algorithm performs well compared with SGD, its variants, and ADMM. \end{itemize} \indent The rest of the paper is organized as follows.
In Section \ref{sec:related work}, we summarize recent research related to this topic. In Section \ref{sec:algorithm}, we present the problem formulation and the new DLAM algorithm. In Section \ref{sec:convergence}, we introduce the main convergence results for the DLAM algorithm. Section \ref{sec:experiment} reports the results of the extensive experiments conducted to validate the convergence and effectiveness of the new DLAM. Section \ref{sec:conclusion} concludes by summarizing the research. \section{Related Work} \label{sec:related work} \indent All of the existing work on optimization methods for deep neural network problems falls into two major classes: stochastic gradient descent methods and alternating minimization methods. Research related to both is discussed in this section.\\ \indent \textbf{Stochastic gradient descent methods:} The renaissance of SGD can be traced back to 1951, when Robbins and Monro published the first paper \cite{robbins1951textordfemininea}. The famous back-propagation algorithm was introduced by Rumelhart et al. \cite{rumelhart1986learning}. Many variants of SGD methods have since been presented, including the use of Polyak momentum, which accelerates the convergence of iterative methods \cite{polyak1964some}, and research by Sutskever et al., who highlighted the importance of Nesterov momentum and initialization \cite{sutskever2013importance}. Many well-known SGD methods that incorporate adaptive learning rates have been proposed by the deep learning community, including AdaGrad \cite{duchi2011adaptive}, RMSProp \cite{tielemandivide}, Adam \cite{kingma2014adam} and AMSGrad \cite{j.2018on}.\\ \indent \textbf{Alternating minimization methods for deep learning:} Previous work on the application of alternating minimization algorithms to deep learning problems can be categorized into two main types. The first research strand proposes the use of alternating minimization algorithms for specific applications. For example, Taylor et al. presented an Alternating Direction Method of Multipliers (ADMM) algorithm to transform a fully-connected neural network problem into an equality-constrained problem, where the many subproblems split off by ADMM can be solved in parallel \cite{taylor2016training}, while Zhang et al. handled very deep supervised hashing (VDSH) problems by utilizing an ADMM algorithm to overcome issues related to vanishing gradients and poor computational efficiency \cite{zhang2016efficient}. Zhang and Bastiaan trained a deep neural network by utilizing ADMM with a graph \cite{zhang2017training}, and Askari et al. introduced a new framework for multilayer feedforward neural networks and solved the new framework using block coordinate descent (BCD) methods \cite{askari2018lifted}. Others have proposed novel alternating minimization methods and proved their convergence results. For instance, Carreira and Wang suggested a method involving the use of auxiliary coordinates (MAC) to replace a nested neural network with a constrained problem without nesting \cite{carreira2014distributed}. Lin and Yao, and Lau et al. both proposed BCD algorithms and proved their convergence via the Kurdyka-Łojasiewicz (KL) property \cite{lau2018proximal}, while Choromanska et al. proposed a BCD algorithm for training deep feedforward neural networks based on the concept of co-activation memory \cite{choromanska2018beyond}, and a BCD algorithm with R-linear convergence was proposed by Zhang and Brand to train Tikhonov regularized deep neural networks \cite{zhang2017convergent}.
However, most of these researchers focused on specific applications of neural networks rather than their general formulations. Even though several did discuss general neural network problems and provide theoretical guarantees, the assumptions involved are hard to satisfy in practice. \section{The DLAM algorithm} \label{sec:algorithm} \indent In this section, we present our novel DLAM algorithm. Section \ref{sec:problem} provides the new algorithm's formulation, and Section \ref{sec:quadratic approxmation} shows how the DLAM algorithm and the quadratic approximation technique function together to solve all the subproblems. \begin{table} \scriptsize \centering \caption{Notation Used in This Paper} \begin{tabular}{cc} \hline Notation&Description\\ \hline $L$& Number of layers.\\ $W_l$& The weight matrix of the $l$-th layer.\\ $b_l$& The intercept in the $l$-th layer.\\ $z_l$& The temporary variable of the linear mapping in the $l$-th layer.\\ $h_l(z_l)$& The nonlinear activation function in the $l$-th layer.\\ $a_l$& The output of the $l$-th layer.\\ $x$& The input matrix of the neural network.\\ $y$& The predefined label vector.\\ $R(z_l,y)$& The risk function in the $l$-th layer.\\ $\Omega_l(W_l)$& The regularization term in the $l$-th layer.\\ $\varepsilon_l$& The tolerance of the nonlinear mapping in the $l$-th layer.\\ \hline \end{tabular} \label{tab:notation} \vspace{-0.5cm} \end{table} \subsection{Inequality Approximation for Deep Learning} \label{sec:problem} \ \indent The important notation used in this paper is listed in Table \ref{tab:notation}. A typical fully-connected deep neural network consists of $L$ layers, each of which is defined by a linear mapping and a nonlinear activation function. A linear mapping is composed of a weight matrix $W_l\in \mathbb{R}^{n_l\times n_{l-1}}$, where $n_l$ is the number of neurons in the $l$-th layer, and an intercept $b_l\in \mathbb{R}^{n_l}$; a nonlinear mapping is defined by an activation function $h_l(\bullet)$. Given an input $a_{l-1}\in \mathbb{R}^{n_{l-1}}$ from the $(l-1)$-th layer, the $l$-th layer outputs $a_l=h_l(W_la_{l-1}+b_l)$. By introducing an auxiliary variable $z_l$ as the temporary result of the linear mapping, the deep neural network problem is formulated mathematically as follows: \begin{problem} \label{prob: problem 1} \begin{align*} &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \min\nolimits_{a_l,W_l,b_l,z_l} R(z_L;y)+\sum\nolimits_{l=1}^L\Omega_l(W_l)\\ &s.t.\ z_l\!=W_la_{l-1}\!+\!b_l(l\!=\!1\!,\!\cdots\!,\!L)\!,\ a_l\!=\!h_l\!(z_l)\! \ \!(l\!=\!1,\cdots,L\!-\!1) \end{align*} \end{problem} where $a_0=x\in\mathbb{R}^d$ is the input of the deep neural network, $d$ is the number of feature dimensions, and $y$ is a predefined label vector. $R(z_L;y)$ is the risk function for the $L$-th layer, which is convex and proper, and $\Omega_l(W_l)$ is a regularization term on the $l$-th layer, which is also convex and proper. The equality constraint $a_l=h_l(z_l)$ is the most challenging to handle here, because common activation functions such as tanh and the smooth sigmoid are nonlinear. This makes these constraints nonconvex, and hence it is difficult to obtain a global minimum when updating $z_l$ \cite{taylor2016training}. To deal with this challenge, we transform the original nonconvex constraints into convex inequality constraints, yielding a problem that can approximate Problem \ref{prob: problem 1} arbitrarily closely.
To do this, we introduce a tolerance $\varepsilon_l>0$ and reformulate Problem \ref{prob: problem 1} into the following form: \begin{align*} \min\nolimits_{W_l,b_l,z_l,a_l} &R(z_L;y)\!+\!\sum\nolimits_{l=1}^L\Omega_l(W_l)\\&\!+\!\sum\nolimits_{l\!=\!1}^{L\!-\!1}\mathbb{I}\!(h_l\!(z_l)\!-\!\varepsilon_l\!\leq\! a_l\!\leq\! h_l\!(z_l)\!+\!\varepsilon_l) \\& s.t. z_l=W_la_{l-1}+b_l(l\!=\!1\!,\!\cdots\!,\!L) \end{align*} $\mathbb{I}(h_l(z_l)-\varepsilon_l\leq a_l\leq h_l(z_l)+\varepsilon_l)$ is an indicator function whose value is 0 if $h_l(z_l)-\varepsilon_l\leq a_l\leq h_l(z_l)+\varepsilon_l$ and $\infty$ otherwise. The linear constraint $z_l\!=W_la_{l-1}\!+\!b_l$ can be transformed into a penalty term in the objective function that minimizes the difference between $z_l$ and $W_la_{l-1}\!+\!b_l$. The resulting formulation is as follows: \begin{problem} \label{prob: problem 2} \begin{align*} &\min\nolimits_{W_l,b_l,z_l,a_l} F(\textbf{W},\textbf{b},\textbf{z},\textbf{a})=R(z_L;y)\!+\!\sum\nolimits_{l=1}^L\Omega_l(W_l)\\&\!+\!\sum\nolimits_{l\!=\!1}^L\!\phi\!(a_{l-1}\!,\!W_l\!,\!b_l\!,\!z_l)\!+\!\sum\nolimits_{l\!=\!1}^{L\!-\!1}\! \mathbb{I}(h_l\!(z_l)\!-\!\varepsilon_l\!\leq\! a_l\!\leq\! h_l\!(z_l)\!+\!\varepsilon_l) \end{align*} \end{problem} The penalty term is defined as $\phi(a_{l-1},W_l,b_l,z_l)=(\rho/2)\Vert z_l-W_la_{l-1}-b_l\Vert^2_2$, where $\rho>0$ is a penalty parameter. $\textbf{W}=\{W_l\}_{l=1}^{L}$, $\textbf{b}=\{b_l\}_{l=1}^{L}$, $\textbf{z}=\{z_l\}_{l=1}^{L}$, $\textbf{a}=\{a_l\}_{l=1}^{L-1}$. \indent The reason for introducing $\varepsilon_l$ is that it allows us to project the nonlinear constraint onto a convex $\varepsilon_l$-ball, thus transforming the nonconvex Problem \ref{prob: problem 1} into the multi-convex Problem \ref{prob: problem 2}, which is much easier to solve. Here, multi-convex means that the problem is convex with respect to each variable when all the others are fixed. For example, Problem \ref{prob: problem 2} is convex with regard to $\textbf{z}$ when $\textbf{W}$, $\textbf{b}$, and $\textbf{a}$ are fixed. As $\rho\rightarrow \infty$ and $\varepsilon_l\rightarrow 0$, Problem \ref{prob: problem 2} approaches Problem \ref{prob: problem 1}. \subsection{Quadratically-Approximated Alternative Optimization} \label{sec:quadratic approxmation} \indent In this section, we present the DLAM algorithm developed to solve Problem \ref{prob: problem 2}, shown in Algorithm \ref{algo:DLAM}. Lines 4, 5, 7 (or 9), and 10 update $W_l$, $b_l$, $z_l$, and $a_l$, respectively, and the four relevant subproblems are discussed in detail below:\\ \begin{algorithm} \caption{DLAM Algorithm for Solving Problem \ref{prob: problem 2}} \begin{algorithmic}[1]\scriptsize \label{algo:DLAM} \REQUIRE $y$, $a_0=x$. \ENSURE $a_l,W_l,b_l, z_l(l=1,\cdots,L)$. \STATE Initialize $\rho$, $k=0$. \REPEAT \FOR{$l=1$ to $L$} \STATE Update $W_l^{k+1}$ using Algorithm \ref{algo:theta update}. \STATE Update $b^{k+1}_l$ in Equation \eqref{eq:update b}. \IF{$l=L$} \STATE Update $z_l^{k+1}$ in Equation \eqref{eq:update zl}. \ELSE \STATE Update $z_l^{k+1}$ in Equation \eqref{eq:update z}. \STATE Update $a^{k+1}_{l}$ using Algorithm \ref{algo:tau update}. \ENDIF \ENDFOR \STATE $k\leftarrow k+1$. \UNTIL convergence. \STATE Output $a_l,W_l,b_l,z_l$. \end{algorithmic} \end{algorithm}\\ \textbf{1.
Update $W_l$}\\ \indent The variables $W_l(l=1,\cdots,L)$ are updated as follows: \begin{align} W^{k+1}_l\leftarrow\arg\min\nolimits_{W_l} \phi(a^{k+1}_{l-1},W_l,b^k_l,z^k_l)+\Omega_l(W_l) \label{eq: update W original} \end{align} \indent Because $W_l$ and $a_{l-1}$ are coupled in $\phi(\bullet)$, solving for $W_l$ requires an inversion operation involving $a^{k+1}_{l-1}$, which is computationally expensive. In order to handle this challenge, we define $P^{k+1}_l(W_l;\theta^{k+1}_l)$ as a quadratic approximation of $\phi$ at $W^k_{l}$, which is mathematically reformulated as follows \cite{beck2009fast}: \begin{align*}P^{k\!+\!1}_l(W_l\!;\!\theta^{k\!+\!1}_l)&\!=\!\phi(a^{k\!+\!1}_{l\!-\!1}\!,\!W^k_{l}\!,\!z^k_{l}\!,\!b^k_{l})\!+\!<\!\nabla_{W^k_l} \phi\!,\!W_l\!-\!W^k_l\!>\\&+\Vert\theta_l^{k+1}\circ (W_l-W_l^k)^{\circ 2}\Vert_{1}/2 \end{align*} where $\theta_l^{k+1}>0$ is a parameter vector, $\circ$ denotes the Hadamard (elementwise) product, $a^{\circ b}$ denotes $a$ to the Hadamard power of $b$, and $\Vert\bullet\Vert_{1}$ is the $\ell_{1}$ norm. $<\bullet,\bullet>$ is the Frobenius inner product. $\nabla_{W^k_l}\phi=\rho(W_{l}^ka^{k+1}_{l-1}+b^k_{l}-z^k_{l})(a^{k+1}_{l-1})^T(l=1,\cdots,L)$ is the gradient of $\phi$ with regard to $W_l$ at $W^k_l$. Obviously, $P^{k+1}_l(W^k_l;\theta^{k+1}_l)=\phi(a^{k+1}_{l-1},W^k_{l},b_{l}^k,z^k_{l})$. Rather than minimizing the original subproblem in Equation \eqref{eq: update W original}, we instead minimize the following: \begin{align} &W^{k+1}_l \leftarrow \arg\min\nolimits_{W_l} P^{k+1}_l(W_l;\theta_l^{k+1})+\Omega_l (W_l) \label{eq:update W} \end{align} For $\Omega_l(W_l)$, common regularization terms like $\ell_1$ or $\ell_2$ regularizations lead to closed-form solutions. As for the choice of $\theta_l^{k+1}$, the backtracking algorithm is shown in Algorithm \ref{algo:theta update}. Specifically, for a given $\theta^{k+1}_l$, we minimize Equation \eqref{eq:update W} to obtain $W^{k+1}_l$, increasing $\theta^{k+1}_l$ until the condition in Line 3 no longer holds. The backtracking algorithm always terminates because as $\theta^{k+1}_l\rightarrow \infty$, $W^{k+1}_l\rightarrow W^{k}_l$, and $W^{k}_l$ satisfies the exit condition in Line 3. The time complexity of Algorithm \ref{algo:theta update} is $O(d^2)$, where $d$ is the dimension of the neurons or features. \begin{algorithm} \caption{Backtracking Algorithm to Update $W^{k+1}_l$ } \begin{algorithmic}[1]\scriptsize \label{algo:theta update} \REQUIRE $a_{l-1}^{k+1}$, $W^k_{l}$, $b^k_l$, $z^k_l$, $\rho$, some constant $\gamma>1$. \ENSURE $\theta^{k+1}_l$, $W^{k+1}_l$. \STATE Initialize $\alpha$. \STATE Update $\zeta$ in Equation \eqref{eq:update W} where $\theta^{k+1}_l=\alpha$. \WHILE{$\phi(a^{k+1}_{l-1},\zeta,b^k_{l},z^k_{l})>P^{k+1}_l(\zeta;\alpha)$} \STATE $\alpha\leftarrow \alpha\gamma$.\\ \STATE Update $\zeta$ in Equation \eqref{eq:update W} where $\theta^{k+1}_l=\alpha$.\\ \ENDWHILE \STATE Output $\theta^{k+1}_l \leftarrow \alpha $.\\ \STATE Output $W^{k+1}_{l}\leftarrow \zeta$. \end{algorithmic} \end{algorithm}\\ \textbf{2. Update $b_l$}\\ \indent The variables $b_l(l=1,\cdots,L)$ are updated as follows: \begin{align*} b_l^{k+1}\leftarrow \arg\min\nolimits_{b_l} \phi(a^{k+1}_{l-1},W^{k+1}_l,b_l,z^k_l). \end{align*} \indent The above subproblem has the closed-form solution $b^{k+1}_l=z^{k}_l-W^{k+1}_la^{k+1}_{l-1}$. However, the value of $b^{k+1}_l$ is subject to fluctuation as the signs of either $W_l^{k+1}$ or $a^{k+1}_{l-1}$ may change, which slows down the convergence of $b^{k+1}_l$.
We therefore define $U^{k+1}_l(b_l;L_b)$ as a quadratic approximation of $\phi$ at $b^k_l$, which is formulated mathematically as follows \cite{beck2009fast}: \begin{align*} U^{k+1}_l(b_l;L_b)&=\phi(a^{k+1}_{l-1},W^{k+1}_l,b^k_l,z^k_l)+(\nabla_{b^k_l} \phi)^T(b_l-b^k_l)\\&+(L_b/2)\Vert b_l-b^k_l\Vert^2_2. \end{align*} where $L_b\geq \rho$ is a parameter and $\nabla_{b^k_l} \phi=\rho(b^k_l+W^{k+1}_la^{k+1}_{l-1}-z^k_l)$. Here, $L_b\geq \rho$ is required for the convergence analysis \cite{beck2009fast}. Without loss of generality, we set $L_b=\rho$. We can now solve the following subproblem: \begin{align} b^{k+1}_l\leftarrow \arg\min\nolimits_{b_l} U^{k+1}_l(b_l;\rho) \label{eq:update b} \end{align} \indent The solution to Equation \eqref{eq:update b} is \begin{align} b^{k+1}_l\leftarrow b^{k}_l-\nabla_{b^k_l} \phi/\rho \end{align} This indicates that although $b^{k+1}_l$ is closely related to $b^{k}_l$, it is more resistant to the impact of a sign change in either $W^{k+1}_l$ or $a^{k+1}_{l-1}$.\\ \textbf{3. Update $z_l$}\\ \indent The variables $z_l(l=1,\cdots,L)$ are updated as follows: \begin{align*} z^{k\!+\!1}_l\! &\leftarrow\arg\min\nolimits_{z_l}\! \phi(a^{k\!+\!1}_{l\!-\!1},W^{k\!+\!1}_l,b^{k\!+\!1}_l,z_l)\\&\!+\!\mathbb{I}(h_l(z_l)\!-\!\varepsilon_l\leq a^{k}_l\leq h_l(z_l)\!+\!\varepsilon_l) (l< \!L)\\ z^{k+1}_L &\leftarrow\arg\min\nolimits_{z_L} \phi(a^{k+1}_{L-1},W^{k+1}_L,b^{k+1}_L,z_L)+R(z_L;y) \end{align*} As when updating $b_l$, we define $V^{k+1}_l(z_l;L_z)$ as a quadratic approximation of $\phi$ at $z^k_l$, which is formulated mathematically as follows: \begin{align*} V^{k\!+\!1}_l(z_l\!;\!L_z)&=\!\phi(a_{l\!-\!1}^{k\!+\!1},W^{k\!+\!1}_l,b^{k\!+\!1}_l,z^k_l)\!+\!(\nabla_{z^k_l}\phi)^T(z_l\!-\!z^k_l)\\&\!+\!(L_z/2)\Vert z_l\!-\!z^k_l\Vert^2_2 \end{align*} where $L_z\geq \rho$ is a parameter and $\nabla_{z^k_l}\phi=\rho(z_l^k-W^{k+1}_l a^{k+1}_{l-1}-b^{k+1}_l)$. Without loss of generality, we set $L_z=\rho$. Obviously, $V^{k+1}_l(z^k_l;\rho)=\phi(a^{k+1}_{l-1},W^{k+1}_l,b^{k+1}_l,z^k_l)$. Therefore, we solve the following problems: \begin{align} \nonumber z^{k\!+\!1}_l\! &\leftarrow\arg\min\nolimits_{z_l}\! V^{k+1}_l(z_l;\rho)\\& +\!\mathbb{I}(h_l(z_l)\!-\!\varepsilon_l\leq a^{k}_l\leq h_l(z_l)\!+\!\varepsilon_l) (l< \!L) \label{eq:update z}\\ z^{k+1}_L &\leftarrow\arg\min\nolimits_{z_L} V^{k+1}_L(z_L;\rho)+R(z_L;y) \label{eq:update zl} \end{align} As for $z_l(l=1,\cdots,L-1)$, the solution is \begin{align*} z_l^{k+1}\leftarrow \min(\max(B^{k+1}_1,z^{k}_l-\nabla_{z^k_l}\phi/\rho), B^{k+1}_2). \end{align*} where $B^{k+1}_1$ and $B^{k+1}_2$ represent the lower and upper bounds of the set $\{ z_l|h_l(z_l)\!-\!\varepsilon_l\leq a^{k}_l\leq h_l(z_l)\!+\!\varepsilon_l\}$. Equation \eqref{eq:update zl} is easy to solve using the Fast Iterative Soft Thresholding Algorithm (FISTA) \cite{beck2009fast}.\\ \textbf{4.
Update $a_l$}\\ \indent The variables $a_l(l=1,\cdots,L-1)$ are updated as follows: \begin{align*} a^{k+1}_{l}&\leftarrow \arg\min\nolimits_{a_{l}} \phi(a_{l},W^k_{l+1},b^k_{l+1},z^k_{l+1})\\&+\mathbb{I}(h_l(z^{k+1}_l)-\varepsilon_l\leq a_l\leq h_l(z^{k+1}_l)+\varepsilon_l) \end{align*} \indent As when solving for $W^{k+1}_l$, the quadratic approximation of $\phi$ at $a^k_l$ is defined as \begin{align*}Q^{k\!+\!1}_l(a_{l};\!\tau^{k\!+\!1}_l)&=\!\phi(a^k_{l}\!,\!W^k_{l\!+\!1}\!,\!b^k_{l\!+\!1}\!,\!z^k_{l\!+\!1})\!+\!(\nabla_{a^k_{l}} \phi)^T(a_{l}\!-\!a^k_{l})\\&+\Vert\tau_l^{k+1}\circ (a_{l}-a_{l}^k)^{\circ 2}\Vert_{1}/2 \end{align*} and this allows us to solve the following problem instead: \begin{align} \nonumber a^{k+1}_l&\leftarrow\arg\min\nolimits_{a_l} Q^{k+1}_l(a_l;\tau^{k+1}_l)\\&+\mathbb{I}(h_l(z^{k+1}_l)-\varepsilon_l\leq a_l\leq h_l(z^{k+1}_l)+\varepsilon_l) \label{eq:update a} \end{align} where $\tau_l^{k+1}>0$ is a parameter vector. $\nabla_{a^k_{l}}\phi=\rho(W_{l+1}^{k})^T(W_{l+1}^{k}a^{k}_{l}+b^k_{l+1}-z^k_{l+1})(l=1,\cdots,L-1)$ is the gradient of $\phi$ with regard to $a_l$ at $a^k_l$. Obviously, $Q^{k+1}_l(a^k_{l};\tau^{k+1}_l)=\phi(a^k_{l},W^k_{l+1},b_{l+1}^k,z^k_{l+1})$. Because $Q^{k+1}_l(a_{l};\tau^{k+1}_l)$ is a quadratic function with respect to $a_{l}$, the solution can be obtained by \begin{align*} a_{l}^{k+1}\leftarrow a^k_{l}-\nabla_{a^k_{l}}\phi/\tau_{l}^{k+1} \end{align*} given a suitable $\tau_l^{k+1}$. The main focus is now how to choose $\tau_l^{k+1}$. Similar to Algorithm \ref{algo:theta update}, the backtracking algorithm for finding a suitable $\tau_l^{k+1}$ is shown in Algorithm \ref{algo:tau update}. The time complexity of Algorithm \ref{algo:tau update} is $O(d^2)$, where $d$ is the dimension of neurons or features. \begin{algorithm} \caption{Backtracking Algorithm to Update $a^{k+1}_{l}$ } \begin{algorithmic}[1]\scriptsize \label{algo:tau update} \REQUIRE $a_{l}^k$, $W^k_{l+1}$, $z^{k+1}_l$, $z^k_{l+1}$, $b^k_{l+1}$, $\rho$, some constant $\eta>1$. \ENSURE $\tau^{k+1}_l$, $a^{k+1}_{l}$. \STATE Pick $t$ such that $\beta=a^k_l-\nabla_{a^k_l}\phi/t$ and $h_{l}(z^{k+1}_{l})-\varepsilon_l\leq \beta\leq h_{l}(z^{k+1}_{l})+\varepsilon_l$. \WHILE{$\phi(\beta,W^k_{l+1},z^k_{l+1},b^k_{l+1})>Q^{k+1}_l(\beta;t)$} \STATE $t\leftarrow t\eta$.\\ \STATE $\beta\leftarrow a^k_{l}-\nabla_{a^k_{l}}\phi/t$.\\ \ENDWHILE \STATE Output $\tau^{k+1}_l \leftarrow t $.\\ \STATE Output $a^{k+1}_{l}\leftarrow \beta$. \end{algorithmic} \end{algorithm}
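To illustrate how light-weight the resulting updates are, the following sketch implements the closed-form $b_l$ step and the projected $z_l$ step ($l<L$) for a single layer with $L_b=L_z=\rho$. It is our own illustration, not the authors' implementation: we assume a tanh activation, for which the interval bounds $B^{k+1}_1$ and $B^{k+1}_2$ can be written explicitly via $\mathrm{arctanh}$, and we omit the $W_l$ and $a_l$ steps, which additionally require the backtracking searches of Algorithms \ref{algo:theta update} and \ref{algo:tau update}.

\begin{verbatim}
import numpy as np

def update_b(W, a_prev, b, z, rho):
    # Gradient step on the quadratic approximation U with L_b = rho;
    # algebraically this reduces to the closed form b = z - W a_prev.
    grad_b = rho * (b + W @ a_prev - z)
    return b - grad_b / rho

def update_z(W, a_prev, b, z, a, rho, eps):
    # Gradient step on V with L_z = rho, then projection onto the
    # feasible interval [B1, B2] induced by h(z) - eps <= a <= h(z) + eps.
    grad_z = rho * (z - W @ a_prev - b)
    z_prop = z - grad_z / rho
    # For h = tanh (monotonically increasing), the bounds are explicit;
    # the clip keeps arctanh's argument inside its domain (-1, 1).
    B1 = np.arctanh(np.clip(a - eps, -1 + 1e-7, 1 - 1e-7))
    B2 = np.arctanh(np.clip(a + eps, -1 + 1e-7, 1 - 1e-7))
    return np.minimum(np.maximum(B1, z_prop), B2)

# Toy layer: 4 inputs, 3 outputs
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4)); a_prev = rng.standard_normal(4)
b = rng.standard_normal(3); z = rng.standard_normal(3)
a = np.tanh(z) + 0.05 * rng.standard_normal(3)
b_new = update_b(W, a_prev, b, z, rho=1e-4)
z_new = update_z(W, a_prev, b_new, z, a, rho=1e-4, eps=0.1)
\end{verbatim}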
\section{Convergence Analysis} \label{sec:convergence} \indent In this section, we present the main convergence analyses for the DLAM algorithm. Specifically, Section \ref{sec:assumptions} introduces the assumptions necessary to guarantee convergence. The main convergence properties of the new DLAM algorithm are presented in Section \ref{sec:convergence property}. \subsection{Assumptions} \label{sec:assumptions} \indent First recall the concept of coercivity \cite{wang2015global}: \begin{definition}[Coercivity] Suppose $f(x_1,\cdots,x_m)$ is a function with respect to $(x_1,\cdots,x_m)\in G$, where $G$ is the domain; then $f(x_1,\cdots,x_m)$ is coercive if $(x_1,\cdots,x_m)\in G$ and $\Vert (x_1,\cdots,x_m)\Vert\rightarrow \infty$ leads to $f\rightarrow \infty$. \end{definition} \indent We then propose a new concept called \emph{multi-coercivity} based on the above definition of coercivity, which is defined as: \begin{definition}[Multi-coercivity] Suppose $g(u_1,\cdots,u_m)$ is a function with respect to $(u_1,\cdots,u_m)\in G$, where $G$ is the domain; then $g(u_1,\cdots,u_m)$ is coercive with regard to $u_1$ if $(u_1,\cdots,u_m)\in G$ and $\Vert u_1\Vert\rightarrow \infty$ while fixing $u_i(i=2,3,\cdots,m)$ leads to $g\rightarrow \infty$. If $g$ is coercive with regard to all variables $u_i(i=1,2,\cdots,m)$, then $g$ is multi-coercive. \end{definition} \indent Multi-coercivity is milder than coercivity: although a coercive function must be multi-coercive, the reverse is not the case. For example, $f_1(x,y)=x+y$ is coercive with regard to $x$ and $y$, respectively, and therefore is multi-coercive. However, $f_1$ is not coercive because when $\Vert (x,y)\Vert\rightarrow\infty$ and $(x,y)$ follows the line $x+y=0$, $f_1=0$. Given this definition of multi-coercivity, we can make the following assumption: \begin{assumption}[Multi-coercivity] $F(\textbf{W},\textbf{b},\textbf{z},\textbf{a})$ is multi-coercive over the set $S=\{(\textbf{W},\textbf{b},\textbf{z},\textbf{a}):h_l(z_l)-\varepsilon_l\leq a_l\leq h_l(z_l)+\varepsilon_l(l=1\cdots,L-1)\}$. \label{ass:assumption 1} \end{assumption} \indent With the introduction of the multi-coercivity assumption, the conditions of our problem become much milder than those of previously proposed solutions, and our framework covers most of the common loss functions utilized in neural networks; cross entropy and square loss are both multi-coercive, for example. The next assumption guarantees that all subproblems have global minima. \indent Before stating the second assumption, recall the definition of quasilinearity \cite{boyd2004convex}: \begin{definition} A function $f(x)$ is quasiconvex if every sublevel set $S_\alpha(f)=\{x|f(x)\leq \alpha\}$ is a convex set. Likewise, a function $f(x)$ is quasiconcave if every superlevel set $S_\alpha(f)=\{x|f(x)\geq \alpha\}$ is a convex set. A function $f(x)$ is quasilinear if it is both quasiconvex and quasiconcave. \end{definition} \indent Given this definition of quasilinearity, we can make the following assumption: \begin{assumption}[Quasilinearity] \label{ass:assumption 3} The activation functions $h_l(z_l)(l=1,\cdots,n)$ are quasilinear functions. \end{assumption} \indent Assumption \ref{ass:assumption 3} ensures that the nonlinear constraint $a_l=h_l(z_l)$ in Problem \ref{prob: problem 1} is projected onto a convex set. Fortunately, most of the widely used nonlinear activation functions in deep neural networks, including tanh \cite{zamanlooy2014efficient}, the smooth sigmoid \cite{glorot2010understanding}, and the rectified linear unit (ReLU) \cite{maas2013rectifier}, are quasilinear. They therefore fit neatly into our framework and enjoy several important theoretical properties. \subsection{Key Convergence Properties} \label{sec:convergence property} \indent We introduce the main convergence properties possessed by the DLAM algorithm in this section. If Assumptions \ref{ass:assumption 1} and \ref{ass:assumption 3} hold, then Properties 1-3 stated below are satisfied. These three properties are key for demonstrating the theoretical merits of DLAM; their proofs are provided in the supplementary materials.
Finally, the global convergence and convergence rate of the DLAM algorithm are proved based on Properties 1-3. The three convergence properties are as follows: \begin{property}[Boundedness] For any $\rho> 0$ and $\varepsilon_l>0$, starting from any $(\textbf{W}^0,\textbf{b}^0,\textbf{z}^0,\textbf{a}^0)$ such that $h_l(z^0_l)-\varepsilon_l\leq a^0_l\leq h_l(z^0_l)+\varepsilon_l(l=1,\cdots,L)$, $\{(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)\}$ is bounded, and $F(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)$ is lower bounded. \label{pro:property 1} \end{property} \indent Property \ref{pro:property 1} guarantees that all variables and the objective value remain bounded during the iterations. The proof of Property \ref{pro:property 1} requires Lemma \ref{lemma:lemma 3} and Assumption \ref{ass:assumption 1} and is elaborated in Theorem \ref{theorem: property 1} in the supplementary materials. \begin{property}[Sufficient Descent] \label{pro:property 2} For any $\rho>0$ and $\varepsilon_l>0$, we have \begin{align} \nonumber &F(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)-F(\textbf{W}^{k+1},\textbf{b}^{k+1},\textbf{z}^{k+1},\textbf{a}^{k+1})\\\nonumber&\geq\sum\nolimits_{l=1}^{L} \Vert\theta_l^{k\!+\!1}\circ (W^{k\!+\!1}_l-W_l^k)^{\circ 2}\Vert_{1}/2 \\\nonumber &+\!(\rho/2)\!\sum\nolimits_{l\!=\!1}^L\Vert b^{k\!+\!1}_l\!-\!b^k_l\Vert^2_2\!+\!(\rho/2)\!\sum\nolimits_{l\!=\!1}^L\Vert z^{k\!+\!1}_l\!-\!z^k_l\Vert^2_2\\&+\sum\nolimits_{l=1}^{L-1} \Vert\tau_l^{k\!+\!1}\circ (a^{k\!+\!1}_l-a_l^k)^{\circ 2}\Vert_{1}/2 \label{eq: property2} \end{align} \end{property} \indent Property \ref{pro:property 2} depicts the monotonic decrease of the objective value during the iterations. The proof of Property \ref{pro:property 2} is detailed in Theorem \ref{theorem: property 2} in the supplementary materials. \begin{property}[Subgradient Bound] \label{pro:property 3} There exists a constant $C^{k+1}>0$ and $g\in \partial F(\textbf{W}^{k+1},\textbf{b}^{k+1},\textbf{z}^{k+1},\textbf{a}^{k+1})$ such that \begin{align} \nonumber\Vert g\Vert\! &\leq\! C^{k\!+\!1}\!(\Vert \textbf{W}^{k\!+\!1}\!-\!\textbf{W}^{k}\Vert\!+\!\Vert\textbf{b}^{k\!+\!1}\!-\!\textbf{b}^{k}\Vert\!\\&+\!\Vert\textbf{z}^{k\!+\!1}\!-\!\textbf{z}^{k}\Vert\!+\!\Vert\textbf{a}^{k\!+\!1}\!-\!\textbf{a}^{k}\Vert)\label{eq: property3} \end{align} \end{property} \indent Property \ref{pro:property 3} ensures that the subgradient of the objective function is bounded by the variables. The proof of Property \ref{pro:property 3} requires Property \ref{pro:property 1} and is elaborated in Theorem \ref{theorem: property 3} in the supplementary materials. We now present the global convergence of the DLAM algorithm using the following three theorems. The first theorem states that Properties \ref{pro:property 1}-\ref{pro:property 3} are guaranteed. \begin{theorem}[Convergence Property] \label{thero: theorem 1} For any $\rho>0$ and $\varepsilon_l>0$, if Assumptions \ref{ass:assumption 1} and \ref{ass:assumption 3} are satisfied, then Properties \ref{pro:property 1}-\ref{pro:property 3} hold. \end{theorem} \begin{proof} This theorem follows from the proofs of Theorems \ref{theorem: property 1}, \ref{theorem: property 2} and \ref{theorem: property 3} in the supplementary materials. \end{proof} \indent The next theorem presents the global convergence of the DLAM algorithm.
\begin{theorem} [Global Convergence] \label{thero: theorem 2} For the variables $(\textbf{W},\textbf{b},\textbf{z},\textbf{a})$ in Problem \ref{prob: problem 2}, starting from any $(\textbf{W}^0,\textbf{b}^0,\textbf{z}^0,\textbf{a}^0)$ such that $h_l(z^0_l)-\varepsilon_l\leq a^0_l\leq h_l(z^0_l)+\varepsilon_l(l=1,\cdots,L)$, the sequence has at least one limit point $(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$, and any limit point $(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$ is a critical point. That is, $0\in \partial F(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$. \end{theorem} \begin{proof} Because $(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)$ is bounded, there exists a subsequence $(\textbf{W}^s,\textbf{b}^s,\textbf{z}^s,\textbf{a}^s)$ such that $(\textbf{W}^s,\textbf{b}^s,\textbf{z}^s,\textbf{a}^s)\rightarrow (\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$, where $(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$ is a limit point. By Properties \ref{pro:property 1} and \ref{pro:property 2}, $F(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)$ is non-increasing and lower bounded and hence converges. By Property \ref{pro:property 2}, $\Vert \textbf{W}^{k+1}-\textbf{W}^k\Vert\rightarrow 0$, $\Vert \textbf{b}^{k+1}-\textbf{b}^k\Vert\rightarrow 0$, $\Vert \textbf{z}^{k+1}-\textbf{z}^k\Vert\rightarrow 0$ and $\Vert \textbf{a}^{k+1}-\textbf{a}^k\Vert\rightarrow 0$ as $k\rightarrow \infty$. Based on Property \ref{pro:property 3}, we infer that there exists $g^k\in \partial F(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)$ such that $\Vert g^k\Vert \rightarrow 0$ as $k\rightarrow \infty$. Specifically, $\Vert g^s\Vert \rightarrow 0$ as $s\rightarrow \infty$. According to the definition of the general subgradient (Definition 8.3 in \cite{rockafellar2009variational}), we have $0\in \partial F(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$. In other words, the limit point $(\textbf{W}^*,\textbf{b}^*,\textbf{z}^*,\textbf{a}^*)$ is a critical point of $F$. \end{proof} \indent Theorem \ref{thero: theorem 2} shows that our proposed DLAM algorithm converges globally regardless of the choice of $\rho$ and $\varepsilon_l$. This ensures that our DLAM algorithm is free of parameter restrictions, so the choice of hyperparameters has no effect on its convergence.\\ \indent The next theorem shows that the convergence rate of DLAM is $o(1/k)$: \begin{theorem}[Convergence Rate] For a sequence $(\textbf{W}^k,\textbf{b}^k,\textbf{z}^k,\textbf{a}^k)$, define $c_k=\min\nolimits_{0\leq i\leq k}(\sum\nolimits_{l=1}^{L} \Vert\theta_l^{i\!+\!1}\circ (W^{i\!+\!1}_l-W_l^i)^{\circ 2}\Vert_{1}/2+\!(\rho/2)\!\sum\nolimits_{l\!=\!1}^L\Vert b^{i\!+\!1}_l\!-\!b^i_l\Vert^2_2\!+\!(\rho/2)\!\sum\nolimits_{l\!=\!1}^L\Vert z^{i\!+\!1}_l\!-\!z^i_l\Vert^2_2+\sum\nolimits_{l=1}^{L-1} \Vert\tau_l^{i\!+\!1}\circ (a^{i\!+\!1}_l-a_l^i)^{\circ 2}\Vert_{1}/2)$; then the convergence rate of $c_k$ is $o(1/k)$. \label{thero: theorem 3} \end{theorem} \begin{proof} The proof of this theorem is given in Appendix \ref{sec:convergence rate proof} in the supplementary materials. \end{proof} \section{Experiments} \label{sec:experiment} In this section, the DLAM algorithm is evaluated on several benchmark datasets. The effectiveness, efficiency, and convergence properties of DLAM are compared with those of state-of-the-art methods. All experiments were conducted on 64-bit Ubuntu 16.04 LTS with an Intel(R) Xeon processor and a GTX1080Ti GPU.
\subsection{Experiment Setup} \subsubsection{Dataset} In this experiment, two benchmark datasets were used for comparison: MNIST \cite{lecun1998gradient} and Fashion MNIST \cite{xiao2017fashion}. The MNIST dataset has ten classes of handwritten-digit images and was first introduced by LeCun et al. in 1998 \cite{lecun1998gradient}. It contains 55,000 training samples and 10,000 test samples with 196 features each, as provided by the Keras library \cite{chollet2017deep}. Unlike the MNIST dataset, the Fashion MNIST dataset has ten classes of fashion-article images from the website of Zalando, Europe's largest online fashion platform \cite{xiao2017fashion}. The Fashion-MNIST dataset consists of 60,000 training samples and 10,000 test samples with 784 features each. \subsubsection{Experiment Settings} \indent We set up two different multi-layer neural network architectures in the experiment. The two network structures contained two hidden layers with $100$ and $500$ hidden units each, respectively. The rectified linear unit (ReLU) was used as the activation function for both network structures. The loss function was set as the deterministic cross-entropy loss. $\rho$ was set to $10^{-4}$. $\varepsilon$ was initialized as $10$ and updated adaptively as follows: if $R(z^k_L;y)>10\varepsilon^k$, then $\varepsilon^{k+1}=\max(2\varepsilon^k,1)$; if $\varepsilon^k>10R(z^k_L;y)$, then $\varepsilon^{k+1}=\min(\varepsilon^k/2,0.01)$. This balances the loss function $R(z_L;y)$ and $\varepsilon$. The number of iterations was set to $150$. In the experiment, one iteration means one epoch.
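This adaptive schedule can be transcribed directly as a small helper (our own Python transcription of the rule above; \texttt{risk} denotes $R(z^k_L;y)$):

\begin{verbatim}
def update_tolerance(eps, risk):
    # Grow eps when the training risk dominates it; shrink eps
    # when it dominates the risk (constants as stated in the text).
    if risk > 10.0 * eps:
        eps = max(2.0 * eps, 1.0)
    elif eps > 10.0 * risk:
        eps = min(eps / 2.0, 0.01)
    return eps
\end{verbatim}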
Objective value on} \centerline{the $100\times100$ neural network} \end{minipage} \hfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\columnwidth] {1b.pdf}} \centerline{(b). Objective value on}\centerline{the $500\times500$ neural network} \end{minipage} \caption{Convergence curves of DLAM algorithm on MNIST and Fashion MNIST datasets for two neural network structures: DLAM algorithm converged.} \label{fig:convergence} \vspace{-0.6cm} \end{figure} The convergence of DLAM algorithm is shown in Figure \ref{fig:convergence}. The X axis and Y axis denote number of iterations and the logorithm of objective value, respectively. Overall, the objective value decreased monotonically during iteration whatever network structures and datasets we choose. Specifically, the objective value droped tremendously at the early stage, and then converged smoothly towards the critical point of the problem. We also found that the objective value for the Fashion MNIST dataset decreased more quickly than that for the MNIST dataset. \subsubsection{Performance} \begin{figure}[h] \centering \small \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {2a.pdf}} \centerline{(a). The MNIST dataset}\centerline{ on the $100\times100$ neural network.} \end{minipage} \hfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {2b.pdf}} \centerline{(b). The Fashion MNIST dataset on} \centerline{the $100\times100$ neural network.} \end{minipage} \vfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {2c.pdf}} \centerline{(c). The MNIST dataset}\centerline{on the $500\times500$ neural network.} \end{minipage} \hfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {2d.pdf}} \centerline{(d). The Fashion MNIST dataset on} \centerline{the $500\times500$ neural network.} \end{minipage} \vspace{-0.3cm} \caption{Training accuracy of all methods for the MNIST and Fashion MNIST datasets on two neural network structures: DLAM algorithm performed competitively.} \label{fig:training accuracy} \vspace{-0.9cm} \end{figure} \begin{figure}[h] \centering \small \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {3a.pdf}} \centerline{(a). The MNIST dataset}\centerline{ on the $100\times100$ neural network.} \end{minipage} \hfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {3b.pdf}} \centerline{(b). The Fashion MNIST dataset on} \centerline{the $100\times100$ neural network.} \end{minipage} \vfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {3c.pdf}} \centerline{(c). The MNIST dataset}\centerline{on the $500\times500$ neural network.} \end{minipage} \hfill \begin{minipage} {0.49\linewidth} \centerline{\includegraphics[width=\textwidth] {3d.pdf}} \centerline{(d). The Fashion MNIST dataset on} \centerline{the $500\times500$ neural network.} \end{minipage} \caption{Test accuracy of all methods for the MNIST and Fashion MNIST datasets on two neural network structures: DLAM algorithm performed competitively.} \label{fig:test accuracy} \vspace{-0.4cm} \end{figure} \indent Figure \ref{fig:training accuracy} and Figure \ref{fig:test accuracy} show the curves of the training accuracy and test accuracy of our proposed DLAM algorithm and baselines, respectively. 
Overall, both the training accuracy and the test accuracy of our proposed DLAM algorithm outperformed those of all baselines on the MNIST dataset, while our proposed DLAM algorithm performed competitively on the Fashion MNIST dataset. Specifically, the curves of our DLAM algorithm rose quickly to $0.7$ at the early stage and then increased steadily towards $0.8$ or more. The curves of the SGD-related methods, i.e., SGD, Adadelta, and Adagrad, climbed more slowly than those of our proposed DLAM algorithm. The curves of ADMM also rose quickly to around $0.8$, but decreased slightly later on. \vspace{-0.3cm} \subsubsection{Efficiency} \indent In this subsection, the relationship between the running time per iteration of our proposed DLAM algorithm and two potential factors, namely the value of $\rho$ and the size of the training set, was explored. The running time was averaged over 150 iterations. The computational results for the MNIST and Fashion MNIST datasets on the $100\times100$ neural network are shown in Table \ref{tab:running time}. The number of training samples of the MNIST dataset ranged from 11,000 to 55,000, with an increase of 11,000 each time, whereas the number of training samples of the Fashion MNIST dataset ranged from 12,000 to 60,000, with an increase of 12,000 each time. The value of $\rho$ ranged from 0.0001 to 1, multiplied by 10 each time. Generally, the running time increased as the training set and the value of $\rho$ became larger. \begin{table}[!hbp] \centering \scriptsize \begin{tabular}{c|c|c|c|c|c} \hline\hline \multicolumn{6}{c}{MNIST dataset: from 11,000 to 55,000 training samples}\\ \hline \diagbox{$\rho$}{size} &11,000&22,000 &33,000 &44,000 &55,000 \\\hline 0.0001& 0.1692& 0.3216& 0.5010& 0.7164& 0.9413\\\hline 0.001& 0.2061& 0.4328& 0.6951& 0.9792& 1.2442\\\hline 0.01& 0.3334& 0.6516& 1.0277& 1.3956& 1.7783\\\hline 0.1& 0.4795& 0.9428& 1.4524& 1.9590& 2.4410\\\hline 1& 0.7684& 1.4810& 2.2626& 3.0299& 3.7504\\ \hline\hline \multicolumn{6}{c}{Fashion MNIST dataset: from 12,000 to 60,000 training samples}\\ \hline \diagbox{$\rho$}{size} &12,000&24,000&36,000&48,000&60,000\\ \hline 0.0001& 0.2500& 0.5081& 0.8492& 1.1911& 1.5092\\\hline 0.001& 0.2980& 0.5980& 0.9595& 1.3265& 1.6744\\\hline 0.01& 0.4199& 0.8028& 1.2787& 1.7535& 2.2025\\\hline 0.1& 0.5758& 1.0928& 1.7230& 2.3261& 2.9234\\\hline 1& 0.8795& 1.6464& 2.5580& 3.4492& 4.2902 \\\hline\hline \end{tabular} \caption{The relation between the running time per iteration (in seconds) and the size of the training set as well as the value of $\rho$: generally, the running time increased as the training set size and the value of $\rho$ increased.} \label{tab:running time} \vspace{-0.8cm} \end{table} \section{Conclusion} \indent Even though stochastic gradient descent (SGD) is a popular method to train deep neural networks, alternating minimization methods have recently attracted increasing attention from researchers because they offer several advantages, including solid theoretical guarantees and avoidance of the gradient vanishing problem. In this paper, we propose a novel formulation of the original deep neural network problem and a novel Deep Learning Alternating Minimization (DLAM) algorithm. Specifically, the nonlinear constraint is projected onto a convex set so that all subproblems are solvable. At the same time, the quadratic approximation technique and the backtracking algorithm are applied to improve scalability. Furthermore, several mild assumptions are established to prove the global convergence of our DLAM algorithm.
Experiments on real-world datasets demonstrate the convergence, effectiveness, and efficiency of our DLAM algorithm. \label{sec:conclusion}
\section{Introduction} Dimensionality reduction and manifold learning are used for feature extraction from raw data. One family of dimensionality reduction methods is metric learning, which learns a distance metric or an embedding space so that dissimilar points are separated and similar points are kept close. In supervised metric learning, we aim to discriminate classes by learning an appropriate metric. Dimensionality reduction methods can be divided into spectral, probabilistic, and deep methods \cite{ghojogh2021data}. Spectral methods have a geometrical approach and usually reduce to generalized eigenvalue problems \cite{ghojogh2019eigenvalue}. Probabilistic methods are based on probability distributions. Deep methods use neural networks for learning. In each of these categories, there exist several metric learning methods. In this paper, we review and introduce the most important metric learning algorithms in these categories. Note that there exist some other surveys on metric learning, such as \cite{yang2006distance,yang2007overview,kulis2013metric,bellet2013survey,wang2015survey,suarez2021tutorial}. A survey specific to deep metric learning is \cite{kaya2019deep}. A book on metric learning is \cite{bellet2015metric}. Finally, some Python toolboxes for metric learning are \cite{suarez2020pydml,de2020metric,musgrave2020pytorch}. The remainder of this paper is organized as follows. Section \ref{section_generalized_Mahalanobis_distance} defines the distance metric and the generalized Mahalanobis distance. Sections \ref{section_spectral_metric_learning}, \ref{section_probabilistic_metric_learning}, and \ref{section_deep_metric_learning} introduce and discuss spectral, probabilistic, and deep metric learning methods, respectively. Finally, Section \ref{section_conclusion} concludes the paper. The table of contents can be found at the end of the paper. \section*{Required Background for the Reader} This paper assumes that the reader has general knowledge of calculus, probability, linear algebra, and the basics of optimization. \section{Generalized Mahalanobis Distance Metric}\label{section_generalized_Mahalanobis_distance} \subsection{Distance Metric} \begin{definition}[Distance metric]\label{definition_distance_metric} Consider a metric space $\mathcal{X}$. A distance metric is a mapping $d: \mathcal{X} \times \mathcal{X} \rightarrow [0, \infty)$ which satisfies the following properties: \begin{itemize}\setlength\itemsep{0em} \item non-negativity: $d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \geq 0$ \item identity: $d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) = 0 \iff \ensuremath\boldsymbol{x}_i = \ensuremath\boldsymbol{x}_j$ \item symmetry: $d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) = d(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_i)$ \item triangle inequality: $d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \leq d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_k) + d(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_j)$ \end{itemize} where $\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_k \in \mathcal{X}$. \end{definition} An example of a distance metric is the Euclidean distance: \begin{align}\label{equation_Euclidean_distance} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2 := \sqrt{(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)}.
\end{align} \subsection{Mahalanobis Distance} The Mahalanobis distance is another distance metric, which was originally proposed in \cite{mahalanobis1930tests}. \begin{definition}[Mahalanobis distance \cite{mahalanobis1930tests}] Consider a $d$-dimensional metric space $\mathcal{X}$. Let two clouds or sets of points, $\mathcal{X}_1$ and $\mathcal{X}_2$, be in the data, i.e., $\mathcal{X}_1, \mathcal{X}_2 \subseteq \mathcal{X}$. Consider a point in each set, i.e., $\ensuremath\boldsymbol{x}_i \in \mathcal{X}_1$ and $\ensuremath\boldsymbol{x}_j \in \mathcal{X}_2$. The Mahalanobis distance between the two points is: \begin{align}\label{equation_Mahalanobis_distance} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{\Sigma}} := \sqrt{(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{\Sigma}^{-1} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)}, \end{align} where $\ensuremath\boldsymbol{\Sigma} \in \mathbb{R}^{d \times d}$ is the covariance matrix of the data in the two sets $\mathcal{X}_1$ and $\mathcal{X}_2$. If the points $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}_j$ are the means of the sets $\mathcal{X}_1$ and $\mathcal{X}_2$, respectively, as the representatives of the sets, this Mahalanobis distance is a good measure of the distance between the sets \cite{mclachlan1999mahalanobis}: \begin{align} \|\ensuremath\boldsymbol{\mu}_1 - \ensuremath\boldsymbol{\mu}_2\|_{\ensuremath\boldsymbol{\Sigma}} := \sqrt{(\ensuremath\boldsymbol{\mu}_1 - \ensuremath\boldsymbol{\mu}_2)^\top \ensuremath\boldsymbol{\Sigma}^{-1} (\ensuremath\boldsymbol{\mu}_1 - \ensuremath\boldsymbol{\mu}_2)}, \end{align} where $\ensuremath\boldsymbol{\mu}_1$ and $\ensuremath\boldsymbol{\mu}_2$ are the means of the sets $\mathcal{X}_1$ and $\mathcal{X}_2$, respectively. Let $\mathcal{X}_1 := \{\ensuremath\boldsymbol{x}_{1,i}\}_{i=1}^{n_1}$ and $\mathcal{X}_2 := \{\ensuremath\boldsymbol{x}_{2,i}\}_{i=1}^{n_2}$. The unbiased sample covariance matrices of these two sets are: \begin{align*} \ensuremath\boldsymbol{\Sigma}_1 := \frac{1}{n_1-1} \sum_{i=1}^{n_1} (\ensuremath\boldsymbol{x}_{1,i} - \ensuremath\boldsymbol{\mu}_1) (\ensuremath\boldsymbol{x}_{1,i} - \ensuremath\boldsymbol{\mu}_1)^\top, \end{align*} and $\ensuremath\boldsymbol{\Sigma}_2$ similarly. The covariance matrix $\ensuremath\boldsymbol{\Sigma}$ can be the pooled unbiased sample covariance matrix \cite{mclachlan1999mahalanobis}: \begin{align*} \ensuremath\boldsymbol{\Sigma} := \frac{1}{n_1 + n_2 - 2} \Big( (n_1 - 1) \ensuremath\boldsymbol{\Sigma}_1 + (n_2 - 1) \ensuremath\boldsymbol{\Sigma}_2 \Big). \end{align*} The Mahalanobis distance can also be defined between a point $\ensuremath\boldsymbol{x}$ and a cloud or set of points $\mathcal{X}$ \cite{de2000mahalanobis}. Let $\ensuremath\boldsymbol{\mu}$ and $\ensuremath\boldsymbol{\Sigma}$ be the mean and the (sample) covariance matrix of the set $\mathcal{X}$. The Mahalanobis distance between $\ensuremath\boldsymbol{x}$ and $\mathcal{X}$ is: \begin{align} \|\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{\mu}\|_{\ensuremath\boldsymbol{\Sigma}} := \sqrt{(\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{\mu})^\top \ensuremath\boldsymbol{\Sigma}^{-1} (\ensuremath\boldsymbol{x} - \ensuremath\boldsymbol{\mu})}.
\end{align} \end{definition} \begin{figure}[!t] \centering \includegraphics[width=3in]{./images/Mahalanobis} \caption{An example for comparison of the Euclidean and Mahalanobis distances.} \label{figure_Mahalanobis} \end{figure} \begin{remark}[Justification of the Mahalanobis distance \cite{de2000mahalanobis}] Consider the two clouds of data, $\mathcal{X}_1$ and $\mathcal{X}_2$, depicted in Fig. \ref{figure_Mahalanobis}. We want to compute the distance of a point $\ensuremath\boldsymbol{x}$ from these two data clouds to see which cloud this point is closer to. The Euclidean distance ignores the scatter/variance of the clouds and only measures the distances of the point from the means of the clouds. Hence, in this example, it says that $\ensuremath\boldsymbol{x}$ belongs to $\mathcal{X}_1$ because $\ensuremath\boldsymbol{x}$ is closer to the mean of $\mathcal{X}_1$ than to the mean of $\mathcal{X}_2$. However, the Mahalanobis distance takes the variance of the clouds into account and says that $\ensuremath\boldsymbol{x}$ belongs to $\mathcal{X}_2$ because, once the scatters are accounted for, $\ensuremath\boldsymbol{x}$ is closer to $\mathcal{X}_2$ than to $\mathcal{X}_1$. Visually, a human would also say that $\ensuremath\boldsymbol{x}$ belongs to $\mathcal{X}_2$; hence, the Mahalanobis distance performs better than the Euclidean distance by considering the variances of the data. \end{remark} \subsection{Generalized Mahalanobis Distance} \begin{definition}[Generalized Mahalanobis distance] In the Mahalanobis distance, i.e., Eq. (\ref{equation_Mahalanobis_distance}), the covariance matrix $\ensuremath\boldsymbol{\Sigma}$ and its inverse $\ensuremath\boldsymbol{\Sigma}^{-1}$ are positive semi-definite. We can replace $\ensuremath\boldsymbol{\Sigma}^{-1}$ with a positive semi-definite weight matrix $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ in the squared Mahalanobis distance. We name this distance the generalized Mahalanobis distance: \begin{equation}\label{equation_generalized_Mahalanobis_distance} \begin{aligned} & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} := \sqrt{(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)}. \\ &\therefore\,\,\,\, \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 := (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j). \end{aligned} \end{equation} We define the generalized Mahalanobis norm as: \begin{align} \|\ensuremath\boldsymbol{x}\|_{\ensuremath\boldsymbol{W}} := \sqrt{\ensuremath\boldsymbol{x}^\top \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{x}}. \end{align} \end{definition} \begin{lemma}[Triangle inequality of norm] Let $\|.\|$ be a norm. Using the Cauchy-Schwarz inequality, it satisfies the triangle inequality: \begin{align}\label{equation_xi_xj_triangle_inequality} \|\ensuremath\boldsymbol{x}_i + \ensuremath\boldsymbol{x}_j\| \leq \|\ensuremath\boldsymbol{x}_i\| + \|\ensuremath\boldsymbol{x}_j\|.
\end{align} \end{lemma} \begin{proof} \begin{align*} \|\ensuremath\boldsymbol{x}_i + \ensuremath\boldsymbol{x}_j\|^2 &= (\ensuremath\boldsymbol{x}_i + \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{x}_i + \ensuremath\boldsymbol{x}_j) \\ &= \|\ensuremath\boldsymbol{x}_i\|^2 + \|\ensuremath\boldsymbol{x}_j\|^2 + 2\ensuremath\boldsymbol{x}_i^\top \ensuremath\boldsymbol{x}_j \\ &\overset{(a)}{\leq} \|\ensuremath\boldsymbol{x}_i\|^2 + \|\ensuremath\boldsymbol{x}_j\|^2 + 2\|\ensuremath\boldsymbol{x}_i\| \|\ensuremath\boldsymbol{x}_j\| \\ &= (\|\ensuremath\boldsymbol{x}_i\| + \|\ensuremath\boldsymbol{x}_j\|)^2, \end{align*} where $(a)$ is because of the Cauchy-Schwarz inequality, i.e., $\ensuremath\boldsymbol{x}_i^\top \ensuremath\boldsymbol{x}_j \leq \|\ensuremath\boldsymbol{x}_i\| \|\ensuremath\boldsymbol{x}_j\|$. Taking the square root of both sides gives Eq. (\ref{equation_xi_xj_triangle_inequality}). Q.E.D. \end{proof} \begin{proposition} The generalized Mahalanobis distance is a valid distance metric. \end{proposition} \begin{proof} We show that the characteristics in Definition \ref{definition_distance_metric} are satisfied: \begin{itemize} \item non-negativity: as $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$, Eq. (\ref{equation_generalized_Mahalanobis_distance}) is non-negative. \item identity: if $\ensuremath\boldsymbol{x}_i = \ensuremath\boldsymbol{x}_j$, we have $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} = 0$ according to Eq. (\ref{equation_generalized_Mahalanobis_distance}). Conversely, if $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} = 0$ and $\ensuremath\boldsymbol{W}$ is positive definite, we have $\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j = \ensuremath\boldsymbol{0} \implies \ensuremath\boldsymbol{x}_i = \ensuremath\boldsymbol{x}_j$ (for a singular positive semi-definite $\ensuremath\boldsymbol{W}$, this implication may fail, and the distance is only a pseudo-metric). \item symmetry: \\$\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} = \sqrt{(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)} = \sqrt{(\ensuremath\boldsymbol{x}_j - \ensuremath\boldsymbol{x}_i)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_j - \ensuremath\boldsymbol{x}_i)} = \|\ensuremath\boldsymbol{x}_j - \ensuremath\boldsymbol{x}_i\|_{\ensuremath\boldsymbol{W}}$. \item triangle inequality: $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} = \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k + \ensuremath\boldsymbol{x}_k - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} \overset{(\ref{equation_xi_xj_triangle_inequality})}{\leq} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k\|_{\ensuremath\boldsymbol{W}} + \|\ensuremath\boldsymbol{x}_k - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}$. \end{itemize} \end{proof} \begin{remark} It is noteworthy that $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is required so that the generalized Mahalanobis distance is convex and satisfies the triangle inequality. \end{remark} \begin{remark} The weight matrix $\ensuremath\boldsymbol{W}$ in Eq. (\ref{equation_generalized_Mahalanobis_distance}) weights the dimensions and encodes correlations between the dimensions of the data points. In other words, it changes the space in a way that the scatters of the clouds are taken into account.
\end{remark} \begin{remark} The Euclidean distance is a special case of the generalized Mahalanobis distance, where the weight matrix is the identity matrix, i.e., $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{I}$ (cf. Eqs. (\ref{equation_Euclidean_distance}) and (\ref{equation_generalized_Mahalanobis_distance})). In other words, the Euclidean distance does not change the space for computing the distance. \end{remark} \begin{proposition}[Projection in metric learning]\label{proposition_metric_learning_projection} Consider the eigenvalue decomposition of the weight matrix $\ensuremath\boldsymbol{W}$ in the generalized Mahalanobis distance, with $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Lambda}$ as the matrix of eigenvectors and the diagonal matrix of eigenvalues of the weight matrix, respectively. Let $\ensuremath\boldsymbol{U} := \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda}^{(1/2)}$. The generalized Mahalanobis distance can be seen as the Euclidean distance after applying a linear projection onto the column space of $\ensuremath\boldsymbol{U}$: \begin{equation}\label{equation_metric_learning_projection} \begin{aligned} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 &= (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j) \\ &= \|\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j\|_2^2. \end{aligned} \end{equation} \end{proposition} If $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$ with $p \leq d$, the column space of the projection matrix $\ensuremath\boldsymbol{U}$ is a $p$-dimensional subspace. \begin{proof} By the eigenvalue decomposition of $\ensuremath\boldsymbol{W}$, we have: \begin{align}\label{equation_W_U_UT} \ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top \overset{(a)}{=} \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda}^{(1/2)} \ensuremath\boldsymbol{\Lambda}^{(1/2)} \ensuremath\boldsymbol{V}^\top \overset{(b)}{=} \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top, \end{align} where $(a)$ is because $\ensuremath\boldsymbol{W}$ is positive semi-definite, so all its eigenvalues are non-negative and $\ensuremath\boldsymbol{\Lambda}$ can be written as the product $\ensuremath\boldsymbol{\Lambda}^{(1/2)} \ensuremath\boldsymbol{\Lambda}^{(1/2)}$ of its square roots. Also, $(b)$ is because we define $\ensuremath\boldsymbol{U} := \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda}^{(1/2)}$. Substituting Eq. (\ref{equation_W_U_UT}) in Eq. (\ref{equation_generalized_Mahalanobis_distance}) gives: \begin{align*} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 &= (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \\ &= (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j) \\ &= \|\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j\|_2^2. \end{align*} Q.E.D. It is noteworthy that Eq. (\ref{equation_W_U_UT}) can also be obtained using the singular value decomposition rather than the eigenvalue decomposition; in that case, the matrices of right and left singular vectors are equal because of the symmetry of $\ensuremath\boldsymbol{W}$. \end{proof} \subsection{The Main Idea of Metric Learning} Consider a $d$-dimensional dataset $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n \subset \mathbb{R}^d$ of size $n$. Assume some data points are similar in some sense; for example, they have a similar pattern or the same characteristics. Hence, we have a set of similar pairs of points, denoted by $\mathcal{S}$. In contrast, we can have dissimilar points, which are different in pattern or characteristics. Let the set of dissimilar pairs of points be denoted by $\mathcal{D}$. In summary: \begin{equation} \begin{aligned} & (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \text{ if } \ensuremath\boldsymbol{x}_i \text{ and } \ensuremath\boldsymbol{x}_j \text{ are similar}, \\ & (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D} \text{ if } \ensuremath\boldsymbol{x}_i \text{ and } \ensuremath\boldsymbol{x}_j \text{ are dissimilar}. \end{aligned} \end{equation} If class labels are available for the dataset, the measure of similarity and dissimilarity can be belonging to the same or to different classes. In this case, we have: \begin{equation} \begin{aligned} & (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \text{ if } \ensuremath\boldsymbol{x}_i \text{ and } \ensuremath\boldsymbol{x}_j \text{ are in the same class}, \\ & (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D} \text{ if } \ensuremath\boldsymbol{x}_i \text{ and } \ensuremath\boldsymbol{x}_j \text{ are in different classes}. \end{aligned} \end{equation} In metric learning, we learn the weight matrix so that the distances between similar points become smaller and the distances between dissimilar points become larger. In this way, the variances of similar and dissimilar points become smaller and larger, respectively. A 2D visualization of metric learning is depicted in Fig. \ref{figure_Metric_learning}. If the class labels are available, metric learning tries to make the intra-class variance smaller and the inter-class variance larger. This is the same idea as in Fisher Discriminant Analysis (FDA) \cite{fisher1936use,ghojogh2019fisher}. \begin{figure}[!t] \centering \includegraphics[width=3in]{./images/Metric_learning} \caption{Visualizing metric learning in 2D: (a) the contour of the Euclidean distance, which does not properly discriminate the classes, and (b) the contour of the learned generalized Mahalanobis distance, which discriminates the classes better.} \label{figure_Metric_learning} \end{figure} \section{Spectral Metric Learning}\label{section_spectral_metric_learning} \subsection{Spectral Methods Using Scatters} \subsubsection{The First Spectral Method} The first metric learning method was proposed in \cite{xing2002distance}. In this method, we minimize the distances between the similar points through the positive semi-definite weight matrix $\ensuremath\boldsymbol{W}$: \begin{equation*} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}.
\end{aligned} \end{equation*} However, the solution of this optimization problem is trivial, i.e., $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{0}$. Hence, we add a constraint on the dissimilar points to have distances larger than some positive amount: \begin{equation}\label{equation_ML_optimization_spectral_first_method} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & \text{subject to} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} \geq \alpha, \\ & & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\alpha > 0$ is some positive number such as $\alpha = 1$. \begin{lemma}[\cite{xing2002distance}] If the constraint in Eq. (\ref{equation_ML_optimization_spectral_first_method}) is squared, i.e., $\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \geq \alpha$, the solution of the optimization will have rank $1$. Hence, a non-squared constraint is used in the optimization problem. \end{lemma} \begin{proof} If the constraint in Eq. (\ref{equation_ML_optimization_spectral_first_method}) is squared, the problem is equivalent to (see {\citep[Appendix B]{ghojogh2019fisher}} for proof): \begin{equation*} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{maximize}} & & \frac{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2}{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2}, \end{aligned} \end{equation*} which is a Rayleigh-Ritz quotient \cite{ghojogh2019eigenvalue}.
We can restate the sums of squared distances as: \begin{equation}\label{equation_spectral_ML_first_method_trace_W_Sigma_S} \begin{aligned} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 = \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}), \\ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 = \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}), \end{aligned} \end{equation} where $\textbf{tr}(.)$ denotes the trace of a matrix and: \begin{equation}\label{equation_spectral_ML_first_method_Sigma_S} \begin{aligned} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} := \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top, \\ \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} := \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top. \end{aligned} \end{equation} Hence, we have: \begin{equation*} \begin{aligned} & \frac{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2}{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2} = \frac{\textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}})}{\textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}})} \overset{(\ref{equation_W_U_UT})}{=} \frac{\textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}})}{\textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}})} \\ & \overset{(a)}{=} \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{U})}{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{U})} \overset{(b)}{=} \frac{\sum_{i=1}^d \ensuremath\boldsymbol{u}_i^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{u}_i}{\sum_{i=1}^d \ensuremath\boldsymbol{u}_i^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{u}_i}, \end{aligned} \end{equation*} where $(a)$ is because of the cyclic property of the trace and $(b)$ is because $\ensuremath\boldsymbol{U} = [\ensuremath\boldsymbol{u}_1, \dots, \ensuremath\boldsymbol{u}_d]$. Maximizing this Rayleigh-Ritz quotient results in the following generalized eigenvalue problem \cite{ghojogh2019eigenvalue}: \begin{align*} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{u}_1 = \lambda \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{u}_1, \end{align*} where $\ensuremath\boldsymbol{u}_1$ is the eigenvector with the largest eigenvalue and the other eigenvectors $\ensuremath\boldsymbol{u}_2, \dots, \ensuremath\boldsymbol{u}_d$ are zero vectors. Q.E.D. \end{proof} Eq. (\ref{equation_ML_optimization_spectral_first_method}) can be restated as a maximization problem: \begin{equation}\label{equation_ML_optimization_spectral_first_method_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{maximize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} \\ & \text{subject to} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \leq \alpha, \\ & & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} We can solve this problem using the projected gradient method \cite{ghojogh2021kkt}, where a step of gradient ascent is followed by projection onto the two constraint sets: \begin{align*} & \ensuremath\boldsymbol{W} := \ensuremath\boldsymbol{W} + \eta \frac{\partial}{\partial \ensuremath\boldsymbol{W}} \Big( \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} \Big), \\ & \ensuremath\boldsymbol{W} := \arg \min_{\ensuremath\boldsymbol{Q}} \Big(\|\ensuremath\boldsymbol{Q} - \ensuremath\boldsymbol{W}\|_F^2 \,\text{ s.t.} \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{Q}}^2 \leq \alpha\Big), \\ & \ensuremath\boldsymbol{W} := \ensuremath\boldsymbol{V}\, \textbf{diag}(\max(\lambda_1, 0), \dots, \max(\lambda_d, 0))\, \ensuremath\boldsymbol{V}^\top, \end{align*} where $\eta>0$ is the learning rate and $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Lambda} = \textbf{diag}(\lambda_1, \dots, \lambda_d)$ contain the eigenvectors and eigenvalues of $\ensuremath\boldsymbol{W}$, respectively (see Eq. (\ref{equation_W_U_UT})). \subsubsection{Formulating as Semidefinite Programming} Another metric learning method \cite{ghodsi2007improving} minimizes the distances between similar points and maximizes the distances between dissimilar points. For this, we minimize the distances of the similar points plus the negation of the distances of the dissimilar points. The weight matrix should be positive semi-definite to satisfy the triangle inequality and convexity. The trace of the weight matrix is also set to a constant to eliminate the trivial solution $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{0}$. The optimization problem is: \begin{equation}\label{equation_ML_optimization_spectral_method2} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & & & - \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \\ & & & \textbf{tr}(\ensuremath\boldsymbol{W}) = 1, \end{aligned} \end{equation} where $|.|$ denotes the cardinality of a set.
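In practice, problem (\ref{equation_ML_optimization_spectral_method2}) can be handed directly to a generic convex solver. The following minimal sketch illustrates this, assuming the \texttt{cvxpy} package is available; it uses the trace identities of Eq. (\ref{equation_spectral_ML_first_method_trace_W_Sigma_S}) and random stand-in matrices for the normalized scatters $(1/|\mathcal{S}|)\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}$ and $(1/|\mathcal{D}|)\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}$:
\begin{verbatim}
import cvxpy as cp
import numpy as np

d = 5
rng = np.random.default_rng(0)
A = rng.standard_normal((d, 20)); Sigma_S = A @ A.T / 20  # stand-in scatters
B = rng.standard_normal((d, 20)); Sigma_D = B @ B.T / 20

W = cp.Variable((d, d), PSD=True)          # W is positive semi-definite
objective = cp.Minimize(cp.trace(W @ Sigma_S) - cp.trace(W @ Sigma_D))
problem = cp.Problem(objective, [cp.trace(W) == 1])
problem.solve()                            # solved as a semidefinite program
print(np.round(W.value, 3))
\end{verbatim}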
\begin{lemma}[\cite{ghodsi2007improving}]\label{lemma_sepctral_ML_method2_objective} The objective function can be simplified as: \begin{equation} \begin{aligned} &\frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ &~~ = \textbf{vec}(\ensuremath\boldsymbol{W})^\top \Big( \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \textbf{vec}\big((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top\big) \\ &~~~~~~~~~~~~~~~ - \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \textbf{vec}\big((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top\big) \Big), \end{aligned} \end{equation} where $\textbf{vec}(.)$ vectorizes the matrix into a vector \cite{ghojogh2021kkt}. \end{lemma} \begin{proof} See {\citep[Section 2.1]{ghodsi2007improving}} for proof. \end{proof} According to Lemma \ref{lemma_sepctral_ML_method2_objective}, Eq. (\ref{equation_ML_optimization_spectral_method2}) is a Semidefinite Programming (SDP) problem. It can be solved iteratively using the interior-point method \cite{ghojogh2021kkt}. \subsubsection{Relevant to Fisher Discriminant Analysis}\label{section_relation_to_FDA} Another metric learning method \cite{alipanahi2008distance} has two approaches, introduced in the following. That paper also discussed the relation of metric learning to Fisher discriminant analysis \cite{fisher1936use,ghojogh2019fisher}. \hfill\break \textbf{-- Approach 1: } As $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$, the weight matrix can be decomposed as in Eq. (\ref{equation_W_U_UT}), i.e., $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. Hence, we have: \begin{align} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 &\overset{(\ref{equation_generalized_Mahalanobis_distance})}{=} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \nonumber\\ &\overset{(a)}{=} \textbf{tr}\big((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)\big) \nonumber\\ &\overset{(\ref{equation_W_U_UT})}{=} \textbf{tr}\big((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)\big) \nonumber\\ &\overset{(b)}{=} \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}\big), \label{equation_ML_spectral_method3_trace_UT_x_xT_U} \end{align} where $(a)$ is because a scalar is equal to its trace and $(b)$ is because of the cyclic property of the trace. We can substitute Eq. (\ref{equation_ML_spectral_method3_trace_UT_x_xT_U}) in Eq. (\ref{equation_ML_optimization_spectral_method2}) to obtain an optimization problem: \begin{equation}\label{equation_ML_optimization_spectral_method3} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{minimize}} & & \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}\big) \\ & & & \!\!\!\!\!\!\!- \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}\big) \\ & \text{subject to} & & \textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top) = 1, \end{aligned} \end{equation} whose optimization variable is $\ensuremath\boldsymbol{U}$. Note that the constraint $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is implicitly satisfied because of the decomposition $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. We define: \begin{equation}\label{equation_Sigma_S_prime_Sigma_D_prime} \begin{aligned} & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} := \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \overset{(\ref{equation_spectral_ML_first_method_Sigma_S})}{=} \frac{1}{|\mathcal{S}|} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}, \\ & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}} := \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \overset{(\ref{equation_spectral_ML_first_method_Sigma_S})}{=} \frac{1}{|\mathcal{D}|} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}. \end{aligned} \end{equation} Hence, Eq. (\ref{equation_ML_optimization_spectral_method3}) can be restated as: \begin{equation}\label{equation_ML_optimization_spectral_method3_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U}) \\ & \text{subject to} & & \textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top) = 1, \end{aligned} \end{equation} whose Lagrangian is \cite{ghojogh2021kkt}: \begin{align*} \mathcal{L} = \textbf{tr}(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U}) - \lambda (\textbf{tr}(\ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top) - 1).
\end{align*} Taking the derivative of the Lagrangian and setting it to zero gives: \begin{align} &\frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{U}} = 2(\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U} - 2\lambda \ensuremath\boldsymbol{U} \overset{\text{set}}{=} \ensuremath\boldsymbol{0} \nonumber\\ &\implies (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U} = \lambda \ensuremath\boldsymbol{U}, \label{equation_spectral_ML_method3_eig_problem} \end{align} which is the eigenvalue problem for $(\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}})$ \cite{ghojogh2019eigenvalue}. Hence, $\ensuremath\boldsymbol{U}$ is the eigenvector of $(\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}})$ with the smallest eigenvalue, because Eq. (\ref{equation_ML_optimization_spectral_method3}) is a minimization problem. \hfill\break \textbf{-- Approach 2: } We can change the constraint in Eq. (\ref{equation_ML_optimization_spectral_method3_2}) to make the projection matrix orthogonal, i.e., $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$. Alternatively, we can make the rotation of the projection matrix by the matrix $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}$ orthogonal, i.e., $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$. Hence, the optimization problem becomes: \begin{equation}\label{equation_ML_optimization_spectral_method3_approach2} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U}) \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}\, \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} whose Lagrangian is \cite{ghojogh2021kkt}: \begin{align} &\mathcal{L} = \textbf{tr}(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U}) - \textbf{tr}(\ensuremath\boldsymbol{\Lambda}^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}\, \ensuremath\boldsymbol{U} - \ensuremath\boldsymbol{I})). \nonumber\\ &\frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{U}} = 2(\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U} - 2 \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}\, \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda} \overset{\text{set}}{=} \ensuremath\boldsymbol{0} \nonumber\\ &\implies (\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}) \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}\, \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{\Lambda}, \end{align} which is the generalized eigenvalue problem for $(\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}, \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}})$ \cite{ghojogh2019eigenvalue}. Hence, $\ensuremath\boldsymbol{U}$ is a matrix whose columns are the eigenvectors, sorted from the smallest to the largest eigenvalue.
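As a concrete illustration, the following minimal Python sketch solves the generalized eigenvalue problem of Approach 2 using \texttt{scipy} (with random stand-in matrices for $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}$) and forms the metric weight matrix via Eq. (\ref{equation_W_U_UT}):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 50)); S_sim = A @ A.T / 50   # stand-in Sigma'_S
B = rng.standard_normal((5, 50)); S_dis = B @ B.T / 50   # stand-in Sigma'_D

# Generalized eigenvalue problem (S_sim - S_dis) u = lambda * S_sim * u;
# eigh returns eigenvalues in ascending order, so the first p columns
# correspond to the smallest eigenvalues.
p = 2
eigenvalues, eigenvectors = eigh(S_sim - S_dis, S_sim)
U = eigenvectors[:, :p]     # projection matrix
W = U @ U.T                 # weight matrix of the generalized Mahalanobis
\end{verbatim}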
The optimization problem is similar to the optimization of Fisher discriminant analysis (FDA) \cite{fisher1936use,ghojogh2019fisher}, where $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}$ are replaced with the intra-class and inter-class covariance matrices of the data, respectively. This shows the relation of this method to FDA. It makes sense because metric learning and FDA share the same goal: decreasing the variance of similar points and increasing the variance of dissimilar points. \subsubsection{Relevant Component Analysis (RCA)}\label{section_RCA} Suppose the $n$ data points can be divided into $c$ clusters, or so-called chunklets. If class labels are available, the classes are the chunklets. If $\mathcal{X}_l$ denotes the data of the $l$-th cluster and $\ensuremath\boldsymbol{\mu}_l$ is the mean of $\mathcal{X}_l$, the summation of intra-cluster scatters is: \begin{align}\label{equation_intra_cluster_scatter} \mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{S}_w := \frac{1}{n} \sum_{l=1}^c \sum_{\ensuremath\boldsymbol{x}_i \in \mathcal{X}_l} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{\mu}_l) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{\mu}_l)^\top. \end{align} Relevant Component Analysis (RCA) \cite{shental2002adjustment} is a metric learning method. In this method, we first apply Principal Component Analysis (PCA) \cite{ghojogh2019unsupervised} on the data using the total scatter of the data. Let the projection matrix of PCA be denoted by $\ensuremath\boldsymbol{U}$. After projection onto the PCA subspace, the summation of intra-cluster scatters is $\widehat{\ensuremath\boldsymbol{S}}_w := \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_w \ensuremath\boldsymbol{U}$ because of the quadratic characteristic of covariance. RCA uses $\widehat{\ensuremath\boldsymbol{S}}_w$ as the covariance matrix in the Mahalanobis distance, i.e., Eq. (\ref{equation_Mahalanobis_distance}). According to Eq. (\ref{equation_metric_learning_projection}), the subspace of RCA is obtained by the eigenvalue (or singular value) decomposition of $\widehat{\ensuremath\boldsymbol{S}}_w^{-1}$ (see Eq. (\ref{equation_W_U_UT})). \subsubsection{Discriminative Component Analysis (DCA)}\label{section_DCA} Discriminative Component Analysis (DCA) \cite{hoi2006learning} is another spectral metric learning method based on the scatters of clusters/classes. Consider the $c$ clusters, chunklets, or classes of data. The intra-class scatter is as in Eq. (\ref{equation_intra_cluster_scatter}). The inter-class scatter is: \begin{equation}\label{equation_inter_cluster_scatter} \begin{aligned} &\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{S}_b := \frac{1}{n} \sum_{l=1}^c \sum_{j=1}^c (\ensuremath\boldsymbol{\mu}_l - \ensuremath\boldsymbol{\mu}_j) (\ensuremath\boldsymbol{\mu}_l - \ensuremath\boldsymbol{\mu}_j)^\top, \text{ or } \\ &\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{S}_b := \frac{1}{n} \sum_{l=1}^c (\ensuremath\boldsymbol{\mu}_l - \ensuremath\boldsymbol{\mu}) (\ensuremath\boldsymbol{\mu}_l - \ensuremath\boldsymbol{\mu})^\top, \end{aligned} \end{equation} where $\ensuremath\boldsymbol{\mu}_l$ is the mean of the $l$-th class and $\ensuremath\boldsymbol{\mu}$ is the total mean of the data.
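For concreteness, the following short Python sketch computes the intra-class scatter $\ensuremath\boldsymbol{S}_w$ of Eq. (\ref{equation_intra_cluster_scatter}) and the inter-class scatter $\ensuremath\boldsymbol{S}_b$ (second form of Eq. (\ref{equation_inter_cluster_scatter})) from a labeled dataset; the data and labels are random stand-ins:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, c = 4, 90, 3
X = rng.standard_normal((d, n))      # stand-in data, one column per point
y = rng.integers(0, c, size=n)       # stand-in class labels

mu = X.mean(axis=1, keepdims=True)   # total mean
S_w = np.zeros((d, d)); S_b = np.zeros((d, d))
for l in range(c):
    X_l = X[:, y == l]
    mu_l = X_l.mean(axis=1, keepdims=True)
    D = X_l - mu_l
    S_w += D @ D.T / n                          # intra-class scatter
    S_b += (mu_l - mu) @ (mu_l - mu).T / n      # inter-class scatter
\end{verbatim}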
According to Proposition \ref{proposition_metric_learning_projection}, metric learning can be seen as the Euclidean distance after projection onto the column space of a projection matrix $\ensuremath\boldsymbol{U}$, where $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. Similar to Fisher discriminant analysis \cite{fisher1936use,ghojogh2019fisher}, DCA maximizes the inter-class variance and minimizes the intra-class variance after projection. Hence, its optimization is: \begin{equation}\label{equation_optimization_DCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_b \ensuremath\boldsymbol{U})}{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_w \ensuremath\boldsymbol{U})}, \end{aligned} \end{equation} which is a generalized Rayleigh-Ritz quotient. The solution $\ensuremath\boldsymbol{U}$ to this optimization problem is given by the generalized eigenvalue problem $(\ensuremath\boldsymbol{S}_b, \ensuremath\boldsymbol{S}_w)$ \cite{ghojogh2019eigenvalue}. According to Eq. (\ref{equation_W_U_UT}), we can set the weight matrix of the generalized Mahalanobis distance as $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$, where $\ensuremath\boldsymbol{U}$ is the matrix of eigenvectors. \subsubsection{High Dimensional Discriminative Component Analysis} Another spectral method for metric learning is \cite{xiang2008learning}, which minimizes the intra-class variance and maximizes the inter-class variance by the same optimization problem as Eq. (\ref{equation_optimization_DCA}), with an additional constraint on the orthogonality of the projection matrix, i.e., $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$. This problem can be restated by moving the denominator into the objective as a penalty: \begin{equation}\label{equation_optimization_Sb_lambda_Sw} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \textbf{tr}(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{S}_b - \lambda \ensuremath\boldsymbol{S}_w) \ensuremath\boldsymbol{U}) \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter. The solution to this problem is the eigenvalue problem for $\ensuremath\boldsymbol{S}_b - \lambda \ensuremath\boldsymbol{S}_w$. The eigenvectors are the columns of $\ensuremath\boldsymbol{U}$, and the weight matrix of the generalized Mahalanobis distance is obtained using Eq. (\ref{equation_W_U_UT}). If the dimensionality of the data is large, computing the eigenvectors of $(\ensuremath\boldsymbol{S}_b - \lambda \ensuremath\boldsymbol{S}_w) \in \mathbb{R}^{d \times d}$ is very time-consuming. According to {\citep[Theorem 3]{xiang2008learning}}, the optimization problem (\ref{equation_optimization_Sb_lambda_Sw}) can be solved in the orthogonal complement of the null space of $\ensuremath\boldsymbol{S}_b + \ensuremath\boldsymbol{S}_w$ without loss of any information (see {\citep[Appendix A]{xiang2008learning}} for proof). Hence, if $d \gg 1$, we find $\ensuremath\boldsymbol{U}$ as follows. Let $\ensuremath\boldsymbol{X} := [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$ be the matrix of data.
Let $\ensuremath\boldsymbol{A}_w$ and $\ensuremath\boldsymbol{A}_b$ be the adjacency matrices for the sets $\mathcal{S}$ and $\mathcal{D}$, respectively; for example, if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$, then $\ensuremath\boldsymbol{A}_w(i,j) = 1$ and otherwise $\ensuremath\boldsymbol{A}_w(i,j) = 0$. If $\ensuremath\boldsymbol{L}_w$ and $\ensuremath\boldsymbol{L}_b$ are the Laplacian matrices of $\ensuremath\boldsymbol{A}_w$ and $\ensuremath\boldsymbol{A}_b$, respectively, we have $\ensuremath\boldsymbol{S}_w = 0.5 \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{L}_w \ensuremath\boldsymbol{X}^\top$ and $\ensuremath\boldsymbol{S}_b = 0.5 \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{X}^\top$ (see \cite{belkin2002laplacian,ghojogh2021laplacian} for proof). We have $\textbf{tr}(\ensuremath\boldsymbol{S}_w + \ensuremath\boldsymbol{S}_b) = \textbf{tr}(\ensuremath\boldsymbol{X} (0.5\ensuremath\boldsymbol{L}_w + 0.5\ensuremath\boldsymbol{L}_b) \ensuremath\boldsymbol{X}^\top) = \textbf{tr}(\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X} (0.5\ensuremath\boldsymbol{L}_w + 0.5\ensuremath\boldsymbol{L}_b))$ because of the cyclic property of the trace. If the rank of $\ensuremath\boldsymbol{L} := \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{X} (0.5\ensuremath\boldsymbol{L}_w + 0.5\ensuremath\boldsymbol{L}_b) \in \mathbb{R}^{n \times n}$ is $r \leq n$, it has $r$ non-zero eigenvalues, whose corresponding eigenvectors we compute. We stack these eigenvectors to form $\ensuremath\boldsymbol{V} \in \mathbb{R}^{d \times r}$. The intra-class and inter-class scatters after projection onto the column space of $\ensuremath\boldsymbol{V}$ are $\ensuremath\boldsymbol{S}'_w := \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{S}_w \ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{S}'_b := \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{S}_b \ensuremath\boldsymbol{V}$, respectively. Then, we use $\ensuremath\boldsymbol{S}'_w$ and $\ensuremath\boldsymbol{S}'_b$ in Eq. (\ref{equation_optimization_Sb_lambda_Sw}), and the weight matrix of the generalized Mahalanobis distance is obtained using Eq. (\ref{equation_W_U_UT}). \subsubsection{Regularization by Locally Linear Embedding}\label{section_spectral_ML_regularization_by_LLE} The spectral metric learning methods using scatters can be modeled as maximization of the following Rayleigh–Ritz quotient \cite{baghshah2009semi}: \begin{equation}\label{equation_optimization_ML_LLE_1} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \frac{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2}{\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 + \lambda \Omega(\ensuremath\boldsymbol{U})}, \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} where $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top$ (see Eq. (\ref{equation_W_U_UT})), $\lambda>0$ is the regularization parameter, and $\Omega(\ensuremath\boldsymbol{U})$ is a penalty or regularization term on the projection matrix $\ensuremath\boldsymbol{U}$.
This optimization maximizes and minimizes the distances of the similar and dissimilar points, respectively. According to Section \ref{section_relation_to_FDA}, Eq. (\ref{equation_optimization_ML_LLE_1}) can be restated as: \begin{equation}\label{equation_optimization_ML_LLE_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_b \ensuremath\boldsymbol{U})}{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_w \ensuremath\boldsymbol{U}) + \lambda \Omega(\ensuremath\boldsymbol{U})}, \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}. \end{aligned} \end{equation} As was discussed in Proposition \ref{proposition_metric_learning_projection}, metric learning can be seen as projection onto a subspace. The regularization term can be linear reconstruction of every projected point by its $k$ Nearest Neighbors ($k$NN) using the same reconstruction weights as before projection \cite{baghshah2009semi}. The weights for linear reconstruction in the input space can be found as in locally linear embedding \cite{roweis2000nonlinear,ghojogh2020locally}. If $s_{ij}$ denotes the weight of $\ensuremath\boldsymbol{x}_j$ in reconstruction of $\ensuremath\boldsymbol{x}_i$ and $\mathcal{N}(\ensuremath\boldsymbol{x}_i)$ is the set of $k$NN for $\ensuremath\boldsymbol{x}_i$, we have: \begin{align*} & \underset{s_{ij}}{\text{minimize}} & & \sum_{i=1}^n \Big\|\ensuremath\boldsymbol{x}_i - \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}(\ensuremath\boldsymbol{x}_i)} s_{ij} \ensuremath\boldsymbol{x}_j\Big\|_2^2, \\ & \text{subject to} & & \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}(\ensuremath\boldsymbol{x}_i)} s_{ij} = 1. \end{align*} The solution of this optimization is \cite{ghojogh2020locally}: \begin{align*} s_{ij}^* = \frac{\ensuremath\boldsymbol{G}_i^{-1} \ensuremath\boldsymbol{1}}{\ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{G}_i^{-1} \ensuremath\boldsymbol{1}}, \end{align*} where $\ensuremath\boldsymbol{G}_i := (\ensuremath\boldsymbol{x}_i \ensuremath\boldsymbol{1}^\top - \ensuremath\boldsymbol{X}_i)^\top (\ensuremath\boldsymbol{x}_i \ensuremath\boldsymbol{1}^\top - \ensuremath\boldsymbol{X}_i)$ in which $\ensuremath\boldsymbol{X}_i \in \mathbb{R}^{d \times k}$ denotes the stack of $k$NN for $\ensuremath\boldsymbol{x}_i$. We define $\ensuremath\boldsymbol{S}^* := [s^*_{ij}] \in \mathbb{R}^{n \times n}$. The regularization term can be reconstruction in the subspace using the same reconstruction weights as in the input space \cite{baghshah2009semi}: \begin{align} \Omega(\ensuremath\boldsymbol{U}) &:= \sum_{i=1}^n \Big\|\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x} - \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}(\ensuremath\boldsymbol{x}_i)} s^*_{ij} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{x}_j \Big\|_2^2 \nonumber \\ &= \textbf{tr}(\ensuremath\boldsymbol{U}^\top\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U}), \label{equation_optimization_ML_LLE_penalty} \end{align} where $\ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n] \in \mathbb{R}^{d \times n}$ and $\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{E} := (\ensuremath\boldsymbol{I} - \ensuremath\boldsymbol{S}^*)^\top (\ensuremath\boldsymbol{I} - \ensuremath\boldsymbol{S}^*)$. Putting Eq. 
(\ref{equation_optimization_ML_LLE_penalty}) in Eq. (\ref{equation_optimization_ML_LLE_2}) gives: \begin{equation}\label{equation_optimization_ML_LLE_3} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_b \ensuremath\boldsymbol{U})}{\textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{S}_w + \lambda \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{X}^\top) \ensuremath\boldsymbol{U}\big)}, \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}. \end{aligned} \end{equation} The solution to this optimization problem is the generalized eigenvalue problem $(\ensuremath\boldsymbol{S}_b, \ensuremath\boldsymbol{S}_w + \lambda \ensuremath\boldsymbol{X}\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{X}^\top)$ where $\ensuremath\boldsymbol{U}$ has the eigenvectors as its columns \cite{ghojogh2019eigenvalue}. According to Eq. (\ref{equation_W_U_UT}), the weight matrix of the metric is $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. \subsubsection{Fisher-HSIC Multi-view Metric Learning (FISH-MML)} Fisher-HSIC Multi-view Metric Learning (FISH-MML) \cite{zhang2018fish} is a metric learning method for multi-view data. In multi-view data, we have different types of features for every data point. For example, an image dataset, which has a descriptive caption for every image, is multi-view. Let $\ensuremath\boldsymbol{X}^{(r)} := \{\ensuremath\boldsymbol{x}_i^{(r)}\}_{i=1}^n$ be the features of data points in the $r$-th view, $c$ be the number of classes/clusters, and $v$ be the number of views. According to Proposition \ref{proposition_metric_learning_projection}, metric learning amounts to the Euclidean distance after projection with $\ensuremath\boldsymbol{U}$. The inter-class scatter of data, in the $r$-th view, is denoted by $\ensuremath\boldsymbol{S}_b^{(r)}$ and calculated using Eq. (\ref{equation_inter_cluster_scatter}). The total scatter of data, in the $r$-th view, is denoted by $\ensuremath\boldsymbol{S}_t^{(r)}$ and is the covariance of data in that view. Inspired by Fisher discriminant analysis \cite{fisher1936use,ghojogh2019fisher}, we maximize the inter-class variances of projected data, $\sum_{r=1}^v \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_b^{(r)} \ensuremath\boldsymbol{U})$, to discriminate the classes after projection. Also, inspired by principal component analysis \cite{ghojogh2019unsupervised}, we maximize the total scatter of projected data, $\sum_{r=1}^v \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_t^{(r)} \ensuremath\boldsymbol{U})$, for expressiveness. Moreover, we maximize the dependence of the projected data in all views because various views of a point should be related.
A measure of dependence between two random variables $X$ and $Y$ is the Hilbert-Schmidt Independence Criterion (HSIC) \cite{gretton2005measuring} whose empirical estimation is: \begin{align}\label{equation_HSIC} \text{HSIC}(X,Y) = \frac{1}{(n-1)^2} \textbf{tr}(\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}_y \ensuremath\boldsymbol{H}), \end{align} where $\ensuremath\boldsymbol{K}_x$ and $\ensuremath\boldsymbol{K}_y$ are kernel matrices over $X$ and $Y$ variables, respectively, and $\ensuremath\boldsymbol{H} := \ensuremath\boldsymbol{I} - (1/n)\ensuremath\boldsymbol{1}\ensuremath\boldsymbol{1}^\top$ is the centering matrix. The HSIC between projection of two views $\ensuremath\boldsymbol{X}^{(r)}$ and $\ensuremath\boldsymbol{X}^{(w)}$ is: \begin{align*} &\text{HSIC}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)}, \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(w)}) \overset{(\ref{equation_HSIC})}{\propto} \textbf{tr}(\ensuremath\boldsymbol{K}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H}) \\ &\overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{X}^{(r)\top} \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H}) \\ &\overset{(b)}{=} \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{X}^{(r)\top} \ensuremath\boldsymbol{U}) \end{align*} where $(a)$ is because we use the linear kernel for $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)}$, i.e., $\ensuremath\boldsymbol{K}^{(r)} := (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)})^\top \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)}$ and $(b)$ is because of the cyclic property of trace. In summary, we maximize the summation of inter-class scatter, total scatter, and the dependence of views, which is: \begin{align*} &\sum_{r=1}^v \big( \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_b^{(r)} \ensuremath\boldsymbol{U}) + \lambda_1 \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_t^{(r)} \ensuremath\boldsymbol{U}) \\ &~~~~~~~~ + \lambda_2 \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{X}^{(r)\top} \ensuremath\boldsymbol{U}) \big) \\ &= \sum_{r=1}^v \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{S}_b^{(r)} + \lambda_1 \ensuremath\boldsymbol{S}_t^{(r)} \\ &~~~~~~~~ + \lambda_2 \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{X}^{(r)\top} ) \ensuremath\boldsymbol{U}\big), \end{align*} where $\lambda_1, \lambda_2 >0$ are the regularization parameters. 
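For concreteness, the following is a minimal numpy sketch of the empirical HSIC estimator in Eq. (\ref{equation_HSIC}); the RBF kernel choice and all variable names are illustrative assumptions for a toy example, not part of FISH-MML \cite{zhang2018fish}. \begin{verbatim}
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # X is (n, d); returns the (n, n) RBF kernel matrix.
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * D2)

def hsic(K_x, K_y):
    # Empirical HSIC: (1/(n-1)^2) tr(K_x H K_y H),
    # with centering matrix H = I - (1/n) 1 1^T.
    n = K_x.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K_x @ H @ K_y @ H) / (n - 1)**2

# Toy usage: two dependent views of the same n points.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((50, 5))
X2 = X1 @ rng.standard_normal((5, 3))
print(hsic(rbf_kernel(X1), rbf_kernel(X2)))
\end{verbatim}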
The optimization problem is: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{r=1}^v \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{S}_b^{(r)} + \lambda_1 \ensuremath\boldsymbol{S}_t^{(r)} \\ & & &~~~~~~ + \lambda_2 \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{X}^{(r)\top} ) \ensuremath\boldsymbol{U}\big) \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} whose solution is the eigenvalue problem for $\sum_{r=1}^v \big(\ensuremath\boldsymbol{S}_b^{(r)} + \lambda_1 \ensuremath\boldsymbol{S}_t^{(r)} + \lambda_2 \ensuremath\boldsymbol{X}^{(r)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{K}^{(w)} \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{X}^{(r)\top}\big)$ where $\ensuremath\boldsymbol{U}$ has the eigenvectors as its columns \cite{ghojogh2019eigenvalue}. \subsection{Spectral Methods Using Hinge Loss} \subsubsection{Large-Margin Metric Learning}\label{section_large_margin_metric_learning} $k$-Nearest Neighbors ($k$NN) classification is highly impacted by the metric used for measuring distances between points. Hence, we can use metric learning for improving the performance of $k$NN classification \cite{weinberger2006distance,weinberger2009distance}. Let $y_{ij}=1$ if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$ and $y_{ij}=0$ if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}$. Moreover, we restrict $k$NN to similar points, i.e., the nearest neighbors of every point are found among the points similar to it. Let $\eta_{ij} = 1$ if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$ and $\ensuremath\boldsymbol{x}_j$ is among $k$NN of $\ensuremath\boldsymbol{x}_i$. Otherwise, $\eta_{ij} = 0$. The optimization problem for finding the best weight matrix in the metric can be \cite{weinberger2006distance,weinberger2009distance}: \begin{equation}\label{equation_optimization_largeMarginMetricLearning} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{i=1}^n \sum_{j=1}^n \eta_{ij} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & & &+ \lambda \sum_{i=1}^n \sum_{j=1}^n \sum_{l=1}^n \eta_{ij} (1 - y_{il})\Big[1 \\ & & &~~~~~~~~~~~ + \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2\Big]_+, \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter, and $[.]_+ := \max(.,0)$ is the standard Hinge loss. The first term in Eq. (\ref{equation_optimization_largeMarginMetricLearning}) pushes the similar neighbors close to each other. The second term in this equation is the triplet loss \cite{schroff2015facenet} which pushes the similar neighbors to each other and pulls the dissimilar points away from one another. This is because minimizing $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2$ for $\eta_{ij}=1$ decreases the distances of similar neighbors.
Moreover, minimizing $- \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2$ for $1-y_{il}=1$ (i.e., $y_{il} = 0$) is equivalent to maximizing $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2$, which maximizes the distances of dissimilar points. Minimizing the whole second term forces the distances of dissimilar points to be greater than the distances of similar points by a threshold (or margin) of one. The margin can be changed by replacing the $1$ in this term with another positive number. In this sense, this loss is closely related to the triplet loss for neural networks \cite{schroff2015facenet} (see Section \ref{section_triplet_loss}). Eq. (\ref{equation_optimization_largeMarginMetricLearning}) can be restated using slack variables $\xi_{ijl}, \forall i,j,l \in \{1, \dots, n\}$. The Hinge loss term $[1 + \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2]_+$ is non-zero exactly when: \begin{align*} &1 + \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 \geq 0 \\ &\implies \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \leq 1. \end{align*} Introducing a slack variable $\xi_{ijl} \geq 0$ to measure this margin violation, we can lower-bound the term $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2$: \begin{align*} & 1 - \xi_{ijl} \leq \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2. \end{align*} Hence, we can replace the Hinge loss term with the slack variable. Therefore, Eq. (\ref{equation_optimization_largeMarginMetricLearning}) can be restated as \cite{weinberger2006distance,weinberger2009distance}: \begin{equation}\label{equation_optimization_largeMarginMetricLearning_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{W},\, \{\xi_{ijl}\}}{\text{minimize}} & & \sum_{i=1}^n \sum_{j=1}^n \eta_{ij} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & & &+ \lambda \sum_{i=1}^n \sum_{j=1}^n \sum_{l=1}^n \eta_{ij} (1 - y_{il})\, \xi_{ijl} \\ & \text{subject to} & & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \geq 1 - \xi_{ijl}, \\ & & &~~~~~~~~~ \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}, \eta_{ij}=1, (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}, \\ & & & \xi_{ijl} \geq 0, \\ & & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} This optimization problem is a semidefinite program which can be solved iteratively using the interior-point method \cite{ghojogh2021kkt}.
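To make the semidefinite program concrete, the following is a minimal sketch of Eq. (\ref{equation_optimization_largeMarginMetricLearning_2}) in cvxpy for a toy problem; the data, the hand-picked triplet list, and $\lambda$ are assumptions for illustration, not the setup or the specialized solver of \cite{weinberger2006distance,weinberger2009distance}. \begin{verbatim}
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d, lam = 20, 3, 1.0
X = rng.standard_normal((n, d))
# Assumed toy triplets (i, j, l): (x_i, x_j) similar with
# eta_ij = 1, and (x_i, x_l) dissimilar (y_il = 0).
triplets = [(0, 1, 2), (3, 4, 5), (6, 7, 8)]

W = cp.Variable((d, d), PSD=True)             # W >= 0
xi = cp.Variable(len(triplets), nonneg=True)  # slack variables

def dW2(i, j):
    # Squared generalized Mahalanobis distance ||x_i - x_j||_W^2.
    return cp.quad_form(X[i] - X[j], W)

pull = cp.sum([dW2(i, j) for (i, j, l) in triplets])
margins = [dW2(i, l) - dW2(i, j) >= 1 - xi[t]
           for t, (i, j, l) in enumerate(triplets)]
prob = cp.Problem(cp.Minimize(pull + lam * cp.sum(xi)), margins)
prob.solve()                         # conic solver, e.g., SCS
print(np.linalg.eigvalsh(W.value))   # eigenvalues are non-negative
\end{verbatim} For realistic dataset sizes, a generic SDP solver is slow, which is why \cite{weinberger2009distance} propose a special-purpose solver for this problem.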
This problem uses triplets of similar and dissimilar points, i.e., $\{\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_l\}$ where $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$, $\eta_{ij}=1$, $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}$. Hence, triplets should be extracted randomly from the dataset for this metric learning. Solving a semidefinite program is usually slow and time-consuming, especially for large datasets. Triplet mining can be used for finding the best triplets for learning \cite{poorheravi2020acceleration}. For example, the similar and dissimilar points with smallest and/or largest distances can be used to limit the number of triplets \cite{sikaroudi2020offline}. The reader can also refer to \cite{dong2019metric} for a Lipschitz analysis of large-margin metric learning. \subsubsection{Imbalanced Metric Learning (IML)} Imbalanced Metric Learning (IML) \cite{gautheron2019metric} is a spectral metric learning method which handles imbalanced classes by further decomposition of the similar set $\mathcal{S}$ and dissimilar set $\mathcal{D}$. Suppose the dataset is composed of two classes $c_0$ and $c_1$. Let $\mathcal{S}_0$ and $\mathcal{S}_1$ denote the similarity sets for classes $c_0$ and $c_1$, respectively. We define pairs of points taken randomly from these sets to have similarity and dissimilarity sets \cite{gautheron2019metric}: \begin{align*} & \text{Sim}_0 \subseteq \mathcal{S}_0 \times \mathcal{S}_0, \quad \text{Sim}_1 \subseteq \mathcal{S}_1 \times \mathcal{S}_1, \\ & \text{Dis}_0 \subseteq \mathcal{S}_0 \times \mathcal{S}_1, \quad \text{Dis}_1 \subseteq \mathcal{S}_1 \times \mathcal{S}_0. \end{align*} The optimization problem of IML is: \begin{align} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} ~~~~~~ \frac{\lambda}{4|\text{Sim}_0|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \text{Sim}_0} \big[\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - 1\big]_+ \nonumber\\ &~~~~~~~~~ + \frac{\lambda}{4|\text{Sim}_1|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \text{Sim}_1} \big[\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - 1\big]_+ \nonumber \end{align} \begin{align} &~~~~~~~~~ + \frac{1-\lambda}{4|\text{Dis}_0|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \text{Dis}_0} \big[\!-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 +1+m \big]_+ \nonumber \\ &~~~~~~~~~ + \frac{1-\lambda}{4|\text{Dis}_1|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \text{Dis}_1} \big[\!-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 +1+m \big]_+ \nonumber \\ &~~~~~~~~~ + \gamma \|\ensuremath\boldsymbol{W} - \ensuremath\boldsymbol{I}\|_F^2 \nonumber \\ & \text{subject to} ~~~~ \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{align} where $|.|$ denotes the cardinality of a set, $[.]_+ := \max(.,0)$ is the standard Hinge loss, $m>0$ is the desired margin between classes, and $\lambda \in [0,1]$ and $\gamma>0$ are the regularization parameters. This optimization pulls the similar points to have squared distances less than $1$ and pushes the dissimilar points away to have squared distances more than $m+1$.
Also, the regularization term $\|\ensuremath\boldsymbol{W} - \ensuremath\boldsymbol{I}\|_F^2$ pushes the weight matrix of the generalized Mahalanobis distance toward the identity matrix for simplicity of the metric. In this way, the metric becomes close to the Euclidean distance, preventing overfitting, while satisfying the desired margins in distances. \subsection{Locally Linear Metric Adaptation (LLMA)} Another method for metric learning is Locally Linear Metric Adaptation (LLMA) \cite{chang2004locally}. LLMA performs nonlinear and linear transformations globally and locally, respectively. For every point $\ensuremath\boldsymbol{x}_i$, we consider its $k$ nearest (similar) neighbors. The local linear transformation of every point $\ensuremath\boldsymbol{x}_i$ is: \begin{align} \mathbb{R}^d \ni \ensuremath\boldsymbol{y}_i := \ensuremath\boldsymbol{x}_i + \ensuremath\boldsymbol{B} \ensuremath\boldsymbol{\pi}_i, \end{align} where $\ensuremath\boldsymbol{B} \in \mathbb{R}^{d \times k}$ is the matrix of biases, $\mathbb{R}^k \ni \ensuremath\boldsymbol{\pi}_i = [\pi_{i1}, \dots, \pi_{ik}]^\top$, and $\pi_{ij} := \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2^2 / 2 w^2)$ is a Gaussian measure of similarity between $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}_j$. The variables $\ensuremath\boldsymbol{B}$ and $w$ are found by optimization. In this method, we minimize the distances between the linearly transformed similar points while the pairwise distances of points are encouraged to be preserved after the transformation: \begin{equation}\label{equation_ML_optimization_spectral_LLMA} \begin{aligned} & \underset{\{\ensuremath\boldsymbol{y}_i\}_{i=1}^n, \ensuremath\boldsymbol{B}, w, \sigma}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{y}_i, \ensuremath\boldsymbol{y}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{y}_i - \ensuremath\boldsymbol{y}_j\|_2^2 \\ & & & + \lambda\, \sum_{i=1}^n \sum_{j=1}^n (q_{ij} - d_{ij})^2 \exp(\frac{-d_{ij}^2}{\sigma^2}), \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter, $\sigma^2$ is the variance to be optimized, and $d_{ij} := \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2$ and $q_{ij} := \|\ensuremath\boldsymbol{y}_i - \ensuremath\boldsymbol{y}_j\|_2$. This objective function is optimized iteratively until convergence.
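As a minimal numpy sketch, the objective of Eq. (\ref{equation_ML_optimization_spectral_LLMA}) can be evaluated for candidate transformed points as follows; the function and variable names are assumptions, and the iterative optimization over $\ensuremath\boldsymbol{B}$, $w$, and $\sigma$ from \cite{chang2004locally} is not reproduced here. \begin{verbatim}
import numpy as np

def llma_objective(X, Y, sim_pairs, lam=1.0, sigma=1.0):
    # X: (n, d) original points; Y: (n, d) transformed points;
    # sim_pairs: list of (i, j) index pairs in the similarity set S.
    pull = sum(np.sum((Y[i] - Y[j])**2) for i, j in sim_pairs)
    # Pairwise distances before (d_ij) and after (q_ij) transformation.
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    q = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    # Locality-weighted distance-preservation penalty.
    preserve = np.sum((q - d)**2 * np.exp(-d**2 / sigma**2))
    return pull + lam * preserve
\end{verbatim}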
\subsection{Relevant to Support Vector Machine}\label{section_relation_to_SVM} Inspired by $\nu$-Support Vector Machine ($\nu$-SVM) \cite{scholkopf2000new}, the weight matrix in the generalized Mahalanobis distance can be obtained as \cite{tsang2003distance}: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}, \gamma, \{\xi_{il}\}}{\text{minimize}} & & \frac{1}{2} \|\ensuremath\boldsymbol{W}\|_2^2 + \frac{\lambda_1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & & & + \lambda_2 \Big( \nu \gamma + \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}} \xi_{il} \Big) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \\ & & & \gamma \geq 0, \\ & & & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_l\|_{\ensuremath\boldsymbol{W}}^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \geq \gamma - \xi_{il}, \\ & & &~~~~~~~~~~~~~~~~~~ \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}, (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}, \\ & & & \xi_{il} \geq 0, \quad \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}, \end{aligned} \end{equation} where $\lambda_1, \lambda_2 > 0$ are regularization parameters; the margin constraint forces the distances of dissimilar points to exceed the distances of similar points by at least $\gamma$ up to the slack $\xi_{il}$. Using the KKT conditions and Lagrange multipliers \cite{ghojogh2021kkt}, the dual optimization problem is (see \cite{tsang2003distance} for derivation): \begin{equation}\label{equation_relation_to_SVM_dual_optimization} \begin{aligned} & \underset{\{\alpha_{ij}\}}{\text{maximize}} ~~~~~ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \\ & -\frac{1}{2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}} \alpha_{ij} \alpha_{kl} ((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{x}_k - \ensuremath\boldsymbol{x}_l))^2 \\ & + \frac{\lambda_1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{S}} \alpha_{ij} ((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top (\ensuremath\boldsymbol{x}_k - \ensuremath\boldsymbol{x}_l))^2 \\ & \text{subject to} ~~~~~~~~~~ \frac{1}{\lambda_2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} \geq \nu, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~ \alpha_{ij} \in [0, \frac{\lambda_2}{|\mathcal{D}|}], \end{aligned} \end{equation} where $\{\alpha_{ij}\}$ are the dual variables. This problem is a quadratic programming problem and can be solved using optimization solvers. \subsection{Relevant to Multidimensional Scaling} Multidimensional Scaling (MDS) tries to preserve the distance after projection onto its subspace \cite{cox2008multidimensional,ghojogh2020multidimensional}.
We saw in Proposition \ref{proposition_metric_learning_projection} that metric learning can be seen as projection onto the column space of $\ensuremath\boldsymbol{U}$ where $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. Inspired by MDS, we can learn a metric which preserves the distances between points after projection onto the subspace of the metric \cite{zhang2003parametric}: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{i=1}^n \sum_{j=1}^n (\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2^2 - \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2)^2 \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} It can be solved using any optimization method \cite{ghojogh2021kkt}. \subsection{Kernel Spectral Metric Learning} Let $k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) := \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i)^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)$ be the kernel function over data points $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}_j$, where $\ensuremath\boldsymbol{\phi}(.)$ is the pulling function to the Reproducing Kernel Hilbert Space (RKHS) \cite{ghojogh2021reproducing}. Let $\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K} := \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$ be the kernel matrix of data. In the following, we introduce some of the kernel spectral metric learning methods. \subsubsection{Using Eigenvalue Decomposition of Kernel} One of the kernel methods for spectral metric learning is the method of \cite{yeung2007kernel}. It has two approaches; we explain one of them here. The eigenvalue decomposition of the kernel matrix is: \begin{align}\label{equation_kernel_eigenvalue_decomposition} \ensuremath\boldsymbol{K} = \sum_{r=1}^p \beta_r^2 \ensuremath\boldsymbol{\alpha}_r \ensuremath\boldsymbol{\alpha}_r^\top \overset{(a)}{=} \sum_{r=1}^p \beta_r^2 \ensuremath\boldsymbol{K}_r \end{align} where $p$ is the rank of the kernel matrix, $\beta_r^2$ is the non-negative $r$-th eigenvalue (because $\ensuremath\boldsymbol{K} \succeq \ensuremath\boldsymbol{0}$), $\ensuremath\boldsymbol{\alpha}_r \in \mathbb{R}^n$ is the $r$-th eigenvector, and $(a)$ is because we define $\ensuremath\boldsymbol{K}_r := \ensuremath\boldsymbol{\alpha}_r \ensuremath\boldsymbol{\alpha}_r^\top$. Rather than fixing $\{\beta_r^2\}_{r=1}^p$ to the eigenvalues, we can treat them as learnable parameters for the sake of metric learning. The distance between data points pulled to RKHS is \cite{scholkopf2001kernel,ghojogh2021reproducing}: \begin{equation}\label{equation_distance_in_RKHS} \begin{aligned} \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - &\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 \\ &= k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) - 2 k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j).
\end{aligned} \end{equation} In metric learning, we want to make the distances of similar points small; hence, the objective to be minimized is: \begin{align*} & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 \\ &= \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) - 2 k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \\ &\overset{(\ref{equation_kernel_eigenvalue_decomposition})}{=} \sum_{r=1}^p \beta_r^2 \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} k_r(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k_r(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - 2 k_r(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \\ &\overset{(a)}{=} \sum_{r=1}^p \beta_r^2 \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_r (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j) \\ &\overset{(b)}{=} \sum_{r=1}^p \beta_r^2 f_r \overset{(c)}{=} \ensuremath\boldsymbol{\beta}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}} \ensuremath\boldsymbol{\beta}, \end{align*} where $(a)$ is because $\ensuremath\boldsymbol{e}_i$ is the vector whose $i$-th element is one and other elements are zero, $(b)$ is because we define $f_r := \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_r (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)$, and $(c)$ is because we define $\ensuremath\boldsymbol{D}_{\mathcal{S}} := \textbf{diag}([f_1, \dots, f_p]^\top)$ and $\ensuremath\boldsymbol{\beta} := [\beta_1, \dots, \beta_p]^\top$. By adding a constraint on the summation of the elements of $\ensuremath\boldsymbol{\beta}$, the optimization problem for metric learning is: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{\beta}}{\text{minimize}} & & \ensuremath\boldsymbol{\beta}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}} \ensuremath\boldsymbol{\beta} \\ & \text{subject to} & & \ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{\beta} = 1. \end{aligned} \end{equation} This optimization is similar to the form of one of the optimization problems in locally linear embedding \cite{roweis2000nonlinear,ghojogh2020locally}. The Lagrangian for this problem is \cite{ghojogh2021kkt}: \begin{align*} &\mathcal{L} = \ensuremath\boldsymbol{\beta}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}} \ensuremath\boldsymbol{\beta} - \lambda (\ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{\beta} - 1), \end{align*} where $\lambda$ is the dual variable. Taking the derivative of the Lagrangian w.r.t.
the variables and setting to zero gives: \begin{align*} & \frac{\partial \mathcal{L}}{\partial \ensuremath\boldsymbol{\beta}} = 2 \ensuremath\boldsymbol{D}_{\mathcal{S}} \ensuremath\boldsymbol{\beta} - \lambda \ensuremath\boldsymbol{1} \overset{\text{set}}{=} 0 \implies \ensuremath\boldsymbol{\beta} = \frac{\lambda}{2} \ensuremath\boldsymbol{D}_{\mathcal{S}}^{-1} \ensuremath\boldsymbol{1}, \\ & \frac{\partial \mathcal{L}}{\partial \lambda} = \ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{\beta} - 1 \overset{\text{set}}{=} 0 \implies \ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{\beta} = 1, \\ & \implies \frac{\lambda}{2} \ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}}^{-1} \ensuremath\boldsymbol{1} = 1 \implies \lambda = \frac{2}{\ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}}^{-1} \ensuremath\boldsymbol{1}} \\ & \implies \ensuremath\boldsymbol{\beta} = \frac{\ensuremath\boldsymbol{D}_{\mathcal{S}}^{-1} \ensuremath\boldsymbol{1}}{\ensuremath\boldsymbol{1}^\top \ensuremath\boldsymbol{D}_{\mathcal{S}}^{-1} \ensuremath\boldsymbol{1}}. \end{align*} Hence, the optimal $\ensuremath\boldsymbol{\beta}$ is obtained for metric learning in the RKHS, where the distances of similar points are smaller than in the input Euclidean space. \subsubsection{Regularization by Locally Linear Embedding} The method \cite{baghshah2009semi}, which was introduced in Section \ref{section_spectral_ML_regularization_by_LLE}, can be kernelized. Recall that this method used locally linear embedding for regularization. According to the representer theorem \cite{ghojogh2021reproducing}, the solution in the RKHS can be represented as a linear combination of all data points pulled to RKHS: \begin{align}\label{equation_kernelization_representation_theory} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U}) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{T}, \end{align} where $\ensuremath\boldsymbol{X} = [\ensuremath\boldsymbol{x}_1, \dots, \ensuremath\boldsymbol{x}_n]$ and $\ensuremath\boldsymbol{T} \in \mathbb{R}^{n \times p}$ ($p$ is the dimensionality of the subspace) is the matrix of coefficients. We define the similarity and dissimilarity adjacency matrices as: \begin{equation} \begin{aligned} & \ensuremath\boldsymbol{A}_S(i,j) := \left\{ \begin{array}{ll} 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}, \\ 0 & \mbox{otherwise.} \end{array} \right. \\ & \ensuremath\boldsymbol{A}_D(i,j) := \left\{ \begin{array}{ll} 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \\ 0 & \mbox{otherwise.} \end{array} \right. \end{aligned} \end{equation} Let $\ensuremath\boldsymbol{L}_w$ and $\ensuremath\boldsymbol{L}_b$ denote the Laplacian matrices \cite{ghojogh2021laplacian} of these adjacency matrices: \begin{align*} & \ensuremath\boldsymbol{L}_w := \ensuremath\boldsymbol{D}_S - \ensuremath\boldsymbol{A}_S, \quad \ensuremath\boldsymbol{L}_b := \ensuremath\boldsymbol{D}_D - \ensuremath\boldsymbol{A}_D, \end{align*} where $\ensuremath\boldsymbol{D}_S(i,i) := \sum_{j=1}^n \ensuremath\boldsymbol{A}_S(i,j)$ and $\ensuremath\boldsymbol{D}_D(i,i) := \sum_{j=1}^n \ensuremath\boldsymbol{A}_D(i,j)$ are diagonal matrices. The terms in the objective of Eq.
(\ref{equation_optimization_ML_LLE_3}) can be restated using the Laplacians of the adjacency matrices rather than the scatters: \begin{equation}\label{equation_optimization_ML_LLE_4} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U})}{\textbf{tr}\big(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} (\ensuremath\boldsymbol{L}_w + \lambda \ensuremath\boldsymbol{E}) \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U}\big)}, \\ & \text{subject to} & & \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}. \end{aligned} \end{equation} In RKHS, the pulled Laplacian matrices are $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{L}_b) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top$ and $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{L}_w) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_w \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top$. Hence, the numerator of Eq. (\ref{equation_optimization_ML_LLE_4}) in RKHS becomes: \begin{align*} &\textbf{tr}(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})) \\ &= \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{T}\big) \\ &\overset{(a)}{=} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T}\big), \end{align*} where $(a)$ is because of the kernel trick \cite{ghojogh2021reproducing}, i.e., \begin{align}\label{equation_Kernel_X} \ensuremath\boldsymbol{K}_x := \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}). \end{align} Similarly, the denominator of Eq.
(\ref{equation_optimization_ML_LLE_4}) in RKHS becomes: \begin{align*} & \textbf{tr}\big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top (\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_w \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top + \lambda \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top) \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})\big) \\ & \overset{(\ref{equation_kernelization_representation_theory})}{=} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top (\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L}_w \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \lambda \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top) \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{T}\big) \\ &\overset{(a)}{=} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{L}_w + \lambda \ensuremath\boldsymbol{E}) \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T} \big), \end{align*} where $(a)$ is because of the kernel trick \cite{ghojogh2021reproducing}. The constraint in RKHS becomes: \begin{align*} & \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U}) \overset{(\ref{equation_kernelization_representation_theory})}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{T} \overset{(a)}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T}, \end{align*} where $(a)$ is because of the kernel trick \cite{ghojogh2021reproducing}. Hence, Eq. (\ref{equation_optimization_ML_LLE_4}) in RKHS is: \begin{equation}\label{equation_optimization_ML_LLE_3_kernel} \begin{aligned} & \underset{\ensuremath\boldsymbol{T}}{\text{maximize}} & & \frac{\textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T}\big)}{\textbf{tr}\big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{L}_w + \lambda \ensuremath\boldsymbol{E}) \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T} \big)}, \\ & \text{subject to} & & \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T} = \ensuremath\boldsymbol{I}. \end{aligned} \end{equation} It can be solved using the projected gradient method \cite{ghojogh2021kkt} to find the optimal $\ensuremath\boldsymbol{T}$. Then, the projection of data onto the subspace of the metric is found as: \begin{align} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \overset{(\ref{equation_kernelization_representation_theory})}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \overset{(a)}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x, \end{align} where $(a)$ is because of the kernel trick \cite{ghojogh2021reproducing}.
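A minimal numpy/scipy sketch of this step follows; as an assumption, it replaces the trace-ratio problem of Eq. (\ref{equation_optimization_ML_LLE_3_kernel}) with the common generalized-eigenvalue surrogate $(\ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{L}_b \ensuremath\boldsymbol{K}_x, \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{L}_w + \lambda \ensuremath\boldsymbol{E}) \ensuremath\boldsymbol{K}_x)$ rather than the projected gradient method referenced above. \begin{verbatim}
import numpy as np
from scipy.linalg import eigh

def kernel_lle_metric(K_x, L_w, L_b, E, lam=0.1, p=2, eps=1e-8):
    # Generalized-eigenvalue surrogate of the trace-ratio problem:
    # (K_x L_b K_x) t = mu (K_x (L_w + lam E) K_x) t.
    A = K_x @ L_b @ K_x
    B = K_x @ (L_w + lam * E) @ K_x
    B = B + eps * np.eye(K_x.shape[0])  # jitter so B is positive definite
    vals, vecs = eigh(A, B)             # eigenvalues in ascending order
    T = vecs[:, -p:]                    # top-p generalized eigenvectors
    return T, T.T @ K_x                 # T and the projected data
\end{verbatim}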
\subsubsection{Regularization by Laplacian} Another kernel spectral metric learning method is that of \cite{baghshah2010kernel}, whose optimization is of the form: \begin{equation}\label{equation_optimization_kernel_soectral_ML_LaplacianRegularization} \begin{aligned} & \underset{\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})}{\text{minimize}} & & \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \!\!\!\!\! \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 + \lambda \Omega(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})), \\ & \text{subject to} & & \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 \geq c, \quad \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \end{aligned} \end{equation} where $c>0$ is a hyperparameter and $\lambda>0$ is the regularization parameter. Consider the $k$NN graph of data with an adjacency matrix $\ensuremath\boldsymbol{A} \in \mathbb{R}^{n \times n}$ whose $(i,j)$-th element is one if $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}_j$ are neighbors and is zero otherwise. Let the Laplacian matrix of this adjacency matrix be denoted by $\ensuremath\boldsymbol{L}$. In this method, the regularization term $\Omega(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}))$ can be the objective of Laplacian eigenmap \cite{ghojogh2021laplacian}: \begin{align*} \Omega(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})) := &\frac{1}{2n} \sum_{i=1}^n \sum_{j=1}^n \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 \ensuremath\boldsymbol{A}(i,j) \\ &\overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{L} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top) \\ &\overset{(b)}{=} \textbf{tr}(\ensuremath\boldsymbol{L} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})) \overset{(c)}{=} \textbf{tr}(\ensuremath\boldsymbol{L} \ensuremath\boldsymbol{K}_x), \end{align*} where $(a)$ is according to \cite{belkin2001laplacian} (see \cite{ghojogh2021laplacian} for proof), $(b)$ is because of the cyclic property of trace, and $(c)$ is because of the kernel trick \cite{ghojogh2021reproducing}. Moreover, according to Eq. (\ref{equation_distance_in_RKHS}), the distance in RKHS is $\|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 = k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) - 2 k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j)$. We can simplify the first term in Eq. (\ref{equation_optimization_kernel_soectral_ML_LaplacianRegularization}) as: \begin{align*} &\frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \!\!\!\!\! \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_2^2 \\ &\overset{(\ref{equation_distance_in_RKHS})}{=} \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \!\!\!\!\!
k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) - 2 k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \\ &= \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \!\!\!\!\! (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j) \overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{E}_{\mathcal{S}} \ensuremath\boldsymbol{K}_x), \end{align*} where $(a)$ is because the scalar is equal to its trace and we use the cyclic property of trace, i.e., $(\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j) = \textbf{tr}((\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_x (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)) = \textbf{tr}((\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j) (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top \ensuremath\boldsymbol{K}_x)$, and then we define $\ensuremath\boldsymbol{E}_{\mathcal{S}} := (1 / |\mathcal{S}|) \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j) (\ensuremath\boldsymbol{e}_i - \ensuremath\boldsymbol{e}_j)^\top$. Hence, Eq. (\ref{equation_optimization_kernel_soectral_ML_LaplacianRegularization}) can be restated as: \begin{equation}\label{equation_optimization_kernel_soectral_ML_LaplacianRegularization_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{K}_x}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{E}_{\mathcal{S}} \ensuremath\boldsymbol{K}_x) + \lambda\, \textbf{tr}(\ensuremath\boldsymbol{L} \ensuremath\boldsymbol{K}_x), \\ & \text{subject to} & & k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_i) + k(\ensuremath\boldsymbol{x}_j, \ensuremath\boldsymbol{x}_j) - 2 k(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \geq c, \\ & & & ~~~~~~~~~~~~~~ \quad \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \\ & & & \ensuremath\boldsymbol{K}_x \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} noticing that the kernel matrix is positive semidefinite. This problem is a Semidefinite Programming (SDP) problem and can be solved using the interior-point method \cite{ghojogh2021kkt}. The optimal kernel matrix can be decomposed using eigenvalue decomposition to find the embedding of data in RKHS, i.e., $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})$: \begin{align*} \ensuremath\boldsymbol{K}_x = \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma} \ensuremath\boldsymbol{V} \overset{(a)}{=} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}^{(1/2)} \ensuremath\boldsymbol{\Sigma}^{(1/2)} \ensuremath\boldsymbol{V} \overset{(b)}{=} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}), \end{align*} where $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Sigma}$ contain the eigenvectors and eigenvalues, respectively, $(a)$ is because $\ensuremath\boldsymbol{K}_x \succeq \ensuremath\boldsymbol{0}$ so its eigenvalues are non-negative and their square roots exist, and $(b)$ is because, according to Eq. (\ref{equation_Kernel_X}), we set $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) := \ensuremath\boldsymbol{\Sigma}^{(1/2)} \ensuremath\boldsymbol{V}$.
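A minimal numpy sketch of this recovery step is given below; note that numpy returns eigenvectors as the columns of \texttt{vecs}, so the matrix $\ensuremath\boldsymbol{V}$ above (with eigenvectors as rows) corresponds to \texttt{vecs.T}, and clipping small negative eigenvalues is a numerical-stability assumption. \begin{verbatim}
import numpy as np

def embedding_from_kernel(K_x):
    # Decompose K_x = V^T Sigma V (K_x is PSD) and return
    # Phi(X) = Sigma^{1/2} V so that Phi(X)^T Phi(X) = K_x.
    vals, vecs = np.linalg.eigh(K_x)      # K_x = vecs diag(vals) vecs^T
    vals = np.clip(vals, 0.0, None)       # remove tiny negative round-off
    Phi = np.sqrt(vals)[:, None] * vecs.T # rows scaled by sqrt eigenvalues
    return Phi

# Sanity check on a random PSD "kernel" matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 10))
K = A @ A.T
Phi = embedding_from_kernel(K)
assert np.allclose(Phi.T @ Phi, K)
\end{verbatim}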
\subsubsection{Kernel Discriminative Component Analysis} Here, we explain the kernel version of DCA \cite{hoi2006learning} which was introduced in Section \ref{section_DCA}. \begin{lemma} The generalized Mahalanobis distance metric in RKHS, with the pulled weight matrix to RKHS denoted by $\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{W})$, can be seen as measuring the Euclidean distance in RKHS after projection onto the column subspace of $\ensuremath\boldsymbol{T}$ where $\ensuremath\boldsymbol{T}$ is the coefficient matrix in Eq. (\ref{equation_kernelization_representation_theory}). In other words: \begin{equation}\label{equation_Mahalanobis_distance_in_RKHS} \begin{aligned} \|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - &\,\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_{\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{W})}^2 = \|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top}^2 \\ &= (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j), \end{aligned} \end{equation} where $\ensuremath\boldsymbol{k}_i := \ensuremath\boldsymbol{k}(\ensuremath\boldsymbol{X}, \ensuremath\boldsymbol{x}_i) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) = [k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_i), \dots, k(\ensuremath\boldsymbol{x}_n, \ensuremath\boldsymbol{x}_i)]^\top \in \mathbb{R}^n$ is the kernel vector between $\ensuremath\boldsymbol{X}$ and $\ensuremath\boldsymbol{x}_i$. \end{lemma} \begin{proof} We can have the decomposition of the weight matrix, i.e. Eq. (\ref{equation_W_U_UT}), in RKHS which is: \begin{align}\label{equation_W_U_UT_RKHS} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{W}) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U}) \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top. 
\end{align} The generalized Mahalanobis distance metric in RKHS is: \begin{align*} &\|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_{\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{W})}^2 \\ &\overset{(\ref{equation_W_U_UT_RKHS})}{=} (\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j))^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U}) \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top (\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)) \\ &= \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\big)^\top \\ & ~~~~~~~~~~~~~~~~~~~~~~ \big(\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\big) \\ &\overset{(\ref{equation_kernelization_representation_theory})}{=} \big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\big)^\top \\ & ~~~~~~~~~~~~~~ \big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\big) \\ &\overset{(a)}{=} \big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{k}_j\big)^\top \big(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{k}_j\big) \\ &= \big(\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\big)^\top \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top \big(\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\big) = \|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top}^2, \end{align*} where $(a)$ is because of the kernel trick, i.e., $\ensuremath\boldsymbol{k}(\ensuremath\boldsymbol{X}, \ensuremath\boldsymbol{x}_i) = \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i)$. Q.E.D. \end{proof} Let $\ensuremath\boldsymbol{\nu}_l := [\frac{1}{n_l} \sum_{i=1}^{n_l} k(\ensuremath\boldsymbol{x}_1, \ensuremath\boldsymbol{x}_i), \dots, \frac{1}{n_l} \sum_{i=1}^{n_l} k(\ensuremath\boldsymbol{x}_n, \ensuremath\boldsymbol{x}_i)]^\top \in \mathbb{R}^n$ where $n_l$ denotes the cardinality of the $l$-th class. Let $\ensuremath\boldsymbol{K}_w$ and $\ensuremath\boldsymbol{K}_b$ be the kernelized versions of $\ensuremath\boldsymbol{S}_w$ and $\ensuremath\boldsymbol{S}_b$, respectively (see Eqs. (\ref{equation_intra_cluster_scatter}) and (\ref{equation_inter_cluster_scatter})).
If $\mathcal{X}_l$ denotes the $l$-th class, we have: \begin{align} & \mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K}_w := \frac{1}{n} \sum_{l=1}^c \sum_{\ensuremath\boldsymbol{x}_i \in \mathcal{X}_l} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{\nu}_l) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{\nu}_l)^\top \\ &\mathbb{R}^{n \times n} \ni \ensuremath\boldsymbol{K}_b := \frac{1}{n} \sum_{l=1}^c \sum_{j=1}^c (\ensuremath\boldsymbol{\nu}_l - \ensuremath\boldsymbol{\nu}_j) (\ensuremath\boldsymbol{\nu}_l - \ensuremath\boldsymbol{\nu}_j)^\top. \end{align} We saw that the metric in RKHS can be seen as projection onto a subspace with the projection matrix $\ensuremath\boldsymbol{T}$. Therefore, Eq. (\ref{equation_optimization_DCA}) in RKHS becomes \cite{hoi2006learning}: \begin{equation}\label{equation_optimization_DCA_kernel} \begin{aligned} & \underset{\ensuremath\boldsymbol{T}}{\text{maximize}} & & \frac{\textbf{tr}(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_b \ensuremath\boldsymbol{T})}{\textbf{tr}(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_w \ensuremath\boldsymbol{T})}, \end{aligned} \end{equation} which is a generalized Rayleigh–Ritz quotient. The solution $\ensuremath\boldsymbol{T}$ to this optimization problem is the generalized eigenvalue problem $(\ensuremath\boldsymbol{K}_b, \ensuremath\boldsymbol{K}_w)$ \cite{ghojogh2019eigenvalue}. The weight matrix of the generalized Mahalanobis distance is obtained by Eqs. (\ref{equation_kernelization_representation_theory}) and (\ref{equation_W_U_UT_RKHS}). \subsubsection{Relevant to Kernel Fisher Discriminant Analysis} Here, we explain the kernel version of the metric learning method \cite{alipanahi2008distance} which was introduced in Section \ref{section_relation_to_FDA}. According to Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}), we have: \begin{align*} &\|\ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\phi}(\ensuremath\boldsymbol{x}_j)\|_{\ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{W})}^2 = (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) \\ &~~~~~~~~~~~~~~~~~~\overset{(a)}{=} \textbf{tr}\big((\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)\big) \\ &~~~~~~~~~~~~~~~~~~\overset{(b)}{=} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\big), \end{align*} where $(a)$ is because a scalar is equal to its trace and $(b)$ is because of the cyclic property of trace. Hence, Eq.
(\ref{equation_Sigma_S_prime_Sigma_D_prime}) in RKHS becomes: \begin{align*} & \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\big) \\ & = \textbf{tr}\Big(\ensuremath\boldsymbol{T}^\top \big(\frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \big) \ensuremath\boldsymbol{T}\Big) \\ &= \textbf{tr}(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{S}} \ensuremath\boldsymbol{T}), \end{align*} and likewise: \begin{align*} &\frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \textbf{tr}\big(\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\big) \\ &~~~~~~~~~~~~~~~~~~~~~~ = \textbf{tr}(\ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{D}} \ensuremath\boldsymbol{T}), \end{align*} where: \begin{align*} & \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{S}} := \frac{1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top, \\ & \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{D}} := \frac{1}{|\mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top. \end{align*} Hence, in RKHS, the objective of the optimization problem (\ref{equation_ML_optimization_spectral_method3_approach2}) becomes $\textbf{tr}(\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{D}}) \ensuremath\boldsymbol{T})$. We change the constraint in Eq. (\ref{equation_ML_optimization_spectral_method3_approach2}) to $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{U} = \ensuremath\boldsymbol{I}$.
In RKHS, this constraint becomes: \begin{align*} \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{U}) &\overset{(\ref{equation_kernelization_representation_theory})}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X})^\top \ensuremath\boldsymbol{\Phi}(\ensuremath\boldsymbol{X}) \ensuremath\boldsymbol{T} \\ &\overset{(\ref{equation_Kernel_X})}{=} \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T} \overset{\text{set}}{=} \ensuremath\boldsymbol{I}. \end{align*} Finally, Eq. (\ref{equation_ML_optimization_spectral_method3_approach2}) in RKHS becomes: \begin{equation}\label{equation_ML_optimization_spectral_method3_approach2_kernel} \begin{aligned} & \underset{\ensuremath\boldsymbol{T}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{D}}) \ensuremath\boldsymbol{T}) \\ & \text{subject to} & & \ensuremath\boldsymbol{T}^\top \ensuremath\boldsymbol{K}_x \ensuremath\boldsymbol{T} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} whose solution is a generalized eigenvalue problem $(\ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{S}} - \ensuremath\boldsymbol{\Sigma}^{\phi}_{\mathcal{D}}, \ensuremath\boldsymbol{K}_x)$ where $\ensuremath\boldsymbol{T}$ is the matrix of eigenvectors. The weight matrix of the generalized Mahalanobis distance is obtained by Eqs. (\ref{equation_kernelization_representation_theory}) and (\ref{equation_W_U_UT_RKHS}). This is relevant to kernel Fisher discriminant analysis \cite{mika1999fisher,ghojogh2019fisher} which minimizes and maximizes the intra-class and inter-class variances in RKHS. \subsubsection{Relevant to Kernel Support Vector Machine} Here, we explain the kernel version of the metric learning method \cite{tsang2003distance} which was introduced in Section \ref{section_relation_to_SVM}. It is relevant to kernel SVM. Using the kernel trick \cite{ghojogh2021reproducing} and Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}), Eq. (\ref{equation_relation_to_SVM_dual_optimization}) can be kernelized as \cite{tsang2003distance}: \begin{equation}\label{equation_relation_to_SVM_dual_optimization_kernel} \begin{aligned} & \underset{\{\alpha_{ij}\}}{\text{maximize}} ~~~~~ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) \\ & -\frac{1}{2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}} \alpha_{ij} \alpha_{kl} (k_{ik} - k_{il} - k_{jk} + k_{jl})^2 \\ & + \frac{\lambda_1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{S}} \alpha_{ij} (k_{ik} - k_{il} - k_{jk} + k_{jl})^2 \\ & \text{subject to} ~~~~~~~~~~ \frac{1}{\lambda_2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} \geq \nu, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~ \alpha_{ij} \in [0, \frac{\lambda_2}{|\mathcal{D}|}], \end{aligned} \end{equation} which is a quadratic programming problem and can be solved by optimization solvers. \subsection{Geometric Spectral Metric Learning} Some spectral metric learning methods are geometric methods which use Riemannian manifolds.
\subsubsection{Relevant to Kernel Support Vector Machine} Here, we explain the kernel version of the metric learning method \cite{tsang2003distance} which was introduced in Section \ref{section_relation_to_SVM}. It is relevant to kernel SVM. Using the kernel trick \cite{ghojogh2021reproducing} and Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}), Eq. (\ref{equation_relation_to_SVM_dual_optimization}) can be kernelized as \cite{tsang2003distance}: \begin{equation}\label{equation_relation_to_SVM_dual_optimization_kernel} \begin{aligned} & \underset{\{\alpha_{ij}\}}{\text{maximize}} ~~~~~ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} (k_{ii} + k_{jj} - 2k_{ij}) \\ & -\frac{1}{2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{D}} \alpha_{ij} \alpha_{kl} (k_{ik} - k_{il} - k_{jk} + k_{jl})^2 \\ & + \frac{\lambda_1}{|\mathcal{S}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \sum_{(\ensuremath\boldsymbol{x}_k, \ensuremath\boldsymbol{x}_l) \in \mathcal{S}} \alpha_{ij} (k_{ik} - k_{il} - k_{jk} + k_{jl})^2 \\ & \text{subject to} ~~~~~~~~~~ \frac{1}{\lambda_2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \alpha_{ij} \geq \nu, \\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~ \alpha_{ij} \in [0, \frac{\lambda_2}{|\mathcal{D}|}], \end{aligned} \end{equation} which is a quadratic programming problem and can be solved by optimization solvers. \subsection{Geometric Spectral Metric Learning} Some spectral metric learning methods are geometric methods which use Riemannian manifolds. In the following, we introduce the most well-known geometric methods. There are some other geometric methods, such as \cite{hauberg2012geometric}, which are not covered for brevity. \subsubsection{Geometric Mean Metric Learning}\label{section_geometric_mean_metric_learning} One of the geometric spectral metric learning methods is Geometric Mean Metric Learning (GMML) \cite{zadeh2016geometric}. Let $\ensuremath\boldsymbol{W}$ be the weight matrix in the generalized Mahalanobis distance for the similar points. \hfill\break \textbf{-- Regular GMML:} In GMML, we use the inverse of the weight matrix, i.e. $\ensuremath\boldsymbol{W}^{-1}$, for the dissimilar points. The optimization problem of GMML is \cite{zadeh2016geometric}: \begin{equation}\label{equation_GMML_optimization} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \\ & & & ~~~~~~~~~~~~~~~~~~ + \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}^{-1}}^2 \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} According to Eq. (\ref{equation_spectral_ML_first_method_trace_W_Sigma_S}), this problem can be restated as: \begin{equation}\label{equation_GMML_optimization_1} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}$ are defined in Eq. (\ref{equation_spectral_ML_first_method_Sigma_S}). Taking the derivative of the objective function w.r.t. $\ensuremath\boldsymbol{W}$ and setting it to zero gives: \begin{align} &\frac{\partial }{\partial \ensuremath\boldsymbol{W}} \big( \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \big) \nonumber\\ &= \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} - \ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{W}^{-1} \overset{\text{set}}{=} \ensuremath\boldsymbol{0} \implies \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} =\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{W}. \label{equation_GMML_solution_1} \end{align} This equation is the Riccati equation \cite{riccati1724animadversiones} and its solution is the midpoint of the geodesic connecting $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{-1}$ and $\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}$ {\citep[Section 1.2.13]{bhatia2007positive}}.
\begin{lemma}[{\citep[Chapter 6]{bhatia2007positive}}] The geodesic curve connecting two points $\ensuremath\boldsymbol{\Sigma}_1$ and $\ensuremath\boldsymbol{\Sigma}_2$ on the Symmetric Positive Definite (SPD) Riemannian manifold is denoted by $\ensuremath\boldsymbol{\Sigma}_1 \sharp_t \ensuremath\boldsymbol{\Sigma}_2$ and is computed as: \begin{align}\label{equation_SPD_geodesic} \ensuremath\boldsymbol{\Sigma}_1 \sharp_t \ensuremath\boldsymbol{\Sigma}_2 := \ensuremath\boldsymbol{\Sigma}_1^{(1/2)} \big(\ensuremath\boldsymbol{\Sigma}_1^{(-1/2)} \ensuremath\boldsymbol{\Sigma}_2 \ensuremath\boldsymbol{\Sigma}_1^{(-1/2)}\big)^t \ensuremath\boldsymbol{\Sigma}_1^{(1/2)}, \end{align} where $t \in [0,1]$. \end{lemma} Hence, the solution of Eq. (\ref{equation_GMML_solution_1}) is: \begin{align} \ensuremath\boldsymbol{W} &= \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{-1} \sharp_{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \nonumber\\ &\overset{(\ref{equation_SPD_geodesic})}{=} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} \big(\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}\big)^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)}. \label{equation_GMML_solution_1_1} \end{align} The proof of Eq. (\ref{equation_GMML_solution_1_1}) is as follows \cite{hajiabadi2019layered}: \begin{align*} &\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \overset{(\ref{equation_GMML_solution_1})}{=} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{W} \\ &\implies \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} = \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \\ &\implies (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)})^{(1/2)} \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~ = (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)})^{(1/2)} \\ &\implies (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)})^{(1/2)} \\ &~~~~~~~~~~~~~~~~ \overset{(a)}{=} ((\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}) (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}))^{(1/2)} \\ &~~~~~~~~~~~~~~~~ = (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}) \\ &\implies \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)})^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} \\ &~~~~~~~~~~~~~~~~ = \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}) 
\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} = \ensuremath\boldsymbol{W}, \end{align*} where $(a)$ is because $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \succeq \ensuremath\boldsymbol{0}$, so its eigenvalues are non-negative and taking the square root of the eigenvalue matrix in its eigenvalue decomposition gives $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} = \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}$. \hfill\break \textbf{-- Regularized GMML:} The matrix $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}$ might be singular or near-singular and hence non-invertible. Therefore, we regularize Eq. (\ref{equation_GMML_optimization_1}) to make the weight matrix close to a known positive definite prior matrix $\ensuremath\boldsymbol{W}_0$: \begin{equation}\label{equation_GMML_optimization_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ & & & + \lambda \big( \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{W}_0^{-1}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{W}_0) - 2 d \big), \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter. The regularization term is the symmetrized log-determinant divergence between $\ensuremath\boldsymbol{W}$ and $\ensuremath\boldsymbol{W}_0$. Taking the derivative of the objective function w.r.t. $\ensuremath\boldsymbol{W}$ and setting it to zero gives: \begin{align*} &\frac{\partial }{\partial \ensuremath\boldsymbol{W}} \big( \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) + \lambda \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{W}_0^{-1}) \\ &~~~~~~~~~ + \lambda \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{W}_0) - 2\lambda d \big) \nonumber\\ &= \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} - \ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{W}^{-1} + \lambda \ensuremath\boldsymbol{W}_0^{-1} \\ & ~~~~~~~~~~~~~~~~~~~~~~ - \lambda \ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{W}_0 \ensuremath\boldsymbol{W}^{-1} \overset{\text{set}}{=} \ensuremath\boldsymbol{0} \\ & \implies \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} + \lambda \ensuremath\boldsymbol{W}_0 =\ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} + \lambda \ensuremath\boldsymbol{W}_0^{-1}) \ensuremath\boldsymbol{W}, \end{align*} which is again a Riccati equation \cite{riccati1724animadversiones} whose solution is the midpoint of the geodesic connecting $(\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} + \lambda \ensuremath\boldsymbol{W}_0^{-1})^{-1}$ and $(\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} + \lambda \ensuremath\boldsymbol{W}_0)$: \begin{align}\label{equation_GMML_solution_2} \ensuremath\boldsymbol{W} &= (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} + \lambda \ensuremath\boldsymbol{W}_0^{-1})^{-1} \sharp_{(1/2)} (\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} + \lambda \ensuremath\boldsymbol{W}_0). \end{align}
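As a numerical illustration, the closed-form solutions of Eqs. (\ref{equation_GMML_solution_1_1}) and (\ref{equation_GMML_solution_2}) can be computed with matrix square roots and fractional matrix powers. The following is a minimal sketch assuming \texttt{numpy} and \texttt{scipy}; the function and variable names are ours, and the last lines verify the Riccati equation (\ref{equation_GMML_solution_1}) numerically: \begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm, fractional_matrix_power

def spd_geodesic(S1, S2, t=0.5):
    """Sigma_1 #_t Sigma_2 on the SPD manifold, Eq. of the geodesic."""
    S1_half = np.real(sqrtm(S1))
    S1_nhalf = np.linalg.inv(S1_half)
    inner = fractional_matrix_power(S1_nhalf @ S2 @ S1_nhalf, t)
    return S1_half @ np.real(inner) @ S1_half

def gmml(Sig_S, Sig_D, t=0.5, lam=0.0, W0=None):
    """Regular (lam=0), regularized, or weighted GMML in closed form."""
    if lam > 0.0 and W0 is not None:
        A = np.linalg.inv(Sig_S + lam * np.linalg.inv(W0))
        B = Sig_D + lam * W0
    else:
        A, B = np.linalg.inv(Sig_S), Sig_D
    return spd_geodesic(A, B, t)

# sanity check of the Riccati equation W Sig_S W = Sig_D (for t = 1/2):
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5)); Sig_S = M @ M.T + np.eye(5)
M = rng.standard_normal((5, 5)); Sig_D = M @ M.T + np.eye(5)
W = gmml(Sig_S, Sig_D)
print(np.allclose(W @ Sig_S @ W, Sig_D))   # True
\end{verbatim} Setting $t \neq 1/2$ in this sketch gives the weighted variant discussed next.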
\textbf{-- Weighted GMML:} Eq. (\ref{equation_GMML_optimization_1}) can be restated as: \begin{equation}\label{equation_GMML_optimization_3} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \delta^2(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{-1}) + \delta^2(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\delta(\cdot,\cdot)$ is the Riemannian distance on the SPD manifold {\citep[Eq. 1.1]{arsigny2007geometric}} (the minimizer of such a sum of squared distances is known as the Fr{\'e}chet mean): \begin{align*} \delta(\ensuremath\boldsymbol{\Sigma}_1, \ensuremath\boldsymbol{\Sigma}_2) := \|\log(\ensuremath\boldsymbol{\Sigma}_2^{(-1/2)} \ensuremath\boldsymbol{\Sigma}_1 \ensuremath\boldsymbol{\Sigma}_2^{(-1/2)})\|_F, \end{align*} where $\|.\|_F$ is the Frobenius norm. We can weight the objective in Eq. (\ref{equation_GMML_optimization_3}): \begin{equation}\label{equation_GMML_optimization_4} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & (1-t) \delta^2(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{-1}) + t \delta^2(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $t \in [0,1]$ is a hyperparameter. The solution of this problem, with the regularization of Eq. (\ref{equation_GMML_optimization_2}) also included, is the weighted version of Eq. (\ref{equation_GMML_solution_2}): \begin{align}\label{equation_GMML_solution_3} \ensuremath\boldsymbol{W} &= (\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} + \lambda \ensuremath\boldsymbol{W}_0^{-1})^{-1} \sharp_t (\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} + \lambda \ensuremath\boldsymbol{W}_0). \end{align} \subsubsection{Low-rank Geometric Mean Metric Learning} We can learn a low-rank weight matrix in GMML \cite{bhutani2018low}, where the rank of the weight matrix is set to be $p \ll d$: \begin{equation}\label{equation_GMML_lowRank_optimization_1} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \\ & & & \textbf{rank}(\ensuremath\boldsymbol{W}) = p. \end{aligned} \end{equation} We can decompose it using eigenvalue decomposition as done in Eq. (\ref{equation_W_U_UT}), i.e., $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$, where we only have $p$ eigenvectors and $p$ eigenvalues. Therefore, the sizes of the matrices are $\ensuremath\boldsymbol{V} \in \mathbb{R}^{d \times p}$, $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$, and $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d \times p}$. By this decomposition, the objective function in Eq.
(\ref{equation_GMML_lowRank_optimization_1}) can be restated as: \begin{align*} &\textbf{tr}(\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda}^{-1} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}) \\ &\overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{V}) + \textbf{tr}(\ensuremath\boldsymbol{\Lambda}^{-1} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{V}) \\ &\overset{(b)}{=} \textbf{tr}(\ensuremath\boldsymbol{\Lambda} \widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{\Lambda}^{-1} \widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{D}}), \end{align*} where the inverse of the low-rank $\ensuremath\boldsymbol{W}$ is understood as the pseudo-inverse $\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda}^{-1} \ensuremath\boldsymbol{V}^\top$ (noting that $\ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}$), $(a)$ is because of the cyclic property of trace, and $(b)$ is because we define $\widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{S}} := \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{V}$ and $\widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{D}} := \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{V}$. Noticing that the matrix of eigenvectors $\ensuremath\boldsymbol{V}$ has orthonormal columns, Eq. (\ref{equation_GMML_lowRank_optimization_1}) can be restated as: \begin{equation}\label{equation_GMML_lowRank_optimization_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{\Lambda}, \ensuremath\boldsymbol{V}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{\Lambda} \widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{S}}) + \textbf{tr}(\ensuremath\boldsymbol{\Lambda}^{-1} \widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{D}}) \\ & \text{subject to} & & \ensuremath\boldsymbol{\Lambda} \succeq \ensuremath\boldsymbol{0}, \\ & & & \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}, \end{aligned} \end{equation} where $\textbf{rank}(\ensuremath\boldsymbol{W}) = p$ is automatically satisfied by taking $\ensuremath\boldsymbol{V} \in \mathbb{R}^{d \times p}$ and $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ in the decomposition. This problem can be solved by alternating optimization \cite{ghojogh2021kkt}. If the variable $\ensuremath\boldsymbol{V}$ is fixed, minimization w.r.t. $\ensuremath\boldsymbol{\Lambda}$ is similar to the problem (\ref{equation_GMML_optimization_1}); hence, its solution is similar to Eq. (\ref{equation_GMML_solution_1_1}), i.e., $\ensuremath\boldsymbol{\Lambda} = {\widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{S}}}^{-1} \sharp_{(1/2)} \widetilde{\ensuremath\boldsymbol{\Sigma}}_{\mathcal{D}}$ (see Eq. (\ref{equation_SPD_geodesic}) for the definition of $\sharp_t$). If $\ensuremath\boldsymbol{\Lambda}$ is fixed, the orthogonality constraint $\ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}$ can be modeled by $\ensuremath\boldsymbol{V}$ belonging to the Grassmannian manifold $G(p,d)$ which is the set of $p$-dimensional subspaces of $\mathbb{R}^d$.
To sum up, the alternating optimization is: \begin{align*} & \ensuremath\boldsymbol{\Lambda}^{(\tau+1)} = (\ensuremath\boldsymbol{V}^{(\tau)\top} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{V}^{(\tau)})^{-1} \sharp_{(1/2)} (\ensuremath\boldsymbol{V}^{(\tau)\top} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{V}^{(\tau)}), \\ & \ensuremath\boldsymbol{V}^{(\tau+1)} := \arg \min_{\ensuremath\boldsymbol{V} \in G(p,d)} \Big( \textbf{tr}(\ensuremath\boldsymbol{\Lambda}^{(\tau+1)} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}} \ensuremath\boldsymbol{V}) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + \textbf{tr}((\ensuremath\boldsymbol{\Lambda}^{(\tau+1)})^{-1} \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{V})\Big), \end{align*} where $\tau$ is the iteration index. The optimization of $\ensuremath\boldsymbol{V}$ can be solved by Riemannian optimization \cite{absil2009optimization}. \subsubsection{Geometric Mean Metric Learning for Partial Labels} Partial label learning \cite{cour2011learning} refers to the setting where a set of candidate labels is available for every data point. GMML can be modified to be used for partial label learning \cite{zhou2018geometric}. Let $\mathcal{Y}_i$ denote the set of candidate labels for $\ensuremath\boldsymbol{x}_i$. If there are $q$ candidate labels in total, we denote $\ensuremath\boldsymbol{y}_i = [y_{i1}, \dots, y_{iq}]^\top \in \{0,1\}^q$ where $y_{ij}$ is one if the $j$-th label is a candidate label for $\ensuremath\boldsymbol{x}_i$ and is zero otherwise. We define $\ensuremath\boldsymbol{X}_i^+ := \{\ensuremath\boldsymbol{x}_j | j=1, \dots, n, j \neq i, \mathcal{Y}_i \cap \mathcal{Y}_j \neq \varnothing\}$ and $\ensuremath\boldsymbol{X}_i^- := \{\ensuremath\boldsymbol{x}_j | j=1, \dots, n, \mathcal{Y}_i \cap \mathcal{Y}_j = \varnothing\}$. In other words, $\ensuremath\boldsymbol{X}_i^+$ and $\ensuremath\boldsymbol{X}_i^-$ are the data points which share and do not share some candidate labels with $\ensuremath\boldsymbol{x}_i$, respectively. Let $\mathcal{N}_i^+$ be the indices of the $k$ nearest neighbors of $\ensuremath\boldsymbol{x}_i$ among $\ensuremath\boldsymbol{X}_i^+$. Also, let $\mathcal{N}_i^-$ be the indices of points in $\ensuremath\boldsymbol{X}_i^-$ whose distance from $\ensuremath\boldsymbol{x}_i$ is smaller than the distance of the furthest point in $\mathcal{N}_i^+$ from $\ensuremath\boldsymbol{x}_i$. In other words, $\mathcal{N}_i^- := \{j | j=1, \dots, n, \ensuremath\boldsymbol{x}_j \in \ensuremath\boldsymbol{X}_i^-, \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2 \leq \max_{t \in \mathcal{N}_i^+} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_t\|_2\}$. Let $\ensuremath\boldsymbol{w}_i^{(1)} = [w_{i,t}^{(1)}, \forall t \in \mathcal{N}_i^+]^\top \in \mathbb{R}^k$ contain the probabilities that each of the $k$ neighbors of $\ensuremath\boldsymbol{x}_i$ shares the same label with $\ensuremath\boldsymbol{x}_i$.
It can be estimated by linear reconstruction of $\ensuremath\boldsymbol{y}_i$ from the neighbors' $\ensuremath\boldsymbol{y}_t$'s: \begin{equation*} \begin{aligned} & \underset{\ensuremath\boldsymbol{w}_i^{(1)}}{\text{minimize}} & & \frac{1}{q} \big\|\ensuremath\boldsymbol{y}_i - \sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(1)} \ensuremath\boldsymbol{y}_t\big\|_2^2 + \frac{\lambda_1}{k} \sum_{t \in \mathcal{N}_i^+} (w_{i,t}^{(1)})^2 \\ & \text{subject to} & & w_{i,t}^{(1)} \geq 0, \quad t \in \mathcal{N}_i^+, \end{aligned} \end{equation*} where $\lambda_1 > 0$ is the regularization parameter. Let $\ensuremath\boldsymbol{w}_i^{(2)} = [w_{i,t}^{(2)}, \forall t \in \mathcal{N}_i^+]^\top \in \mathbb{R}^k$ denote the coefficients for the linear reconstruction of $\ensuremath\boldsymbol{x}_i$ by its $k$ nearest neighbors. It is obtained as: \begin{equation*} \begin{aligned} & \underset{\ensuremath\boldsymbol{w}_i^{(2)}}{\text{minimize}} & & \big\|\ensuremath\boldsymbol{x}_i - \sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(2)} \ensuremath\boldsymbol{x}_t\big\|_2^2 \\ & \text{subject to} & & w_{i,t}^{(2)} \geq 0, \quad t \in \mathcal{N}_i^+. \end{aligned} \end{equation*} These two optimization problems are quadratic programs and can be solved using interior-point methods \cite{ghojogh2021kkt}. The main optimization problem of GMML for partial labels is \cite{zhou2018geometric}: \begin{equation}\label{equation_optimization_GMML_partial_labels} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{W}\ensuremath\boldsymbol{\Sigma}'_\mathcal{S}) + \textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}'_\mathcal{D}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where: \begin{align*} & \ensuremath\boldsymbol{\Sigma}'_\mathcal{S} := \sum_{i=1}^n \Bigg( \frac{\sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(1)} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_t) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_t)^\top}{\sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(1)}} \\ & ~~~~~~~~~~ + \lambda \Big(\ensuremath\boldsymbol{x}_i - \sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(2)} \ensuremath\boldsymbol{x}_t\Big) \Big(\ensuremath\boldsymbol{x}_i - \sum_{t \in \mathcal{N}_i^+} w_{i,t}^{(2)} \ensuremath\boldsymbol{x}_t\Big)^\top \Bigg), \\ & \ensuremath\boldsymbol{\Sigma}'_\mathcal{D} := \sum_{i=1}^n \sum_{t \in \mathcal{N}_i^-} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_t) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_t)^\top, \end{align*} and $\lambda > 0$ is a trade-off parameter. Minimizing the first term of $\ensuremath\boldsymbol{\Sigma}'_\mathcal{S}$ in $\textbf{tr}(\ensuremath\boldsymbol{W}\ensuremath\boldsymbol{\Sigma}'_\mathcal{S})$ decreases the distances of similar points which share some candidate labels. Minimizing the second term of $\ensuremath\boldsymbol{\Sigma}'_\mathcal{S}$ in $\textbf{tr}(\ensuremath\boldsymbol{W}\ensuremath\boldsymbol{\Sigma}'_\mathcal{S})$ tries to preserve the linear reconstruction of $\ensuremath\boldsymbol{x}_i$ by its neighbors after projection onto the subspace of the metric. Minimizing $\textbf{tr}(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{\Sigma}'_\mathcal{D})$ increases the distances of dissimilar points which do not share any candidate labels. The problem (\ref{equation_optimization_GMML_partial_labels}) is similar to the problem (\ref{equation_GMML_optimization_1}); hence, its solution is similar to Eq.
(\ref{equation_GMML_solution_1_1}), i.e., $\ensuremath\boldsymbol{W} = {\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}}^{-1} \sharp_{(1/2)} \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}$ (see Eq. (\ref{equation_SPD_geodesic}) for the definition of $\sharp_t$). \subsubsection{Geometric Mean Metric Learning on SPD and Grassmannian Manifolds} The GMML method \cite{zadeh2016geometric}, introduced in Section \ref{section_geometric_mean_metric_learning}, can be implemented on Symmetric Positive Definite (SPD) and Grassmannian manifolds \cite{zhu2018towards}. If $\ensuremath\boldsymbol{X}_i, \ensuremath\boldsymbol{X}_j \in \mathcal{S}_{++}^d$ are points on the SPD manifold, the distance metric on this manifold is \cite{zhu2018towards}: \begin{align} d_{\ensuremath\boldsymbol{W}}(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) := \textbf{tr}\big(\ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j) (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j)\big), \end{align} where $\ensuremath\boldsymbol{W} \in \mathbb{R}^{d \times d}$ is the weight matrix of the metric and $\ensuremath\boldsymbol{T}_i := \log(\ensuremath\boldsymbol{X}_i)$ is the matrix logarithm of $\ensuremath\boldsymbol{X}_i$ (the logarithm map on the SPD manifold). The Grassmannian manifold $Gr(k,d)$ is the set of $k$-dimensional subspaces of the $d$-dimensional vector space. A point in $Gr(k,d)$ is a linear subspace spanned by a full-rank $\ensuremath\boldsymbol{X}_i \in \mathbb{R}^{d \times k}$ which has orthonormal columns, i.e., $\ensuremath\boldsymbol{X}_i^\top \ensuremath\boldsymbol{X}_i = \ensuremath\boldsymbol{I}$. If $\ensuremath\boldsymbol{M} \in \mathbb{R}^{d \times r}$ is any matrix, we define $\ensuremath\boldsymbol{X}'_i$ in a way that $\ensuremath\boldsymbol{M}^\top \ensuremath\boldsymbol{X}'_i$ is the orthonormal component of $\ensuremath\boldsymbol{M}^\top \ensuremath\boldsymbol{X}_i$. If $\mathbb{R}^{d \times d} \ni \ensuremath\boldsymbol{T}_{ij} := \ensuremath\boldsymbol{X}'_i \ensuremath\boldsymbol{X}^{'\top}_i - \ensuremath\boldsymbol{X}'_j \ensuremath\boldsymbol{X}^{'\top}_j$, the distance on the Grassmannian manifold is \cite{zhu2018towards}: \begin{align} d_{\ensuremath\boldsymbol{W}}(\ensuremath\boldsymbol{T}_{ij}) := \textbf{tr}\big(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{T}_{ij} \ensuremath\boldsymbol{T}_{ij}\big), \end{align} where $\ensuremath\boldsymbol{W} \in \mathbb{R}^{d \times d}$ is the weight matrix of the metric. Similar to the optimization problem of GMML, i.e. Eq. (\ref{equation_GMML_optimization}), we solve the following problem for the SPD manifold: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{S}} \textbf{tr}\big(\ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j) (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j)\big) \\ & & & + \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{D}} \textbf{tr}\big(\ensuremath\boldsymbol{W}^{-1} (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j) (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j)\big) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}.
\end{aligned} \end{equation} Likewise, for the Grassmannian manifold, the optimization problem is: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{S}} \textbf{tr}\big(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{T}_{ij} \ensuremath\boldsymbol{T}_{ij}\big) \\ & & & + \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{D}} \textbf{tr}\big(\ensuremath\boldsymbol{W}^{-1} \ensuremath\boldsymbol{T}_{ij} \ensuremath\boldsymbol{T}_{ij}\big) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} For the SPD manifold, we define: \begin{align*} & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} := \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{S}} (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j) (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j), \\ & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}} := \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{D}} (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j) (\ensuremath\boldsymbol{T}_i - \ensuremath\boldsymbol{T}_j), \end{align*} and, for the Grassmannian manifold, we define: \begin{align*} & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}} := \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{S}} \ensuremath\boldsymbol{T}_{ij} \ensuremath\boldsymbol{T}_{ij}, \\ & \ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}} := \sum_{(\ensuremath\boldsymbol{T}_i, \ensuremath\boldsymbol{T}_j) \in \mathcal{D}} \ensuremath\boldsymbol{T}_{ij} \ensuremath\boldsymbol{T}_{ij}. \end{align*} Hence, for either the SPD or the Grassmannian manifold, the optimization problem becomes Eq. (\ref{equation_GMML_optimization_1}) in which $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}$ are replaced with $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{D}}$, respectively.
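To make this concrete, the following is a minimal sketch of GMML between SPD data points, assuming \texttt{numpy} and \texttt{scipy}; the names are ours, $\ensuremath\boldsymbol{T}_i = \log(\ensuremath\boldsymbol{X}_i)$ is computed by the matrix logarithm, and a small jitter keeps $\ensuremath\boldsymbol{\Sigma}'_{\mathcal{S}}$ invertible when the number of pairs is small: \begin{verbatim}
import numpy as np
from scipy.linalg import logm, sqrtm

def spd_midpoint(A, B):
    """Geodesic midpoint A #_{1/2} B of two SPD matrices."""
    R = np.real(sqrtm(A))
    Rn = np.linalg.inv(R)
    return R @ np.real(sqrtm(Rn @ B @ Rn)) @ R

def gmml_on_spd(X, S_pairs, D_pairs, jitter=1e-6):
    """GMML where each data point X[i] is a d x d SPD matrix.

    T_i = logm(X_i); Sigma'_S and Sigma'_D are built from the T_i's
    and the weight matrix is the geodesic midpoint, as in regular GMML.
    """
    T = [np.real(logm(Xi)) for Xi in X]
    d = T[0].shape[0]
    Sig_S = sum((T[i] - T[j]) @ (T[i] - T[j]) for i, j in S_pairs) \
            + jitter * np.eye(d)
    Sig_D = sum((T[i] - T[j]) @ (T[i] - T[j]) for i, j in D_pairs) \
            + jitter * np.eye(d)
    return spd_midpoint(np.linalg.inv(Sig_S), Sig_D)
\end{verbatim} The Grassmannian case is analogous, with $\ensuremath\boldsymbol{T}_{ij}$ built from the projection matrices instead of matrix logarithms.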
\subsubsection{Metric Learning on Stiefel and SPD Manifolds} According to Eq. (\ref{equation_W_U_UT}), the weight matrix in the metric can be decomposed as $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top$. If we do not restrict $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Lambda}$ to be the matrices of eigenvectors and eigenvalues as in Eq. (\ref{equation_W_U_UT}), we can learn both $\ensuremath\boldsymbol{V} \in \mathbb{R}^{d \times p}$ and $\ensuremath\boldsymbol{\Lambda} \in \mathbb{R}^{p \times p}$ by optimization \cite{harandi2017joint}. The optimization problem in this method is: \begin{equation} \begin{aligned} & \underset{\ensuremath\boldsymbol{V}, \ensuremath\boldsymbol{\Lambda}}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \log(1 + q_{ij}) \\ & & &+ \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \log(1 + q_{ij}^{-1}) \\ & & &+\lambda \Big( \textbf{tr}(\ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{\Lambda}_0^{-1}) - \log\big(\textbf{det}(\ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{\Lambda}_0^{-1})\big) - p \Big) \\ & \text{subject to} & & \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}, \\ & & & \ensuremath\boldsymbol{\Lambda} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter, $\textbf{det}(.)$ denotes the determinant of a matrix, $\ensuremath\boldsymbol{\Lambda}_0 \in \mathbb{R}^{p \times p}$ is a known prior for $\ensuremath\boldsymbol{\Lambda}$ (the regularization term is the log-determinant divergence between them), and $q_{ij}$ is defined using the generalized Mahalanobis distance metric: \begin{align*} & q_{ij} := \exp(\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{V} \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}^\top}). \end{align*} The constraint $\ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}$ means that the matrix $\ensuremath\boldsymbol{V}$ belongs to the Stiefel manifold $\text{St}(p,d) := \{\ensuremath\boldsymbol{V} \in \mathbb{R}^{d \times p} | \ensuremath\boldsymbol{V}^\top \ensuremath\boldsymbol{V} = \ensuremath\boldsymbol{I}\}$ and the constraint $\ensuremath\boldsymbol{\Lambda} \succeq \ensuremath\boldsymbol{0}$ means $\ensuremath\boldsymbol{\Lambda}$ belongs to the SPD manifold $\mathcal{S}^p_{++}$. Hence, these two variables belong to the product manifold $\text{St}(p,d) \times \mathcal{S}^p_{++}$ and we can solve this optimization problem using Riemannian optimization methods \cite{absil2009optimization}. This method can also be kernelized; the reader can refer to {\citep[Section 4]{harandi2017joint}} for its kernel version. \subsubsection{Curvilinear Distance Metric Learning (CDML)} \begin{lemma}[\cite{chen2019curvilinear}] The generalized Mahalanobis distance can be restated as: \begin{align}\label{equation_metric_int_arc_length} & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 = \sum_{l=1}^p \|\ensuremath\boldsymbol{u}_l\|_2^2 \Big( \int_{T_l(\ensuremath\boldsymbol{x}_i)}^{T_l(\ensuremath\boldsymbol{x}_j)} \|\ensuremath\boldsymbol{u}_l\|_2\, dt \Big)^2, \end{align} where $\ensuremath\boldsymbol{u}_l \in \mathbb{R}^d$ is the $l$-th column of $\ensuremath\boldsymbol{U}$ in Eq. (\ref{equation_W_U_UT}), $t \in \mathbb{R}$, and $T_l(\ensuremath\boldsymbol{x}) \in \mathbb{R}$ is the projection of $\ensuremath\boldsymbol{x}$ satisfying $(\ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}) - \ensuremath\boldsymbol{x})^\top \ensuremath\boldsymbol{u}_l = 0$.
\end{lemma} \begin{proof} \begin{align*} & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 = (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \\ &\overset{(\ref{equation_W_U_UT})}{=} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) = \|\ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)\|_2^2 \\ &= \|[\ensuremath\boldsymbol{u}_1^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j), \dots, \ensuremath\boldsymbol{u}_p^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)]^\top\|_2^2 \\ &= \sum_{l=1}^p \big( \ensuremath\boldsymbol{u}_l^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \big)^2 \\ &\overset{(a)}{=} \sum_{l=1}^p \|\ensuremath\boldsymbol{u}_l\|_2^2 \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_2^2 \cos^2(\ensuremath\boldsymbol{u}_l, \ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) \\ &\overset{(b)}{=} \sum_{l=1}^p \|\ensuremath\boldsymbol{u}_l\|_2^2 \|\ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}_j)\|_2^2, \end{align*} where $(a)$ is because of the inner-product formula $\ensuremath\boldsymbol{u}_l^\top \ensuremath\boldsymbol{v} = \|\ensuremath\boldsymbol{u}_l\|_2 \|\ensuremath\boldsymbol{v}\|_2 \cos(\ensuremath\boldsymbol{u}_l, \ensuremath\boldsymbol{v})$ and $(b)$ is because of $(\ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}) - \ensuremath\boldsymbol{x})^\top \ensuremath\boldsymbol{u}_l = 0$. The distance $\|\ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}_j)\|_2$ can be replaced by the length of the arc between $T_l(\ensuremath\boldsymbol{x}_i)$ and $T_l(\ensuremath\boldsymbol{x}_j)$ on the straight line $\ensuremath\boldsymbol{u}_l t$ for $t \in \mathbb{R}$. This gives Eq. (\ref{equation_metric_int_arc_length}). Q.E.D. \end{proof} The condition $(\ensuremath\boldsymbol{u}_l T_l(\ensuremath\boldsymbol{x}) - \ensuremath\boldsymbol{x})^\top \ensuremath\boldsymbol{u}_l = 0$ is equivalent to finding the closest point on the line $\ensuremath\boldsymbol{u}_l t, \forall t \in \mathbb{R}$, i.e., $T_l(\ensuremath\boldsymbol{x}) := \arg \min_{t \in \mathbb{R}} \|\ensuremath\boldsymbol{u}_l t - \ensuremath\boldsymbol{x}\|_2^2$ \cite{chen2019curvilinear}. This can be generalized to find the closest point on a geodesic curve $\ensuremath\boldsymbol{\theta}_l(t)$ rather than on the line $\ensuremath\boldsymbol{u}_l t$: \begin{align} T_{\ensuremath\boldsymbol{\theta}_l}(\ensuremath\boldsymbol{x}) := \arg \min_{t \in \mathbb{R}} \|\ensuremath\boldsymbol{\theta}_l(t) - \ensuremath\boldsymbol{x}\|_2^2. \end{align} Hence, we can replace the arc length of the straight line in Eq. (\ref{equation_metric_int_arc_length}) with the arc length of the curve: \begin{align}\label{equation_metric_int_arc_length_2} & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 = \sum_{l=1}^p \alpha_l \Big( \int_{T_{\ensuremath\boldsymbol{\theta}_l}(\ensuremath\boldsymbol{x}_i)}^{T_{\ensuremath\boldsymbol{\theta}_l}(\ensuremath\boldsymbol{x}_j)} \|\ensuremath\boldsymbol{\theta}'_l(t)\|_2\, dt \Big)^2, \end{align} where $\ensuremath\boldsymbol{\theta}'_l(t)$ is the derivative of $\ensuremath\boldsymbol{\theta}_l(t)$ w.r.t.
$t$ and $\alpha_l := ( \int_{0}^{1} \|\ensuremath\boldsymbol{\theta}'_l(t)\|_2\, dt )^2$ is the scale factor. Curvilinear Distance Metric Learning (CDML) \cite{chen2019curvilinear} uses this generalization of the distance metric to curvy geodesics on a manifold, i.e., Eq. (\ref{equation_metric_int_arc_length_2}). The optimization problem in CDML is: \begin{equation}\label{equation_CDML_optimization} \begin{aligned} & \underset{\ensuremath\boldsymbol{\Theta}}{\text{minimize}} & & \frac{1}{|\mathcal{S} \cup \mathcal{D}|} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \cup \mathcal{D}} \mathcal{L}(\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2; y_{ij}) + \lambda \Omega(\ensuremath\boldsymbol{\Theta}), \end{aligned} \end{equation} where the sum is over the labeled pairs, $\ensuremath\boldsymbol{\Theta} := [\ensuremath\boldsymbol{\theta}_1, \dots, \ensuremath\boldsymbol{\theta}_p]$, $y_{ij}=1$ if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$ and $y_{ij}=0$ if $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}$, $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2$ is defined in Eq. (\ref{equation_metric_int_arc_length_2}), $\lambda>0$ is the regularization parameter, $\mathcal{L}(.)$ is some loss function, and $\Omega(\ensuremath\boldsymbol{\Theta})$ is some penalty term. The optimal $\ensuremath\boldsymbol{\Theta}$, obtained from Eq. (\ref{equation_CDML_optimization}), can be used in Eq. (\ref{equation_metric_int_arc_length_2}) to have the optimal distance metric. A recent follow-up of CDML is \cite{zhang2021curvilinear}. \subsection{Adversarial Metric Learning (AML)} Adversarial Metric Learning (AML) \cite{chen2018adversarial} uses adversarial learning \cite{goodfellow2014generative,ghojogh2021generative} for metric learning. On one hand, we have a distinguishment stage which tries to discriminate the dissimilar points and push the similar points close to one another. On the other hand, we have a confusion or adversarial stage which tries to fool the metric learning method by pulling the dissimilar points close to each other and pushing the similar points away. The distinguishment and confusion stages are trained simultaneously and they gradually make each other stronger. From the dataset, we form random pairs $\mathcal{X} := \{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}'_i)\}_{i=1}^{n/2}$. If $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}'_i$ are similar points, we set $y_i=1$ and if they are dissimilar, we set $y_i=-1$. We also generate some random new points in pairs $\mathcal{X}^g := \{(\ensuremath\boldsymbol{x}_i^g, \ensuremath\boldsymbol{x}^{g'}_i)\}_{i=1}^{n/2}$. The generated points are updated iteratively by optimization of the confusion stage to fool the metric. The loss functions of both stages are based on Eq. (\ref{equation_GMML_optimization}) used in geometric mean metric learning (see Section \ref{section_geometric_mean_metric_learning}).
The alternating optimization \cite{ghojogh2021kkt} used in AML is: \begin{equation*} \begin{aligned} & \ensuremath\boldsymbol{W}^{(t+1)} := \arg \min_{\ensuremath\boldsymbol{W}} \Big( \sum_{y_i = 1} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}'_i\|_{\ensuremath\boldsymbol{W}}^2 \\ & + \sum_{y_i = -1} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}'_i\|_{\ensuremath\boldsymbol{W}^{-1}}^2 + \lambda_1 \big( \sum_{y_i = 1} \|\ensuremath\boldsymbol{x}^{g(t)}_i - \ensuremath\boldsymbol{x}^{g'(t)}_i\|_{\ensuremath\boldsymbol{W}}^2 \\ & + \sum_{y_i = -1} \|\ensuremath\boldsymbol{x}^{g(t)}_i - \ensuremath\boldsymbol{x}^{g'(t)}_i\|_{\ensuremath\boldsymbol{W}^{-1}}^2 \big) \Big), \end{aligned} \end{equation*} \begin{equation} \begin{aligned} & {\mathcal{X}}^{g(t+1)} := \arg \min_{\mathcal{X}^g} \Big( \sum_{y_i = -1} \|\ensuremath\boldsymbol{x}^g_i - \ensuremath\boldsymbol{x}^{g'}_i\|_{\ensuremath\boldsymbol{W}^{(t+1)}}^2 \\ & + \sum_{y_i = 1} \|\ensuremath\boldsymbol{x}^g_i - \ensuremath\boldsymbol{x}^{g'}_i\|_{(\ensuremath\boldsymbol{W}^{(t+1)})^{-1}}^2 + \lambda_2 \big( \sum_{i=1}^{n/2} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}^g_i\|_{\ensuremath\boldsymbol{W}^{(t+1)}}^2 \\ &+ \sum_{i=1}^{n/2} \|\ensuremath\boldsymbol{x}'_i - \ensuremath\boldsymbol{x}^{g'}_i\|_{\ensuremath\boldsymbol{W}^{(t+1)}}^2 \big) \Big), \end{aligned} \end{equation} until convergence, where the generated pairs are the optimization variables of the second update and $\lambda_1, \lambda_2>0$ are the regularization parameters. Updating $\ensuremath\boldsymbol{W}$ and $\mathcal{X}^g$ are the distinguishment and confusion stages, respectively. In the distinguishment stage, we find a weight matrix $\ensuremath\boldsymbol{W}$ to minimize the distances of similar points in both $\mathcal{X}$ and $\mathcal{X}^g$ and maximize the distances of dissimilar points in both $\mathcal{X}$ and $\mathcal{X}^g$. In the confusion stage, we update the generated points $\mathcal{X}^g$ to adversarially pull the generated dissimilar pairs close to each other and push the generated similar pairs away from one another. In this stage, we also make the points $\ensuremath\boldsymbol{x}_i^g$ and $\ensuremath\boldsymbol{x}_i^{g'}$ close to their corresponding points $\ensuremath\boldsymbol{x}_i$ and $\ensuremath\boldsymbol{x}'_i$, respectively.
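To illustrate the alternation, the following is a rough sketch of one plausible instantiation of AML, assuming \texttt{numpy} and \texttt{scipy}; it is not the exact algorithm of \cite{chen2018adversarial}: the distinguishment stage reuses the GMML closed form (geodesic midpoint) on the combined real and generated pairs, the confusion stage takes a few plain gradient steps on the generated pairs, and all function and variable names are ours: \begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

def spd_midpoint(A, B):
    """Geodesic midpoint A #_{1/2} B of two SPD matrices."""
    R = np.real(sqrtm(A)); Rn = np.linalg.inv(R)
    return R @ np.real(sqrtm(Rn @ B @ Rn)) @ R

def pair_scatter(A, B, mask):
    """Sum of (a_i - b_i)(a_i - b_i)^T over the masked pairs."""
    D = A[mask] - B[mask]
    return D.T @ D

def aml(X1, X2, y, n_iters=20, inner=5, lam1=1.0, lam2=1.0, lr=0.01):
    """X1[i], X2[i]: the i-th pair; y[i] = +1 (similar), -1 (dissimilar)."""
    rng = np.random.default_rng(0)
    n, d = X1.shape
    G1 = X1 + 0.1 * rng.standard_normal((n, d))   # generated pairs
    G2 = X2 + 0.1 * rng.standard_normal((n, d))
    sim, dis = (y == 1), (y == -1)
    eps = 1e-6 * np.eye(d)
    for _ in range(n_iters):
        # distinguishment stage: closed-form (GMML-like) update of W
        Sig_S = pair_scatter(X1, X2, sim) + lam1 * pair_scatter(G1, G2, sim) + eps
        Sig_D = pair_scatter(X1, X2, dis) + lam1 * pair_scatter(G1, G2, dis) + eps
        W = spd_midpoint(np.linalg.inv(Sig_S), Sig_D)
        Wi = np.linalg.inv(W)
        # confusion stage: a few gradient steps on the generated pairs
        for _ in range(inner):
            Dg = G1 - G2
            # pull generated dissimilar pairs close under W,
            # push generated similar pairs apart under W^{-1}
            core = np.where(dis[:, None], Dg @ W, Dg @ Wi)
            g1 = 2 * core + 2 * lam2 * (G1 - X1) @ W
            g2 = -2 * core + 2 * lam2 * (G2 - X2) @ W
            G1 -= lr * g1
            G2 -= lr * g2
    return W
\end{verbatim}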
\section{Probabilistic Metric Learning}\label{section_probabilistic_metric_learning} Probabilistic methods for metric learning learn the weight matrix in the generalized Mahalanobis distance using probability distributions. They define, for each point, a probability distribution over the other points being its neighbors. Of course, the closer points have a higher probability of being neighbors. \subsection{Collapsing Classes} One probabilistic method for metric learning collapses the similar points of every class to a single point while pushing the dissimilar points away from one another \cite{globerson2005metric}. The probability distribution between points for being neighbors can be a Gaussian distribution which uses the generalized Mahalanobis distance as its metric. The distribution for $\ensuremath\boldsymbol{x}_i$ to take $\ensuremath\boldsymbol{x}_j$ as its neighbor is \cite{goldberger2005neighbourhood}: \begin{align}\label{equation_P_W_classCollapse} p^W_{ij} := \frac{\exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2)}{\sum_{k \neq i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k\|_{\ensuremath\boldsymbol{W}}^2)}, \quad j \neq i, \end{align} where we define the normalization factor, also called the partition function, as $Z_i := \sum_{k \neq i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k\|_{\ensuremath\boldsymbol{W}}^2)$. This factor makes the distribution sum to one. Eq. (\ref{equation_P_W_classCollapse}) is a Gaussian distribution whose covariance matrix is $\ensuremath\boldsymbol{W}^{-1}$ because it is equivalent to: \begin{align*} p^W_{ij} := \frac{1}{Z_i} \exp\big(-(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)\big). \end{align*} We want the similar points to collapse to the same point after projection onto the subspace of the metric (see Proposition \ref{proposition_metric_learning_projection}). Hence, we define the desired neighborhood distribution to be a bi-level distribution \cite{globerson2005metric}: \begin{align} p^0_{ij} \propto \left\{ \begin{array}{ll} 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \\ 0 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \end{array} \right. \end{align} normalized so that $\sum_{j \neq i} p^0_{ij} = 1$ for every $i$. This makes all similar points of a group/class collapse to the same point after projection. \subsubsection{Collapsing Classes in the Input Space} For making $p^W_{ij}$ close to the desired distribution $p^0_{ij}$, we minimize the KL divergence between them \cite{globerson2005metric}: \begin{equation}\label{equation_optimization_collapse_classes} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \sum_{i=1}^n \sum_{j=1, j \neq i}^n \text{KL}(p^0_{ij}\, \|\, p^W_{ij}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} \begin{lemma}[\cite{globerson2005metric}]\label{lemma_classCollapse_gradient} Let the objective function in Eq. (\ref{equation_optimization_collapse_classes}) be denoted by $c := \sum_{i=1}^n \sum_{j=1, j \neq i}^n \text{KL}(p^0_{ij}\, \|\, p^W_{ij})$. The gradient of this function w.r.t. $\ensuremath\boldsymbol{W}$ is: \begin{align} &\frac{\partial c}{\partial \ensuremath\boldsymbol{W}} = \sum_{i=1}^n \sum_{j=1, j \neq i}^n (p^0_{ij} - p^W_{ij}) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top. \label{equation_classCollapse_gradient} \end{align} \end{lemma} \begin{proof} The derivation is similar to the derivation of the gradient in Stochastic Neighbor Embedding (SNE) and t-SNE \cite{hinton2003stochastic,maaten2008visualizing,ghojogh2020stochastic}. Let: \begin{align}\label{equation_SNE_r} \mathbb{R} \ni r_{ij} := d_{ij}^2 = ||\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j||_{\ensuremath\boldsymbol{W}}^2. \end{align} The cost $c$ depends on $\ensuremath\boldsymbol{W}$ only through these $r_{ij}$'s (note that $r_{ji} = r_{ij}$).
According to the chain rule, we have: \begin{align*} \frac{\partial c}{\partial \ensuremath\boldsymbol{W}} = \sum_{i \neq j} \frac{\partial c}{\partial r_{ij}} \frac{\partial r_{ij}}{\partial \ensuremath\boldsymbol{W}}. \end{align*} According to Eq. (\ref{equation_SNE_r}), we have: \begin{align*} & r_{ij} = ||\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j||_{\ensuremath\boldsymbol{W}}^2 = \textbf{tr}((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j))\\ &~~~~~~~~~~~~~ \overset{(a)}{=} \textbf{tr}((\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{W}) \\ &\implies \frac{\partial r_{ij}}{\partial \ensuremath\boldsymbol{W}} = (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top, \end{align*} where $(a)$ is because of the cyclic property of trace. Therefore: \begin{align}\label{equation_SNE_deriv_c1_y} \therefore ~~~~ \frac{\partial c}{\partial \ensuremath\boldsymbol{W}} = \sum_{i \neq j} \big(\frac{\partial c}{\partial r_{ij}} \big) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top. \end{align} With dummy variables $k$ and $l$, the cost function can be rewritten as: \begin{align*} c &= \sum_{k} \sum_{l\neq k} p_0(l|k) \log (\frac{p_0(l|k)}{p_W(l|k)}) \\ &= \sum_{k \neq l} p_0(l|k) \log (\frac{p_0(l|k)}{p_W(l|k)}) \\ &= \sum_{k \neq l} \big(p_0(l|k) \log (p_0(l|k)) - p_0(l|k) \log (p_W(l|k)) \big), \end{align*} whose first term is a constant with respect to $p_W(l|k)$ and thus to $\ensuremath\boldsymbol{W}$. We have: \begin{align*} \mathbb{R} \ni \frac{\partial c}{\partial r_{ij}} = - \sum_{k \neq l} p_0(l|k) \frac{\partial (\log (p_W(l|k)))}{\partial r_{ij}}. \end{align*} According to Eqs. (\ref{equation_P_W_classCollapse}) and (\ref{equation_SNE_r}), the $p_W(l|k)$ is: \begin{align*} p_W(l|k) := \frac{\exp(-d_{kl}^2)}{\sum_{f \neq k}\exp(-d_{kf}^2)} = \frac{\exp(-r_{kl})}{\sum_{f \neq k}\exp(-r_{kf})}. \end{align*} We take the denominator of $p_W(l|k)$, which depends on $k$, as: \begin{align}\label{equation_SNE_beta} \beta_k := \sum_{f \neq k} \exp(- d_{kf}^2) = \sum_{f \neq k} \exp(- r_{kf}). \end{align} We have $\log (p_W(l|k)) = \log (p_W(l|k)) + \log \beta_k - \log \beta_k = \log (p_W(l|k)\, \beta_k) - \log \beta_k$. Therefore: \begin{align*} &\therefore ~~~ \frac{\partial c}{\partial r_{ij}} = - \sum_{k \neq l} p_0(l|k) \frac{\partial \big(\log (p_W(l|k) \beta_k) - \log \beta_k\big)}{\partial r_{ij}} \\ &= - \sum_{k \neq l} p_0(l|k) \bigg[\frac{\partial \big(\log (p_W(l|k) \beta_k)\big)}{\partial r_{ij}} - \frac{\partial \big(\log \beta_k\big)}{\partial r_{ij}}\bigg] \\ &= - \sum_{k \neq l} p_0(l|k) \bigg[\frac{1}{p_W(l|k) \beta_k}\frac{\partial \big( p_W(l|k) \beta_k\big)}{\partial r_{ij}} - \frac{1}{\beta_k}\frac{\partial \beta_k}{\partial r_{ij}}\bigg].
\end{align*} The $p_W(l|k)\, \beta_k$ is: \begin{align*} p_W(l|k)\, \beta_k &= \frac{\exp(-r_{kl})}{\sum_{f \neq k}\exp(-r_{kf})} \times \sum_{f \neq k} \exp(- r_{kf}) \\ &= \exp(-r_{kl}). \end{align*} Therefore, we have: \begin{align*} &\therefore ~~~ \frac{\partial c}{\partial r_{ij}} = \\ & - \sum_{k \neq l} p_0(l|k) \bigg[\frac{1}{p_W(l|k) \beta_k}\frac{\partial \big( \exp(-r_{kl}) \big)}{\partial r_{ij}} - \frac{1}{\beta_k}\frac{\partial \beta_k}{\partial r_{ij}}\bigg]. \end{align*} The $\partial \big( \exp(-r_{kl}) \big)/\partial r_{ij}$ is non-zero only for $k=i$ and $l=j$, and $\partial \beta_k/\partial r_{ij}$ is non-zero only for $k=i$; therefore: \begin{align*} \frac{\partial \big( \exp(-r_{ij}) \big)}{\partial r_{ij}} &= - \exp(-r_{ij}), \\ \frac{\partial \beta_i}{\partial r_{ij}} &= \frac{\partial \sum_{f \neq i} \exp(- r_{if})}{\partial r_{ij}} = \frac{\partial \exp(- r_{ij})}{\partial r_{ij}} \\ &= - \exp(- r_{ij}). \end{align*} Therefore: \begin{align*} &\therefore ~~~ \frac{\partial c}{\partial r_{ij}} = \\ &- \bigg( p^0_{ij} \Big[\frac{-1}{p^W_{ij} \beta_i} \exp(-r_{ij})\Big] + 0 + \dots + 0 \bigg) \\ & - \sum_{l \neq i} p_0(l|i) \Big[\frac{1}{\beta_i} \exp(- r_{ij}) \Big]. \end{align*} We have $\sum_{l \neq i} p_0(l|i) = 1$ because the neighborhood probabilities of every point sum to one. Thus: \begin{align} \frac{\partial c}{\partial r_{ij}} &= - p^0_{ij} \Big[\frac{-1}{p^W_{ij} \beta_i} \exp(-r_{ij})\Big] - \Big[\frac{1}{\beta_i} \exp(- r_{ij}) \Big] \nonumber \\ &= \underbrace{\frac{\exp(- r_{ij})}{\beta_i}}_{=p^W_{ij}} \Big[\frac{p^0_{ij}}{p^W_{ij}} - 1\Big] = p^0_{ij} - p^W_{ij}. \label{equation_SNE_derivative_r_ij} \end{align} Substituting the obtained derivative in Eq. (\ref{equation_SNE_deriv_c1_y}) gives Eq. (\ref{equation_classCollapse_gradient}). Q.E.D. \end{proof} The optimization problem (\ref{equation_optimization_collapse_classes}) is convex; hence, any local optimum is the global optimum. We can solve it using any optimization method such as the projected gradient method, where after every gradient descent step, we project the solution onto the positive semi-definite cone \cite{ghojogh2021kkt}: \begin{align*} & \ensuremath\boldsymbol{W} := \ensuremath\boldsymbol{W} - \eta \frac{\partial c}{\partial \ensuremath\boldsymbol{W}}, \\ & \ensuremath\boldsymbol{W} := \ensuremath\boldsymbol{V}\, \textbf{diag}(\max(\lambda_1, 0), \dots, \max(\lambda_d, 0))\, \ensuremath\boldsymbol{V}^\top, \end{align*} where $\eta>0$ is the learning rate and $\ensuremath\boldsymbol{V}$ and $\ensuremath\boldsymbol{\Lambda} = \textbf{diag}(\lambda_1, \dots, \lambda_d)$ are the eigenvectors and eigenvalues of $\ensuremath\boldsymbol{W}$, respectively (see Eq. (\ref{equation_W_U_UT})).
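The following is a minimal sketch of this projected gradient method, assuming \texttt{numpy}; the names are ours, $p^0$ is the row-normalized bi-level distribution, and the gradient is the vectorized form of Lemma \ref{lemma_classCollapse_gradient}: \begin{verbatim}
import numpy as np

def psd_projection(W):
    """Project a symmetric matrix onto the positive semi-definite cone."""
    eigvals, V = np.linalg.eigh(W)
    return (V * np.maximum(eigvals, 0.0)) @ V.T

def collapse_classes(X, y, eta=0.01, n_iters=200):
    """Projected gradient descent for collapsing classes.

    X: (n, d) data matrix; y: (n,) class labels. Returns W.
    """
    n, d = X.shape
    same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    P0 = same / np.maximum(same.sum(axis=1, keepdims=True), 1)  # p^0
    W = np.eye(d)
    for _ in range(n_iters):
        G = X @ W @ X.T                       # x_i' W x_j
        q = np.diag(G)
        D = q[:, None] + q[None, :] - 2 * G   # squared Mahalanobis distances
        E = np.exp(-D)
        np.fill_diagonal(E, 0.0)
        P = E / E.sum(axis=1, keepdims=True)  # p^W
        C = P0 - P
        r, cS = C.sum(axis=1), C.sum(axis=0)
        # vectorized form of sum_{i != j} C_ij (x_i-x_j)(x_i-x_j)^T:
        grad = X.T @ (np.diag(r + cS) - C - C.T) @ X
        W = psd_projection(W - eta * grad)    # gradient step + projection
    return W
\end{verbatim}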
\subsubsection{Collapsing Classes in the Feature Space} According to Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}), the distance in the feature space can be stated using kernels as $\|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top}^2$ where $\ensuremath\boldsymbol{k}_i \in \mathbb{R}^n$ is the kernel vector between the dataset $\ensuremath\boldsymbol{X}$ and the point $\ensuremath\boldsymbol{x}_i$. We define $\ensuremath\boldsymbol{R} := \ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top \in \mathbb{R}^{n \times n}$. Hence, in the feature space, Eq. (\ref{equation_P_W_classCollapse}) becomes: \begin{align} p^R_{ij} := \frac{\exp(-\|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{R}}^2)}{\sum_{k \neq i} \exp(-\|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_k\|_{\ensuremath\boldsymbol{R}}^2)}, \quad j \neq i. \end{align} The gradient in Eq. (\ref{equation_classCollapse_gradient}) becomes: \begin{align} &\frac{\partial c}{\partial \ensuremath\boldsymbol{R}} = \sum_{i=1}^n \sum_{j=1, j \neq i}^n (p^0_{ij} - p^R_{ij}) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top. \label{equation_classCollapse_gradient_kernel} \end{align} Again, we can find the optimal $\ensuremath\boldsymbol{R}$ using the projected gradient method. This gives us the optimal metric for collapsing classes in the feature space \cite{globerson2005metric}. Note that we can also regularize the objective function, using the trace operator or the Frobenius norm, to avoid overfitting. \subsection{Neighborhood Component Analysis Methods} Neighborhood Component Analysis (NCA) is one of the most well-known probabilistic metric learning methods. In the following, we introduce different variants of NCA. \subsubsection{Neighborhood Component Analysis (NCA)}\label{section_NCA_spectral} In the original NCA \cite{goldberger2005neighbourhood}, the probability that $\ensuremath\boldsymbol{x}_i$ takes $\ensuremath\boldsymbol{x}_j$ as its neighbor is as in Eq. (\ref{equation_P_W_classCollapse}), where we assume $p^W_{ii} = 0$ by convention: \begin{align}\label{equation_P_W_NCA} p^W_{ij} := \left\{ \begin{array}{ll} \frac{\exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2)}{\sum_{k \neq i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k\|_{\ensuremath\boldsymbol{W}}^2)} & \mbox{if } j \neq i \\ 0 & \mbox{if } j = i. \end{array} \right. \end{align} Consider the decomposition of the weight matrix of the metric as in Eq. (\ref{equation_W_U_UT}), i.e., $\ensuremath\boldsymbol{W} = \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top$. Let $\mathcal{S}_i$ denote the set of points similar to $\ensuremath\boldsymbol{x}_i$, i.e., the points $\ensuremath\boldsymbol{x}_j$ with $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$. The optimization problem of NCA is to find a $\ensuremath\boldsymbol{U}$ which maximizes this probability for the similar points \cite{goldberger2005neighbourhood}: \begin{equation}\label{equation_optimization_NCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} p^W_{ij} = \sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij} = \sum_{i=1}^n p^W_i, \end{aligned} \end{equation} where: \begin{align} p^W_i := \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij}. \end{align} Note that the required constraint $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is already satisfied because of the decomposition in Eq. (\ref{equation_W_U_UT}). \begin{lemma}[\cite{goldberger2005neighbourhood}] Suppose the objective function of Eq. (\ref{equation_optimization_NCA}) is denoted by $c$. The gradient of this cost function w.r.t.
$\ensuremath\boldsymbol{U}$ is: \begin{equation}\label{equation_NCA_gradient} \begin{aligned} \frac{\partial c}{\partial \ensuremath\boldsymbol{U}} = &\,2 \sum_{i=1}^n \Big( p^W_i \sum_{k=1}^n p^W_{ik} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k)^\top \\ &- \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \Big) \ensuremath\boldsymbol{U}. \end{aligned} \end{equation} \end{lemma} The derivation of this gradient is similar to the approach in the proof of Lemma \ref{lemma_classCollapse_gradient}. We can use gradient ascent to solve this optimization. Another approach is to maximize the log-likelihood of the neighborhood probabilities \cite{goldberger2005neighbourhood}: \begin{equation}\label{equation_optimization_NCA_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{i=1}^n \log\Big( \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij} \Big), \end{aligned} \end{equation} whose gradient is \cite{goldberger2005neighbourhood}: \begin{equation} \begin{aligned} \frac{\partial c}{\partial \ensuremath\boldsymbol{U}} = &\,2 \sum_{i=1}^n \Big( \sum_{k=1}^n p^W_{ik} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k)^\top \\ & - \frac{\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top}{\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} p^W_{ij}} \Big) \ensuremath\boldsymbol{U}. \end{aligned} \end{equation} Again, gradient ascent can give us the optimal $\ensuremath\boldsymbol{U}$. As explained in Proposition \ref{proposition_metric_learning_projection}, the subspace of the metric is the column space of $\ensuremath\boldsymbol{U}$ and the projection of points onto this subspace reduces the dimensionality of the data.
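A small numerical sketch of NCA by gradient ascent on Eq. (\ref{equation_optimization_NCA}) is given below, assuming \texttt{numpy}; it implements the gradient of Eq. (\ref{equation_NCA_gradient}) naively with a loop (our names, no mini-batching or line search), so it is an illustration rather than the implementation of \cite{goldberger2005neighbourhood}: \begin{verbatim}
import numpy as np

def nca(X, y, p=2, eta=0.01, n_iters=100, rng=None):
    """Gradient ascent for NCA; returns U of shape (d, p)."""
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    U = 0.1 * rng.standard_normal((d, p))
    same = (y[:, None] == y[None, :]) & ~np.eye(n, dtype=bool)
    for _ in range(n_iters):
        Z = X @ U                                   # projections U^T x
        D = ((Z[:, None, :] - Z[None, :, :])**2).sum(-1)
        E = np.exp(-D)
        np.fill_diagonal(E, 0.0)
        P = E / E.sum(axis=1, keepdims=True)        # p^W_{ij}
        p_i = (P * same).sum(axis=1)                # p^W_i
        grad = np.zeros((d, d))
        for i in range(n):
            diffs = X[i] - X                        # rows: x_i - x_k
            wts = p_i[i] * P[i] - np.where(same[i], P[i], 0.0)
            grad += (diffs * wts[:, None]).T @ diffs
        U += eta * 2 * grad @ U                     # ascent step
    return U
\end{verbatim}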
The likelihood is: \begin{align}\label{equation_regularized_NCA_likelihood} \mathbb{P}(\ensuremath\boldsymbol{x}_i, \mathcal{S}_i|\ensuremath\boldsymbol{U}) \propto \exp\Big( \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} p^W_{ij} \Big). \end{align} The regularized NCA maximizes the log-posterior \cite{yang2007regularized}: \begin{equation*} \begin{aligned} &\log \mathbb{P}(\ensuremath\boldsymbol{U}|\ensuremath\boldsymbol{x}_i, \mathcal{S}_i) \overset{(\ref{equation_regularized_NCA_posterior})}{=} \log \mathbb{P}(\ensuremath\boldsymbol{x}_i, \mathcal{S}_i|\ensuremath\boldsymbol{U}) + \log \mathbb{P}(\ensuremath\boldsymbol{U}) \\ &- \underbrace{\log \mathbb{P}(\ensuremath\boldsymbol{x}_i, \mathcal{S}_i)}_{\text{constant w.r.t. } \ensuremath\boldsymbol{U}} \overset{(a)}{=} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} p^W_{ij} - \lambda \|\ensuremath\boldsymbol{U}\|_F^2, \end{aligned} \end{equation*} where $(a)$ is because of Eqs. (\ref{equation_regularized_NCA_prior}) and (\ref{equation_regularized_NCA_likelihood}) and $\|.\|_F$ denotes the Frobenius norm. Hence, the optimization problem of regularized NCA is \cite{yang2007regularized}: \begin{equation}\label{equation_optimization_regularized_NCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} p^W_{ij} - \lambda \|\ensuremath\boldsymbol{U}\|_F^2, \end{aligned} \end{equation} where $\lambda>0$ can be seen as the regularization parameter. The gradient is similar to Eq. (\ref{equation_NCA_gradient}) plus the derivative of the regularization term, which is $-2\lambda \ensuremath\boldsymbol{U}$. \subsubsection{Fast Neighborhood Component Analysis}\label{section_fast_NCA} \textbf{-- Fast NCA:} The fast NCA \cite{yang2012fast} accelerates NCA by using the $k$-Nearest Neighbors ($k$NN), rather than all points, for computing the neighborhood distribution of every point. Let $\mathcal{N}_i$ and $\mathcal{M}_i$ denote the $k$NN of $\ensuremath\boldsymbol{x}_i$ among the similar points to $\ensuremath\boldsymbol{x}_i$ (denoted by $\mathcal{S}_i$) and among the dissimilar points (denoted by $\mathcal{D}_i$), respectively. Fast NCA uses the following probability distribution for $\ensuremath\boldsymbol{x}_i$ to take $\ensuremath\boldsymbol{x}_j$ as its neighbor \cite{yang2012fast}: \begin{align}\label{equation_P_W_fastNCA} &p^W_{ij} := \left\{ \begin{array}{ll} \frac{\exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}})}{\sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{N}_i \cup \mathcal{M}_i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k\|_{\ensuremath\boldsymbol{W}})} & \mbox{if } \ensuremath\boldsymbol{x}_j \in \mathcal{N}_i \cup \mathcal{M}_i \\ 0 & \mbox{otherwise.} \end{array} \right. \end{align} The optimization problem of fast NCA is similar to Eq. (\ref{equation_optimization_regularized_NCA}), maximizing the neighbor probability of the similar neighbors: \begin{equation}\label{equation_optimization_fast_NCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}_i} p^W_{ij} - \lambda \|\ensuremath\boldsymbol{U}\|_F^2, \end{aligned} \end{equation} where $p^W_{ij}$ is Eq. (\ref{equation_P_W_fastNCA}) and $\ensuremath\boldsymbol{U}$ is the matrix in the decomposition of $\ensuremath\boldsymbol{W}$ (see Eq. (\ref{equation_W_U_UT})). \begin{lemma}[\cite{yang2012fast}] Suppose the objective function of Eq.
(\ref{equation_optimization_fast_NCA}) is denoted by $c$ and let $p^W_i := \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}_i} p^W_{ij}$. The gradient of this cost function w.r.t. $\ensuremath\boldsymbol{U}$ is: \begin{equation}\label{equation_fastNCA_gradient} \begin{aligned} &\frac{\partial c}{\partial \ensuremath\boldsymbol{U}} = \sum_{i=1}^n \Big( (p^W_i-1) \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{N}_i} p^W_{ik} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_k)^\top \\ &+ p^W_i \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{M}_i} p^W_{ij} (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j) (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \Big) \ensuremath\boldsymbol{U} - 2\lambda \ensuremath\boldsymbol{U}. \end{aligned} \end{equation} \end{lemma} This is similar to Eq. (\ref{equation_NCA_gradient}). See \cite{yang2012fast} for the derivation. We can use gradient ascent for solving the optimization. \textbf{-- Kernel Fast NCA:} According to Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}), the distance in the feature space is $\|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top}^2$ where $\ensuremath\boldsymbol{k}_i \in \mathbb{R}^n$ is the kernel vector between dataset $\ensuremath\boldsymbol{X}$ and the point $\ensuremath\boldsymbol{x}_i$. We can use this distance metric in Eq. (\ref{equation_P_W_fastNCA}) to have kernel fast NCA \cite{yang2012fast}. Hence, the gradient of kernel fast NCA is similar to Eq. (\ref{equation_fastNCA_gradient}): \begin{equation}\label{equation_kernel_fastNCA_gradient} \begin{aligned} &\frac{\partial c}{\partial \ensuremath\boldsymbol{T}} = \sum_{i=1}^n \Big( (p^W_i-1) \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{N}_i} p^W_{ik} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_k) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_k)^\top \\ &+ p^W_i \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{M}_i} p^W_{ij} (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j) (\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \Big) \ensuremath\boldsymbol{T} - 2\lambda \ensuremath\boldsymbol{T}. \end{aligned} \end{equation} Again, we can find the optimal $\ensuremath\boldsymbol{T}$ using gradient ascent. Note that the same technique can be used to kernelize the original NCA. \subsection{Bayesian Metric Learning Methods} In this section, we introduce the Bayesian metric learning methods which use variational inference \cite{ghojogh2021factor} for metric learning. In Bayesian metric learning, we learn a distribution for the distance metric between every two points; we sample the pairwise distances from these learned distributions. First, we provide some definitions required in these methods. According to Eq. (\ref{equation_W_U_UT}), we can decompose the weight matrix in the metric using the eigenvalue decomposition. Accordingly, we can approximate this matrix by: \begin{align}\label{equation_Bayesian_NCA_W_decomposition} \ensuremath\boldsymbol{W} \approx \ensuremath\boldsymbol{V}_x \ensuremath\boldsymbol{\Lambda} \ensuremath\boldsymbol{V}_x^\top, \end{align} where $\ensuremath\boldsymbol{V}_x$ contains the eigenvectors of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{X}^\top$ and $\ensuremath\boldsymbol{\Lambda} = \textbf{diag}([\lambda_1, \dots, \lambda_d]^\top)$ is the diagonal matrix of eigenvalues which we learn in Bayesian metric learning.
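As a concrete illustration of this decomposition, the following NumPy sketch (with hypothetical sizes, and a random placeholder standing in for the learned eigenvalues) constructs the weight matrix from the fixed eigenvectors of $\ensuremath\boldsymbol{X}\ensuremath\boldsymbol{X}^\top$ and verifies that the metric decomposes over the eigenvalues:
\begin{verbatim}
import numpy as np

# A sketch of the decomposition above, with hypothetical sizes: the
# eigenvectors V_x of the scatter X X^T are fixed; only the eigenvalues
# are learned (here, a random placeholder stands in for a sample from
# the learned posterior).
rng = np.random.default_rng(0)
d, n = 5, 100
X = rng.standard_normal((d, n))

_, V_x = np.linalg.eigh(X @ X.T)      # columns: eigenvectors of X X^T
lam = np.abs(rng.standard_normal(d))  # placeholder learned eigenvalues

W = V_x @ np.diag(lam) @ V_x.T        # W ~= V_x diag(lambda) V_x^T

# The metric decomposes over the eigenvalues:
# ||x_i - x_j||_W^2 = sum_l lambda_l ((v_x^l)^T (x_i - x_j))^2
diff = X[:, 0] - X[:, 1]
w_ij = (V_x.T @ diff) ** 2
assert np.isclose(diff @ W @ diff, lam @ w_ij)
\end{verbatim}
This identity is exactly why only the eigenvalues need to be learned: the squared distance is linear in $\ensuremath\boldsymbol{\lambda}$, which makes the variational treatments below tractable.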
Let $X$ and $Y$ denote the random variables for data and labels, respectively, and let $\ensuremath\boldsymbol{\lambda} = [\lambda_1, \dots, \lambda_d]^\top \in \mathbb{R}^d$ denote the learnable eigenvalues. Let $\ensuremath\boldsymbol{v}_x^l \in \mathbb{R}^d$ denote the $l$-th column of $\ensuremath\boldsymbol{V}_x$. We define $\ensuremath\boldsymbol{w}_{ij} = [w_{ij}^1, \dots, w_{ij}^d]^\top := [((\ensuremath\boldsymbol{v}_x^1)^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j))^2, \dots, ((\ensuremath\boldsymbol{v}_x^d)^\top (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j))^2]^\top \in \mathbb{R}^d$. The reader should not confuse $\ensuremath\boldsymbol{w}_{ij}$ with $\ensuremath\boldsymbol{W}$, which is the weight matrix of metric in our notation. \subsubsection{Bayesian Metric Learning Using Sigmoid Function} One of the Bayesian metric learning methods is the approach of \cite{yang2007bayesian}. We define: \begin{align}\label{equation_y_ij} y_{ij} := \left\{ \begin{array}{ll} 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \\ -1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}. \end{array} \right. \end{align} We can consider a sigmoid function for the likelihood \cite{yang2007bayesian}: \begin{align} \mathbb{P}(Y|X,\ensuremath\boldsymbol{\Lambda}) = \frac{1}{1 + \exp(y_{ij} (\sum_{l=1}^d \lambda_l w_{ij}^l - \mu))}, \end{align} where $\mu>0$ is a threshold. We can also derive an evidence lower bound for $\mathbb{P}(\mathcal{S}, \mathcal{D})$; we do not provide the derivation for brevity (see \cite{yang2007bayesian} for derivation of the lower bound). As in variational inference, we maximize this lower bound for likelihood maximization \cite{ghojogh2021factor}. We assume a Gaussian distribution with mean $\ensuremath\boldsymbol{m}_T \in \mathbb{R}^d$ and covariance $\ensuremath\boldsymbol{V}_T \in \mathbb{R}^{d \times d}$ for the distribution of $\ensuremath\boldsymbol{\lambda}$. By maximizing the lower bound, we can estimate these parameters as \cite{yang2007bayesian}: \begin{align} & \ensuremath\boldsymbol{V}_T := \Big(\delta \ensuremath\boldsymbol{I} + 2\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \frac{\tanh(\xi_{ij}^s)}{4 \xi_{ij}^s} \ensuremath\boldsymbol{w}_{ij} \ensuremath\boldsymbol{w}_{ij}^\top \nonumber \\ &~~~~~~~~~~~~~ + 2\sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \frac{\tanh(\xi_{ij}^d)}{4 \xi_{ij}^d} \ensuremath\boldsymbol{w}_{ij} \ensuremath\boldsymbol{w}_{ij}^\top \Big)^{-1}, \label{equation_Bayesian_ML_V}\\ & \ensuremath\boldsymbol{m}_T := \ensuremath\boldsymbol{V}_T \Big(\delta \ensuremath\boldsymbol{\gamma}_0 - \frac{1}{2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \ensuremath\boldsymbol{w}_{ij} + \frac{1}{2} \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \ensuremath\boldsymbol{w}_{ij} \Big), \label{equation_Bayesian_ML_m} \end{align} where $\delta>0$ and $\ensuremath\boldsymbol{\gamma}_0$ are hyper-parameters related to the priors on the weight matrix of metric and the threshold.
We define the following variational parameter \cite{yang2007bayesian}: \begin{align} & \xi_{ij}^s := \sqrt{(\ensuremath\boldsymbol{m}_T^\top \ensuremath\boldsymbol{w}_{ij})^2 + \ensuremath\boldsymbol{w}_{ij}^\top \ensuremath\boldsymbol{V}_T \ensuremath\boldsymbol{w}_{ij}}, \label{equation_Bayesian_ML_xi} \end{align} for $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}$. We similarly define the variational parameter $\xi_{ij}^d$ for $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}$. The variables $\ensuremath\boldsymbol{V}_T$, $\ensuremath\boldsymbol{m}_T$, $\xi_{ij}^s$, and $\xi_{ij}^d$ are updated iteratively by Eqs. (\ref{equation_Bayesian_ML_V}), (\ref{equation_Bayesian_ML_m}), and (\ref{equation_Bayesian_ML_xi}) until convergence. After these parameters are learned, we can sample the eigenvalues from the posterior, $\ensuremath\boldsymbol{\lambda} \sim \mathcal{N}(\ensuremath\boldsymbol{m}_T, \ensuremath\boldsymbol{V}_T)$. These eigenvalues can be used in Eq. (\ref{equation_Bayesian_NCA_W_decomposition}) to obtain the weight matrix in the metric. Note that Bayesian metric learning can also be used for active learning (see \cite{yang2007bayesian} for details). \subsubsection{Bayesian Neighborhood Component Analysis} Bayesian NCA \cite{wang2017bayesian} uses variational inference \cite{ghojogh2021factor} in the NCA formulation. If $\mathcal{N}_{im}$ denotes the dataset index of the $m$-th nearest neighbor of $\ensuremath\boldsymbol{x}_i$, we define $\ensuremath\boldsymbol{W}_i^j := [\ensuremath\boldsymbol{w}_{ij} - \ensuremath\boldsymbol{w}_{i \mathcal{N}_{i1}}, \dots, \ensuremath\boldsymbol{w}_{ij} - \ensuremath\boldsymbol{w}_{i \mathcal{N}_{ik}}] \in \mathbb{R}^{d \times k}$. As in variational inference \cite{ghojogh2021factor}, we consider an evidence lower-bound on the log-likelihood: \begin{align*} \log(\mathbb{P}(Y|X,\ensuremath\boldsymbol{\Lambda})) > \sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}_i} \Big( &-\frac{1}{2} \ensuremath\boldsymbol{\lambda}^\top \ensuremath\boldsymbol{W}_i^j \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{W}_i^j)^\top \ensuremath\boldsymbol{\lambda} \\ &+ \ensuremath\boldsymbol{b}_{ij}^\top (\ensuremath\boldsymbol{W}_i^j)^\top \ensuremath\boldsymbol{\lambda} - c_{ij}\Big), \end{align*} where $\mathcal{N}_i$ was defined before in Section \ref{section_fast_NCA}, $\ensuremath\boldsymbol{H} := \frac{1}{2} (\ensuremath\boldsymbol{I} - \frac{1}{k+1} \ensuremath\boldsymbol{1}\ensuremath\boldsymbol{1}^\top) \in \mathbb{R}^{k \times k}$ is a scaled centering matrix, and: \begin{align} & \mathbb{R}^k \ni \ensuremath\boldsymbol{b}_{ij} := \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{\psi}_{ij} \nonumber\\ &- \exp\Bigg(\ensuremath\boldsymbol{\psi}_{ij} - \log\Big(1 + \sum_{\ensuremath\boldsymbol{x}_t \in \mathcal{N}_i} \exp\big( (\ensuremath\boldsymbol{w}_{ij} - \ensuremath\boldsymbol{w}_{it})^\top \ensuremath\boldsymbol{\lambda} \big)\Big)\Bigg), \label{equation_Bayesian_NCA_b} \end{align} in which $\ensuremath\boldsymbol{\psi}_{ij} \in \mathbb{R}^k$ is the learnable variational parameter. See \cite{wang2017bayesian} for the derivation of this lower-bound. The sketch of this derivation uses Eq. (\ref{equation_P_W_classCollapse}) but for the $k$NN among the similar points, i.e., $\mathcal{N}_i$. Then, the lower-bound is obtained by a logarithm inequality as well as Bohning's quadratic bound \cite{murphy2012machine}.
We assume a Gaussian distribution for the prior of $\ensuremath\boldsymbol{\lambda}$ with mean $\ensuremath\boldsymbol{m}_0 \in \mathbb{R}^d$ and covariance $\ensuremath\boldsymbol{V}_0 \in \mathbb{R}^{d \times d}$. This prior is assumed to be known. Likewise, we assume a Gaussian distribution with mean $\ensuremath\boldsymbol{m}_T \in \mathbb{R}^d$ and covariance $\ensuremath\boldsymbol{V}_T \in \mathbb{R}^{d \times d}$ for the posterior $\mathbb{P}(\ensuremath\boldsymbol{\lambda}|X,Y)$. Using Bayes' rule and the above lower-bound on the likelihood, we can estimate these parameters as \cite{wang2017bayesian}: \begin{align} & \ensuremath\boldsymbol{V}_T := \Big(\ensuremath\boldsymbol{V}_0^{-1} + \sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}_i} \ensuremath\boldsymbol{W}_i^j \ensuremath\boldsymbol{H} (\ensuremath\boldsymbol{W}_i^j)^\top \Big)^{-1}, \label{equation_Bayesian_NCA_V}\\ & \ensuremath\boldsymbol{m}_T := \ensuremath\boldsymbol{V}_T \Big(\ensuremath\boldsymbol{V}_0^{-1} \ensuremath\boldsymbol{m}_0 + \sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{N}_i} \ensuremath\boldsymbol{W}_i^j \ensuremath\boldsymbol{b}_{ij} \Big). \label{equation_Bayesian_NCA_m} \end{align} The variational parameter can also be obtained by \cite{wang2017bayesian}: \begin{align}\label{equation_Bayesian_NCA_psi} \ensuremath\boldsymbol{\psi}_{ij} := (\ensuremath\boldsymbol{W}_i^j)^\top \ensuremath\boldsymbol{m}_T. \end{align} The variables $\ensuremath\boldsymbol{b}_{ij}$, $\ensuremath\boldsymbol{V}_T$, $\ensuremath\boldsymbol{m}_T$, and $\ensuremath\boldsymbol{\psi}_{ij}$ are updated iteratively by Eqs. (\ref{equation_Bayesian_NCA_b}), (\ref{equation_Bayesian_NCA_V}), (\ref{equation_Bayesian_NCA_m}), and (\ref{equation_Bayesian_NCA_psi}), respectively, until convergence. After these parameters are learned, we can sample the eigenvalues from the posterior, $\ensuremath\boldsymbol{\lambda} \sim \mathcal{N}(\ensuremath\boldsymbol{m}_T, \ensuremath\boldsymbol{V}_T)$. These eigenvalues can be used in Eq. (\ref{equation_Bayesian_NCA_W_decomposition}) to obtain the weight matrix in the metric. Alternatively, we can directly sample the distance metric from the following distribution: \begin{align} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \sim \mathcal{N}(\ensuremath\boldsymbol{w}_{ij}^\top \ensuremath\boldsymbol{m}_T, \ensuremath\boldsymbol{w}_{ij}^\top \ensuremath\boldsymbol{V}_T \ensuremath\boldsymbol{w}_{ij}). \end{align} \subsubsection{Local Distance Metric (LDM)} Let the set of similar and dissimilar points for the point $\ensuremath\boldsymbol{x}_i$ be denoted by $\mathcal{S}_i$ and $\mathcal{D}_i$, respectively. In Local Distance Metric (LDM) \cite{yang2006efficient}, we consider the following for the likelihood: \begin{align} \mathbb{P}(y_i|\ensuremath\boldsymbol{x}_i) = &\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2) \nonumber\\ &\times \Big(\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2) \nonumber\\ &~~~~~ + \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{D}_i} \exp(-\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2)\Big)^{-1}. \end{align} If we consider Eq.
(\ref{equation_Bayesian_NCA_W_decomposition}) for decomposition of the weight matrix, the log-likelihood becomes: \begin{align*} &\sum_{i=1}^n \log (\mathbb{P}(y_i|\ensuremath\boldsymbol{x}_i,\ensuremath\boldsymbol{\Lambda})) = \\ &~~~~~~~~~~~~ \sum_{i=1}^n \log \Big(\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big)\Big) \\ &~~~~~~~~~ - \sum_{i=1}^n \log \Big(\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~+ \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{D}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big)\Big). \end{align*} We want to maximize this log-likelihood for learning the variables $\{\lambda_1, \dots, \lambda_d\}$. An evidence lower bound on this log-likelihood can be \cite{yang2006efficient}: \begin{equation}\label{equation_LDM_lower_bound} \begin{aligned} \sum_{i=1}^n \log &(\mathbb{P}(y_i|\ensuremath\boldsymbol{x}_i,\ensuremath\boldsymbol{\Lambda})) \geq \\ &-\sum_{i=1}^n \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \phi_{ij} \sum_{l=1}^d \lambda_l w_{ij}^l \\ & - \sum_{i=1}^n \log \Big(\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{S}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big) \\ &~~~~~~~~~~~~ + \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{D}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big)\Big), \end{aligned} \end{equation} where $\phi_{ij}$ is the variational parameter which is: \begin{equation}\label{equation_LDM_phi} \begin{aligned} & \phi_{ij} := \frac{\exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big)}{\sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{S}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ik}^l\big)} \times \\ &~~~~~~~~~~~~ \Big(1 + \frac{\exp\big(\!-\sum_{l=1}^d \lambda_l w_{ij}^l\big)}{\sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{S}_i} \exp\big(\!-\sum_{l=1}^d \lambda_l w_{ik}^l\big)}\Big)^{-1}. \end{aligned} \end{equation} See \cite{yang2006efficient} for derivation of the lower bound. Iteratively, we maximize the lower bound, i.e. Eq. (\ref{equation_LDM_lower_bound}), and update $\phi_{ij}$ by Eq. (\ref{equation_LDM_phi}). The learned parameters $\{\lambda_1, \dots, \lambda_d\}$ can be used in Eq. (\ref{equation_Bayesian_NCA_W_decomposition}) to obtain the weight matrix in the metric. \subsection{Information Theoretic Metric Learning} There exist information theoretic approaches for metric learning where KL-divergence (relative entropy) or mutual information is used. \subsubsection{Information Theoretic Metric Learning with a Prior Weight Matrix} One of the information theoretic methods for metric learning uses a prior weight matrix \cite{davis2007information}, where we consider a known weight matrix $\ensuremath\boldsymbol{W}_0$ as the regularizer and try to minimize the KL-divergence between the distributions with $\ensuremath\boldsymbol{W}$ and $\ensuremath\boldsymbol{W}_0$: \begin{align} \text{KL}(p_{ij}^{W_0} \| p_{ij}^W) := \sum_{i=1}^n \sum_{j=1}^n p_{ij}^{W_0} \log\Big(\frac{p_{ij}^{W_0}}{p_{ij}^{W}}\Big). \end{align} There are both offline and online approaches for metric learning using batch and streaming data, respectively. \textbf{-- Offline Information Theoretic Metric Learning:} We consider a Gaussian distribution, i.e. Eq. (\ref{equation_P_W_classCollapse}), for the probability of $\ensuremath\boldsymbol{x}_i$ taking $\ensuremath\boldsymbol{x}_j$ as its neighbor, i.e. $p_{ij}^W$.
While we make the weight matrix similar to the prior weight matrix through the KL-divergence, we find a weight matrix which makes all the distances of similar points less than an upper bound $u>0$ and all the distances of dissimilar points larger than a lower bound $l$ (where $l>u$). Note that, for Gaussian distributions, the KL divergence is related to the LogDet $D_{ld}(.,.)$ between covariance matrices \cite{dhillon2007differential}; hence, we can say: \begin{align*} &\text{KL}(p_{ij}^{W_0} \| p_{ij}^W) = \frac{1}{2} D_{ld}(\ensuremath\boldsymbol{W}_0^{-1}, \ensuremath\boldsymbol{W}^{-1}) = \frac{1}{2} D_{ld}(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{W}_0) \\ &~~~~~~~~~~~~~~~~~ \overset{(a)}{=} \textbf{tr}(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{W}_0^{-1}) - \log (\det(\ensuremath\boldsymbol{W} \ensuremath\boldsymbol{W}_0^{-1})) - d, \end{align*} where $(a)$ is because of the definition of LogDet. Hence, the optimization problem can be \cite{davis2007information}: \begin{equation}\label{equation_optimization_information_theoretic} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & D_{ld}(\ensuremath\boldsymbol{W}, \ensuremath\boldsymbol{W}_0) \\ & \text{subject to} & & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \leq u, \quad \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}, \\ & & & \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \geq l, \quad \forall (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}. \end{aligned} \end{equation} \textbf{-- Online Information Theoretic Metric Learning:} The online information theoretic metric learning \cite{davis2007information} is suitable for streaming data. For this, we use the offline approach where the prior weight matrix $\ensuremath\boldsymbol{W}_0$ is the weight matrix learned from the data received so far. Consider the time slot $t$, by which some data have been accumulated and at which some new data points arrive. The optimization problem is Eq. (\ref{equation_optimization_information_theoretic}) with $\ensuremath\boldsymbol{W}_0 = \ensuremath\boldsymbol{W}_t$, the weight matrix learned so far at time $t$. Note that if there is some label information available, we can incorporate it in the optimization problem as a regularizer. \subsubsection{Information Theoretic Metric Learning for Imbalanced Data} Distance Metric by Balancing KL-divergence (DMBK) \cite{feng2018learning} can be used for imbalanced data where the cardinalities of the classes differ. Assume the classes have Gaussian distributions where $\ensuremath\boldsymbol{\mu}_i \in \mathbb{R}^d$ and $\ensuremath\boldsymbol{\Sigma}_i \in \mathbb{R}^{d \times d}$ denote the mean and covariance of the $i$-th class. Recall the projection matrix $\ensuremath\boldsymbol{U}$ in Eq. (\ref{equation_W_U_UT}) and Proposition \ref{proposition_metric_learning_projection}.
The KL-divergence between the probabilities of the $i$-th and $j$-th classes after projection onto the subspace of metric is \cite{feng2018learning}: \begin{equation} \begin{aligned} \text{KL}(p_i \| p_j) = &\,\frac{1}{2} \Big(\log\big(\det(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_j \ensuremath\boldsymbol{U})\big) \\ &- \log\big(\det(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_i \ensuremath\boldsymbol{U})\big) \\ &+ \textbf{tr}\big( (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_j \ensuremath\boldsymbol{U})^{-1} \ensuremath\boldsymbol{U}^\top (\ensuremath\boldsymbol{\Sigma}_i + \ensuremath\boldsymbol{D}_{ij}) \ensuremath\boldsymbol{U} \big) \Big), \end{aligned} \end{equation} where $\ensuremath\boldsymbol{D}_{ij} := (\ensuremath\boldsymbol{\mu}_i - \ensuremath\boldsymbol{\mu}_j) (\ensuremath\boldsymbol{\mu}_i - \ensuremath\boldsymbol{\mu}_j)^\top$. To cancel the effect of cardinality of classes in imbalanced data, we use the normalized divergence of classes: \begin{align} e_{ij} := \frac{n_i n_j \text{KL}(p_i \| p_j)}{\sum_{1 \leq k < l \leq c} n_k n_l \text{KL}(p_k \| p_l)}, \end{align} where $n_i$ denotes the cardinality of the $i$-th class and $c$ is the number of classes. We maximize the geometric mean of this divergence between pairs of classes to separate classes after projection onto the subspace of metric. A regularization term is used to increase the distances of dissimilar points and a constraint is used to decrease the distances of similar points \cite{feng2018learning}: \begin{equation}\label{equation_optimization_information_theoretic_imbalanced} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{maximize}} & & \log\Big(\Big(\prod_{1 \leq i < j \leq c} e_{ij}\Big)^{\frac{1}{c(c-1)}}\Big) \\ & & & ~~~~~~~~~~~ + \lambda \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}} \\ & \text{subject to} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}} \|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 \leq 1, \\ & & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter. This problem can be solved using the projected gradient method \cite{ghojogh2021kkt}. \subsubsection{Probabilistic Relevant Component Analysis Methods} Recall the Relevant Component Analysis (RCA) method \cite{shental2002adjustment} which was introduced in Section \ref{section_RCA}. Here, we introduce probabilistic RCA \cite{bar2003learning,bar2005learning} which uses information theory. Suppose the $n$ data points can be divided into $c$ clusters, or so-called chunklets. Let $\mathcal{X}_l$ denote the data of the $l$-th chunklet and $\ensuremath\boldsymbol{\mu}_l$ be the mean of $\mathcal{X}_l$. Consider Eq. (\ref{equation_W_U_UT}) for decomposition of the weight matrix in the metric where the column-space of $\ensuremath\boldsymbol{U}$ is the subspace of metric. Let the projection of data onto this subspace be denoted by $\ensuremath\boldsymbol{Y} = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$, the projected data in the $l$-th chunklet be $\mathcal{Y}_l$, and $\ensuremath\boldsymbol{\mu}^y_l$ be the mean of $\mathcal{Y}_l$.
In probabilistic RCA, we maximize the mutual information between the data and the projected data while constraining the summation of distances of points in each chunklet from the chunklet mean to be less than a threshold or margin $m>0$. The mutual information is related to the entropy as $I(X,Y) := H(Y) - H(Y|X)$; hence, we can maximize the entropy of projected data $H(Y)$ rather than the mutual information. Because $\ensuremath\boldsymbol{Y} = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$, we have $H(Y) \propto \det(\ensuremath\boldsymbol{U})$. According to Eq. (\ref{equation_W_U_UT}), we have $\det(\ensuremath\boldsymbol{U}) \propto \det(\ensuremath\boldsymbol{W})$. Hence, the optimization problem can be \cite{bar2003learning,bar2005learning}: \begin{equation}\label{equation_optimization_probabilistic_RCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{maximize}} & & \det(\ensuremath\boldsymbol{W}) \\ & \text{subject to} & & \sum_{l=1}^c \sum_{\ensuremath\boldsymbol{y}_i \in \mathcal{Y}_l} \|\ensuremath\boldsymbol{y}_i - \ensuremath\boldsymbol{\mu}^y_l\|_{\ensuremath\boldsymbol{W}}^2 \leq m, \\ & & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} This preserves the information of data after projection while the intra-chunklet variances are upper-bounded by a margin. If we assume a Gaussian distribution for each chunklet, with the covariance matrix $\ensuremath\boldsymbol{\Sigma}_l$ for the $l$-th chunklet, the objective $\det(\ensuremath\boldsymbol{W})$ can be stated in terms of $\log(\det(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_l \ensuremath\boldsymbol{U}))$ because of the quadratic characteristic of covariance. In this case, the optimization problem becomes: \begin{equation}\label{equation_optimization_probabilistic_RCA_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{maximize}} & & \sum_{l=1}^c \log(\det(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{\Sigma}_l \ensuremath\boldsymbol{U})) \\ & \text{subject to} & & \sum_{l=1}^c \sum_{\ensuremath\boldsymbol{y}_i \in \mathcal{Y}_l} \|\ensuremath\boldsymbol{y}_i - \ensuremath\boldsymbol{\mu}^y_l\|_{\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top}^2 \leq m, \end{aligned} \end{equation} where $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is already satisfied because of Eq. (\ref{equation_W_U_UT}). \subsubsection{Metric Learning by Information Geometry} Another information theoretic method for metric learning uses information geometry, in which kernels on data and labels are used \cite{wang2009information}. Let $\ensuremath\boldsymbol{L} \in \mathbb{R}^{c \times n}$ denote the one-hot encoded labels of $n$ data points with $c$ classes and let $\ensuremath\boldsymbol{X} \in \mathbb{R}^{d \times n}$ be the data points. The kernel matrix on the labels is $\ensuremath\boldsymbol{K}_L = \ensuremath\boldsymbol{L}^\top \ensuremath\boldsymbol{L} + \lambda \ensuremath\boldsymbol{I}$, whose main diagonal is strengthened by a small positive number $\lambda$ to make it full rank. Recall Proposition \ref{proposition_metric_learning_projection} and Eq. (\ref{equation_W_U_UT}) where $\ensuremath\boldsymbol{U}$ is the projection matrix onto the subspace of metric.
The kernel matrix over the projected data, $\ensuremath\boldsymbol{Y} = \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}$, is: \begin{align} \ensuremath\boldsymbol{K}_Y = &\ensuremath\boldsymbol{Y}^\top \ensuremath\boldsymbol{Y} = (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X})^\top (\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X}) \nonumber \\ &= \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{U} \ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{X} \overset{(\ref{equation_W_U_UT})}{=} \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{X}. \label{equation_ML_information_geometry_K_Y} \end{align} We can minimize the KL-divergence between the distributions of kernels $\ensuremath\boldsymbol{K}_Y$ and $\ensuremath\boldsymbol{K}_L$ \cite{wang2009information}: \begin{equation}\label{equation_optimization_by_information_geometry} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \text{KL}(\ensuremath\boldsymbol{K}_Y \| \ensuremath\boldsymbol{K}_L) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} For simplicity, we assume Gaussian distributions for the kernels. The KL divergence between the distributions of two matrices, $\ensuremath\boldsymbol{K}_Y \in \mathbb{R}^{n \times n}$ and $\ensuremath\boldsymbol{K}_L \in \mathbb{R}^{n \times n}$, with Gaussian distributions is simplified to {\citep[Theorem 1]{wang2009information}}: \begin{align*} &\text{KL}(\ensuremath\boldsymbol{K}_Y \| \ensuremath\boldsymbol{K}_L) = \frac{1}{2} \Big(\textbf{tr}(\ensuremath\boldsymbol{K}_L^{-1} \ensuremath\boldsymbol{K}_Y) + \log(\det(\ensuremath\boldsymbol{K}_L)) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - \log(\det(\ensuremath\boldsymbol{K}_Y)) - n\Big) \\ &\overset{(\ref{equation_ML_information_geometry_K_Y})}{\propto} \frac{1}{2} \Big(\textbf{tr}(\ensuremath\boldsymbol{K}_L^{-1} \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{X}) + \log(\det(\ensuremath\boldsymbol{K}_L)) \\ &~~~~~~~~~~~~~ - \log(\det(\ensuremath\boldsymbol{W})) - n\Big). \end{align*} After ignoring the constant terms w.r.t. $\ensuremath\boldsymbol{W}$, we can restate Eq. (\ref{equation_optimization_by_information_geometry}) as: \begin{equation}\label{equation_optimization_by_information_geometry_2} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{minimize}} & & \textbf{tr}(\ensuremath\boldsymbol{K}_L^{-1} \ensuremath\boldsymbol{X}^\top \ensuremath\boldsymbol{W} \ensuremath\boldsymbol{X}) - \log(\det(\ensuremath\boldsymbol{W})) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} If we take the derivative of the objective function in Eq. (\ref{equation_optimization_by_information_geometry_2}) and set it to zero, we have: \begin{align} & \frac{\partial c}{\partial \ensuremath\boldsymbol{W}} = \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{K}_L^{-1} \ensuremath\boldsymbol{X}^\top - \ensuremath\boldsymbol{W}^{-1} \overset{\text{set}}{=} \ensuremath\boldsymbol{0} \nonumber \\ &\implies \ensuremath\boldsymbol{W} = (\ensuremath\boldsymbol{X} \ensuremath\boldsymbol{K}_L^{-1} \ensuremath\boldsymbol{X}^\top)^{-1}. \label{equation_optimization_by_information_geometry_solution} \end{align} Note that the constraint $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is already satisfied by the solution, i.e., Eq. (\ref{equation_optimization_by_information_geometry_solution}).
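As a quick sanity check of this closed-form solution, consider the following NumPy sketch (hypothetical sizes; the variable names are ours), which builds the label kernel from one-hot labels and computes the weight matrix:
\begin{verbatim}
import numpy as np

# A sketch of the closed-form metric: W = (X K_L^{-1} X^T)^{-1} with the
# label kernel K_L = L^T L + lambda * I (one-hot labels L are c x n).
rng = np.random.default_rng(0)
d, n, c, lam = 4, 50, 3, 1e-3

X = rng.standard_normal((d, n))       # data matrix (d x n)
labels = rng.integers(0, c, size=n)
L = np.eye(c)[labels].T               # one-hot labels (c x n)

K_L = L.T @ L + lam * np.eye(n)       # full-rank label kernel
W = np.linalg.inv(X @ np.linalg.inv(K_L) @ X.T)

# W is symmetric positive definite (generically, when X has full row
# rank), so the constraint W >= 0 holds without any projection step.
assert np.all(np.linalg.eigvalsh(W) > 0)
\end{verbatim}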
Although this method already uses kernels, it can be kernelized further by using Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}) as the generalized Mahalanobis distance in the feature space, where $\ensuremath\boldsymbol{T}$ (defined in Eq. (\ref{equation_kernelization_representation_theory})) is the projection matrix for the metric. Using this in Eqs. (\ref{equation_optimization_by_information_geometry_2}) and (\ref{equation_optimization_by_information_geometry_solution}) gives us the kernel version of this method. See \cite{wang2009information} for more information. \subsection{Empirical Risk Minimization in Metric Learning} We can learn the metric by minimizing some empirical risk. In the following, some metric learning methods by risk minimization are introduced. \subsubsection{Metric Learning Using the Sigmoid Function} One of the metric learning methods by risk minimization is the method of \cite{guillaumin2009you}. The distribution for $\ensuremath\boldsymbol{x}_i$ to take $\ensuremath\boldsymbol{x}_j$ as its neighbor can be stated using a sigmoid function: \begin{align}\label{equation_P_W_ERM_1} p^W_{ij} &:= \frac{1}{1 + \exp(\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}}^2 - b)}, \end{align} where $b>0$ is a bias; this form is used because close-by points should have larger probability. We can maximize and minimize this probability for similar and dissimilar points, respectively: \begin{equation}\label{equation_optimization_ERM_1} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}}{\text{maximize}} & & \sum_{i=1}^n \sum_{j=1}^n y_{ij} \log(p^W_{ij}) + (1-y_{ij}) \log(1 - p^W_{ij}) \\ & \text{subject to} & & \ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}, \end{aligned} \end{equation} where $y_{ij}$ is as defined in Eq. (\ref{equation_y_ij}) except that we set $y_{ij}=0$, rather than $-1$, for dissimilar pairs, so that the objective is a binary cross-entropy. This can be solved using the projected gradient method \cite{ghojogh2021kkt}. This optimization can be seen as minimization of the empirical risk where close-by points are pushed toward each other and dissimilar points are pushed away to have less error. \subsubsection{Pairwise Constrained Component Analysis (PCCA)} Pairwise Constrained Component Analysis (PCCA) \cite{mignon2012pcca} minimizes the following empirical risk to minimize and maximize the distances of similar points and dissimilar points, respectively: \begin{equation}\label{equation_optimization_PCCA} \begin{aligned} & \underset{\ensuremath\boldsymbol{U}}{\text{minimize}} \\ &~~~~~~~~~ \sum_{i=1}^n \sum_{j=1}^n \log\Big(1 + \exp\big(y_{ij} (\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top}^2 - b)\big)\Big), \end{aligned} \end{equation} where $y_{ij}$ is defined in Eq. (\ref{equation_y_ij}), $b>0$ is a bias, and $\ensuremath\boldsymbol{W} \succeq \ensuremath\boldsymbol{0}$ is already satisfied because of Eq. (\ref{equation_W_U_UT}).
This can be solved using the projected gradient method \cite{ghojogh2021kkt} with the gradient \cite{mignon2012pcca}: \begin{equation}\label{equation_gradient_PCCA} \begin{aligned} \frac{\partial c}{\partial \ensuremath\boldsymbol{U}} &= 2 \sum_{i=1}^n \sum_{j=1}^n \frac{y_{ij}}{1 + \exp\big(\!-y_{ij} (\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top}^2 - b)\big)} \\ &~~~~~~~~~~~~ \times (\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}. \end{aligned} \end{equation} Note that we can have kernel PCCA by using Eq. (\ref{equation_Mahalanobis_distance_in_RKHS}). In other words, we can replace $\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{U}\ensuremath\boldsymbol{U}^\top}^2$ and $(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)(\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j)^\top \ensuremath\boldsymbol{U}$ with $\|\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j\|_{\ensuremath\boldsymbol{T}\ensuremath\boldsymbol{T}^\top}^2$ and $(\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)(\ensuremath\boldsymbol{k}_i - \ensuremath\boldsymbol{k}_j)^\top \ensuremath\boldsymbol{T}$, respectively, to have PCCA in the feature space. \subsubsection{Metric Learning for Privileged Information} In some applications, we have a dataset with privileged information where, for every point, we have two feature vectors: one for the main features (denoted by $\{\ensuremath\boldsymbol{x}_i\}_{i=1}^n$) and one for the privileged information (denoted by $\{\ensuremath\boldsymbol{z}_i\}_{i=1}^n$). A metric learning method using privileged information is that of \cite{yang2016empirical}, where we minimize and maximize the distances of similar and dissimilar points, respectively, for the main features. Simultaneously, we make the distances of privileged features close to the distances of main features. Having these two simultaneous goals, we minimize the following empirical risk \cite{yang2016empirical}: \begin{equation}\label{equation_optimization_ERM_3} \begin{aligned} & \underset{\ensuremath\boldsymbol{W}_1, \ensuremath\boldsymbol{W}_2}{\text{minimize}} & & \sum_{i=1}^n \sum_{j=1}^n \log\Big(1 + \\ & & &\exp\big(y_{ij}\, (\|\ensuremath\boldsymbol{x}_i - \ensuremath\boldsymbol{x}_j\|_{\ensuremath\boldsymbol{W}_1}^2 - \|\ensuremath\boldsymbol{z}_i - \ensuremath\boldsymbol{z}_j\|_{\ensuremath\boldsymbol{W}_2}^2 ) \big) \Big) \\ & \text{subject to} & & \ensuremath\boldsymbol{W}_1 \succeq \ensuremath\boldsymbol{0}, \quad \ensuremath\boldsymbol{W}_2 \succeq \ensuremath\boldsymbol{0}. \end{aligned} \end{equation} \section{Deep Metric Learning}\label{section_deep_metric_learning} We saw in Sections \ref{section_spectral_metric_learning} and \ref{section_probabilistic_metric_learning} that both spectral and probabilistic metric learning methods use the generalized Mahalanobis distance, i.e. Eq. (\ref{equation_generalized_Mahalanobis_distance}), and learn the weight matrix in the metric. Deep metric learning, however, has a different approach. The methods in deep metric learning usually do not use a generalized Mahalanobis distance; rather, they learn an embedding space using a neural network. The network learns a $p$-dimensional embedding space for discriminating classes or the dissimilar points and making the similar points close to each other. The network embeds data in the embedding space (or subspace) of metric.
Then, any distance metric $d(.,.): \mathbb{R}^p \times \mathbb{R}^p \rightarrow \mathbb{R}$ can be used in this embedding space. In the loss functions of the network, we can use the distance function $d(.,.)$ in the embedding space. For example, an option for the distance function is the squared $\ell_2$ norm or squared Euclidean distance: \begin{align} d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^1), \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\big) := \|\textbf{f}(\ensuremath\boldsymbol{x}_i^1) - \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\|_2^2, \end{align} where $\textbf{f}(\ensuremath\boldsymbol{x}_i) \in \mathbb{R}^p$ denotes the output of the network for the input $\ensuremath\boldsymbol{x}_i$ as its $p$-dimensional embedding. We train the network using mini-batch methods such as the mini-batch stochastic gradient descent and denote the mini-batch size by $b$. The learnable weights of the network, shared among sub-networks when the network consists of several sub-networks, are denoted by $\theta$. \subsection{Reconstruction Autoencoders} \subsubsection{Types of Autoencoders} An autoencoder is a model consisting of an encoder $E(.)$ and a decoder $D(.)$. There are several types of autoencoders. All types of autoencoders learn a code layer between the encoder and the decoder. Inferential autoencoders learn a stochastic latent space in the code layer between the encoder and decoder. Variational autoencoder \cite{ghojogh2021factor} and adversarial autoencoder \cite{ghojogh2021generative} are two important types of inferential autoencoders. Another type of autoencoder is the reconstruction autoencoder consisting of an encoder, transforming data to a code, and a decoder, transforming the code back to the data. Hence, the decoder reconstructs the input data to the encoder. The code is a representation of the data. Each of the encoder and decoder can be multiple layers of neural network with activation functions. \subsubsection{Reconstruction Loss} We denote the input data point to the encoder by $\ensuremath\boldsymbol{x} \in \mathbb{R}^d$ where $d$ is the dimensionality of data. The reconstructed data point is the output of decoder and is denoted by $\widehat{\ensuremath\boldsymbol{x}} \in \mathbb{R}^d$. The representation code, which is the output of encoder and the input of decoder, is denoted by $\textbf{f}(\ensuremath\boldsymbol{x}) := E(\ensuremath\boldsymbol{x}) \in \mathbb{R}^p$. We have $\widehat{\ensuremath\boldsymbol{x}} = D(E(\ensuremath\boldsymbol{x})) = D(\textbf{f}(\ensuremath\boldsymbol{x}))$. If the dimensionality of the code is greater than the dimensionality of input data, i.e. $p > d$, the autoencoder is called an over-complete autoencoder \cite{goodfellow2016deep}. Otherwise, if $p < d$, the autoencoder is an under-complete autoencoder \cite{goodfellow2016deep}. The loss function of reconstruction autoencoder tries to make the reconstructed data close to the input data: \begin{equation}\label{equation_reconstruction_autoencoder_loss} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b \Big( d\big(\ensuremath\boldsymbol{x}_i, \widehat{\ensuremath\boldsymbol{x}}_i\big) + \lambda \Omega(\theta) \Big), \end{aligned} \end{equation} where $\lambda \geq 0$ is the regularization parameter and $\Omega(\theta)$ is some penalty or regularization on the weights. Here, the distance function $d(.,.)$ is defined on $\mathbb{R}^d \times \mathbb{R}^d$. Note that the penalty term can be a regularization on the code $\textbf{f}(\ensuremath\boldsymbol{x}_i)$.
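As an illustration, the following is a minimal PyTorch sketch of an under-complete reconstruction autoencoder trained with this loss; the layer sizes are hypothetical, the distance is the squared Euclidean distance, and weight decay plays the role of $\lambda\, \Omega(\theta)$:
\begin{verbatim}
import torch
import torch.nn as nn

# A minimal sketch (hypothetical sizes d=784, p=32) of a reconstruction
# autoencoder; encoder E and decoder D are small feed-forward networks.
d, p = 784, 32
encoder = nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, p))
decoder = nn.Sequential(nn.Linear(p, 128), nn.ReLU(), nn.Linear(128, d))

params = list(encoder.parameters()) + list(decoder.parameters())
# weight_decay plays the role of the penalty lambda * Omega(theta).
optimizer = torch.optim.Adam(params, lr=1e-3, weight_decay=1e-5)

x = torch.randn(64, d)   # a mini-batch of b = 64 points
code = encoder(x)        # f(x) = E(x): the p-dimensional embedding
x_hat = decoder(code)    # reconstruction D(E(x))

loss = ((x - x_hat) ** 2).sum(dim=1).mean()  # squared l2 reconstruction
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}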
If the used distance metric is the squared Euclidean distance, this loss is named the regularized Mean Squared Error (MSE) loss. \subsubsection{Denoising Autoencoder} A problem with the over-complete autoencoder is that its training only copies each feature of the data input to one of the neurons in the code layer and then copies it back to the corresponding feature of the output layer. This is because the number of neurons in the code layer is greater than the number of neurons in the input and output layers. In other words, the network just memorizes the data, i.e., it overfits. This copying happens by making some of the weights equal to one (or a scale of one depending on the activation functions) and the rest of the weights equal to zero. To avoid this problem in over-complete autoencoders, one can add some noise to the input data and try to reconstruct the data without noise. For this, Eq. (\ref{equation_reconstruction_autoencoder_loss}) is used while the input to the network is the mini-batch plus some noise. This forces the over-complete autoencoder not to simply copy data to the code layer. This autoencoder can be used for denoising as it reconstructs the data without noise for a noisy input. This network is called the Denoising Autoencoder (DAE) \cite{goodfellow2016deep}. \subsubsection{Metric Learning by Reconstruction Autoencoder} The under-complete reconstruction autoencoder can be used for metric learning and dimensionality reduction, especially when $p \ll d$. The loss function for learning a low-dimensional representation code and reconstructing data by the autoencoder is Eq. (\ref{equation_reconstruction_autoencoder_loss}). The code layer between the encoder and decoder is the embedding space of metric. Note that if the activation functions of all layers are linear, the under-complete autoencoder is reduced to Principal Component Analysis \cite{ghojogh2019unsupervised}. Let $\ensuremath\boldsymbol{U}_l$ denote the weight matrix of the $l$-th layer of network, $\ell_e$ be the number of layers of encoder, and $\ell_d$ be the number of layers of decoder. With linear activation function, the encoder and decoder are: \begin{align*} &\text{encoder: } \quad \mathbb{R}^p \ni \textbf{f}(\ensuremath\boldsymbol{x}_i) = \underbrace{\ensuremath\boldsymbol{U}_{\ell_e}^\top \ensuremath\boldsymbol{U}_{\ell_e-1}^\top \dots \ensuremath\boldsymbol{U}_{1}^\top}_{\ensuremath\boldsymbol{U}_e^\top} \ensuremath\boldsymbol{x}_i, \\ &\text{decoder: } \quad \mathbb{R}^d \ni \widehat{\ensuremath\boldsymbol{x}}_i = \underbrace{\ensuremath\boldsymbol{U}_{1} \dots \ensuremath\boldsymbol{U}_{\ell_d-1} \ensuremath\boldsymbol{U}_{\ell_d}}_{\ensuremath\boldsymbol{U}_d} \textbf{f}(\ensuremath\boldsymbol{x}_i), \end{align*} where linear projection by $\ell$ projection matrices can be replaced by linear projection with a single projection matrix, i.e., $\ensuremath\boldsymbol{U}_e$ for the encoder and $\ensuremath\boldsymbol{U}_d$ for the decoder. For learning complicated data patterns, we can use nonlinear activation functions between layers of the encoder and decoder to have nonlinear metric learning and dimensionality reduction. It is noteworthy that a nonlinear neural network can be seen as an ensemble or concatenation of dimensionality reduction (or feature extraction) and kernel methods. The justification of this claim is as follows. Let the weight matrix of a layer of the network be $\ensuremath\boldsymbol{U} \in \mathbb{R}^{d_1 \times d_2}$, so it connects $d_1$ neurons to $d_2$ neurons.
Two cases can happen: \begin{itemize} \item If $d_1 \geq d_2$, this layer acts as dimensionality reduction or feature extraction because it has reduced the dimensionality of its input data. If this layer has a nonlinear activation function, the dimensionality reduction is nonlinear; otherwise, it is linear. \item If $d_1 < d_2$, this layer acts as a kernel method which maps its input data to the high-dimensional feature space in some Reproducing Kernel Hilbert Space (RKHS). This kernelization can help nonlinear separation of some classes which are not separable linearly \cite{ghojogh2021reproducing}. An example use of kernelization in machine learning is kernel support vector machine \cite{vapnik1995nature}. \end{itemize} Therefore, a neural network is a complicated feature extraction method as a concatenation of dimensionality reduction and kernel methods. Each layer of network learns its own features from data. \subsection{Supervised Metric Learning by Supervised Loss Functions} Various loss functions exist for supervised metric learning by neural networks. Supervised loss functions can teach the network to separate classes in the embedding space \cite{sikaroudi2020supervision}. For this, we use a network whose last layer is for classification of data points. The features of the one-to-last layer can be used for feature embedding. The last layer after the embedding features is named the classification layer. The structure of this network is shown in Fig. \ref{figure_Supervised_losses_embedding}. Let the $i$-th point in the mini-batch be denoted by $\ensuremath\boldsymbol{x}_i \in \mathbb{R}^d$ and its label be denoted by $y_i \in \mathbb{R}$. Suppose the network has one output neuron and its output for the input $\ensuremath\boldsymbol{x}_i$ is denoted by $\textbf{f}_o(\ensuremath\boldsymbol{x}_i) \in \mathbb{R}$. This output is the estimated class label by the network. We denote the output of the one-to-last layer by $\textbf{f}(\ensuremath\boldsymbol{x}_i) \in \mathbb{R}^p$ where $p$ is the number of neurons in that layer which is equivalent to the dimensionality of the embedding space. The last layer of network, connecting the $p$ neurons to the output neuron, is a fully-connected layer. The network until the one-to-last layer can be any feed-forward or convolutional network depending on the type of data. If the network is convolutional, it should be flattened at the one-to-last layer. The network learns to classify the classes, by the supervised loss functions, so the features of the one-to-last layer will be discriminating features and suitable for embedding. \begin{figure}[!t] \centering \includegraphics[width=3in]{./images/Supervised_losses_embedding} \caption{The structure of network for metric learning with supervised loss function.} \label{figure_Supervised_losses_embedding} \end{figure} \subsubsection{Mean Squared Error and Mean Absolute Value Losses} One of the supervised losses is the Mean Squared Error (MSE) which makes the estimated labels close to the true labels using the squared $\ell_2$ norm: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b (\textbf{f}_o(\ensuremath\boldsymbol{x}_i) - y_i)^2. \end{aligned} \end{equation} One problem with this loss function is that it exaggerates outliers because of the square; its advantage is its differentiability.
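A minimal PyTorch sketch of this structure (with hypothetical sizes) is the following: the backbone outputs the one-to-last embedding $\textbf{f}(\ensuremath\boldsymbol{x}) \in \mathbb{R}^p$, a fully-connected last layer outputs $\textbf{f}_o(\ensuremath\boldsymbol{x})$, and training uses the MSE loss just introduced:
\begin{verbatim}
import torch
import torch.nn as nn

# A sketch of the network in the figure above (hypothetical sizes):
# backbone -> one-to-last embedding f(x) -> classification layer f_o(x).
d, p = 20, 8
backbone = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, p))
last_layer = nn.Linear(p, 1)   # one output neuron for the label

x = torch.randn(32, d)         # mini-batch of b = 32 points
y = torch.randn(32, 1)         # true labels

f = backbone(x)                # embedding features f(x)
f_o = last_layer(f)            # estimated labels f_o(x)

mse_loss = ((f_o - y) ** 2).sum()  # the MSE loss above
mse_loss.backward()
# After training, backbone(x) is used as the metric embedding.
\end{verbatim}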
Another loss function is the Mean Absolute Error (MAE) which makes the estimated labels close to the true labels using the $\ell_1$ norm or the absolute value: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b |\textbf{f}_o(\ensuremath\boldsymbol{x}_i) - y_i|. \end{aligned} \end{equation} The distance used in this loss is also named the Manhattan distance. This loss function does not have the problem of MSE and it can be used for imposing sparsity in the embedding. It is not differentiable at the point $\textbf{f}_o(\ensuremath\boldsymbol{x}_i) = y_i$; however, as deep learning frameworks handle this point with subgradients, it is not a big issue in practice. \subsubsection{Huber and KL-Divergence Losses} Another loss function is the Huber loss which is a combination of the MSE and MAE to have the advantages of both of them: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}} ~~~ \\ &~~~~\sum_{i=1}^b \left\{ \begin{array}{ll} 0.5 (\textbf{f}_o(\ensuremath\boldsymbol{x}_i) - y_i)^2 & \mbox{if } |\textbf{f}_o(\ensuremath\boldsymbol{x}_i) - y_i| \leq \delta \\ \delta (|\textbf{f}_o(\ensuremath\boldsymbol{x}_i) - y_i| - 0.5 \delta) & \mbox{otherwise}. \end{array} \right. \end{aligned} \end{equation} The KL-divergence loss function makes the distribution of the estimated labels close to the distribution of the true labels: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \text{KL}(\mathbb{P}(\textbf{f}_o(\ensuremath\boldsymbol{x})) \| \mathbb{P}(y)) = \sum_{i=1}^b \textbf{f}_o(\ensuremath\boldsymbol{x}_i) \log(\frac{\textbf{f}_o(\ensuremath\boldsymbol{x}_i)}{y_i}). \end{aligned} \end{equation} \subsubsection{Hinge Loss} If there are two classes, i.e. $c=2$, we can have true labels as $y_i \in \{-1, 1\}$. In this case, a possible loss function is the Hinge loss: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b \big[m - y_i\, \textbf{f}_o(\ensuremath\boldsymbol{x}_i)\big]_+, \end{aligned} \end{equation} where $[\cdot]_+ := \max(\cdot,0)$ and $m>0$ is the margin. If the signs of the estimated and true labels are different, the loss is positive and should be minimized. If the signs are the same and $|\textbf{f}_o(\ensuremath\boldsymbol{x}_i)| \geq m$, then the loss function is zero. If the signs are the same but $|\textbf{f}_o(\ensuremath\boldsymbol{x}_i)| < m$, the loss is positive and should be minimized because the estimation is correct but not with enough margin from the incorrect estimation. \subsubsection{Cross-entropy Loss} For any number of classes, denoted by $c$, we can have a cross-entropy loss. For this loss, we have $c$ neurons, rather than one neuron, at the last layer. In contrast to the MSE, MAE, Huber, and KL-divergence losses which use linear activation function at the last layer, cross-entropy requires softmax or sigmoid activation function at the last layer so the output values are between zero and one. For this loss, we have $c$ outputs, i.e. $\textbf{f}_o(\ensuremath\boldsymbol{x}_i) \in \mathbb{R}^c$ (continuous values between zero and one), and the true labels are one-hot encoded, i.e., $\ensuremath\boldsymbol{y}_i \in \{0,1\}^c$.
This loss is defined as: \begin{equation}\label{equation_cross_entropy_loss} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ -\sum_{i=1}^b \sum_{l=1}^c (\ensuremath\boldsymbol{y}_i)_l \log\big(\textbf{f}_o(\ensuremath\boldsymbol{x}_i)_l\big), \end{aligned} \end{equation} where $(\ensuremath\boldsymbol{y}_i)_l$ and $\textbf{f}_o(\ensuremath\boldsymbol{x}_i)_l$ denote the $l$-th element of $\ensuremath\boldsymbol{y}_i$ and $\textbf{f}_o(\ensuremath\boldsymbol{x}_i)$, respectively. Minimizing this loss separates classes for classification; this separation of classes also gives us discriminating embedding in the one-to-last layer \cite{sikaroudi2020supervision,boudiaf2020unifying}. The reason why cross-entropy can be suitable for metric learning is theoretically justified in \cite{boudiaf2020unifying}, as explained in the following. Consider the mutual information between the true labels $Y$ and the estimated labels $\textbf{f}_o(X)$: \begin{align} I(\textbf{f}_o(X); Y) &= H(\textbf{f}_o(X)) - H(\textbf{f}_o(X)|Y) \label{equation_mutual_information_true_and_estimated_labels_1} \\ &= H(Y) - H(Y|\textbf{f}_o(X)), \label{equation_mutual_information_true_and_estimated_labels_2} \end{align} where $H(.)$ denotes entropy. On the one hand, Eq. (\ref{equation_mutual_information_true_and_estimated_labels_1}) has a generative view which exists in the metric learning loss functions generating embedding features. Eq. (\ref{equation_mutual_information_true_and_estimated_labels_2}), on the other hand, has a discriminative view used in the cross-entropy loss function. Therefore, the metric learning losses and the cross-entropy loss are related. It is shown in {\citep[Proposition 1]{boudiaf2020unifying}} that the cross-entropy is an upper-bound on the metric learning losses, so its minimization for classification also provides embedding features. It is noteworthy that another supervised loss function is the triplet loss, introduced in the next section. The triplet loss can be used for both hard labels (for classification) and soft labels (for similarity and dissimilarity of points). The triplet loss also does not need a last classification layer; therefore, the embedding layer can be the last layer for this loss. \subsection{Metric Learning by Siamese Networks}\label{section_metric_learning_Siamese} \begin{figure*}[!t] \centering \includegraphics[width=5in]{./images/Siamese} \caption{The structure of Siamese network with (a) two and (b) three sub-networks.} \label{figure_Siamese} \end{figure*} \subsubsection{Siamese and Triplet Networks} One of the important deep metric learning methods is the Siamese network which is widely used for feature extraction. The Siamese network, originally proposed in \cite{bromley1993signature}, is a network consisting of several equivalent sub-networks sharing their weights. The number of sub-networks in a Siamese network can be any number but is usually two or three. A Siamese network with three sub-networks is also called a triplet network \cite{hoffer2015deep}. The weights of sub-networks in a Siamese network are trained in a way that the intra- and inter-class variances are decreased and increased, respectively. In other words, the similar points are pushed toward each other while the dissimilar points are pulled away from one another. Siamese networks have been used in various applications such as computer vision \cite{schroff2015facenet} and natural language processing \cite{yang2020beyond}.
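The weight sharing can be illustrated by a minimal PyTorch sketch (hypothetical sizes): applying the same backbone module to several inputs realizes a Siamese network with that many sub-networks, since all of them literally use the same weight tensors:
\begin{verbatim}
import torch
import torch.nn as nn

# A sketch of weight sharing in a Siamese network (hypothetical sizes):
# the same backbone embeds all inputs, so the sub-networks share weights.
d, p = 10, 16
backbone = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, p))

x1 = torch.randn(8, d)   # a batch fed to the first sub-network
x2 = torch.randn(8, d)   # a batch fed to the second sub-network

f1, f2 = backbone(x1), backbone(x2)   # f(x1), f(x2) with shared weights
dist = ((f1 - f2) ** 2).sum(dim=1)    # squared distances in the embedding
\end{verbatim}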
\subsubsection{Pairs and Triplets of Data Points} Depending on the number of sub-networks in the Siamese network, different loss functions are used for training. The loss functions of Siamese networks usually require pairs or triplets of data points. Siamese networks do not use the data points one by one; rather, we need to make pairs or triplets of points out of the dataset for training a Siamese network. For making the pairs or triplets, we consider every data point as the anchor point, denoted by $\ensuremath\boldsymbol{x}_i^a$. Then, we take one of the similar points to the anchor point as the positive (or neighbor) point, denoted by $\ensuremath\boldsymbol{x}_i^p$. We also take one of the dissimilar points to the anchor point as the negative (or distant) point, denoted by $\ensuremath\boldsymbol{x}_i^n$. If class labels are available, we can use them to find the positive point as one of the points in the same class as the anchor point, and to find the negative point as one of the points in a different class from the anchor point's class. Another approach is to augment the anchor point, using one of the augmentation methods, to obtain a positive point for the anchor point \cite{khodadadeh2019unsupervised,chen2020simple}. For Siamese networks with two sub-networks, we make pairs of anchor-positive points $\{(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p)\}_{i=1}^{n_t}$ and anchor-negative points $\{(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^n)\}_{i=1}^{n_t}$, where $n_t$ is the number of pairs. For Siamese networks with three sub-networks, we make triplets of anchor-positive-negative points $\{(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n)\}_{i=1}^{n_t}$, where $n_t$ is the number of triplets. If we consider every point of the dataset as an anchor, the number of pairs/triplets is the same as the number of data points, i.e., $n_t = n$. Various loss functions of Siamese networks use pairs or triplets of data points to push the positive point towards the anchor point and pull the negative point away from it. Doing this iteratively for all pairs or triplets makes the intra-class variances smaller and the inter-class variances larger for better discrimination of classes or clusters. In the following, we introduce some of the loss functions for training a Siamese network.
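As a minimal sketch of this pair/triplet construction, the following Python snippet forms anchor-positive-negative index triplets from class labels; drawing one random positive and one random negative per anchor is an illustrative choice.
\begin{verbatim}
import random

def make_triplets(labels):
    # Return index triplets (anchor, positive, negative) from class
    # labels. Every point is used once as the anchor; one positive is
    # drawn from the anchor's class, one negative from another class.
    n = len(labels)
    triplets = []
    for a in range(n):
        positives = [j for j in range(n)
                     if labels[j] == labels[a] and j != a]
        negatives = [j for j in range(n) if labels[j] != labels[a]]
        if positives and negatives:
            triplets.append((a, random.choice(positives),
                             random.choice(negatives)))
    return triplets

# e.g., make_triplets([0, 0, 1, 1, 2]) yields one triplet per anchor
\end{verbatim}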
\subsubsection{Implementation of Siamese Networks} A Siamese network with two and three sub-networks is depicted in Fig. \ref{figure_Siamese}. We denote the output of the Siamese network for input $\ensuremath\boldsymbol{x} \in \mathbb{R}^d$ by $\textbf{f}(\ensuremath\boldsymbol{x}) \in \mathbb{R}^p$ where $p$ is the dimensionality of embedding (or the number of neurons at the last layer of the network) which is usually much less than the dimensionality of data, i.e., $p \ll d$. Note that the sub-networks of a Siamese network can be any fully-connected or convolutional network depending on the type of data. The network structure used for the sub-networks is usually called the backbone network. \begin{figure*}[!t] \centering \includegraphics[width=5in]{./images/triplet_contrastive_losses} \caption{Visualization of what contrastive and triplet losses do: (a) a triplet of anchor (green circle), positive (blue circle), and negative (red diamond) points, (b) the effect of contrastive loss making a margin between the anchor and negative point, and (c) the effect of triplet loss making a margin between the positive and negative points.} \label{figure_triplet_contrastive_losses} \end{figure*} The weights of sub-networks are shared in the sense that the values of their weights are equal. Implementation of a Siamese network can be done in two ways: \begin{enumerate} \item We can implement several sub-networks in the memory. In the training phase, we feed every data point in the pairs or triplets to one of the sub-networks and take the outputs of the sub-networks to have $\textbf{f}(\ensuremath\boldsymbol{x}_i^a)$, $\textbf{f}(\ensuremath\boldsymbol{x}_i^p)$, and $\textbf{f}(\ensuremath\boldsymbol{x}_i^n)$. We use these in the loss function and update the weights of only one of the sub-networks by backpropagation \cite{ghojogh2021kkt}. Then, we copy the updated weights to the other sub-networks. We repeat this for all mini-batches and epochs until convergence. In the test phase, we feed the test point $\ensuremath\boldsymbol{x}$ to only one of the sub-networks and get the output $\textbf{f}(\ensuremath\boldsymbol{x})$ as its embedding. \item We can implement only one sub-network in the memory. In the training phase, we feed the data points in the pairs or triplets to the sub-network one by one and take the outputs of the sub-network to have $\textbf{f}(\ensuremath\boldsymbol{x}_i^a)$, $\textbf{f}(\ensuremath\boldsymbol{x}_i^p)$, and $\textbf{f}(\ensuremath\boldsymbol{x}_i^n)$. We use these in the loss function and update the weights of the sub-network by backpropagation \cite{ghojogh2021kkt}. We repeat this for all mini-batches and epochs until convergence. In the test phase, we feed the test point $\ensuremath\boldsymbol{x}$ to the sub-network and get the output $\textbf{f}(\ensuremath\boldsymbol{x})$ as its embedding. \end{enumerate} The advantage of the first approach is to have all the sub-networks ready, so we do not need to feed the points of pairs or triplets one by one. Its disadvantage is using more memory. As the number of points in the pairs or triplets is small (i.e., only two or three), the second approach is recommended as it is memory-efficient. \subsubsection{Contrastive Loss} One loss function for Siamese networks is the contrastive loss which uses the anchor-positive and anchor-negative pairs of points. Suppose, in each mini-batch, we have $b$ pairs of points $\{(\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2)\}_{i=1}^{b}$, some of which are anchor-positive and some are anchor-negative pairs. The points in an anchor-positive pair are similar, i.e. $(\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2) \in \mathcal{S}$, and the points in an anchor-negative pair are dissimilar, i.e. $(\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2) \in \mathcal{D}$, where $\mathcal{S}$ and $\mathcal{D}$ denote the similar and dissimilar sets.
\hfill\break \textbf{-- Contrastive Loss:} We define: \begin{align}\label{equation_y_i_in_contrastive_loss} y_i := \left\{ \begin{array}{ll} 0 & \mbox{if } (\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2) \in \mathcal{S} \\ 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2) \in \mathcal{D}. \end{array} \right. \quad \forall i \in \{1, \dots, b\}. \end{align} The main contrastive loss was proposed in \cite{hadsell2006dimensionality} and is: \begin{equation}\label{equation_constrastive_loss} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b &\Big( (1-y_i) d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^1), \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\big) \\ &+ y_i \big[\! -d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^1), \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\big) + m \big]_+ \Big), \end{aligned} \end{equation} where $m>0$ is the margin and $[.]_+ := \max(.,0)$ is the standard Hinge loss. The first term of the loss minimizes the embedding distances of similar points and the second term maximizes the embedding distances of dissimilar points. As shown in Fig. \ref{figure_triplet_contrastive_losses}-b, it tries to make the distances of similar points as small as possible and the distances of dissimilar points at least greater than a margin $m$ (because the term inside the Hinge loss should become close to zero). \hfill\break \textbf{-- Generalized Contrastive Loss:} The $y_i$, defined in Eq. (\ref{equation_y_i_in_contrastive_loss}), is used in the contrastive loss, i.e., Eq. (\ref{equation_constrastive_loss}). This variable is binary and a hard measure of similarity and dissimilarity. Rather than this hard measure, we can have a soft measure of similarity and dissimilarity, denoted by $\psi_i$, which states how similar $\ensuremath\boldsymbol{x}_i^1$ and $\ensuremath\boldsymbol{x}_i^2$ are. This measure is between zero (completely similar) and one (completely dissimilar). It can be either given with the dataset as a hand-set measure or computed using any similarity measure such as the cosine function: \begin{align} [0,1] \ni \psi_i := \frac{1}{2} \big(-\cos(\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2) + 1\big). \end{align} In this case, the pairs $\{(\ensuremath\boldsymbol{x}_i^1, \ensuremath\boldsymbol{x}_i^2)\}_{i=1}^{b}$ need not be completely similar or dissimilar points; they can be any two random points from the dataset with some level of similarity/dissimilarity. The generalized contrastive loss generalizes the contrastive loss using this soft measure of similarity \cite{leyva2021generalized}: \begin{equation}\label{equation_constrastive_loss_generalized} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b &\Big( (1-\psi_i) d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^1), \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\big) \\ &+ \psi_i \big[\! -d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^1), \textbf{f}(\ensuremath\boldsymbol{x}_i^2)\big) + m \big]_+ \Big). \end{aligned} \end{equation}
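A minimal PyTorch-style sketch of the contrastive loss of Eq. (\ref{equation_constrastive_loss}) follows; the document keeps the distance $d(\cdot,\cdot)$ generic, so the Euclidean distance and the default margin used here are our illustrative assumptions.
\begin{verbatim}
import torch

def contrastive_loss(f1, f2, y, m=1.0):
    # f1, f2: (b, p) embeddings of the pairs; y: (b,) with 0 for
    # similar and 1 for dissimilar pairs; m: margin.
    d = torch.norm(f1 - f2, dim=1)             # pairwise distances
    pull = (1 - y) * d                         # attract similar pairs
    push = y * torch.clamp(m - d, min=0.0)     # repel dissimilar pairs
    return torch.sum(pull + push)
\end{verbatim}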
\subsubsection{Triplet Loss}\label{section_triplet_loss} One of the losses for Siamese networks with three sub-networks is the triplet loss \cite{schroff2015facenet} which uses the triplets in mini-batches, denoted by $\{(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n)\}_{i=1}^{b}$. It is defined as: \begin{equation}\label{equation_triplet_loss} \begin{aligned} \underset{\theta} {\text{minimize}}\, \sum_{i=1}^b &\Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big) - d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^n)\big) + m \Big]_+, \end{aligned} \end{equation} where $m>0$ is the margin and $[.]_+ := \max(.,0)$ is the standard Hinge loss. As shown in Fig. \ref{figure_triplet_contrastive_losses}-c, because of the used Hinge loss, this loss makes the distances of dissimilar points greater than the distances of similar points by at least a margin $m$; in other words, there will be a distance of at least margin $m$ between the positive and negative points. This loss desires to eventually have: \begin{align}\label{equation_triplet_loss_desire} d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big) + m \leq d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^n)\big), \end{align} for all triplets. The triplet loss is closely related to the cost function for spectral large margin metric learning \cite{weinberger2006distance,weinberger2009distance} (see Section \ref{section_large_margin_metric_learning}). It is also noteworthy that using the triplet loss as regularization for the cross-entropy loss has been shown to increase the robustness of the network to some adversarial attacks \cite{mao2019metric}. \subsubsection{Tuplet Loss} In the triplet loss, i.e. Eq. (\ref{equation_triplet_loss}), we use one positive and one negative point per anchor point. The tuplet loss \cite{sohn2016improved} uses several negative points per anchor point. If $k$ denotes the number of negative points per anchor point and $\ensuremath\boldsymbol{x}_i^{n,j}$ denotes the $j$-th negative point for $\ensuremath\boldsymbol{x}_i$, the tuplet loss is \cite{sohn2016improved}: \begin{equation}\label{equation_tuplet_loss} \begin{aligned} \underset{\theta} {\text{minimize}}\, \sum_{i=1}^b \sum_{j=1}^k &\Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big) \\ &- d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^{n,j})\big) + m \Big]_+. \end{aligned} \end{equation} This loss function pushes multiple negative points away from the anchor point simultaneously.
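The following PyTorch-style sketch implements the triplet loss of Eq. (\ref{equation_triplet_loss}) and the tuplet loss of Eq. (\ref{equation_tuplet_loss}) with the Euclidean distance; the distance choice and the default margin are illustrative assumptions.
\begin{verbatim}
import torch

def triplet_loss(fa, fp, fn, m=1.0):
    # fa, fp, fn: (b, p) embeddings of anchors, positives, negatives
    d_ap = torch.norm(fa - fp, dim=1)
    d_an = torch.norm(fa - fn, dim=1)
    return torch.sum(torch.clamp(d_ap - d_an + m, min=0.0))

def tuplet_loss(fa, fp, fns, m=1.0):
    # fns: (b, k, p) embeddings of k negatives per anchor
    d_ap = torch.norm(fa - fp, dim=1, keepdim=True)      # (b, 1)
    d_an = torch.norm(fa.unsqueeze(1) - fns, dim=2)      # (b, k)
    return torch.sum(torch.clamp(d_ap - d_an + m, min=0.0))
\end{verbatim}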
\subsubsection{Neighborhood Component Analysis Loss} Neighborhood Component Analysis (NCA) \cite{goldberger2005neighbourhood} was originally proposed as a spectral metric learning method (see Section \ref{section_NCA_spectral}). After the success of deep learning, it was used as the loss function of Siamese networks, where we minimize the negative log-likelihood using the Gaussian distribution or the softmax form within the mini-batch. Assume we have $c$ classes in every mini-batch. We denote the class index of $\ensuremath\boldsymbol{x}_i$ by $c(\ensuremath\boldsymbol{x}_i)$ and the data points of the $j$-th class in the mini-batch by $\mathcal{X}_j$. The NCA loss is: \begin{equation}\label{equation_NCA_loss_Siamese} \begin{aligned} &\underset{\theta} {\text{minimize}} ~-\! \sum_{i=1}^{b} \log \Big(\exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big)\big) \\ &\times \Big[\sum_{j=1, j \neq c(\ensuremath\boldsymbol{x}_i)}^c \sum_{\ensuremath\boldsymbol{x}_j^n \in \mathcal{X}_j} \exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_j^n)\big)\big)\Big]^{-1}\Big). \end{aligned} \end{equation} The numerator minimizes the distances of similar points and the denominator maximizes the distances of dissimilar points. \subsubsection{Proxy Neighborhood Component Analysis Loss} Computation of the terms, especially the normalization factor in the denominator, is time- and memory-consuming in the NCA loss function (see Eq. (\ref{equation_NCA_loss_Siamese})). Proxy-NCA loss functions define some proxy points in the embedding space of the network and use them in the NCA loss to accelerate computation and make it memory-efficient \cite{movshovitz2017no}. The proxies are representatives of classes in the embedding space and they can be defined in various ways. The simplest way is to define the proxy of every class as the mean of the embedded points of that class. Of course, new mini-batches come during training. We can accumulate the embedded points of mini-batches and update the proxies after training the network by every mini-batch. Another approach for defining proxies is to cluster the embedded points into $c$ clusters (e.g., by K-means) and use the centroids of clusters. Let the set of proxies be denoted by $\mathcal{P}$ whose cardinality is the number of classes, i.e., $c$. Every embedded point is assigned to one of the proxies by \cite{movshovitz2017no}: \begin{align} \Pi(\textbf{f}(\ensuremath\boldsymbol{x}_i)) := \arg \min_{\ensuremath\boldsymbol{\pi} \in \mathcal{P}} \|\textbf{f}(\ensuremath\boldsymbol{x}_i) - \ensuremath\boldsymbol{\pi}\|_2^2, \end{align} or we can assign every point to the proxy of its own class. Let $\ensuremath\boldsymbol{\pi}_j$ denote the proxy associated with the $j$-th class. The Proxy-NCA loss is the NCA loss, i.e. Eq. (\ref{equation_NCA_loss_Siamese}), but using proxies \cite{movshovitz2017no}: \begin{equation}\label{equation_proxy_NCA_loss_Siamese} \begin{aligned} &\underset{\theta} {\text{minimize}} ~-\! \sum_{i=1}^{b} \log \Big(\exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \Pi(\textbf{f}(\ensuremath\boldsymbol{x}_i^p))\big)\big) \\ &\times \Big[\sum_{j=1, j \neq c(\ensuremath\boldsymbol{x}_i)}^c \exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \ensuremath\boldsymbol{\pi}_j\big)\big)\Big]^{-1}\Big). \end{aligned} \end{equation} It is shown in \cite{movshovitz2017no} that the Proxy-NCA loss, i.e. Eq. (\ref{equation_proxy_NCA_loss_Siamese}), is an upper-bound on the NCA loss, i.e. Eq. (\ref{equation_NCA_loss_Siamese}); hence, its minimization also achieves the goal of NCA. Comparing Eqs. (\ref{equation_NCA_loss_Siamese}) and (\ref{equation_proxy_NCA_loss_Siamese}) shows that Proxy-NCA is faster and more efficient than NCA because it uses only the proxies of negative classes rather than all the negative points in the mini-batch. Proxy-NCA has also been used in feature extraction from medical images \cite{teh2020learning}.
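A sketch of the Proxy-NCA loss of Eq. (\ref{equation_proxy_NCA_loss_Siamese}) follows, using squared Euclidean distances and class-mean proxies; these concrete choices, and the assumption that every class appears in the mini-batch, are ours for illustration.
\begin{verbatim}
import torch

def class_mean_proxies(F, labels, c):
    # Proxy of every class: mean of its embedded points (one simple
    # choice). F: (b, p) embeddings; labels: (b,) in {0, ..., c-1}.
    return torch.stack([F[labels == l].mean(dim=0) for l in range(c)])

def proxy_nca_loss(F, labels, proxies):
    # -log of exp(-d(f_i, own proxy)) normalized by the sum of
    # exp(-d(f_i, proxy_j)) over the other classes.
    d = torch.cdist(F, proxies) ** 2          # (b, c) squared distances
    loss = 0.0
    for i in range(F.shape[0]):
        pos = torch.exp(-d[i, labels[i]])
        mask = torch.ones_like(d[i], dtype=torch.bool)
        mask[labels[i]] = False
        neg = torch.sum(torch.exp(-d[i][mask]))
        loss = loss - torch.log(pos / neg)
    return loss
\end{verbatim}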
It is noteworthy that we can incorporate temperature scaling \cite{hinton2014distilling} in the Proxy-NCA loss. The obtained loss is named Proxy-NCA++ \cite{teh2020proxyncaPlusPlus} and is defined as: \begin{equation}\label{equation_proxy_NCA_plus_plus_loss_Siamese} \begin{aligned} &\underset{\theta} {\text{minimize}} ~-\! \sum_{i=1}^{b} \log \Big(\exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \Pi(\textbf{f}(\ensuremath\boldsymbol{x}_i^p))\big) \times \frac{1}{\tau} \big) \\ &\times \Big[\sum_{j=1, j \neq c(\ensuremath\boldsymbol{x}_i)}^c \exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \ensuremath\boldsymbol{\pi}_j\big) \times \frac{1}{\tau} \big)\Big]^{-1}\Big), \end{aligned} \end{equation} where $\tau>0$ is the temperature which is a hyper-parameter. \subsubsection{Softmax Triplet Loss} Consider a mini-batch containing points from $c$ classes where $c(\ensuremath\boldsymbol{x}_i)$ is the class index of $\ensuremath\boldsymbol{x}_i$ and $\mathcal{X}_j$ denotes the points of the $j$-th class in the mini-batch. We can use the softmax function or the Gaussian distribution for the probability that the point $\ensuremath\boldsymbol{x}_i$ takes $\ensuremath\boldsymbol{x}_j$ as its neighbor. Similar to Eq. (\ref{equation_P_W_classCollapse}) or Eq. (\ref{equation_NCA_loss_Siamese}), we can have the softmax function used in NCA \cite{goldberger2005neighbourhood}: \begin{align} p_{ij} := \frac{\exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big)\big)}{\sum_{k \neq i, k=1}^b \exp\big(\!-\!d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big)\big)}, \quad j \neq i. \end{align} Another approach for the softmax form is to use the inner product in the exponent \cite{ye2019unsupervised}: \begin{align}\label{equation_softmax_triplet_loss_innerproduct} p_{ij} := \frac{\exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_j)\big)}{\sum_{k = 1, k \neq i}^b \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_k)\big)}, \quad j \neq i. \end{align} The loss function for training the network can be the negative log-likelihood, which can be called the softmax triplet loss \cite{ye2019unsupervised}: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ - \sum_{i=1}^b \Big( &\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \log(p_{ij}) \\ &+ \sum_{\ensuremath\boldsymbol{x}_j \not \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \log(1 - p_{ij}) \Big). \end{aligned} \end{equation} This decreases and increases the distances of similar points and dissimilar points, respectively. \subsubsection{Triplet Global Loss} The triplet global loss \cite{kumar2016learning} uses the means and variances of the anchor-positive pairs and anchor-negative pairs.
It is defined as: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ (\sigma_p^2 + \sigma_n^2) + \lambda\, [\mu_p - \mu_n + m]_+, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter, $m>0$ is the margin, the means of the pairs are: \begin{align*} & \mu_p := \frac{1}{b} \sum_{i=1}^b d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big), \\ & \mu_n := \frac{1}{b} \sum_{i=1}^b d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^n)\big), \end{align*} and the variances of the pairs are: \begin{align*} & \sigma_p^2 := \frac{1}{b} \sum_{i=1}^b \Big( d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big) - \mu_p \Big)^2, \\ & \sigma_n^2 := \frac{1}{b} \sum_{i=1}^b \Big( d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^n)\big) - \mu_n \Big)^2. \end{align*} The first term of this loss minimizes the variances of the anchor-positive and anchor-negative pairs. The second term, however, discriminates the anchor-positive pairs from the anchor-negative pairs. Hence, the negative points are separated from the positive points. \subsubsection{Angular Loss} For a triplet $(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n)$, consider a triangle whose vertices are the anchor, positive, and negative points. To satisfy Eq. (\ref{equation_triplet_loss_desire}) in the triplet loss, the angle at the vertex $\ensuremath\boldsymbol{x}_i^n$ should be small so the edge $d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^n)\big)$ becomes larger than the edge $d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big)$. Hence, we need an upper bound $\alpha>0$ on the angle at the vertex $\ensuremath\boldsymbol{x}_i^n$. If $\ensuremath\boldsymbol{x}_i^c := (\ensuremath\boldsymbol{x}_i^a + \ensuremath\boldsymbol{x}_i^p) / 2$, the angular loss is defined to be \cite{wang2017deep}: \begin{equation}\label{equation_angular_loss} \begin{aligned} &\underset{\theta} {\text{minimize}}\, \\ &\sum_{i=1}^b \Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)\big) - 4 \tan^2\!(\alpha)\, d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^n), \textbf{f}(\ensuremath\boldsymbol{x}_i^c)\big) \Big]_+. \end{aligned} \end{equation} This loss reduces the distance of the anchor and positive points and, through the upper-bound angle $\alpha$, pushes the negative point away from the center $\ensuremath\boldsymbol{x}_i^c$ of the anchor and positive points. This increases the distance of the anchor and negative points for discrimination of dissimilar points. \subsubsection{SoftTriple Loss} If we normalize the points to have unit length, Eq. (\ref{equation_triplet_loss_desire}) can be restated by using inner products: \begin{align}\label{equation_triplet_loss_desire_innerProduct} \textbf{f}(\ensuremath\boldsymbol{x}_i^a)^\top \textbf{f}(\ensuremath\boldsymbol{x}_i^n) + m \leq \textbf{f}(\ensuremath\boldsymbol{x}_i^a)^\top \textbf{f}(\ensuremath\boldsymbol{x}_i^p), \end{align} whose margin is not exactly equal to the margin in Eq. (\ref{equation_triplet_loss_desire}). Consider a Siamese network whose last layer's weights are $\{\ensuremath\boldsymbol{w}_l \in \mathbb{R}^p\}_{l=1}^c$ where $p$ is the dimensionality of the one-to-last layer and $c$ is the number of classes and the number of output neurons.
We consider $k$ centers for the embedding of every class; hence, we define $\ensuremath\boldsymbol{w}_l^j \in \mathbb{R}^p$ as the weight vector of the $j$-th center of the $l$-th class. It is shown in \cite{qian2019softtriple} that minimizing the softmax loss encourages Eq. (\ref{equation_triplet_loss_desire_innerProduct}). Therefore, we can use the SoftTriple loss for training a Siamese network \cite{qian2019softtriple}: \begin{equation}\label{equation_SoftTriple_loss} \begin{aligned} &\underset{\theta} {\text{minimize}} ~-\! \sum_{i=1}^{b} \log \Big( \exp(\lambda(s_{i,y_i} - \delta)) \\ &~~~~~~~~~~ \times \big(\exp(\lambda(s_{i,y_i} - \delta)) + \sum_{l=1, l \neq y_i}^c \exp(\lambda s_{i,l}) \big)^{-1} \Big), \end{aligned} \end{equation} where $\lambda,\delta>0$ are hyper-parameters, $y_i$ is the label of $\ensuremath\boldsymbol{x}_i$, and: \begin{align*} s_{i,l} := \sum_{j=1}^k \frac{\exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \ensuremath\boldsymbol{w}_l^j\big)}{\sum_{t=1}^k \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \ensuremath\boldsymbol{w}_l^t\big)} \textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \ensuremath\boldsymbol{w}_l^j. \end{align*} This loss decreases the intra-class distances and increases the inter-class distances. \subsubsection{Fisher Siamese Losses}\label{section_Fisher_Siamese_losses} Fisher Discriminant Analysis (FDA) \cite{fisher1936use,ghojogh2019fisher} decreases the intra-class variance and increases the inter-class variance by maximizing the Fisher criterion. This idea is very similar to the idea of the loss functions for Siamese networks. Hence, we can combine the methods of FDA and Siamese loss functions. Consider a Siamese network whose last layer is denoted by the projection matrix $\ensuremath\boldsymbol{U}$. We consider the features of the one-to-last layer in the mini-batch. The covariance matrices of similar points and dissimilar points (one-to-last layer features) in the mini-batch are denoted by $\ensuremath\boldsymbol{S}_W$ and $\ensuremath\boldsymbol{S}_B$. These covariances become $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}$ and $\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}$, respectively, after the last layer's projection because of the quadratic characteristic of covariance. As in FDA, we can maximize the Fisher criterion or equivalently minimize the negative Fisher criterion: \begin{equation*} \begin{aligned} \underset{\ensuremath\boldsymbol{U}} {\text{minimize}} ~~~ \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) - \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}). \end{aligned} \end{equation*} This problem is ill-posed because it increases the total covariance of the embedded data in order to increase the term $\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U})$.
Hence, we add minimization of the total covariance as the regularization term: \begin{equation*} \begin{aligned} \underset{\ensuremath\boldsymbol{U}} {\text{minimize}} ~~~ &\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) - \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}) \\ &+ \epsilon \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_T \ensuremath\boldsymbol{U}), \end{aligned} \end{equation*} where $\epsilon \in (0,1)$ is the regularization parameter and $\ensuremath\boldsymbol{S}_T$ is the covariance of all points of the mini-batch in the one-to-last layer. The total scatter can be written as the summation of $\ensuremath\boldsymbol{S}_W$ and $\ensuremath\boldsymbol{S}_B$; hence: \begin{align*} & \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) - \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}) + \epsilon\, \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_T \ensuremath\boldsymbol{U}) \\ &= \textbf{tr}\big(\ensuremath\boldsymbol{U}^\top \big((1+\epsilon) \ensuremath\boldsymbol{S}_W - (1-\epsilon) \ensuremath\boldsymbol{S}_B\big) \ensuremath\boldsymbol{U}\big) \\ &= (2-\lambda) \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) - \lambda \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}), \end{align*} where $\lambda := 1-\epsilon$. Inspired by Eq. (\ref{equation_triplet_loss}), we can have the following loss, named the Fisher discriminant triplet loss \cite{ghojogh2020fisher}: \begin{equation}\label{equation_Fisher_triplet_loss} \begin{aligned} \underset{\theta} {\text{minimize}}~~~~ \, &\Big[ (2-\lambda) \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) \\ &~~~~~~~~ - \lambda \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}) + m \Big]_+, \end{aligned} \end{equation} where $m>0$ is the margin. Backpropagating the error of this loss can update both $\ensuremath\boldsymbol{U}$ and the other layers of the network. Note that the summation over the mini-batch is integrated in the computation of the covariance matrices $\ensuremath\boldsymbol{S}_W$ and $\ensuremath\boldsymbol{S}_B$. Inspired by Eq. (\ref{equation_constrastive_loss}), we can also have the Fisher discriminant contrastive loss \cite{ghojogh2020fisher}: \begin{equation}\label{equation_Fisher_constrastive_loss} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ & (2-\lambda) \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U}) \\ &+ \big[\! - \lambda \textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U}) + m \big]_+. \end{aligned} \end{equation} Note that the variable $y_i$ used in the contrastive loss (see Eq. (\ref{equation_y_i_in_contrastive_loss})) is already used in the computation of the covariances $\ensuremath\boldsymbol{S}_W$ and $\ensuremath\boldsymbol{S}_B$. There exist some other loss functions inspired by Fisher discriminant analysis but they are not used for Siamese networks. Those methods will be introduced in Section \ref{section_deep_discriminant_analysis}.
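As a sketch of how these losses can be computed, the following PyTorch-style code evaluates the Fisher discriminant triplet loss of Eq. (\ref{equation_Fisher_triplet_loss}); estimating $\ensuremath\boldsymbol{S}_W$ and $\ensuremath\boldsymbol{S}_B$ from anchor-positive and anchor-negative pair differences is one simple option assumed here.
\begin{verbatim}
import torch

def pair_scatter(Fa, Fb):
    # Scatter of pair differences: sum_i (a_i - b_i)(a_i - b_i)^T
    D = Fa - Fb                   # (b, q) one-to-last-layer differences
    return D.t() @ D              # (q, q)

def fisher_triplet_loss(Fa, Fp, Fn, U, lam=0.9, m=1.0):
    # Fa, Fp, Fn: (b, q) one-to-last-layer features of anchors,
    # positives, negatives; U: (q, p) last-layer projection; lam = 1-eps
    S_W = pair_scatter(Fa, Fp)    # intra-class (similar-pair) scatter
    S_B = pair_scatter(Fa, Fn)    # inter-class (dissimilar-pair) scatter
    intra = torch.trace(U.t() @ S_W @ U)
    inter = torch.trace(U.t() @ S_B @ U)
    return torch.clamp((2 - lam) * intra - lam * inter + m, min=0.0)
\end{verbatim}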
\subsubsection{Deep Adversarial Metric Learning} In deep adversarial metric learning \cite{duan2018deep}, negative points are generated in an adversarial manner \cite{goodfellow2014generative,ghojogh2021generative}. In this method, we have a generator $G(.)$ which tries to generate negative points fooling the metric learning. Using the triplet inputs $\{(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n)\}_{i=1}^b$, the loss function of the generator is \cite{duan2018deep}: \begin{equation} \begin{aligned} &\mathcal{L}_G := \sum_{i=1}^b \Big( \|G(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n) - \ensuremath\boldsymbol{x}_i^a\|_2^2 \\ &~~~~~~~~ + \lambda_1 \|G(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n) - \ensuremath\boldsymbol{x}_i^n\|_2^2 \\ &~~~~~~~~ + \lambda_2 \big[ d(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(G(\ensuremath\boldsymbol{x}_i^a, \ensuremath\boldsymbol{x}_i^p, \ensuremath\boldsymbol{x}_i^n))) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - d(\textbf{f}(\ensuremath\boldsymbol{x}_i^a), \textbf{f}(\ensuremath\boldsymbol{x}_i^p)) + m \big]_+ \Big), \end{aligned} \end{equation} where $\lambda_1, \lambda_2>0$ are the regularization parameters. This loss makes the generated negative point close to the anchor point (to adversarially fool metric learning) and close to the real negative point (so that it behaves like a negative). The Hinge term encourages the embedding of the generated negative point to be even closer to the anchor than the positive point is; hence, the generated point acts as a hard negative. If $\mathcal{L}_M$ denotes any loss function for a Siamese network, such as the triplet loss, the total loss function in deep adversarial metric learning is minimizing $\mathcal{L}_G + \lambda_3 \mathcal{L}_M$ where $\lambda_3>0$ is the regularization parameter \cite{duan2018deep}. It is noteworthy that there exists another adversarial metric learning method which is not for Siamese networks but for cross-modal data \cite{xu2019deep}. \subsubsection{Triplet Mining}\label{section_triplet_mining} In every mini-batch containing data points from $c$ classes, we can select and use triplets of data points in different ways. For example, we can use all similar and dissimilar points for every anchor point as positive and negative points, respectively. Another approach is to only use some of the similar and dissimilar points within the mini-batch. These approaches for selecting and using triplets are called triplet mining \cite{sikaroudi2020offline}. In the following, we review some of the most important triplet mining methods. We use the triplet mining methods with the triplet loss, i.e., Eq. (\ref{equation_triplet_loss}). Suppose $b$ is the mini-batch size, $c(\ensuremath\boldsymbol{x}_i)$ is the class index of $\ensuremath\boldsymbol{x}_i$, $\mathcal{X}_j$ denotes the points of the $j$-th class in the mini-batch, and $\mathcal{X}$ denotes the data points in the mini-batch. \hfill\break \textbf{-- Batch-all:} Batch-all triplet mining \cite{ding2015deep} considers every point in the mini-batch as an anchor point. All points in the mini-batch which are in the same class as the anchor point are used as positive points.
All points in the mini-batch which are in a different class from the class of the anchor point are used as negative points: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) + m \Big]_+. \end{aligned} \end{equation} Batch-all mining makes use of all data points in the mini-batch to utilize all the available information. \hfill\break \textbf{-- Batch-hard:} Batch-hard triplet mining \cite{hermans2017defense} considers every point in the mini-batch as an anchor point. The hardest positive, which is the farthest point from the anchor point in the same class, is used as the positive point. The hardest negative, which is the closest point to the anchor point from another class, is used as the negative point: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \Big[ \max_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~ - \min_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) + m \Big]_+. \end{aligned} \end{equation} Batch-hard mining uses the hardest points so that the network learns the hardest cases. By learning the hardest cases, other cases are expected to be learned properly. Learning the hardest cases can also be justified by opposition-based learning \cite{tizhoosh2005opposition}. Batch-hard mining has been used in many applications such as person re-identification \cite{wang2019improved}. \hfill\break \textbf{-- Batch-semi-hard:} Batch-semi-hard triplet mining \cite{schroff2015facenet} considers every point in the mini-batch as an anchor point. All points in the mini-batch which are in the same class as the anchor point are used as positive points. The hardest negative (closest to the anchor point from another class) which is farther than the positive point is used as the negative point: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~ - \min_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \big\{d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big)\, |\, \\ &~~~~~~~~~~~~~~ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) > d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big)\big\} + m \Big]_+. \end{aligned} \end{equation}
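The following PyTorch-style sketch implements batch-hard mining with the triplet loss; the Euclidean distance is an illustrative choice, and the code assumes every class in the mini-batch has at least two points and that at least two classes are present.
\begin{verbatim}
import torch

def batch_hard_triplet_loss(F, labels, m=1.0):
    # F: (b, p) embeddings; labels: (b,). For every anchor, use the
    # farthest same-class point and the closest other-class point.
    d = torch.cdist(F, F)                               # (b, b)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # class mask
    inf = torch.full_like(d, float('inf'))
    # hardest positive: maximum distance within the class
    d_pos = torch.where(same, d, -inf).max(dim=1).values
    # hardest negative: minimum distance to another class
    d_neg = torch.where(~same, d, inf).min(dim=1).values
    return torch.sum(torch.clamp(d_pos - d_neg + m, min=0.0))
\end{verbatim}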
\hfill\break \textbf{-- Easy-positive:} Easy-positive triplet mining \cite{xuan2020improved} considers every point in the mini-batch as an anchor point. The easiest positive (closest to the anchor point from the same class) is used as the positive point. All points in the mini-batch which are in a different class from the class of the anchor point are used as negative points: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big[ \min_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) + m \Big]_+. \end{aligned} \end{equation} We can use this triplet mining approach in the NCA loss function, such as in Eq. (\ref{equation_softmax_triplet_loss_innerproduct}). For example, we can have \cite{xuan2020improved}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \bigg( \max_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~ \times \Big( \max_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~ + \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) \Big)^{-1} \bigg), \end{aligned} \end{equation} where the embeddings of all points of the mini-batch are normalized to have length one; note that, for unit-length embeddings, the easiest (closest) positive has the maximum inner product with the anchor. \hfill\break \textbf{-- Lifted embedding loss:} The lifted embedding loss \cite{oh2016deep} is related to the anchor-positive distance and the smallest (hardest) anchor-negative distance: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} ~~~ &\sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big( \Big[ d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)) \\ & + \max\Big( \max_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \big\{m - d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k))\big\}, \\ & \max_{\ensuremath\boldsymbol{x}_l \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_j)}} \big\{m - d(\textbf{f}(\ensuremath\boldsymbol{x}_j), \textbf{f}(\ensuremath\boldsymbol{x}_l))\big\} \Big) \Big]_+ \Big)^2. \end{aligned} \end{equation} This loss involves triplet mining because it uses extreme distances. Alternatively, another version of this loss function uses the logarithm and exponential operators \cite{oh2016deep}: \begin{equation} \begin{aligned} \underset{\theta} {\text{minimize}} &\sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big( \Big[ d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)) \\ & + \log\Big( \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \exp\big(m - d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_k))\big), \\ & \sum_{\ensuremath\boldsymbol{x}_l \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_j)}} \exp\big(m - d(\textbf{f}(\ensuremath\boldsymbol{x}_j), \textbf{f}(\ensuremath\boldsymbol{x}_l))\big) \Big) \Big]_+ \Big)^2.
\end{aligned} \end{equation} \hfill\break \textbf{-- Hard mining center-triplet loss:} Let the mini-batch contain data points from $c$ classes. The hard mining center-triplet loss \cite{lv2019novel} considers the mean of every class as an anchor point. The hardest (farthest) positive point and the hardest (closest) negative point are used in this loss as \cite{lv2019novel}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{l=1}^c \Big[ \max_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\bar{\ensuremath\boldsymbol{x}}^l)}} d\big(\textbf{f}(\bar{\ensuremath\boldsymbol{x}}^l), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~ - \min_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\bar{\ensuremath\boldsymbol{x}}^l)}} d\big(\textbf{f}(\bar{\ensuremath\boldsymbol{x}}^l), \textbf{f}(\ensuremath\boldsymbol{x}_k)\big) + m \Big]_+, \end{aligned} \end{equation} where $\bar{\ensuremath\boldsymbol{x}}^l$ denotes the mean of the $l$-th class. \hfill\break \textbf{-- Triplet loss with cross-batch memory:} A version of the triplet loss can be \cite{wang2020cross}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \bigg( -\sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_j) \\ &~~~~~~~~~~~~~~~~~~~~~~ + \sum_{\ensuremath\boldsymbol{x}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\ensuremath\boldsymbol{x}_k) \bigg). \end{aligned} \end{equation} This triplet loss can use a cross-batch memory where we accumulate the few latest mini-batches. Every coming mini-batch updates the memory. Let the capacity of the memory be $w$ points and the mini-batch size be $b$. Let $\widetilde{\ensuremath\boldsymbol{x}}_i$ denote the $i$-th data point in the memory. The triplet loss with cross-batch memory is defined as \cite{wang2020cross}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \bigg( -\sum_{\widetilde{\ensuremath\boldsymbol{x}}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\widetilde{\ensuremath\boldsymbol{x}}_j) \\ &~~~~~~~~~~~~~~~~~~~~~~ + \sum_{\widetilde{\ensuremath\boldsymbol{x}}_k \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \textbf{f}(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}(\widetilde{\ensuremath\boldsymbol{x}}_k) \bigg), \end{aligned} \end{equation} which takes the positive and negative points from the memory rather than from the coming mini-batch. \subsubsection{Triplet Sampling} Rather than using the extreme (hardest or easiest) positive and negative points \cite{sikaroudi2020offline}, we can sample positive and negative points from the points in the mini-batch or from some distributions. There are several approaches by which the positive and negative points can be sampled \cite{ghojogh2021data}: \begin{itemize}\setlength\itemsep{0.1em} \item Sampled by extreme distances of points, \item Sampled randomly from classes, \item Sampled by distribution but from existing points, \item Sampled stochastically from distributions of classes. \end{itemize} These approaches are used for triplet sampling. The first approach was introduced in Section \ref{section_triplet_mining}.
The first, second, and third approaches sample the positive and negative points from the set of points in the mini-batch. This type of sampling is called survey sampling \cite{ghojogh2020sampling}. The third and fourth approaches sample points from distributions stochastically. In the following, we introduce some of the triplet sampling methods. \hfill\break \textbf{-- Distance weighted sampling:} Distance weighted sampling \cite{wu2017sampling} is a method in the third approach, i.e., sampling by distribution but from existing points. The distribution of the pairwise distances is proportional to \cite{wu2017sampling}: \begin{align*} \mathbb{P}\big(d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j))\big) & \sim \big( d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j))\big)^{p-2} \times \nonumber \\ &\Big(1 - 0.25 \big( d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j))\big)^2\Big)^{(b-3)/2}, \end{align*} where $b$ is the number of points in the mini-batch and $p$ is the dimensionality of the embedding space (i.e., the number of neurons in the last layer of the Siamese network). In every mini-batch, we consider every point once as an anchor point. For an anchor point, we consider all points of the mini-batch which are in a different class as candidates for the negative point. We sample a negative point, denoted by $\ensuremath\boldsymbol{x}_*^n$, from these candidates \cite{wu2017sampling}: \begin{align*} \ensuremath\boldsymbol{x}_*^n \sim \min\Big(\lambda, \mathbb{P}^{-1}\big(d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j))\big)\Big), \quad \forall j \neq i, \end{align*} where $\lambda>0$ is a hyperparameter to ensure that all candidates have a chance of being chosen. This sampling is performed for every mini-batch. The loss function in distance weighted sampling is \cite{wu2017sampling}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}}\,\,\, \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \Big[ d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)\big) \\ &~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - d\big(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_*^n)\big) + m \Big]_+. \end{aligned} \end{equation}
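A minimal sketch of distance weighted sampling follows; it samples one negative per anchor with weights proportional to the clipped inverse of the distance density above, and the clipping value, the numerical clamps, and the unit-hypersphere assumption are our illustrative simplifications.
\begin{verbatim}
import torch

def sample_weighted_negative(F, labels, i, lam=100.0):
    # Sample a negative index for anchor i with probability proportional
    # to min(lam, 1/q(d)), where q(d) ~ d^(p-2) (1 - 0.25 d^2)^((b-3)/2).
    # Assumes embeddings F (b, p) lie on the unit hypersphere.
    b, p = F.shape
    d = torch.norm(F - F[i], dim=1)
    candidates = torch.nonzero(labels != labels[i]).flatten()
    dc = d[candidates].clamp(min=1e-4, max=1.999)   # keep q(d) finite
    q = dc ** (p - 2) * (1 - 0.25 * dc ** 2) ** ((b - 3) / 2.0)
    w = torch.clamp(1.0 / q, max=lam)               # clipped inverse density
    idx = torch.multinomial(w, 1).item()
    return candidates[idx].item()
\end{verbatim}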
\hfill\break \textbf{-- Sampling by Bayesian updating theorem:} We can sample triplets from the distributions of classes, which is the fourth approach of sampling mentioned above. One method for this sampling is using the Bayesian updating theorem \cite{sikaroudi2021batch}, which updates the posterior via Bayes' rule as new data arrive. In this method, we assume a $p$-dimensional Gaussian distribution for every class in the embedding space, where $p$ is the dimensionality of the embedding space. We accumulate the embedded points for every class as new mini-batches are introduced to the network. The distributions of classes are updated based on both the existing points available so far and the newly coming data points. It can be shown that the posterior of the mean and covariance of a Gaussian distribution is a normal inverse Wishart distribution \cite{murphy2007conjugate}. The mean and covariance of a Gaussian distribution have a generalized Student-$t$ distribution and an inverse Wishart distribution, respectively \cite{murphy2007conjugate}. Let the so-far available data have sample size $n_0$, mean $\ensuremath\boldsymbol{\mu}^0$, and covariance $\ensuremath\boldsymbol{\Sigma}^0$. Also, let the newly coming data have sample size $n'$, mean $\ensuremath\boldsymbol{\mu}'$, and covariance $\ensuremath\boldsymbol{\Sigma}'$. We update the mean and covariance by the expectations of these distributions \cite{sikaroudi2021batch}: \begin{align*} & \ensuremath\boldsymbol{\mu}^{0} \leftarrow \mathbb{E}(\ensuremath\boldsymbol{\mu}\, |\, \ensuremath\boldsymbol{x}^{0}) = \frac{n' \ensuremath\boldsymbol{\mu}' + n_0 \ensuremath\boldsymbol{\mu}^{0}}{n' + n_0}, \\ & \ensuremath\boldsymbol{\Sigma}^{0} \leftarrow \mathbb{E}(\ensuremath\boldsymbol{\Sigma}\, |\, \ensuremath\boldsymbol{x}^{0}) = \frac{\ensuremath\boldsymbol{\Upsilon}^{-1}}{n'\!+\! n_0\! -\! p \!-\! 1},~~~ \forall\, n'\! +\! n_0\! >\! p\! +\! 1, \end{align*} where: \begin{align*} \mathbb{R}^{p \times p} \ni \ensuremath\boldsymbol{\Upsilon} := &\, n' \ensuremath\boldsymbol{\Sigma}' + n_0 \ensuremath\boldsymbol{\Sigma}^0 \\ &+ \frac{n' n_0}{n' + n_0} (\ensuremath\boldsymbol{\mu}^0 - \ensuremath\boldsymbol{\mu}') (\ensuremath\boldsymbol{\mu}^0 - \ensuremath\boldsymbol{\mu}')^\top. \end{align*} The updated mean and covariance are used for the Gaussian distributions of the classes. Then, we sample triplets from the distributions of classes rather than from the points of the mini-batch. We consider every point of the new mini-batch as an anchor point and sample a positive point from the distribution of the same class. We sample $c-1$ negative points from the distributions of the $c-1$ other classes. If this triplet sampling procedure is used with the triplet and NCA loss functions, the approaches are named Bayesian Updating with Triplet loss (BUT) and Bayesian Updating with NCA loss (BUNCA), respectively \cite{sikaroudi2021batch}. \hfill\break \textbf{-- Hard negative sampling:} Let the anchor, positive, and negative points be denoted by $\ensuremath\boldsymbol{x}^a$, $\ensuremath\boldsymbol{x}^p$, and $\ensuremath\boldsymbol{x}^n$, respectively. Consider the following distributions for the negative and positive points \cite{robinson2020contrastive}: \begin{align*} & \mathbb{P}(\ensuremath\boldsymbol{x}^n) \propto \alpha \mathbb{P}_n(\ensuremath\boldsymbol{x}^n) + (1 - \alpha) \mathbb{P}_p(\ensuremath\boldsymbol{x}^n), \\ & \mathbb{P}_n(\ensuremath\boldsymbol{x}) \propto \exp\big(\beta \textbf{f}(\ensuremath\boldsymbol{x}^a)^\top \textbf{f}(\ensuremath\boldsymbol{x})\big)\, \mathbb{P}(\ensuremath\boldsymbol{x} | c(\ensuremath\boldsymbol{x}) \neq c(\ensuremath\boldsymbol{x}^a)), \\ & \mathbb{P}_p(\ensuremath\boldsymbol{x}) \propto \exp\big(\beta \textbf{f}(\ensuremath\boldsymbol{x}^a)^\top \textbf{f}(\ensuremath\boldsymbol{x})\big)\, \mathbb{P}(\ensuremath\boldsymbol{x} | c(\ensuremath\boldsymbol{x}) = c(\ensuremath\boldsymbol{x}^a)), \end{align*} where $\alpha \in (0,1)$ is a hyper-parameter. The loss function with hard negative sampling is \cite{robinson2020contrastive}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}} ~-\!
\sum_{i=1}^{b} \mathbb{E}_{\ensuremath\boldsymbol{x}^p \sim \mathbb{P}_p(\ensuremath\boldsymbol{x})} \log \bigg( \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a)^\top \textbf{f}(\ensuremath\boldsymbol{x}^p)\big) \\ &~~~~~~~~~~ \Big( \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a)^\top \textbf{f}(\ensuremath\boldsymbol{x}^p)\big) \\ &~~~~~~~~~~ + \mathbb{E}_{\ensuremath\boldsymbol{x}^n \sim \mathbb{P}(\ensuremath\boldsymbol{x}^n)}\big[ \exp\big(\textbf{f}(\ensuremath\boldsymbol{x}_i^a)^\top \textbf{f}(\ensuremath\boldsymbol{x}^n)\big) \big] \Big)^{-1} \bigg), \end{aligned} \end{equation} where the positive and negative points are sampled from the positive and negative distributions defined above. The expectations can be estimated using the Monte Carlo approximation \cite{ghojogh2020sampling}. This type of triplet sampling belongs to the fourth approach, i.e., sampling stochastically from distributions of classes. \subsection{Deep Discriminant Analysis Metric Learning}\label{section_deep_discriminant_analysis} Deep discriminant analysis metric learning methods use the idea of Fisher discriminant analysis \cite{fisher1936use,ghojogh2019fisher} in deep learning, for learning an embedding space which separates classes. Some of these methods are deep probabilistic discriminant analysis \cite{li2019discriminant}, discriminant analysis with virtual samples \cite{kim2021virtual}, Fisher Siamese losses \cite{ghojogh2020fisher}, and deep Fisher discriminant analysis \cite{diaz2017deep,diaz2019deep}. The Fisher Siamese losses were already introduced in Section \ref{section_Fisher_Siamese_losses}. \subsubsection{Deep Probabilistic Discriminant Analysis} Deep probabilistic discriminant analysis \cite{li2019discriminant} minimizes the inverse Fisher criterion: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}} ~~~ \frac{\mathbb{E}[\textbf{tr}(\text{cov}(\textbf{f}(\ensuremath\boldsymbol{x})|y))]}{\textbf{tr}(\text{cov}(\mathbb{E}[\textbf{f}(\ensuremath\boldsymbol{x})|y]))} = \frac{\sum_{i=1}^b \mathbb{E}[\text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i)]}{\sum_{i=1}^b \text{var}(\mathbb{E}[\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i])} \\ &\overset{(a)}{=} \frac{\sum_{i=1}^b \mathbb{E}[\text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i)]}{\sum_{i=1}^b \big( \text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)) - \mathbb{E}[\text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i)] \big)} \\ &\overset{(b)}{=} \frac{\sum_{i=1}^b \sum_{l=1}^c \mathbb{P}(y=l) \text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i=l)}{\sum_{i=1}^b \big( \text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)) - \sum_{l=1}^c \mathbb{P}(y=l) \text{var}(\textbf{f}(\ensuremath\boldsymbol{x}_i)|y_i=l) \big)}, \end{aligned} \end{equation} where $b$ is the mini-batch size, $c$ is the number of classes, $y_i$ is the class label of $\ensuremath\boldsymbol{x}_i$, $\text{cov}(.)$ denotes covariance, $\text{var}(.)$ denotes variance, $\mathbb{P}(y=l)$ is the prior of the $l$-th class (estimated by the ratio of class population to the total number of points in the mini-batch), $(a)$ is because of the law of total variance, and $(b)$ is because of the definition of expectation. The numerator and denominator represent the intra-class and inter-class variances, respectively.
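The following sketch computes this inverse Fisher criterion on a mini-batch of embeddings, estimating class priors from class frequencies as stated above; summing the per-dimension variances is our illustrative reading of the trace of the covariance.
\begin{verbatim}
import torch

def inverse_fisher_criterion(F, labels):
    # F: (b, p) embeddings; labels: (b,). Returns the ratio
    # (within-class variance) / (total variance - within-class variance).
    b = F.shape[0]
    total_var = F.var(dim=0, unbiased=False).sum()
    within = 0.0
    for l in torch.unique(labels):
        F_l = F[labels == l]
        prior = F_l.shape[0] / b        # class prior from frequency
        within = within + prior * F_l.var(dim=0, unbiased=False).sum()
    return within / (total_var - within)
\end{verbatim}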
\subsubsection{Discriminant Analysis with Virtual Samples} In discriminant analysis metric learning with virtual samples \cite{kim2021virtual}, we consider any backbone network up to the one-to-last layer of the neural network and a last layer with a linear activation function. Let the outputs of the one-to-last layer be denoted by $\{\textbf{f}'(\ensuremath\boldsymbol{x}_i)\}_{i=1}^b$ and the weights of the last layer be $\ensuremath\boldsymbol{U}$. We compute the intra-class scatter $\ensuremath\boldsymbol{S}_W$ and inter-class scatter $\ensuremath\boldsymbol{S}_B$ for the one-to-last layer's features $\{\textbf{f}'(\ensuremath\boldsymbol{x}_i)\}_{i=1}^b$. If we see the last layer as a Fisher discriminant analysis model with projection matrix $\ensuremath\boldsymbol{U}$, the solution is the eigenvalue problem \cite{ghojogh2019eigenvalue} for $\ensuremath\boldsymbol{S}_W^{-1} \ensuremath\boldsymbol{S}_B$. Let $\lambda_j$ denote the $j$-th eigenvalue of this problem. Assume $\mathcal{S}_b$ and $\mathcal{D}_b$ denote the sets of similar and dissimilar pairs in the mini-batch where $|\mathcal{S}_b| = |\mathcal{D}_b| = q$. We define \cite{kim2021virtual}: \begin{align*} & \ensuremath\boldsymbol{g}_p := [\exp(-\textbf{f}'(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}'(\ensuremath\boldsymbol{x}_j))\, |\, (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S}_b]^\top \in \mathbb{R}^q, \\ & \ensuremath\boldsymbol{g}_n := [\exp(-\textbf{f}'(\ensuremath\boldsymbol{x}_i)^\top \textbf{f}'(\ensuremath\boldsymbol{x}_j))\, |\, (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}_b]^\top \in \mathbb{R}^q, \\ & s_{ctr} := \frac{1}{2q} \sum_{i=1}^q \big( \ensuremath\boldsymbol{g}_p(i) + \ensuremath\boldsymbol{g}_n(i) \big), \end{align*} where $\ensuremath\boldsymbol{g}(i)$ is the $i$-th element of $\ensuremath\boldsymbol{g}$. We sample $q$ numbers, namely virtual samples, from the uniform distribution $U(s_{ctr} - \epsilon \bar{\lambda}, s_{ctr} + \epsilon \bar{\lambda})$ where $\epsilon$ is a small positive number and $\bar{\lambda}$ is the mean of the eigenvalues $\lambda_j$'s. The $q$ virtual samples are put in a vector $\ensuremath\boldsymbol{r} \in \mathbb{R}^q$. The loss function for discriminant analysis with virtual samples is \cite{kim2021virtual}: \begin{equation} \begin{aligned} &\underset{\theta, \ensuremath\boldsymbol{U}} {\text{minimize}} ~~~ \frac{1}{q} \sum_{i=1}^q \Big[ \frac{1}{q}\, \ensuremath\boldsymbol{g}_p(i)\, \|\ensuremath\boldsymbol{r}\|_1 - \frac{1}{q}\, \ensuremath\boldsymbol{g}_n(i)\, \|\ensuremath\boldsymbol{r}\|_1 + m \Big]_+ \\ &~~~~~~~~~~~~~~~~~~~~ - 10^{-5} \frac{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_B \ensuremath\boldsymbol{U})}{\textbf{tr}(\ensuremath\boldsymbol{U}^\top \ensuremath\boldsymbol{S}_W \ensuremath\boldsymbol{U})}, \end{aligned} \end{equation} where $\|.\|_1$ is the $\ell_1$ norm, $[.]_+ := \max(.,0)$, $m>0$ is the margin, and the second term is maximization of the Fisher criterion.
\subsubsection{Deep Fisher Discriminant Analysis} It is shown in \cite{hart2000pattern} that the solution to the following least squares problem is equivalent to the solution of Fisher discriminant analysis: \begin{equation}\label{equation_FDA_least_squares} \begin{aligned} &\underset{\ensuremath\boldsymbol{w}_0 \in \mathbb{R}^{c}, \ensuremath\boldsymbol{W} \in \mathbb{R}^{d \times c}} {\text{minimize}} ~~~ \frac{1}{2} \|\ensuremath\boldsymbol{Y} - \ensuremath\boldsymbol{1}_{n \times 1} \ensuremath\boldsymbol{w}_0^\top - \ensuremath\boldsymbol{X} \ensuremath\boldsymbol{W}\|_F^2, \end{aligned} \end{equation} where $\|.\|_F$ is the Frobenius norm, $\ensuremath\boldsymbol{X} \in \mathbb{R}^{n \times d}$ is the row-wise stack of data points, $\ensuremath\boldsymbol{Y} := \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{E} \ensuremath\boldsymbol{\Pi}^{-(1/2)} \in \mathbb{R}^{n \times c}$ where $\ensuremath\boldsymbol{H} := \ensuremath\boldsymbol{I} - (1/n) \ensuremath\boldsymbol{1} \ensuremath\boldsymbol{1}^\top \in \mathbb{R}^{n \times n}$ is the centering matrix, $\ensuremath\boldsymbol{E} \in \{0,1\}^{n \times c}$ is the one-hot-encoded labels stacked row-wise, and $\ensuremath\boldsymbol{\Pi} \in \mathbb{R}^{c \times c}$ is the diagonal matrix whose $(l,l)$-th element is the cardinality of the $l$-th class. Deep Fisher discriminant analysis \cite{diaz2017deep,diaz2019deep} implements Eq. (\ref{equation_FDA_least_squares}) by a nonlinear neural network with the loss function: \begin{equation} \begin{aligned} &\underset{\theta}{\text{minimize}} ~~~ \frac{1}{2} \|\ensuremath\boldsymbol{Y} - \textbf{f}(\ensuremath\boldsymbol{X}; \theta)\|_F^2, \end{aligned} \end{equation} where $\theta$ denotes the weights of the network, $\ensuremath\boldsymbol{X} \in \mathbb{R}^{b \times d}$ denotes the row-wise stack of points in the mini-batch of size $b$, $\ensuremath\boldsymbol{Y} := \ensuremath\boldsymbol{H} \ensuremath\boldsymbol{E} \ensuremath\boldsymbol{\Pi}^{-(1/2)} \in \mathbb{R}^{b \times c}$ is computed in every mini-batch, and $\textbf{f}(.) \in \mathbb{R}^{b \times c}$ is the row-wise stack of the output embeddings of the network. After training, the output $\textbf{f}(\ensuremath\boldsymbol{x})$ is the embedding for the input point $\ensuremath\boldsymbol{x}$.
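A sketch of constructing the regression target $\ensuremath\boldsymbol{Y} = \ensuremath\boldsymbol{H}\ensuremath\boldsymbol{E}\ensuremath\boldsymbol{\Pi}^{-(1/2)}$ for a mini-batch and evaluating the least-squares loss follows; the clamping of empty classes is a guard we add for illustration, and labels are assumed to be integer class indices.
\begin{verbatim}
import torch

def fda_targets(labels, c):
    # Build Y = H E Pi^(-1/2): E is one-hot labels, Pi holds class
    # cardinalities, and H subtracts the column means (centering).
    E = torch.nn.functional.one_hot(labels, num_classes=c).float()
    counts = E.sum(dim=0).clamp(min=1.0)         # class sizes
    Y = E / counts.sqrt()                        # E Pi^(-1/2)
    return Y - Y.mean(dim=0, keepdim=True)       # apply centering H

def deep_fda_loss(F, labels, c):
    # F: (b, c) network outputs; loss is 0.5 ||Y - F||_F^2
    Y = fda_targets(labels, c)
    return 0.5 * torch.sum((Y - F) ** 2)
\end{verbatim}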
The loss function for training the $m$ stacked autoencoders with the shared embedding layer can be \cite{roostaiyan2017multi}: \begin{equation} \begin{aligned} &\underset{\theta} {\text{minimize}} ~~~ \sum_{i=1}^b \sum_{l=1}^m \|\ensuremath\boldsymbol{x}_i^l - \widehat{\ensuremath\boldsymbol{x}}_i^l\|_2^2 \\ &+ \lambda_1 \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \big[d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)) - m_1\big]_+ \\ &+ \lambda_2 \sum_{i=1}^b \sum_{\ensuremath\boldsymbol{x}_j \in \mathcal{X} \setminus \mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}} \big[-d(\textbf{f}(\ensuremath\boldsymbol{x}_i), \textbf{f}(\ensuremath\boldsymbol{x}_j)) + m_2\big]_+, \end{aligned} \end{equation} where $\lambda_1, \lambda_2>0$ are the regularization parameters, $m_1, m_2>0$ are the margins, and $\mathcal{X}_{c(\ensuremath\boldsymbol{x}_i)}$ denotes the set of points in the same class as $\ensuremath\boldsymbol{x}_i$. The first term is the reconstruction loss; the second and third terms are for metric learning, collapsing each class to within a margin $m_1$ and discriminating classes by a margin $m_2$. This loss function is optimized in a stacked autoencoder setup \cite{hinton2006reducing,wang2014effective}. Then, it is fine-tuned by backpropagation \cite{ghojogh2021restricted}. After training, the embedding layer can be used for embedding data points. Note that there exists another multi-modal deep metric learning method \cite{xu2019deep}. \subsection{Geometric Metric Learning by Neural Network} There exist some works, such as \cite{huang2017riemannian}, \cite{hauser2017principles}, and \cite{hajiabadi2019layered}, which have implemented neural networks on Riemannian manifolds. Layered geometric learning \cite{hajiabadi2019layered} implements Geometric Mean Metric Learning (GMML) \cite{zadeh2016geometric} (recall Section \ref{section_geometric_mean_metric_learning}) in a neural network framework. In this method, every layer of the network is a metric layer which projects the output of the previous layer onto the subspace of its own metric (see Proposition \ref{proposition_metric_learning_projection} and Eq. (\ref{equation_metric_learning_projection})). For the $l$-th layer of the network, we denote the weight matrix (i.e., the projection matrix of the metric) and the output of the layer for the $i$-th data point by $\ensuremath\boldsymbol{U}_l$ and $\ensuremath\boldsymbol{x}_{i,l}$, respectively. Hence, the metric in the $l$-th layer models $\|\ensuremath\boldsymbol{x}_{i,l} - \ensuremath\boldsymbol{x}_{j,l}\|_{\ensuremath\boldsymbol{U}_l\ensuremath\boldsymbol{U}_l^\top}$. Consider the dataset of $n$ points $\ensuremath\boldsymbol{X} \in \mathbb{R}^{d \times n}$. We denote the output of the $l$-th layer by $\ensuremath\boldsymbol{X}_l \in \mathbb{R}^{d \times n}$. The projection of a layer onto its metric subspace is $\ensuremath\boldsymbol{X}_l = \ensuremath\boldsymbol{U}_l^\top \ensuremath\boldsymbol{X}_{l-1}$. Every layer solves the optimization problem of GMML \cite{zadeh2016geometric}, i.e., Eq. (\ref{equation_GMML_optimization}). For this, we start from the first layer and proceed to the last layer by forward propagation. The $l$-th layer computes $\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}$ and $\ensuremath\boldsymbol{\Sigma}_{\mathcal{D}}$ for $\ensuremath\boldsymbol{X}_{l-1}$ by Eq. (\ref{equation_spectral_ML_first_method_Sigma_S}). Then, the solution of the optimization problem (\ref{equation_GMML_optimization}) is computed, which is Eq.
(\ref{equation_GMML_solution_1_1}), i.e., $\ensuremath\boldsymbol{W}_l = \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{-1} \sharp_{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} = \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)} \big(\ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{D}} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(1/2)}\big)^{(1/2)} \ensuremath\boldsymbol{\Sigma}_{\mathcal{S}}^{(-1/2)}$. Then, using Eq. (\ref{equation_W_U_UT}), we decompose the obtained $\ensuremath\boldsymbol{W}_l$ to find $\ensuremath\boldsymbol{U}_l$. Then, data points are projected onto the metric subspace as $\ensuremath\boldsymbol{X}_l = \ensuremath\boldsymbol{U}_l^\top \ensuremath\boldsymbol{X}_{l-1}$. If we want the outputs of the layers to lie on the positive semi-definite manifold, the activation function of every layer can be the projection onto the positive semi-definite cone \cite{ghojogh2021kkt}: \begin{align*} \ensuremath\boldsymbol{X}_l := \ensuremath\boldsymbol{V}\, \textbf{diag}(\max(\lambda_1, 0), \dots, \max(\lambda_d, 0))\, \ensuremath\boldsymbol{V}^\top, \end{align*} where $\ensuremath\boldsymbol{V}$ and $\{\lambda_1, \dots, \lambda_d\}$ are the eigenvectors and eigenvalues of the layer's output $\ensuremath\boldsymbol{X}_l$ before rectification, respectively. This activation function is called the eigenvalue rectification layer in \cite{huang2017riemannian}. Finally, it is noteworthy that there is another work, named backprojection \cite{ghojogh2020backprojection}, which has a similar idea but in Euclidean and Hilbert spaces rather than on a Riemannian manifold. \subsection{Few-shot Metric Learning} Few-shot learning refers to learning from a few data points rather than from a large enough dataset. Few-shot learning is used for domain generalization, so that the model can handle unseen data in the test phase \cite{wang2020generalizing}. The training phase of few-shot learning is episodic, where in every iteration, or so-called episode, of training we have a support set and a query set. In other words, the training dataset is divided into mini-batches where every mini-batch contains a support set and a query set \cite{triantafillou2020meta}. Consider a training dataset with $c_\text{tr}$ classes and a test dataset with $c_\text{te}$ classes. As mentioned before, the test and training classes are usually disjoint in few-shot learning, which is why it is useful for domain generalization. In every episode, also called the task or the mini-batch, we train using some (and not all) of the training classes by randomly sampling from the classes. The support set is $\mathcal{S}_s := \{(\ensuremath\boldsymbol{x}_{s,i}, y_{s,i})\}_{i=1}^{|\mathcal{S}_s|}$, where $\ensuremath\boldsymbol{x}$ and $y$ denote the data point and its label, respectively. The query set is $\mathcal{S}_q := \{(\ensuremath\boldsymbol{x}_{q,i}, y_{q,i})\}_{i=1}^{|\mathcal{S}_q|}$. The training data of every episode (mini-batch) is the union of the support and query sets. At every episode, we randomly sample $c_s$ classes out of the total $c_\text{tr}$ classes of the training dataset, where we usually have $c_s \ll c_\text{tr}$. Then, we sample $k_s$ training data points from each of these $c_s$ selected classes. These $c_s \times k_s = |\mathcal{S}_s|$ points form the support set. This few-shot setup is called $c_s$-way, $k_s$-shot, in which the support set contains $c_s$ classes and $k_s$ points per class. The number of classes and of points per class in the query set of every episode may or may not be the same as in the support set.
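Before describing how the network weights are updated, the following minimal sketch (ours; the names and data layout are illustrative only) shows how a $c_s$-way, $k_s$-shot episode can be sampled, assuming the training data are stored as a mapping from class labels to lists of points with enough points per class:
\begin{verbatim}
import random

def sample_episode(data_by_class, c_s, k_s, k_q):
    # data_by_class: dict mapping class label -> list of data points.
    # Sample c_s of the training classes, then k_s support and k_q
    # query points per class (disjoint within each class).
    classes = random.sample(list(data_by_class), c_s)
    support, query = [], []
    for y in classes:
        points = random.sample(data_by_class[y], k_s + k_q)
        support += [(x, y) for x in points[:k_s]]
        query += [(x, y) for x in points[k_s:]]
    return support, query
\end{verbatim}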
In every episode of the training phase of few-shot learning, we update the network weights by back-propagating the error on the support set. These updated weights are not yet final. We feed the query set to the network with the updated weights and back-propagate the error on the query set. This second back-propagation with the query set finalizes the update of the network weights at the end of the episode. In other words, the query set is used to evaluate how good the update by the support set is. This learning procedure for few-shot learning is called meta-learning \cite{finn2017model}. There are several families of methods for few-shot learning, one of which is deep metric learning. Various metric learning methods have been proposed for learning from few-shot data. For example, the Siamese network, introduced in Section \ref{section_metric_learning_Siamese}, has been used for few-shot learning \cite{koch2015siamese,li2020revisiting}. In the following, we introduce two metric learning methods for few-shot learning. \subsubsection{Multi-scale Metric Learning} Multi-scale metric learning \cite{jiang2020multi} learns the embedding space by learning multiple scales of intermediate features in the training process. It has several steps. In the first step, we use a pre-trained network with multiple output layers which produce several different scales of features for both the support and query sets. In the second step, within every scale of the support set, we take the average of the $k_s$ features in every class. This gives us $c_s$ features for every scale in the support set. These and the features of the query set are fed to the third step. In the third step, we feed every scale to a sub-network, where larger scales are fed to sub-networks with more layers as they contain more information to process. These sub-networks are concatenated to give a scalar output for every data point with multiple scales of features. Hence, we obtain a scalar score for every data point in the support and query sets. Finally, a combination of a classification loss function, such as the cross-entropy loss (see Eq. (\ref{equation_cross_entropy_loss})), and the triplet loss (see Eq. (\ref{equation_triplet_loss})) is used in the support-query setup explained before. \subsubsection{Metric Learning with Continuous Similarity Scores} Another few-shot metric learning method \cite{xu2019zero} takes pairs of data points as the input support and query sets. For the pair $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j)$, consider the binary similarity score $y_{ij}$, defined as: \begin{align}\label{equation_y_ij_deep} y_{ij} := \left\{ \begin{array}{ll} 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \\ 0 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \end{array} \right. \end{align} where $\mathcal{S}$ and $\mathcal{D}$ denote the sets of similar and dissimilar points, respectively. We can define a continuous similarity score $y'_{ij}$ as \cite{xu2019zero}: \begin{align}\label{equation_y_ij_deep_continuous} y'_{ij} := \left\{ \begin{array}{ll} (\beta-1) d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) + 1 & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \\ -\alpha d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) + \alpha & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}, \end{array} \right.
\end{align} where $0<\alpha<\beta<1$ and $d(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j)$ is the normalized squared Euclidean distance (we normalize the distances within every mini-batch). The ranges of these continuous similarities are: \begin{align*} y'_{ij} \in \left\{ \begin{array}{ll} \, [\beta, 1] & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{S} \\ \, [0, \alpha] & \mbox{if } (\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{D}. \end{array} \right. \end{align*} In every episode (mini-batch), the pairs are fed to a network with several feature vector outputs. For every pair $(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j)$, these feature vectors are fed to another network which outputs a scalar similarity score $s_{ij}$. The loss function of metric learning in this method is \cite{xu2019zero}: \begin{equation} \begin{aligned} & \underset{\theta}{\text{minimize}} & & \sum_{(\ensuremath\boldsymbol{x}_i, \ensuremath\boldsymbol{x}_j) \in \mathcal{X}} (1 + \lambda) (s_{ij} - y'_{ij})^2, \\ & \text{subject to} & & \beta \leq s_{ij}, y'_{ij} \leq 1 \quad \text{ if } \quad y_{ij} = 1, \\ & & & 0 \leq s_{ij}, y'_{ij} \leq \alpha \quad \text{ if } \quad y_{ij} = 0, \end{aligned} \end{equation} where $\lambda>0$ is the regularization parameter and $\mathcal{X}$ is the mini-batch of the support or query set, depending on whether we are in the support or the query phase. \section{Conclusion}\label{section_conclusion} This was a tutorial and survey on spectral, probabilistic, and deep metric learning. We started by defining the distance metric. In the spectral category, we covered methods using scatters of data, methods using the Hinge loss, locally linear metric adaptation, kernel methods, geometric methods, and adversarial metric learning. In the probabilistic category, we covered collapsing classes, neighborhood component analysis, Bayesian metric learning, information-theoretic methods, and empirical risk minimization approaches. In the deep learning category, we explained reconstruction autoencoders, supervised loss functions, Siamese networks, deep discriminant analysis methods, multi-modal learning, geometric deep metric learning, and few-shot metric learning.
\section{Introduction} In \cite{Cra}, T.A. Crawford showed that if $E \leq 5 |d|+10$, the subset $\mbox{\rm Harm}_{d,E} (\CC P^2)$ of the space of harmonic maps from $S^2$ to $\CC P^2$ consisting of those maps of degree $d$ and energy $4\pi E$ can be given the structure of a complex manifold, and that this manifold is connected. This he did by showing that the space $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ of full holomorphic maps from $S^2$ to $\CC P^2$ of degree $k$ and ramification index $r$ is a complex manifold if $r \leq (k+1)/2$, and that the ``Gauss transform'' $G'_{k,r}$, which maps $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ to $\mbox{\rm Harm}_{k-r-2,3k-r-2}(\CC P^2)$ bijectively, is a homeomorphism, so that the manifold structure of $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ can be transported to the topological space $\mbox{\rm Harm}_{d,E}(\CC P^2)$. This does not prove that the transported structure is the one induced by the natural inclusion of $\mbox{\rm Harm}_{d,E}(\CC P^2)$ in the space of maps from $S^2$ to $\CC P^2$. In this paper, after giving a treatment of Crawford's result adapted to our needs, we show that $G'_{k,r}$ is a {\bf smooth} map from $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ to $C^j(S^2,\CC P^2)$ \ (for $j \geq 2$) and has {\bf injective differential}. From this we obtain: \begin{theorem} \label{th:1.1} For $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$ and for $\displaystyle{\frac{4k-11}{3} \leq r \leq \frac{3}{2}k-3}$, \ the map $$ G'_{k,r} : \mbox{\rm Hol}_{k,r}^* (\CC P^2) \to C^j(S^2,\CC P^2) $$ is a smooth embedding onto $\mbox{\rm Harm}_{k-r-2,3k-r-2}(\CC P^2)$ for any $j \geq 2$. Each component $\mbox{\rm Harm}_{d,E}(\CC P^2)$ of $\mbox{\rm Harm}(\CC P^2)$ with $E \leq 5 |d| + 10$ is a closed smooth submanifold of $C^j(S^2,\CC P^2)$, of dimension $6E+4$ if $E=|d|$ (in which case it consists of holomorphic or antiholomorphic maps) and of dimension $2E+8$ otherwise. \end{theorem} Added in proof: In a revised version of \cite{Cra}, Crawford has shown that $\mbox{\rm Hol}_{k,r}^* (\CC P^2)$ is a manifold for all $k,r$. It follows that all our results (Theorems \ref{th:1.1} and \ref{th:1.3}, Propositions \ref{prop:3.1}, \ref{prop:4.1} and \ref{prop:5.2}, and Lemma \ref{lem:5.1}) are then valid for all $k,r$, and the restriction $E \leq 5 |d| + 10$ can be removed from Theorem \ref{th:1.1}. Proofs are unchanged except for those in Sec. 3, see below. \begin{remark} \label{rem:1.2} The space $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ is non-empty precisely for $k \geq 2$, \\ $0 \leq r \leq \frac{3}{2} k-3$ (see Proposition \ref{prop:2.7} below). \end{remark} We shall first of all prove that $\mbox{\rm Hol}^*_{k,r}(\CC P^2)$ is a complex manifold for the range $k \geq 2$, \ $0 \leq r \leq (k+1)/2$\ . The passage from this range to the second range $k \geq 3$, \ $(4k-11)/3 \leq r \leq 3k/2 -3$, is achieved via the conjugate polar (see Definition \ref{def:2.3} below). In fact, we have: \begin{theorem} \label{th:1.3} For $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$ and for $\displaystyle{\frac{4k-11}{3} \leq r \leq \frac{3}{2}k-3}$, $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ is a complex submanifold of the complex manifold $\mbox{\rm Hol}_{k}^*(\CC P^2)$, of dimension $3k-r+2$. \end{theorem} Added in proof: Note that Lemma \ref{lem:3.3} in the proof is false outside the above range; we give a counterexample for $k=6$, $r=4$. Hence the description of the manifold structure on $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ in Sec. 3 is special to the range $r \leq (k+1)/2$ --- see the revised version of \cite{Cra} for a description valid for any $k,r$.
\bigskip The contents of the subsequent sections are as follows. In Section 2, we recall the construction by J. Eells and the second author of the harmonic maps of $S^2$ to $\CC P^2$, stressing the limits of the values of the parameters involved, and we give some examples illustrating the behaviour of this construction. In Section 3, we present a proof of the result of T.A. Crawford on the structure of $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$, adapted to our needs. We use ideas from the paper of Crawford, and a construction suggested to us by M. Guest. In Sections 4 and 5, we show successively that the Gauss transform is smooth and that it is an embedding. The main ingredient is a property of smooth dependence of the common factor of smooth families of polynomials (Lemma \ref{lem:4.5}). We note at this point that the methods of the present paper will not generalize easily to $\CC P^n$ for $n > 2$; however, M. Guest and Y. Ohnita \cite{G-O} show that some topological questions on $\CC P^n$ reduce to $\CC P^2$. Recall finally that the harmonic maps from $S^2$ to any manifold are precisely the minimal branched immersions of $S^2$; in particular, they are conformal (see e.g. \cite{E-L} (5.15)). \noindent {\bf Acknowledgements}\\ Both authors would like to thank the organizers of the first MSJ International Research Institute (Sendai, 1993) for invitations to the conference where this work was begun, F. Burstall, J. Eells, M. Guest and A. West for helpful discussions, and L. Simon for posing questions at the above conference which led to this research. The second author would like to thank the Belgian Contact Group for an invitation to their 1993 meeting which allowed this work to continue. This research was partially supported by the EU Human Capital and Mobility Programme (Contract CHRX-CT92-0050), the Belgian F.N.R.S. and the British Council. \section{Gauss transform and polars} Following work of others \cite{G-S,D-Z} (see also \cite{Bur}), J. Eells and the second author classified the harmonic maps from the 2-sphere $S^2$ to a complex projective space $\CC P^n$, as follows: \begin{theorem} \label{th:2.1} \cite{E-W}. There is a bijective correspondence between the set of pairs $(f,s)$, where $f : S^2 \to \CC P^n$ is a full holomorphic map and $s$ an integer, $0 \leq s \leq n$, and the set of full harmonic maps $\varphi : S^2 \to \CC P^n$. \end{theorem} Here ``full'' means ``not having image in any proper projective subspace of $\CC P^n$''. For $s=0$, $\varphi=f$ is holomorphic; for $s \neq 0,n$, $\varphi$ is neither holomorphic nor antiholomorphic; and for $s=n$, $\varphi$ is antiholomorphic and is called the {\bf polar} of $f$. We now restrict to the case of $\CC P^2$, where the description of this construction is simpler. Let $f : S^2 \to \CC P^2$ be a holomorphic map. Identifying $S^2$ with $\CC \cup \{ \infty \}$ by stereographic projection, $f$ can be represented on $\CC$ by a map $p : \CC \to \CC^3 \setminus \{0\}$ where $p(z) = (p_0(z),p_1(z),p_2(z))$ is a triple of coprime polynomials with max (degree $p_0$, degree $p_1$, degree $p_2$) = degree of $f$. We shall write $f=[p_0,p_1,p_2]$. A map $S^2 \to \CC P^2$ is called {\bf full} if its image lies in no complex projective line. Note that if a harmonic map is not full, its image lies in a $\CC P^1$ and it is then $\pm$-holomorphic, since it is a conformal map between surfaces. Thus all harmonic non $\pm$-holomorphic maps $f : S^2 \to \CC P^2$ are full.
We denote by $\mbox{\rm Hol}_k^*(\CC P^2)$ the space of full holomorphic maps of degree $k$. All values of $k \geq 2$ occur and $\mbox{\rm Hol}_k^* (\CC P^2)$ is a complex manifold of dimension $3k+2$ with coordinate charts given by the coefficients. Recall that a holomorphic map $f : S^2 \to \CC P^2$ is said to be {\bf ramified} at a point $z \in S^2$ if $df(z)=0$. The ramification index of $f$ at $z$ is the order of the zero of $df$ at $z$, and the {\bf total ramification index} $r$ of $f$ is the sum of the ramification indices. Consider a full holomorphic map $f$. The harmonic map $\varphi$ associated to the pair $(f,1)$ in Theorem \ref{th:2.1} is obtained by means of the $\partial'$-{\bf Gauss transform} (in the terminology of \cite{B-W}; in \cite{Wol} it is called the $\partial$-{\bf transform}), which is defined as follows \cite{E-W}: Let $\pi : \CC^3 \setminus \{0\} \to \CC P^2$ be the canonical projection sending $(z_0,z_1,z_2)$ to the point of $\CC P^2$ with homogeneous coordinates $[z_0,z_1,z_2]$. For a map $f : S^2 \to \CC P^2$, say that $F : U \to \CC^3 \setminus \{0\}$ represents $f$ on the open set $U$ if $f |_U = \pi \circ F$, in which case we write $f=[F]$. For $f$ holomorphic and full, the {\bf first associated curve} $f_{(1)} : S^2 \to G_2(\CC^3)$ is the holomorphic map defined as follows. Let $F : U \to \CC^3 \setminus \{0\}$ represent $f$ on a domain $(U,z)$ of $\CC$. Consider the map $F \wedge F' : U \to \wedge^2 \CC^3$ where $\displaystyle{F'=\frac{dF}{dz}}$. At a point $x$ where $f$ is not ramified, $F \wedge F'$ is nonzero and so defines a complex two-dimensional subspace $f_{(1)}(x)$. If, on the other hand, $f$ is ramified at $x$ with ramification index $m$, then $F \wedge F' = (z-x)^m \cdot \Psi$ for some smooth nonzero map $\Psi : U' \to \wedge^2 \CC^3$ on an open neighbourhood $U'$ of $x$. Since $\Psi(y)$ is decomposable for all $y \in U'$, $y \neq x$, it remains decomposable for $y=x$ and we can define $f_{(1)}(x)$ as the complex two-dimensional subspace defined by $\Psi(x)$. The resulting map $f_{(1)} : S^2 \to G_2(\CC^3)$ is well-defined and smooth. This leads us to two maps associated to $f$: \begin{definition} \label{def:2.2} The {\bf $\partial'$-Gauss transform} $\varphi=G'(f) : S^2 \to \CC P^2$ is defined by the formula \begin{equation}\label{equ:2.1} \varphi(x) = f(x)^\perp \cap f_{(1)}(x). \end{equation} \end{definition} \noindent By Theorem \ref{th:2.1}, it is a smooth and full harmonic map. \begin{definition} \label{def:2.3} The {\bf polar} of the holomorphic map $f$ is the antiholomorphic map $$ g(x) =f_{(1)}(x)^\perp \ . $$ \end{definition} Note that for any $x \in S^2$, $f(x), \varphi(x)$ and $g(x)$ are Hermitian orthogonal complex lines. For convenience, we consider the {\bf conjugate polar} $h$ of $f$, defined by taking the complex conjugates in $\CC^3$ of the values of $g$: $h(x) = \overline{f_{(1)}(x)^\perp}$. More explicitly, represent $f \in \mbox{\rm Hol}_k^*(\CC P^2)$ by $[p_0,p_1,p_2]$ as above. Identifying $\wedge^2 \CC^3$ with $\CC^3$, the first associated curve, or equivalently the conjugate polar $h$ of $f$, is represented by the polynomials \begin{eqnarray*} h &=& [p_{12},p_{20},p_{01}]\\ &=& [p_1p'_2-p'_1p_2,p_2p'_0-p'_2p_0,p_0p'_1-p'_0p_1] \end{eqnarray*} once they have been divided by their common factor. Explicitly, if $f$ has ramification points $z_I \in \CC$ with multiplicities $r_I$ $(I=1,\ldots,R)$, then the {\bf ramification divisor} $R(f)$ is the monic polynomial $R(f)=\prod_I (z-z_I)^{r_I}$.
If $f$ is not ramified at $\infty$, this has degree equal to the ramification index; otherwise it has lower degree. In either case, $R(f)$ is the highest common factor of the polynomials $p_{ij}$, and $$ h = \left[ \frac{p_{12}}{R(f)}, \frac{p_{20}}{R(f)}, \frac{p_{01}}{R(f)} \right]. $$ One checks easily that $h$ has degree $2k-2-r$ (indeed, the terms of degree $2k-1$ in the $p_{ij}$ cancel). Note that formula (\ref{equ:2.1}) has a counterpart: $$ \overline{\varphi(x)} = h(x)^\perp \cap h_{(1)} (x). $$ Theorem \ref{th:2.1} can be rephrased in $\CC P^2$ by saying that {\em the Gauss transform defines a bijection between the space of full holomorphic maps and the space of harmonic non $\pm$-holomorphic maps}, and that {\em the passage to the polar defines a bijection between full holomorphic and antiholomorphic maps}. However, this does not immediately provide a simple description of the space of harmonic maps. Indeed, the Gauss transform $G': \mbox{\rm Hol}_k^*(\CC P^2) \to \mbox{\rm Harm}(\CC P^2)$ is not a continuous map when the spaces are equipped with their $C^0$-topology. This appears for instance in the following example (brought to our attention by F. Burstall). \begin{example} \label{ex:2.4} Let $f_t : S^2 \to \CC P^2$ be defined by $f_t(z) = [F_t(z)]$, where $$ F_t(z) = (1,tz+z^3,z^2) \quad (z \in \CC, \ t\in {\Bbb R}) $$ (so that $f_t(\infty)=[0,1,0]$). Note that $f_t(0)=[1,0,0]$ for all $t$ and that $f_t$ is a smooth family of full holomorphic maps. Then $F'_t(z)=(0,t+3z^2,2z)$ and $F_t \wedge F'_t(z)=(tz^2-z^4,-2z,t+3z^2)$. If $t \neq 0$, at $z=0$ this equals $(0,0,t)=t(1,0,0) \wedge (0,1,0)$, so that the first associated curve has the value $f_{t(1)}(0) = \mbox{span}\{(1,0,0),(0,1,0)\}$ and $G'(f_t)(0)=[0,1,0]$. However, if $t=0$, then $F_t \wedge F'_t(z)=(-z^4,-2z,3z^2)=z\psi(z)$ where $\psi(z)=(-z^3,-2,3z)$. In particular, $\psi(0)=(0,-2,0)=2(1,0,0) \wedge(0,0,1)$, so that $f_{0(1)}(0) = \mbox{span}\{(1,0,0),(0,0,1)\}$ and $G'(f_0)(0)=[0,0,1]$. This shows that $f_{t(1)}$ and $G'(f_t)$ do not vary continuously with $t$. The reason for this is that $f_t$ is unramified when $t \neq 0$ but ramified with ramification index 1 at $z=0$ when $t=0$, and that, in the presence of ramification, both $G'$ and $g$ involve division of polynomials by their common factor, a discontinuous process when the degree of the factor changes. \end{example} To proceed, define $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ as the space of full holomorphic maps of degree $k$ and total ramification index $r$, and $\mbox{\rm Harm}_{d,E}(\CC P^2)$ as the space of all harmonic maps of degree $d$ and energy $4 \pi E$. By results of \cite{E-W}, Theorem \ref{th:2.1} specializes to \begin{proposition} \label{prop:2.5} For each pair of integers $k \geq 2$, $0 \leq r \leq \frac{3}{2}k-3$, there is a bijective correspondence $$ G'_{k,r} : \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to \mbox{\rm Harm}_{d,E}(\CC P^2) $$ given by the restriction of the Gauss transform, where $d = k-r-2$ and $E=3k-r-2$. \end{proposition} \begin{proposition} \label{prop:2.6} For each pair of integers $k \geq 2$, $0\leq r \leq \frac{3}{2}k-3$, the map $f \mapsto$ conjugate polar of $f$ restricts to a bijection $$ \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to \mbox{\rm Hol}_{k',r'}^*(\CC P^2) $$ where $k'=2k-r-2$, $r'=3k-2r-6$. \end{proposition} This allows us to specify the values of $k$ and $r$ as follows: \begin{proposition} \label{prop:2.7} The space $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ is non-empty precisely for the range $k \geq 2$, $0 \leq r \leq \frac{3}{2}k-3$.
\end{proposition} \noindent {\bf Proof.} The well-known examples (see \cite{C-M-R}) $$ f(z) = [1,(z+1)^{k-r+1},z^k] $$ provide maps $f$ in $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ for all $k \geq 2$, $0 \leq r \leq k-2$. The Pl\"ucker formulae (see e.g. \cite{E-W}) show that the involutive map $f \mapsto$ conjugate polar of $f$ restricts to bijections $$ \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to \mbox{\rm Hol}_{k',r'}^*(\CC P^2) $$ with $k',r'$ as in Proposition \ref{prop:2.6}. Thus to get $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ (and $\mbox{\rm Hol}_{k',r'}^*(\CC P^2)$) non-empty, we need $r' \geq 0$, i.e. $r \leq \frac{3}{2}k-3$. On the other hand, the conjugate polars of the above examples provide maps in $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ for all $$ k-2 \leq r \leq \frac{3}{2} k-3 \ . $$ \medskip As in \cite{C-M-R}, we note that the above shows that $\mbox{\rm Harm}_{d,E} (\CC P^2)$ is non-empty precisely for pairs $(d,E)$ of integers with either $E=|d|$ (in which case it consists of $\pm$-holomorphic maps) or $E=3|d|+4+2r$ for some $r \geq 0$ (otherwise). Indeed, all such values of $E$ with $d \geq 0$ are achieved with $0 \leq r \leq k-2$, the range $k-2 < r \leq 3k/2-3$ giving $d < 0$. The main contribution of the present paper is to show that $$ G'_{k,r} : \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to \mbox{\rm Harm}_{d,E}(\CC P^2) \subset C^j(S^2,\CC P^2) $$ is a smooth embedding and that the map $f \mapsto$ conjugate polar of $f$ is a complex analytic equivalence from $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ to $\mbox{\rm Hol}_{k',r'}^*(\CC P^2)$. We conclude this section with an example showing that in $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$, the ramification divisor of a smooth family of polynomials can vary smoothly, even when the individual common roots of the $p_{ij}$'s vary only continuously. \begin{example} \label{ex:2.8} Identifying $S^2$ with $\CC \cup \{\infty\}$ by stereographic projection, let $f_t : S^2 \to \CC P^2$ be defined by $$ F_t(z)=\big (z^4+1,(1-3t^2)z^3+(-3t+t^3)z,2tz^2+(1-t^2) \big) \quad (z \in \CC, \ t \in {\Bbb R}) $$ (so that $f_t(\infty)=[1,0,0]$). Identifying $\Lambda^2 \CC^3$ with $\CC^3$, we have $(F_t \wedge F'_t)(z)=(z^2-t)\psi(z)$ where $$ \psi(z) = \big( (-2t+6t^3)z^2+(-3+t^2)(1-t^2),4z(tz^2+1),(-1+3t^2)z^4+8tz^2+3-t^2 \big), $$ which shows that, if $t \neq 0$, $f_t$ is ramified at $z=\pm \sqrt{t}$ with index 1, but if $t=0$, these ramification points coalesce into a ramification point at $z=0$ of index 2. Further, $f_{t(1)}(z)=[\psi(z)]$. We see from this that $f_t \in \mbox{\rm Hol}_{4,2}^*(\CC P^2)$ for all $t$ and that $f_{t(1)}$, and so $G'(f_t)$, vary smoothly with $t$, even though the individual ramification points do not. \end{example} \section{Spaces of holomorphic maps} In this section, we give a proof of the following result. \begin{proposition} \label{prop:3.1} \cite{Cra} For any $k \geq 2$ and $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$, $\mbox{\rm Hol}_{k,r}^* (\CC P^2)$ is a complex submanifold of $\mbox{\rm Hol}_k^*(\CC P^2)$ of dimension $3k-r+2$. \end{proposition} \noindent {\bf Proof.} Let \begin{eqnarray*} \mbox{\rm Hol}'_k(\CC P^2) & = & \{ f \in \mbox{\rm Hol}_k^*(\CC P^2) : f = [p_0,p_1,p_2],\ p_0 \mbox{ is monic of}\\ & & \mbox{degree } k \mbox{ with distinct roots}, f \mbox{ is not ramified at } \infty \} \ . \end{eqnarray*} Here, $p_0,p_1$ and $p_2$ are always assumed to be coprime. $\mbox{\rm Hol}'_k(\CC P^2)$ is an open subset of $\mbox{\rm Hol}_k^*(\CC P^2)$ and so a complex manifold.
It can be embedded as an open subset in $\CC^{3k+2}$ by sending the polynomials $(p_0,p_1,p_2)$ to their coefficients (omitting the leading coefficient of $p_0$, which is equal to 1). The group $G=PGL_2(\CC) \times PGL_3(\CC)$ acts on $\mbox{\rm Hol}_k^*(\CC P^2)$ in a natural way preserving the subsets $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$. Given $f \in \mbox{\rm Hol}_k^*(\CC P^2)$, a variation of the proof of Lemma \ref{lem:4.5} below shows that there exist $g \in G$ and a neighbourhood $U$ of $f$ in $\mbox{\rm Hol}_k^*(\CC P^2)$ such that $g(U) \subset \mbox{\rm Hol}'_k(\CC P^2)$. Setting $\mbox{\rm Hol}'_{k,r}(\CC P^2)=\mbox{\rm Hol}_{k,r}^*(\CC P^2) \cap \mbox{\rm Hol}'_k(\CC P^2)$, it suffices therefore to show that $\mbox{\rm Hol}'_{k,r}(\CC P^2)$ is a complex submanifold of $\mbox{\rm Hol}'_k(\CC P^2) \subset \CC^{3k+2}$. To do this, we use a construction suggested by M. Guest following ideas of T.A. Crawford. Set \begin{eqnarray*} X''_{k,r} & = & \{ (a,f)=(a, (p_0,p_1,p_2)) \in \CC^r \times \CC^{3k+2}~:\ f \in \mbox{\rm Hol}'_k(\CC P^2) \ , \\ & & a \mbox{ is a monic polynomial of degree } r,\ a \mbox{ and } p_0 \mbox{ are coprime, and } a \mbox{ divides } R(f) \} \ . \end{eqnarray*} Note that $X''_{k,r}$ is an algebraic subvariety of $\CC^r \times \CC^{3k+2}$. We shall prove that $X''_{k,r}$ is a complex submanifold provided $r \leq (k+1)/2$. There is an injective map $$ i : \mbox{\rm Hol}'_{k,r}(\CC P^2) \to X''_{k,r} $$ given by $i(f)=(R(f),(p_0,p_1,p_2))$ where $f=[p_0,p_1,p_2]$ as above, with image $X'_{k,r}=\{(a,f) \in X''_{k,r} : \deg f = k, \mbox{ ramification index } (f)=r \}$. To check this, we need only show that in $(R(f),f)=(a,(p_0,p_1,p_2))$, $a$ and $p_0$ are coprime. Suppose, to the contrary, that there is an $x$ with $a(x)=p_0(x)=0$. Then $p_0(z)=(z-x)p(z)$, and since $a$ divides $p_0p'_i-p_ip'_0$, it follows that $p_i(x)p(x)=0$ for $i=1,2$. Since $p_0$ has distinct roots, $p(x) \neq 0$, so that $p_i(x)=0$ for $i=1,2$, contradicting the fact that $p_0,p_1$ and $p_2$ are coprime. By Lemma \ref{lem:4.5} below, the injective map $i$ is complex analytic, since the map $f \mapsto R(f)$ is. Note that since $f \in \mbox{\rm Hol}'_{k,r}(\CC P^2)$ is not ramified at infinity, $R(f)$ is a polynomial of degree $r$. The complement of $X'_{k,r}$ in $X''_{k,r}$ is a proper subvariety of $X''_{k,r}$, so that if $X''_{k,r}$ is a complex submanifold of $\CC^r \times \CC^{3k+2}$, so is $X'_{k,r}$. To study $X''_{k,r}$, we embed it in the trivial holomorphic vector bundle $\pi : E \to A$, where $A$ is the open set in $\CC^{r+k}$ given by \begin{eqnarray*} A & = & \{(a,p_0) \in \CC^r \times \CC^k : a \mbox{ and } p_0 \mbox{ are monic coprime polynomials } \\ & & \mbox{of degrees } r \mbox{ and } k \mbox{ respectively and } p_0 \mbox{ has no repeated root} \} \ , \\ E & = & \{(a,p_0,p_1,p_2):(a,p_0)\in A, (p_1,p_2) \in \CC^{k+1} \times \CC^{k+1} \} \end{eqnarray*} and $\pi$ is the natural projection. For $(a,p_0) \in A$, let $T_{(a,p_0)} : \CC^{k+1} \to \CC^r$ be the linear map which sends a polynomial $p$ of degree $\leq k$ (represented by its coefficients) to the remainder of the division of $p_0p'-p'_0p$ by $a$. We have: \begin{lemma} \label{lem:3.2} Let $(a,p_0,p_1,p_2) \in E$ and $f=[p_0,p_1,p_2]$. Then $a | R(f)$ if and only if $p_1$ and $p_2$ lie in $\ker T_{(a,p_0)}$. \end{lemma} \noindent {\bf Proof.} With the notation $p_{ij}=p_ip'_j-p_jp'_i$, we have immediately $$ p_1 p_{02} - p_2 p_{01}=p_0p_{12}.
$$ If $p_1,p_2 \in \ker T_{(a,p_0)}$, then $a$ divides the left-hand side, and since $a$ and $p_0$ are coprime, $a$ must divide $p_{12}$. Therefore $a$ divides $R(f)$. The converse is immediate. \medskip Now, note that $X''_{k,r}$ is the kernel of the morphism of holomorphic vector bundles $$ E=A \times (\CC^{k+1})^2 \to A \times (\CC^r)^2 $$ defined by $$ ((a,p_0),(p_1,p_2)) \mapsto ((a,p_0),T_{(a,p_0)}(p_1),T_{(a,p_0)}(p_2)). $$ Hence $X''_{k,r}$ is a complex submanifold of $\CC^r \times \CC^{3k+2}$ if $\dim \ker T_{(a,p_0)}$ is independent of $(a,p_0) \in A$. \begin{lemma} \label{lem:3.3} If $\displaystyle{r \leq \frac{k+1}{2}}$, then $\dim \ker T_{(a,p_0)}=k+1-r$ \ $\forall (a,p_0) \in A$. \end{lemma} \noindent {\bf Proof.} Let the zeros of $a$ be $\alpha_1, \ldots, \alpha_R$ with multiplicities $m_1,\ldots,m_R$, so that $\displaystyle{\sum_{J=1}^R m_J=r}$. For any $p$, set $h(p)=p_0p'-p'_0p$. Then $p \in \ker T_{(a,p_0)}$ iff \begin{equation}\label{equ:3.1} (h(p))^{(I)}(\alpha_J)=0 \quad \forall J = 1,\ldots,R,\; I=0,\ldots,m_J-1, \end{equation} where $(h(p))^{(I)}$ denotes the $I^{\mbox{th}}$ derivative of the polynomial $h(p)$. Now (\ref{equ:3.1}) is a system of $r$ linear equations in $k+1$ unknowns. Indeed, we can replace $T_{(a,p_0)}$ by the linear map $\CC^{k+1} \to \CC^r$ which sends $p \in \CC^{k+1}$ to the vector $$ \left( (h(p))^{(I)}(\alpha_J), \, J=1,\ldots,R, \, I=0,\ldots,m_J-1 \right) \in \CC^r. $$ We shall show that this map has rank $r$ by finding $r$ polynomials $P_{K,L} \in \CC^{k+1}$ $(L=1,\ldots,R$, $K=1, \ldots,m_L)$ whose images are linearly independent. To do this, choose for $P_{K,L}$ a polynomial of degree $\leq k$ with a root $\alpha_L$ of multiplicity $K$ and roots $\alpha_J$ (for $J \neq L$) of multiplicity $m_J+1$. This is possible since \begin{eqnarray*} m_L + \sum_{J \neq L}(m_J+1) &=& r + R -1 \\ &\leq& 2r-1 \leq k \end{eqnarray*} by the hypothesis $r \leq (k+1)/2$ (note that $R \leq r$). \noindent Then $$ (h(P_{K,L}))^{(I)}(\alpha_J) \left\{ \begin{array}{ll} =0 & \mbox{if } J \neq L, \mbox{ or } J=L \mbox{ and } I < K-1, \\ \neq 0 & \mbox{if } J=L \mbox{ and } I=K-1. \end{array} \right. $$ If we order the components of the vector $(h(p))^{(I)}(\alpha_J)$ in lexicographical order, viz. $$ (J,I)=(1,0),(1,1),\ldots,(1,m_1-1),(2,0),\ldots,(R,m_R-1), $$ we observe that the matrix $\big( (h(P_{K,L}))^{(I)}(\alpha_J) \big)$ is in echelon form. \noindent Thus, the images of the $P_{K,L}$ are linearly independent, which shows that the rank of $T_{(a,p_0)}$ is $r$, and so its kernel has dimension $k+1-r$. \begin{remark} \label{rem:3.4} An example, obtained in conjunction with M. Guest, shows this lemma to be false for $k=6$, $r=4$. Namely, when $$ p_0=4z^6-12z^5+10z^4+2z^2-4z+4 $$ and $$ a = z(z-1)(z+1)(z-2), $$ the kernel of $T_{(a,p_0)}$ is of dimension 4, instead of 3. \end{remark} We deduce from the lemma that $X''_{k,r}$, and so $X'_{k,r}$, is a complex submanifold of dimension $3k-r+2$. Now the restriction of the projection $X'_{k,r} \to \mbox{\rm Hol}'_k(\CC P^2)$ which forgets $a$ is complex analytic and has image $\mbox{\rm Hol}'_{k,r}(\CC P^2)$ (indeed, it is the inverse of the map $i : \mbox{\rm Hol}'_{k,r}(\CC P^2) \to X''_{k,r}, \, f \mapsto (R(f),f)$). Thus $\mbox{\rm Hol}'_{k,r}(\CC P^2)$ is a complex analytic submanifold of $\CC^{3k+r+2}$, which concludes the proof of Proposition \ref{prop:3.1}.
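As a quick plausibility check of Remark \ref{rem:3.4}, the following SymPy sketch (ours, not part of \cite{Cra} or of the argument above) builds the matrix of $T_{(a,p_0)}$ in the monomial basis of $\CC^{k+1}$ and computes its rank; the remark predicts rank $3$ and kernel dimension $4$:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
k, r = 6, 4
p0 = 4*z**6 - 12*z**5 + 10*z**4 + 2*z**2 - 4*z + 4
a = sp.expand(z*(z - 1)*(z + 1)*(z - 2))

rows = []
for j in range(k + 1):                 # basis monomials 1, z, ..., z^k
    p = z**j
    h = p0*sp.diff(p, z) - sp.diff(p0, z)*p
    rem = sp.Poly(sp.rem(h, a, z), z)  # remainder mod a, degree < r
    rows.append([rem.coeff_monomial(z**i) for i in range(r)])

M = sp.Matrix(rows).T                  # r x (k+1) matrix of T_{(a,p0)}
print(M.rank(), k + 1 - M.rank())      # expected: 3 4
\end{verbatim}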
\section{The smooth nature of $G'_{k,r}$} In this section we shall prove, with notation as in \S 1, \begin{proposition} \label{prop:4.1} For $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$ and for $\displaystyle{\frac{4k-11}{3} \leq r \leq \frac{3}{2}k-3}$, \ $G'_{k,r} : \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to C^j (S^2,\CC P^2)$ is a $C^\infty$ map between $C^\infty$ manifolds, for any $2 \leq j < \infty$. \end{proposition} \begin{lemma} \label{lem:4.2} Let $g_t$ and $h_t$ be two families of polynomials in a single (complex) variable which depend smoothly (resp. complex analytically) on a parameter $t \in U \subset {\Bbb R}^N$ (resp. $\CC^N$), where $U$ is open. Suppose that the degrees of $g_t$, $h_t$ and of their highest common factor $l_t$ are all constant, i.e. do not vary with $t$. Then the polynomial $l_t$ depends smoothly (resp. complex analytically) on $t$. \end{lemma} \begin{remark} \label{rem:4.3} We can take the polynomial $l_t$ to be monic; then the statement of the lemma means that the remaining coefficients depend smoothly (or complex analytically) on $t$. \end{remark} \begin{remark} \label{rem:4.4} In the situation of Lemma \ref{lem:4.2}, each root of the polynomials depends continuously on $t$, but not smoothly, in general, when the multiplicity changes. So the linear factors of the polynomials do not always vary smoothly. \end{remark} \noindent {\bf Proof of Lemma \ref{lem:4.2}.} It is sufficient to prove the lemma for $t$ close to a fixed $t_0 \in U$. We can suppose deg $l_t > 0$, otherwise there is nothing to prove. \noindent {\bf First case:} Suppose that for one value of $t$, $g_t$ divides $h_t$ or $h_t$ divides $g_t$. If for instance $g_t$ divides $h_t$, we have $l_t=a(t)g_t$, where $a(t)$ is a scalar function. Since all the degrees are constant, we see that, for all $s$, deg $l_s=$ deg $g_s$, so that again $l_s=a(s)g_s$; as $l_s$ is monic, $a(s)$ is the inverse of the leading coefficient of $g_s$, so that $l_s$ is smooth.\\ {\bf Second case:} For each $t,\ g_t$ (resp. $h_t$) does not divide $h_t$ (resp. $g_t$). In particular, $g_t$ and $h_t$ are not proportional. For brevity we shall now omit the parameter $t$ from the notation. {\bf Claim 1.} There exist unique polynomials $\lambda$ and $\mu$ with \begin{equation}\label{equ:4.1} \mbox{deg } \lambda < \mbox{ deg } (h/l) \quad \mbox{ and } \quad \mbox{ deg } \mu < \mbox{ deg } (g/l) \end{equation} such that $\lambda g + \mu h=l$. The Euclidean algorithm ensures the existence of $\lambda$ and $\mu$ such that $\lambda g + \mu h = l$. Suppose deg $\lambda \geq$ deg $(h/l)$ and let $\tilde{\lambda}$ be the unique polynomial such that $$ \lambda = q \cdot \frac{h}{l} + \tilde{\lambda}, $$ with $\tilde{\lambda}=0$ or deg $\tilde{\lambda} <$ deg $(h/l)$. If $\tilde{\lambda} = 0$, we have $\lambda = q \cdot h/l$ and $\lambda g + \mu h = l$ becomes $(qg/l+\mu)h=l$, which is impossible with $l \not \equiv 0$ and deg $l <$ deg $h$. If, instead, $\tilde{\lambda} \neq 0$ and deg $\tilde{\lambda} <$ deg $(h/l)$, we have $\tilde{\lambda}=\lambda -q \cdot h/l$. \\ Setting $\tilde{\mu} = \mu + q \cdot g/l$, we see that $$ \tilde{\lambda} g + \tilde{\mu}h = l $$ and $$ \mbox{deg } \tilde{\mu} + \mbox{ deg } h = \mbox{ deg } \tilde{\lambda} + \mbox{ deg } g. $$ This implies \begin{eqnarray*} \mbox{deg } \tilde{\mu} &<& \mbox{ deg } h - \mbox{ deg } l + \mbox{ deg } g - \mbox{ deg } h \\ &=& \mbox{ deg } (g/l) \ . \end{eqnarray*} Thus we have the existence of $\tilde{\lambda}$ and $\tilde{\mu}$ satisfying (\ref{equ:4.1}). Uniqueness is easily checked. {\bf Claim 2.}
Let $\lambda$ and $\mu$ be polynomials satisfying (\ref{equ:4.1}) and such that deg$(\lambda g + \mu h) \leq$ deg $l$. Then $\lambda g + \mu h=a \cdot l$, with $a(t)$ a scalar function. Since $l$ divides $g$ and $h$, it divides $\lambda g + \mu h$, so that either deg$(\lambda g + \mu h) \geq$ deg $l$ or $\lambda g + \mu h=0$. In the second case, we have $\lambda g/l=- \mu h/l$, and by (\ref{equ:4.1}) $g/l$ must have a common factor with $h/l$, a contradiction. Therefore, deg $(\lambda g + \mu h) \geq$ deg $l$. With the hypotheses of the claim, we have deg $(\lambda g + \mu h)=$ deg $l$ and $l$ divides $\lambda g + \mu h$, so that $\lambda g + \mu h = a \cdot l$. \medskip We conclude from the two claims that for $\lambda$ and $\mu$ satisfying (\ref{equ:4.1}), $l$ is characterized up to a non-zero scalar factor as the unique polynomial of the form $\lambda g + \mu h$ such that deg $(\lambda g + \mu h) \leq$ deg $l \equiv L$. Writing the parameter $t$ back in, this is equivalent to $\displaystyle{\frac{d^{L+1}}{dz^{L+1}}(\lambda_tg_t+\mu_th_t)=0}$, a system of homogeneous linear equations in the unknown coefficients of $\lambda_t$ and $\mu_t$, with coefficients smooth in $t$. At $t=t_0$, consider any nonzero coefficient of $\lambda_{t_0}$ or $\mu_{t_0}$ and normalize the solution by setting this coefficient equal to 1 for $t$ close to $t_0$. The system becomes inhomogeneous and can be solved by Cramer's rule, so that the solutions are smooth, which proves Lemma \ref{lem:4.2}. \begin{lemma} \label{lem:4.5} Let $g_t, h_t$ and $k_t$ be three families of polynomials in a single complex variable which depend smoothly (resp. complex analytically) on a parameter $t \in U \subset {\Bbb R}^N$ (resp. $\CC^N$). Suppose that the degrees of $g_t$ and of the highest common factor $l_t$ of $g_t, h_t, k_t$ are constant and that deg $h_t \leq$ deg $ g_t$ and deg $k_t \leq$ deg $g_t$ for all $t$. Then $l_t$ depends smoothly (resp. complex analytically) on $t$. \end{lemma} \noindent {\bf Proof.} The idea of the proof is to replace $g_t, h_t$ and $k_t$ by linear combinations $\tilde{g}_t, \tilde{h}_t$ and $\tilde{k}_t$, so that the common factor remains the same, but any two of the three polynomials have no further common factor. First, replace $h_t$ by $h_t + a \cdot g_t$ and $k_t$ by $k_t + b \cdot g_t$ so that the three polynomials (still denoted by $g_t, h_t$ and $k_t$) all have the same constant degree. Consider now a fixed value of the parameter -- say $t=0$. We shall show that there exists $\epsilon > 0$ such that $l_t$ is smooth for $\| t \| < \epsilon$. Let $A$ be a 3 by 3 matrix, which we shall choose close to the identity matrix $I$, and in particular invertible. Set \begin{equation} \left( \begin{array}{c} \tilde{g}_t \\ \tilde{h}_t \\ \tilde{k}_t \end{array} \right) = A \left( \begin{array}{c} {g}_t \\ {h}_t \\ {k}_t \end{array} \right) \ . \label{equ:4.2} \end{equation} For $\| t \| < \epsilon_1$, all the roots of the three polynomials remain in a compact subset of $\CC$, since the degrees are constant. If $(a_t^1,\ldots,a_t^r)$ are the roots common to $g_t,h_t$ and $k_t$, they are also the roots common to $\tilde{g}_t, \tilde{h}_t$ and $\tilde{k}_t$. We shall now study the common roots of two (but not three) of these polynomials. Suppose that $\alpha$ is a root of $g_0$ and $\beta$ a root of $h_0$, with $\alpha \neq \beta$. By continuity of the roots of a family of polynomials, for $A-I$ small enough, the corresponding roots of $\tilde{g}_0$ and $\tilde{h}_0$ remain distinct.
Applying this remark a finite number of times, we see that any pair of distinct roots of the polynomials remain distinct after transformation by $A$, when $A-I$ is small enough. Consider now a complex number $b$ which is a root of $g_0$ and $h_0$, but not of $k_0$, so that $g_0(b)=h_0(b)=0$ and $k_0(b)=B \neq 0$. Then, for $\theta \in \CC$ small enough, replace $g_0$ by $g_0 + \theta k_0$. We see that $g_0+\theta k_0$ and $h_0$ no longer have the common root $b$. By the preceding remark no new common root has been created. Note that the same applies if $b$ is one of the $a_0^s$, by which we mean that $b$ is a root of all three polynomials, of order $m+n$ for $g_0$ and $h_0$ and of order $m$ for $k_0$, with $m \ge 1$, $n \geq 1$. Indeed, in this case, \begin{eqnarray*} g_0(z) &=& (z-b)^m(z-b)^n\tilde{g}(z), \\ k_0(z) &=& (z-b)^m\tilde{k}(z) \end{eqnarray*} and $$ (g_0+\theta k_0)(z)= (z-b)^m((z-b)^n\tilde{g}(z)+\tilde{k}(z)), $$ the last factor being nonzero at $b$. Thus $b$ remains a common root of $g_0+\theta k_0$ and $h_0$ only to the order $m$ to which it is a root of all three polynomials. Repeating this argument a finite number of times, we can replace $g_0,h_0$ and $k_0$ by three new polynomials given by $$ \left( \begin{array}{c} \tilde{g}_0 \\ \tilde{h}_0 \\ \tilde{k}_0 \end{array} \right) = A \left( \begin{array}{c} {g}_0 \\ {h}_0 \\ {k}_0 \end{array} \right) $$ in such a way that the only roots common to two of them are in fact common to all three. Defining $\tilde{g}_t, \tilde{h}_t$ and $\tilde{k}_t$ by (\ref{equ:4.2}) for the same matrix $A$, we see that for $\|t\|$ small enough the roots have the same property. Thus, the common factor $l_t$ of $g_t,h_t$ and $k_t$ is also the common factor of, for example, $g_t$ and $h_t$. By Lemma \ref{lem:4.2}, it varies smoothly (resp. complex analytically) with $t$. \medskip \noindent {\bf Proof of Proposition \ref{prop:4.1}.} Let $f \in \mbox{\rm Hol}_{k,r}^* (\CC P^2)$. Then, identifying $S^2$ with $\CC \cup \{ \infty \}$ by stereographic projection, $f$ can be represented (at least on $\CC \subset S^2$) by a map $p : \CC \to \CC^3 \setminus \{0\}$ with $p(z) =(p_0(z),p_1(z),p_2(z))$ \ $(z \in \CC)$, a triple of polynomials with no common zeros and with max (degree $p_0$, degree $p_1$, degree $p_2)=k$. Now, since $f$ has a finite number of ramification points (in fact no more than $r$), we can choose the pole of the stereographic projection such that none of them is at $\infty$; let the ramification points then be $z_1,\ldots,z_R \in \CC$ with ramification indices $r_1,\ldots,r_R$. Note that $\sum_{I=1}^R r_I = r$. Next, the first associated curve (see \S 2) $f_{(1)}$ is represented by $q=p \wedge p' : \CC \to \Lambda^2 \CC^3 \equiv \CC^3$. This is a triple of polynomials $q=(g,h,k)$ and it is easily seen that, since $f$ is not ramified at infinity, the maximum of the degrees of $g,h,k$ is equal to $2k-2$. Further, $(g,h,k)$ have a common zero at $z_I$ of order $r_I$ if and only if $z_I$ is a ramification point of $f$ of ramification index $r_I$, so that the highest common factor of $(g,h,k)$ is the {\em ramification divisor} of $f$ given by $R(z)=\displaystyle{\prod_{I=1}^R(z-z_I)^{r_I}}$, a polynomial of degree $r$. Then $q/R$ is a triple of polynomials with no common roots and, for $z \in \CC$, $f_{(1)}(z)$ is the 2-plane determined by $q(z)/R(z)$. Now suppose that $f_t \in \mbox{\rm Hol}_{k,r}^*(\CC P^2)$ is a family of holomorphic maps depending smoothly on a parameter $t \in U \subset {\Bbb R}^N$.
Then we can choose a family of polynomial maps $p_t : \CC \to \CC^3 \setminus \{0\}$ as above representing $f_t$ and depending smoothly on $t$, with no ramification point at infinity. Here we use the fact that the ramification points, being roots of polynomials, vary continuously with $t$. Since the total ramification stays constant ($=r$), the ramification divisor $R_t$ of $f_t$ has constant degree and we can apply Lemma \ref{lem:4.5} to see that $R_t$ depends smoothly on $t$. Hence the corresponding $q_t/R_t$ depends smoothly on $t$ and so does $(f_t)_{(1)}$. Since $\varphi_t=f_t^\perp \cap f_{t(1)}$, it is clear that this too varies smoothly with $t$, and the proposition is proven. \noindent {\bf Proof of Theorem \ref{th:1.3}.} Similarly, Lemma \ref{lem:4.5} allows us to conclude that the passage from $f$ to its conjugate polar is a complex analytic map from $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ to $\mbox{\rm Hol}_{k',r'}^*(\CC P^2)$, and that the same applies to its inverse. \section{The diffeomorphic nature of $G'_{k,r}$} In this section, we complete the proof of Theorem \ref{th:1.1}. First we show \begin{lemma} \label{lem:5.1} For $k \geq 2$, \ $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$, and for $k \geq 3$,\ $\displaystyle{\frac{4k-11}{3} \leq r \leq \frac{3}{2}k-3}$, \ the map $G'_{k,r} : \mbox{\rm Hol}_{k,r}^* (\CC P^2) \to C^j(S^2,\CC P^2)$ has injective differential at all points $f_0 \in \mbox{\rm Hol}_{k,r}^* (\CC P^2)$, for any $j \geq 2$. \end{lemma} \noindent {\bf Proof.} Let $f_t \in \mbox{\rm Hol}_{k,r}^* (\CC P^2)$ be a family of holomorphic maps depending smoothly on a real parameter $t \in (- \epsilon, \epsilon)$, $\epsilon > 0$, i.e. a smooth curve in $\mbox{\rm Hol}_{k,r}^* (\CC P^2)$. Then, by Proposition \ref{prop:4.1} and the proof of Theorem \ref{th:1.3}, $\varphi_t = G'_{k,r}(f_t)$ and $g_t$ (the polar of $f_t$) are smooth curves in $C^j(S^2,\CC P^2)$. Working on a coordinate domain $(U,z)$, let $F_t, \Phi_t,G_t : U \to \CC^3 \setminus \{0\}$ be families of smooth maps representing $f_t, \varphi_t, g_t$ respectively, with $F_t$ holomorphic and $G_t$ antiholomorphic. Then for each $z \in U$, \ $t \in (-\epsilon, \epsilon)$, $\{F_t(z), \Phi_t(z),G_t(z)\}$ is a Hermitian orthogonal basis of $\CC^3$. Now $\displaystyle{\frac{d \varphi_t}{dt} = d \pi \left( \frac{d \Phi_t}{dt} \right)}$ (where, as before, $\pi : \CC^3 \setminus \{0\} \to \CC P^2$ is the canonical projection). Suppose that $\displaystyle{\frac{d \varphi_t}{dt}=0}$ for some value of $t$. Then $\displaystyle{\frac{d \Phi_t}{dt}}$ must be in the direction of $\Phi_t$, so that, in particular, $$ \Big\langle \frac{d \Phi_t}{dt},F_t \Big\rangle =0. $$ (Here $\langle \cdot , \cdot \rangle$ denotes the standard Hermitian inner product on $\CC^3$.) This last equation is equivalent to $$ \Big\langle \frac{d F_t}{dt},\Phi_t \Big\rangle =0, $$ hence \begin{equation}\label{equ:5.1} \frac{dF_t}{dt}=\alpha F_t + \beta G_t \end{equation} for some smooth functions $\alpha, \beta$ on $U \times (-\epsilon,\epsilon)$. Differentiating with respect to $\overline{z}$, since $F_t$ is holomorphic we obtain $$ 0=\frac{\partial \alpha}{\partial \overline{z}}F_t + \frac{\partial \beta}{\partial \overline{z}}G_t + \beta\frac{\partial G_t}{\partial \overline{z}}. $$ Now the triple $\big\{ F_t, \frac{\partial G_t}{\partial \overline{z}}, G_t \big\}$ is linearly independent except at the isolated points where $h_t=\bar{g}_t$ is ramified. Hence $\beta \equiv 0$ and so, from (\ref{equ:5.1}), $$ \frac{d F_t}{dt} = \alpha F_t, $$ which implies that $df_t/dt=0$. The lemma follows.
The proof of Theorem \ref{th:1.1} is completed by \begin{proposition} \label{prop:5.2} For $k \geq 2$, \ $0 \leq r \leq \displaystyle{\frac{k+1}{2}}$, and for $k \geq 3$,\ $\displaystyle{\frac{4k-11}{3} \leq r \leq \frac{3}{2}k-3}$, \ the map $G'_{k,r} : \mbox{\rm Hol}_{k,r}^*(\CC P^2) \to C^j(S^2,\CC P^2)$ is an embedding with image the closed submanifold $\mbox{\rm Harm}_{k-2-r,3k-2-r}(\CC P^2)$ of $C^j(S^2,\CC P^2)$, for any $j \geq 2$. \end{proposition} \noindent {\bf Proof.} Since $\mbox{\rm Hol}_{k,r}^*(\CC P^2)$ is finite dimensional, the differential of $G'_{k,r}$ splits at each point and so by \cite{Lan} $G'_{k,r}$ is an immersion. By Proposition \ref{prop:2.5} it is injective and has image $\mbox{\rm Harm}_{k-2-r,3k-2-r}(\CC P^2)$, and we show below that it is a homeomorphism onto its image. Thus it is an embedding and its image is a closed submanifold of $C^j(S^2,\CC P^2)$. The dimension of the space $\mbox{\rm Harm}_{k-2-r,3k-2-r}(\CC P^2)$ is thus equal to the (real) dimension of $\mbox{\rm Hol}_{k,r}^* (\CC P^2)$, which is $6k-2r+4$; the dimensions stated in Theorem \ref{th:1.1} follow easily. To show that $G'_{k,r}$ is a homeomorphism it suffices to show that it is proper. To do this, following \cite{Cra}, \cite[Lemma 3.3]{F-G-K-O}, consider a sequence $(\phi_n)$ which converges to $\phi$ in $G'_{k,r}(\mbox{\rm Hol}^*_{k,r}(\CC P^2)) = \mbox{\rm Harm}_{k-2-r, 3k-2-r}(\CC P^2)$. Let $f_n = (G'_{k,r})^{-1}(\phi_n) \in \mbox{\rm Hol}^*_{k,r}(\CC P^2)$. It suffices to prove that a subsequence converges in $\mbox{\rm Hol}^*_{k,r}(\CC P^2)$. Note first that $\mbox{\rm Hol}^*_{k,r}(\CC P^2)$ can be injected into the projective space $P(\CC^{3k+3})$ by the map $i:f \mapsto$ projective class of the coefficients of the polynomials describing $f$. Since that space is compact, a subsequence of $(i(f_n))$ converges to $[p] = [p_0,p_1,p_2]$. (Note that $(p_0, p_1, p_2)$ need not be coprime and that their maximal degree need not be $k$.) We retain the notation $i(f_n)$ for the subsequence. We can then write $[p] = [b q_0,b q_1, b q_2]$ where the $q_i$'s are coprime and $[q] = [q_0, q_1, q_2]$ lies in $\mbox{\rm Hol}_{k-m}(\CC P^2)$, the space of not necessarily full holomorphic maps from $S^2$ to $\CC P^2$ of degree $k-m$, for some $m \geq 0$. For each $n$, let $h_n$ be the conjugate polar of $f_n$, belonging to $\mbox{\rm Hol}_{2k-2-r}^*(\CC P^2)$, and consider its image by $i$ in the appropriate projective space of coefficients $P(\CC^{6k-3-3r})$. Again, a subsequence converges to $[t] = [t_0,t_1,t_2]$, and we have $[t_0,t_1,t_2] = [a s_0, a s_1, a s_2]$ with the $s_i$'s coprime and of possibly lower degree than $2k-2-r$. Now $f_n \perp \overline{h_n}$ for all $n$, so that $q \perp \bar{s}$ and $\psi = (q \oplus \bar{s})^{\perp}$ is well-defined. Further, on $S^2 \setminus \{\{\mbox{zeros of } a \} \cup \{ \mbox{zeros of } b \} \cup \{\infty\}\}$, \ $G'(f_n)(x)$ converges to $\psi(x)$, so that $\phi$ coincides with $\psi$ on a dense set, and therefore everywhere. Then $3k-2-r = E(\phi) = E(\psi) = \deg q + \deg s$ with $\deg q \leq k$ and $\deg s \leq 2k-2-r$, so that both these inequalities must be equalities and there can be no loss of degree above. Thus $q$ must have degree $k$ and ramification index $r$. Further, since $r \leq 3k/2 -3$, \ $q$ is full; otherwise, by the Riemann-Hurwitz formula, we would have $r = 2k-2 > 3k/2 -3$. Hence $q \in \mbox{\rm Hol}_{k,r}^*(\CC P^2)$ as required.
\section*{Background/Introduction} Cardiovascular magnetic resonance (CMR) provides insights into myocardial structure and function noninvasively, with high diagnostic accuracy and without ionising radiation. Late Gadolinium Enhancement (LGE) has become the reference standard for non-invasive imaging of myocardial scar and focal fibrosis in both ischaemic \cite{kim2000use} and non-ischaemic cardiomyopathy \cite{patel2017role}. LGE is useful in cardiac conditions which have stark regional differences within the myocardium, but it cannot correctly visualise myocardial pathology that is diffuse in nature and affects the myocardium uniformly. Examples include diffuse myocardial inflammation, fibrosis, hypertrophy, and infiltration \cite{sado2013identification}. In contrast, native T\textsubscript{1}-mapping provides quantitative myocardial tissue characterisation, without the need for gadolinium \cite{moon2013myocardial}. Previous work has shown that T\textsubscript{1} mapping can help to detect diffuse myocardial disease in early disease stages and can aid in diagnosing the cardiac dysfunction underlying these diseases. \\ Despite its recognised potential, T\textsubscript{1} mapping analysis typically requires time-consuming manual segmentation of T\textsubscript{1} maps. Moreover, external factors, such as haematocrit and blood flow, impact the obtained values and create variability that reduces the ability to separate healthy from diseased myocardium. Several blood correction models have been proposed to limit the impact of external factors \cite{nickander2017blood,shang2018blood,reiter2013normal}. However, these methods have not been evaluated in large cohort studies. Automating T\textsubscript{1} analysis of myocardial tissue characterisation sequences could facilitate the clinical use of T\textsubscript{1} mapping and unlock the potential to obtain T\textsubscript{1} data in large populations.\\ In recent years, deep learning methods have shown great success in segmenting anatomical and pathological structures in medical images \cite{bai2018automated, ruijsink2019fully, fahmy2019automated}. For many tasks, their accuracy is comparable to human-level performance, or even surpasses it. In the context of CMR imaging, semi-automatic and automatic techniques for cardiac cine \cite{ruijsink2019fully, bai2018automated} and flow \cite{goel2014fully} imaging have been developed. One paper has proposed an automated segmentation method for native T\textsubscript{1} maps \cite{fahmy2019automated}. However, this method only extracted global left ventricle (LV) myocardial T\textsubscript{1} values, whereas regional assessment of septal and/or focal lesion T\textsubscript{1} values is typically used to characterise diseases \cite{liu2017measurement, messroghli2017clinical}. Furthermore, T\textsubscript{1} values were only reported for healthy subjects and a pooled group of cardiovascular diseases (CVD), without distinguishing between different myocardial disease processes, and these values were not corrected for myocardial blood volume. In this paper we add further insight into the aforementioned areas.\\ Medical segmentation problems are often characterised by ambiguities, some of them inherent to the data, such as poor contrast, inhomogeneous appearance and variations in imaging protocol, and some due to inter- and intra-observer variability in the annotated data used for training.
To limit the effect of these factors and detect failed cases, some groups have proposed to incorporate quality control (QC) techniques \cite{fahmy2019automated, ruijsink2019fully, robinson2019automated}. We believe that modelling uncertainty at a per-pixel level is an important step in understanding the reliability of the segmentations and increasing clinicians' trust in the model's outputs. Several works have investigated uncertainty estimation for deep neural networks \cite{kendall2017uncertainties, lakshminarayanan2017simple, zhu2018bayesian}. A popular approach to account for the uncertainty in the learned model parameters is to use variational Bayesian methods, which are a family of techniques for approximating Bayesian inference over the network weights. These methods can be used to automatically segment the anatomy of interest, while additionally providing a pixel-wise map of the confidence of the model in segmenting the input image. Budd \textit{et al.} \cite{budd2019confident} proposed to use this approach to automatically estimate fetal head circumference from ultrasound imaging and provide real-time feedback on measurement robustness.\\ In this paper, we develop a tool for automated segmentation and analysis of T\textsubscript{1} maps. We use the PHiSeg network \cite{baumgartner2019} to segment the images, and additionally use the generated uncertainty information in a novel QC process to identify uncertain (and potentially inaccurate) segmentations. To the best of our knowledge, this is the first time that segmentation uncertainty information has been used for QC in medical image segmentation. By incorporating this QC process, our framework automatically controls the quality of the segmentations and rejects those that are uncertain. We hypothesise that this method can be used to derive high-quality T\textsubscript{1} data without human interaction from large-scale databases. Using the proposed method we compute mean global and regional native T\textsubscript{1} values from 14,683 subjects from the UK Biobank, which represents the largest cohort of T\textsubscript{1} mapping images to date. We report reference values for healthy subjects and interrogate typical values obtained in relevant subgroups of cardiomyopathies. In addition, we investigate whether a blood correction model for T\textsubscript{1} \cite{nickander2017blood} provides better discrimination between healthy and diseased myocardium. \\ \section*{Materials and Methods} \subsection*{UK Biobank dataset} CMR imaging was carried out on a 1.5 Tesla scanner (Siemens Healthcare, Erlangen, Germany). For each subject, the Shortened Modified Look-Locker Inversion recovery technique (ShMOLLI, WIP780B) was used to perform native (non-contrast) myocardial T\textsubscript{1} mapping in a single mid-ventricular short axis (SAX) slice (TE/TR/flip-angle (FA): 1.04 ms/2.6 ms/35$^{\circ}$, voxel size 0.9 x 0.9 x 8.0 mm). The matrix size of all images was unified to 192 x 192. Details of the full image acquisition protocol can be found in \cite{petersen2015uk}. \\ As pre-processing, all of the CMR DICOM images were converted into NIfTI format. Training and validation data were obtained through manual segmentation of the images by an experienced CMR cardiologist.
The LV endocardial and epicardial borders and the right ventricle (RV) endocardial border were traced using the ITK-SNAP interactive image visualisation and segmentation tool \cite{yushkevich2006user}.\\ From the UK Biobank database, we first excluded any subject with systemic disease. Subsequently, we identified patients with CVD from the included cohort using ICD10 codes. We included 11 relevant groups of CVD: acute myocarditis, aortic stenosis (AS), atrial fibrillation (AF), cardiac sarcoidosis, chronic coronary artery disease (CAD), dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), pheochromocytoma, obesity and takotsubo cardiomyopathy. Of the remaining subjects, those who had no history of CVD or any cardiovascular risk factors were included as healthy subjects. \\ From the selected study population, we randomly chose three sub-cohorts: a set of 800 subjects (consisting of both healthy subjects and subjects with a wide variety of CVDs) for training, a set of 100 subjects (50 healthy subjects and 50 chronic cardiomyopathy subjects) for validation of the segmentations provided by the PHiSeg network, and a set of 700 subjects (500 healthy subjects and 200 chronic cardiomyopathy subjects) for validation of the QC process. \subsection*{Automated image analysis} The proposed workflow for automated T\textsubscript{1} map analysis is summarised in Figure \ref{fig:Fig1_pipeline} and described in detail in the following subsections. \subsubsection*{Deep neural network with Bayesian inference for segmentation} In this work, we used the Probabilistic Hierarchical Segmentation (PHiSeg) network \cite{baumgartner2019}, a recently proposed deep learning network with Bayesian inference, to segment the LV blood pool, LV myocardium and RV blood pool from T\textsubscript{1} mapping images (Figure \ref{fig:Fig1_pipeline}). The PHiSeg network employs convolutions to learn task-specific representations of the input data and predicts a pixel-wise segmentation of an input image based on this representation. In addition, an uncertainty map is generated which quantifies the pixel-wise uncertainty of the segmentation.\\ The PHiSeg network \cite{baumgartner2019} models the segmentation problem at multiple scales from fine to coarse. Performing inference with this model using a conditional variational autoencoder approach results in a network architecture resembling the commonly used U-Net. However, in contrast to a U-Net, this network allows modelling of the joint probability of all pixels in the segmentation map. Specifically, it allows sampling multiple plausible segmentation hypotheses for an input image. In this manner, in addition to producing a per-pixel prediction of the label class, it also allows estimation of the uncertainty corresponding to each pixel. The network architecture features a number of convolutional layers, each using a 3x3 kernel and a rectified linear unit (ReLU) activation function. After every three convolutions, the feature map is downsampled by a factor of 2 to learn more global features. After performing probabilistic inference at each level, the learned features are upsampled and fused to produce a predicted segmentation mask and an uncertainty map at the original image resolution. To train the model, we aim to find the neural network parameters which maximise the evidence lower bound (ELBO), which bounds the marginal likelihood of the observed data from below.
The general idea is that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data \cite{yang2017understanding}. A detailed description of the method, as well as the network architecture, can be found in \cite{baumgartner2019}.\\ \subsubsection*{Network training and testing} For training of the network, all images were cropped using the manual segmentation to the same size of 192$\times$192 and intensity normalised to the range [0,1]. Data augmentation was performed on-the-fly by applying random translations, rotations, scalings and intensity transformations to each mini-batch of images before feeding them to the network. Each mini-batch consisted of 20 native T\textsubscript{1} images. To optimise the loss function we used the Adam optimiser, with the momentum set to 0.9 and the learning rate to $10^{-3}$. The models were trained for 50,000 iterations on an NVIDIA GeForce GTX TITAN GPU and the model with the highest average Dice score (on the validation set) over all classes was selected. \subsubsection*{Generating uncertainty maps} At test time, we used the PHiSeg network to draw $T$ different segmentation samples for a single given input (we used $T$=100). From these multiple segmentations, the final predicted segmentation was calculated as the average softmax probability over all of the segmentation samples, and the uncertainty map was generated by computing the cross entropy between the mean segmentation mask and the segmentation samples. \subsubsection*{Quality control} Our QC process comprises two steps, both based upon different aspects of the uncertainty information provided by the PHiSeg network. To train these QC steps, we manually labelled the PHiSeg-obtained segmentations as correct or incorrect in a cohort of 800 subjects (consisting of both healthy subjects and subjects with a wide variety of CVDs).\\ First, we used the ELBO output \cite{baumgartner2019, yang2017understanding} of the trained PHiSeg network to reject uncertain segmentations. The ELBO quantifies how likely it is that the segmentation is correct. We used the manual labellings to determine a threshold, and any ELBO value above this threshold resulted in the segmentation being rejected.\\ The second QC step is defined as an image classification problem, where each image/segmentation pair is classified as accurate or inaccurate. The outputs of the PHiSeg network (i.e. the segmentation and uncertainty map) were used as input to a deep learning image classifier. For the image classifier, we used a VGG-16 CNN \cite{simonyan2014very}, which consists of a stack of convolutional layers followed by three fully-connected layers for classification. Each convolutional layer uses a 3x3 kernel and is followed by batch normalisation and ReLU. Details of the VGG-16 network can be found in \cite{simonyan2014very}.\\ Data are rejected if either of these two QC steps fails. The combination of these two steps ensures that T\textsubscript{1} images acquired on different planes or with inaccurate segmentations are identified and rejected for further analysis. \subsubsection*{T\textsubscript{1} map analysis} Myocardial T\textsubscript{1} values were measured from the mid-ventricular SAX slice for the whole myocardium, as well as for the interventricular septum and free-wall segments separately.
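At its core, each regional measurement reduces to averaging the T\textsubscript{1} map over the pixels of a labelled region; the following minimal sketch illustrates this step (array names and the label convention are hypothetical, not part of our released pipeline):
\begin{verbatim}
import numpy as np

# Minimal sketch (hypothetical names): average a T1 map over one
# label of a predicted segmentation mask, e.g. the LV myocardium.
def mean_t1(t1_map: np.ndarray, mask: np.ndarray, label: int) -> float:
    region = t1_map[mask == label]   # pixels belonging to the region
    return float(region.mean())      # mean T1 of the region, in ms

# e.g. global LV myocardial T1, assuming label 2 marks the myocardium:
# global_lv_t1 = mean_t1(t1_map, predicted_mask, label=2)
\end{verbatim}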
From the predicted segmentation, the RV-LV intersection points were automatically detected from the LV/RV segmentation masks (RV1 and RV2 in Figure \ref{fig:Fig1_pipeline}) using the hit-or-miss transform, a morphological operation that detects a given configuration (or pattern) in an image. In our case, we aimed to detect the intersection between the background, LV myocardium and RV labels. These RV-LV intersections were used to divide the LV myocardium mask into an LV interventricular septum (LVIVS) mask and an LV free-wall (LVFW) mask. \subsubsection*{Myocardial T\textsubscript{1} blood correction} For myocardial T\textsubscript{1} blood correction, we used the model proposed by Nickander \textit{et al.} \cite{nickander2017blood}, which is based on a linear correlation between myocardial T\textsubscript{1} and blood measurements, as follows: \begin{equation} \text{T\textsubscript{1}\textsuperscript{corrected}} = \text{T\textsubscript{1}\textsuperscript{uncorrected}} + \alpha \cdot(\text{R\textsubscript{mean}} - \text{R\textsubscript{patient}}), \label{eq:blood_correction} \end{equation} where T\textsubscript{1}\textsuperscript{uncorrected} is the native myocardial T\textsubscript{1} value, R\textsubscript{mean} is the mean blood R\textsubscript{1} for the cohort, R\textsubscript{patient} is the individual subject's blood R\textsubscript{1}, and $\alpha$ is calculated as the slope of the linear regression between myocardial T\textsubscript{1} and blood T\textsubscript{1} measurements.\\ The blood T\textsubscript{1} value was computed from RV and LV blood pool regions of interest (ROI) in the mid-ventricular SAX T\textsubscript{1} map. To generate the LV and RV ROIs, we eroded the LV/RV blood pool segmentations to generate a mask that has 1/3 of the area of the original mask (see Figure \ref{fig:Fig2_ROI_T1_corr}). To ensure that no papillary muscles or trabeculae were included, we rejected any pixel whose T\textsubscript{1} value was more than 1.5 times the interquartile range below the first quartile of the blood pool values. The blood T\textsubscript{1} value was calculated as the mean of the LV and RV values calculated in this way, and then converted to the T\textsubscript{1} relaxation rate (R\textsubscript{1}=1/T\textsubscript{1}). \\ \subsection*{Reference values} In total, we analysed CMR scans of 14,683 subjects (62 $\pm$ 22 yrs., 48\% males) included in the UK Biobank cohort using our method. First, we derived reference values in 5,685 healthy subjects, selected using stringent criteria to exclude any disease or risk factor that impacts the heart or vasculature (see details in our previous paper \cite{ruijsink2019fully}). Next, we analysed data of patients known to have one of 11 different CVDs. We also obtained indexed LV end diastolic volume (iLVEDV), LV ejection fraction (LVEF) and indexed LV mass (iLVM) from cine CMR data, using the automated method described in \cite{ruijsink2019fully}. Outliers were defined a priori as values 3 interquartile ranges below the first or above the third quartile, and were removed from the analysis. \subsubsection*{Evaluation of the method} We evaluated the performance of the automated network as follows:\\ \textbf{Deep neural network with Bayesian inference}: To validate the PHiSeg network, a cohort of 50 healthy volunteers and 50 chronic cardiomyopathy patients was selected and manually segmented. These subjects were not used for training the PHiSeg network. We used the Dice metric to measure the degree of overlap between the automated and manual segmentations.
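For completeness, the Dice overlap between an automated and a manual mask for one label class can be computed as in the following minimal sketch (mask names are hypothetical):
\begin{verbatim}
import numpy as np

# Minimal sketch: Dice overlap between two binary masks of one
# label class (automated vs. manual segmentation).
def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
\end{verbatim}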
The Dice metric has values between 0 and 1, where 0 denotes no overlap and 1 denotes perfect agreement. Furthermore, Bland-Altman analysis and Pearson's correlation were used to compare the obtained global LV native myocardial T\textsubscript{1} values, and the T\textsubscript{1} values in the LVIVS and LVFW, between the automated and manual segmentations. \\ \textbf{Quality control}: To assess the accuracy of the QC process, we manually labelled the PHiSeg-obtained segmentations as correct or incorrect in a cohort of 500 healthy volunteers and 200 chronic cardiomyopathy patients, independent of the training cohort. We computed sensitivity (\% of image/segmentation pairs manually labelled as incorrect that were correctly detected), specificity (\% of image/segmentation pairs manually labelled as correct that were correctly identified), and balanced accuracy (the average of the percentages of correct answers for the correct and incorrect classes individually). \subsubsection*{Statistical analysis} Statistical analysis was performed using Statsmodels, a Python library for statistical and econometric analysis \cite{seabold2010statsmodels}. Normality of distributions was tested with the Kolmogorov-Smirnov test. Categorical data are expressed as percentages, and continuous variables as mean $\pm$ standard deviation (SD) or median and interquartile range, as appropriate. A paired 2-tailed Student's $t$-test was used to assess paired data, and an unpaired 2-tailed Student's $t$-test was used to assess unpaired samples. Comparison of three or more normally distributed groups was performed using analysis of variance (ANOVA, with Bonferroni's post-hoc correction). For the Bland-Altman analysis, paired t-tests versus zero were used to verify the significance of the biases, and paired t-tests were used to compare the mean absolute errors of all parameters between healthy subjects and patients. Linear regression was performed to estimate the slope used for correction of myocardial T\textsubscript{1} from blood T\textsubscript{1}. The relationships between corrected and uncorrected mean native myocardial T\textsubscript{1} were investigated by computing the SD of the mean native myocardial T\textsubscript{1}, and evaluated for difference with an F-test. To investigate whether blood correction improved the discrimination between healthy subjects and patients with CVDs, we calculated the z-scores of the patients in each CVD group with respect to our healthy population for the uncorrected and corrected T\textsubscript{1} maps and compared the average z-scores using a paired 2-tailed Student's $t$-test. Associations between native T\textsubscript{1} values, clinical demographics and LV function were explored by single and multivariate linear regressions. In all cases, p $<$ 0.05 denotes statistical significance. To compute reference values, healthy subjects were used as the study controls and unpaired t-tests were used for comparison. \section*{Results} \subsection*{Deep neural network with Bayesian inference} Table \ref{tab1:dsc_coeffs} reports the Dice scores between automated and manual segmentations evaluated on the cohort of 100 subjects. Overall, the Dice score between manual and automated segmentations was 0.84 for the LV myocardium.\\ The Bland-Altman plot showed strong agreement between the pipeline and manual analysis, see Figure \ref{fig:Fig3_BA_CA_TA}.
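For reference, the bias and 95\% limits of agreement underlying such Bland-Altman plots can be computed as in the following minimal sketch (input names are hypothetical):
\begin{verbatim}
import numpy as np

# Minimal sketch: Bland-Altman bias and 95% limits of agreement
# between paired measurements (e.g. automated vs. manual global
# LV T1 values, one value per subject).
def bland_altman(auto_vals: np.ndarray, manual_vals: np.ndarray):
    diff = auto_vals - manual_vals
    bias = diff.mean()               # systematic difference
    sd = diff.std(ddof=1)            # SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
\end{verbatim}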
There was a small negative bias for all of the native T\textsubscript{1} values (-5.04 ms, -5.89 ms and -4.97 ms for global LV, LVIVS and LVFW, respectively). Table \ref{tab2:T1_values} shows the automatically and manually calculated T\textsubscript{1} values within the three regions, averaged over the test cohort. There was no significant difference in mean absolute error between chronic cardiomyopathy patients and healthy volunteers for any of the extracted T\textsubscript{1} values, except for the LVIVS T\textsubscript{1} values of chronic cardiomyopathy patients. The automatically derived T\textsubscript{1} values showed a strong correlation with the T\textsubscript{1} values based on manual segmentations (r=0.97) (see Supplementary Figure \ref{fig:Fig3_BA_CA_TA}). \\ Figure \ref{fig:Fig4_ExamplesT1} shows an example of a manual segmentation, the predicted segmentation and the uncertainty map for a healthy subject and for a chronic cardiomyopathy patient. Note that the manual and the automatic segmentations agree well. \\ \subsection*{Quality control} The balanced accuracy of the QC process was 97.08\%, the sensitivity was 90.08\% and the specificity was 96.44\%.\\ Figure \ref{fig:Fig5_unc_examples} shows examples of uncertainty maps for a cohort of subjects that were rejected by the QC steps. Note that the QC is able to identify inaccurate data with a wide range of underlying causes, such as incorrect planning (images (a), (b), (c) and (d) in Figure \ref{fig:Fig5_unc_examples}), motion artefacts (images (e) and (f) in Figure \ref{fig:Fig5_unc_examples}) or segmentation failure (images (g) and (h) in Figure \ref{fig:Fig5_unc_examples}). \subsection*{Uncertainty quantification} To understand the impact of uncertainty on the predicted T\textsubscript{1} values, we computed, for the test cohort of 50 healthy volunteers and 50 chronic cardiomyopathy patients, the distribution of the global LV T\textsubscript{1} values over the $T$ predicted segmentation samples. Figure \ref{fig:Fig_AF1_error_t1} shows a graphical representation of the variability of these estimates. The solid lines indicate the mean T\textsubscript{1} values from the test cohort and the shaded region represents one SD of uncertainty. \subsection*{Myocardial T\textsubscript{1} blood correction} The constants from the linear regression model between the myocardial T\textsubscript{1} and the blood measurements were used to correct native myocardial T\textsubscript{1} values according to Equation \ref{eq:blood_correction}. The global R\textsuperscript{2} was 0.28 (comparable to that found in \cite{nickander2017blood}) and the mean squared error was 106.4. The linear regression had a slope of -0.35 and an intercept of 935. \\ The mean uncorrected myocardial T\textsubscript{1} in the healthy cohort was 946.44 $\pm$ 61.64 ms, and the mean corrected myocardial T\textsubscript{1} was 927.62 $\pm$ 46.41 ms, showing a statistically significant decrease of the SD (p $<$ 0.001). \subsubsection*{Reference values} From the 14,683 subjects, 156 subjects were rejected by our QC process due to inaccurate T\textsubscript{1} segmentations. Subject characteristics are summarised in Table \ref{tab3:demographics}.
Compared to healthy subjects, all patients were in the same age range; patients with obesity had a statistically higher body mass index (BMI) and body surface area (BSA); patients with DCM, AF and chronic CAD had significantly lower LVEF (all p $<$ 0.05); patients with HCM, AS and chronic CAD had significantly higher iLVM (all p $<$ 0.05). Native T\textsubscript{1} values in three different regions (LV, LVIVS, LVFW) in CVD subjects and in healthy subjects are shown in Table \ref{tab5:ref_vals_add}. Gender did not affect myocardial T\textsubscript{1} values significantly (2-way ANOVA, p = 0.01), and thus only overall T\textsubscript{1} values are reported. \\ Native T\textsubscript{1} values in typical tissue classes in CVD subjects and in healthy subjects, for uncorrected and corrected myocardial LVIVS T\textsubscript{1}, are shown in Figure \ref{fig:Fig6_ref_vals_boxplot_uncorrected_corrected_IVS}. Table \ref{tab5:ref_vals_add} shows global and regional native corrected T\textsubscript{1} values. Compared to healthy subjects, patients with DCM, HCM, takotsubo cardiomyopathy, acute myocarditis and cardiac sarcoidosis had significantly higher native T\textsubscript{1} values. \\ Finally, Table \ref{tab4:ref_vals_age} shows LVIVS native corrected T\textsubscript{1} values stratified by age in decades (45 to 54, 55 to 64, and 65 to 74 years), reported as mean and reference ranges (95\% prediction intervals). Note that, in the first age range, i.e. 45-54 years, there were not enough data to compute the confidence interval for the AS group. \section*{Discussion} In this work, we have proposed a fully automated pipeline with a novel quality control step to automatically quantify myocardial tissue from native T\textsubscript{1} mapping, which allows extraction of reference values from large-scale databases. The method is fast and scalable, overcoming limitations associated with current clinical CMR image analysis workflows, which are manual, time-consuming and prone to subjective error. The method has potential to automate T\textsubscript{1} mapping analyses from CMR in clinical practice and research. Using the proposed pipeline we present reference ranges for global and regional myocardial native T\textsubscript{1} in healthy subjects from the UK Biobank dataset and show that blood correction improves discrimination between healthy subjects and patients with CVD. \subsection*{Automatic analysis with quality control} We validated our segmentation network by comparing automated and manual analysis in a cohort of healthy and diseased subjects. The results show strong agreement, both for the segmentations (see Table \ref{tab1:dsc_coeffs}) and for the estimated T\textsubscript{1} values (Figure \ref{fig:Fig3_BA_CA_TA}). Residual biases in automated T\textsubscript{1} calculations might not necessarily correspond to T\textsubscript{1} estimation errors but might be related to the different methods used to compute T\textsubscript{1} in the manual and automatic analyses. The residual biases are within the range of inter- and intra-observer variabilities previously reported \cite{lin2018variability}. Also, the residual biases (ranging between -4.97 ms and -5.89 ms) are consistent between healthy subjects and cardiac patients and are unlikely to have significant clinical impact. The Dice scores we obtained are comparable to previous works \cite{fahmy2019automated}. \\ Quality control techniques are essential to be able to translate deep learning algorithms into a clinical setting.
However, many works proposed for the analysis of CMR data have not taken this need into account, making it impossible to deploy them for the processing of large-scale databases. In our framework, we employed a novel approach that uses the uncertainty information produced by a CNN with Bayesian inference to identify incorrect segmentations, which can be rejected or flagged for revision by an expert cardiologist. We show that this QC process yields over 90\% sensitivity in detecting errors, approaching clinical standards. One way to further optimise the QC in our network would be to incorporate anatomical information into the uncertainty estimation. At the moment, segmentation errors of the RV and LV are accounted for similarly, while the segmentation of the LV is the most important for T\textsubscript{1} calculations. \subsection*{Reference values:} We used our framework to analyse native T\textsubscript{1} maps in an unprecedentedly large cohort of healthy volunteers and patients with heart disease. Using this cohort we were able to provide reference values for normal myocardium in ageing subjects. T\textsubscript{1} values differ between scanners, vendors and protocols. Our values can therefore not be compared directly to other publications, but they are in agreement with results obtained in smaller, manually assessed cohorts \cite{reiter2013normal, ferreira2014myocardial, piechnik2013normal}. We further used our cohort to interrogate T\textsubscript{1} values in patients with 11 different CVDs. We show that for CVDs in which diffuse myocardial disease is prominent (acute myocarditis, cardiac sarcoidosis, HCM, DCM and takotsubo cardiomyopathy) we detect significantly higher T\textsubscript{1} values. For the other CVDs, T\textsubscript{1} values did not change significantly from healthy data. It is likely that the extent of myocardial damage in these groups is smaller, explaining the lower T\textsubscript{1} values. There is large variability in native T\textsubscript{1} values, which is known to be caused by several factors, including intracardiac blood flow and hematocrit levels. We investigated a previously proposed method for correction of T\textsubscript{1} values based on blood pool T\textsubscript{1} dynamics \cite{nickander2017blood}. This method has the benefit of working from image data alone, but had not previously been tested in a large cohort of patients. We demonstrate that discrimination between health and disease indeed improves using blood pool correction. Whether this technique is better than hematocrit correction or other methods needs to be investigated in further studies. Investigating myocardial disease processes should include measures additional to native T\textsubscript{1}, such as extracellular volume or T\textsubscript{2} imaging. However, the data presented in our study remain valuable. The UK Biobank cohort will contain highly detailed imaging and non-imaging data and follow-up in nearly half a million subjects. On this large scale, new relationships between T\textsubscript{1} and population characteristics can yield important insights into the development and progression of CVDs. \subsection*{Limitations and future directions:} A limitation of our work is that the PHiSeg network was trained on a single dataset, the UK Biobank dataset, which is relatively homogeneous. To obtain similar performance on other databases it would be necessary to retrain the networks using a small amount of data from those databases.
However, the proposed segmentation model and QC steps will remain applicable.\\ Another limitation of this study is the lack of paired LGE and native T\textsubscript{1}-mapping data to assess the correlation between these two measurements, and to determine in which cases T\textsubscript{1}-mapping could provide better insight into cardiac pathologies. Based on previous studies, it is known that T\textsubscript{1}-mapping may enable detection of early pathological processes, and serve as a tool for early diagnosis or screening, or for differentiation of cardiomyopathies from normal phenotypes. The provided reference ranges could help to identify subjects at risk at an early stage. \\ The T\textsubscript{1} values presented in this study were derived using a single T\textsubscript{1}-mapping technique. It is important to take into account that, even within the same T\textsubscript{1}-mapping technique, different versions of sequences can lead to small differences in T\textsubscript{1} estimations. Therefore, it might not be possible to directly translate the T\textsubscript{1} values derived in this study to other T\textsubscript{1}-mapping techniques. In future work we aim to extend the automatic pipeline to accurately segment T\textsubscript{1}-mapping data from different sequences/vendors, to make the proposed framework generalisable.\\ \section*{Conclusions} We presented and validated a pipeline for automated quantification of myocardial tissue from native T\textsubscript{1}-mapping. The proposed method uses the uncertainty of a deep learning segmentation network in a novel QC process to detect inaccurate segmentations. We used the proposed framework to obtain reference values from the largest cohort of subjects to date, which includes data from healthy subjects and from patients with the most common myocardial tissue conditions. \section*{List of abbreviations} \begin{abbrv} \item[AF] Atrial fibrillation \item[ANOVA] Analysis of variance \item[AS] Aortic stenosis \item[BMI] Body mass index \item[BPM] Beats per minute \item[BSA] Body surface area \item[CAD] Coronary artery disease \item[CMR] Cardiac magnetic resonance \item[CNN] Convolutional neural network \item[CVAE] Conditional variational autoencoder \item[CVD] Cardiovascular disease \item[DCM] Dilated cardiomyopathy \item[ELBO] Evidence lower bound \item[HCM] Hypertrophic cardiomyopathy \item[iLVEDV] Indexed LV end diastolic volume \item[iLVM] Indexed LV mass \item[LGE] Late gadolinium enhancement \item[LV] Left ventricle \item[LVEF] LV ejection fraction \item[LVIVS] Left ventricle interventricular septum \item[LVFW] Left ventricle free-wall \item[QC] Quality control \item[PHiSeg] Probabilistic Hierarchical Segmentation \item[ReLU] Rectified linear unit \item[ROI] Regions of interest \item[RV] Right ventricle \item[SAX] Short axis \item[SD] Standard deviation \end{abbrv} \begin{backmatter} \section*{Ethics declarations} \subsection*{Ethics approval and consent to participate} UK Biobank has approval from the North West Research Ethics Committee (REC reference: 11/NW/0382). \subsection*{Consent for publication} Not applicable. \subsection*{Competing interests} MS is an employee of HeartFlow, Redwood City, California. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose. \section*{Acknowledgements} This research has been conducted mainly using the UK Biobank Resource under Application Number 17806. The authors wish to thank all UK Biobank participants and staff.
This research has been conducted using a GPU generously donated by NVIDIA Corporation. \subsection*{Funding} This work was supported by the Wellcome EPSRC Centre for Medical Engineering at King's College London (WT 203148/Z/16/Z), the EPSRC (EP/R005516/1 and EP/P001009/1) and by the NIHR Cardiovascular MedTech Co-operative. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, the EPSRC, or the Department of Health. \subsection*{Availability of data and materials} The imaging data and manual annotations were provided by the UK Biobank Resource under Application Number 17806. Researchers can apply to use the UK Biobank data resource for health-related research in the public interest \cite{UKBB}. \section*{Authors' contributions} Author contributions are as follows: conception and study design (EPA, BR, RR and APK); development of algorithms and analysis software (EPA, CFB, EK and MS); data pre-processing (EPA and MS); data analysis (EPA and BR); clinical advice (BR and RR); manual image annotation (BR); interpretation of data and results (EPA, BR, RR and APK); drafting (EPA, BR and APK); revising (EPA, BR, CFB, EK, MS, RR and APK). All authors read and approved the final manuscript. \bibliographystyle{bmc-mathphys}
\section*{Acknowledgment} The authors would like to thank the Apollo Quality Assurance teams for designing validation cases in our simulation environment, and the Apollo Car Operations Team for algorithm validation on an autonomous vehicle. \bibliographystyle{IEEEtran}
\section{Problem Statement} \label{sec:problem_statement} For an autonomous driving vehicle, it is important to make correct predictions of the future movements of its surrounding obstacles, especially the surrounding vehicles in urban driving scenarios. More precise prediction helps the vehicle make more appropriate decisions, reducing the risk of collisions and improving comfort. To solve this problem, we are given the historical states and the surrounding environment of each vehicle. A state includes position, velocity, heading, and shape. The surrounding environment includes lanes, road boundaries, intersections, crosswalks and nearby dynamic obstacles. The prediction model outputs a predicted trajectory, a sequence of positions representing the obstacle's future movement. \section{Conclusion} \label{sec:conclusion} In this paper, we present a complete data-driven prediction architecture including both onboard and offboard parts. We show with two examples how the direct and indirect prediction generation methods can benefit from automatic data annotation, the training process and hyper-parameter tuning to reduce the deployment effort across different scenarios. \section{Introduction} \subsection{Motivation} \label{subsec:motivation} In order to achieve fully autonomous driving in complex scenarios, comprehensive understanding and accurate prediction of driving entities' future states are crucial. Researchers from both academia and industry have extensively studied prediction techniques. However, researchers have rarely discussed the challenges of scaling prediction models to different scenarios and entity types in real-world applications. The Apollo platform~\cite{apollo}, on the other hand, faces the challenge of running in different scenarios across nations and on different vehicle platforms~\cite{apollo_2019}. A main motivation of this work is therefore a combined onboard-offboard, end-to-end prediction pipeline that helps scale autonomous vehicle testing and operation to different scenarios, different traffic rules, and different vehicle platforms with enhanced efficiency and reduced cost. \subsection{Related Work} \label{subsection:related_work} The prediction problem aims to estimate entities' future states (position, heading, velocity, acceleration, etc.) over the next few seconds, and can usually be solved via two different approaches. One is~\emph{direct trajectory prediction}: the model directly outputs the entities' future trajectories in discrete point format.
The other is~\emph{intention prediction + post trajectory generation}: the model only outputs entities' intentions with probabilities (lane change/no change, intersection exit lane, roundabout exit/no exit), followed by a sampling/optimization based trajectory generator. \subsubsection{Prediction of intention + post trajectory generation} \label{subsection:indirection_prediction} Early works in ADV prediction usually use the Kalman filter (KF) and its alternatives to estimate and propagate an entity's future states~\cite{cosgun2017towards}, and Gaussian Processes (GP) for human dynamics modeling~\cite{wang2007gaussian}. While these approaches usually work well over simple, short-term horizons, they generally fail to encode environment context (such as road topology and traffic rules) and thus degrade in performance in complex environments. Alternatively, some works formulate this problem with a Partially-Observable Markov Decision Process (POMDP)~\cite{kitani2012activity} or a Hidden Markov Model (HMM)~\cite{deo2018would} followed by inverse optimal control. Recently, researchers have also tried combining RNN-based high-level policy anticipation with low-level non-linear optimization based trajectory generation~\cite{8793568}. We call these \emph{indirect prediction trajectory generation} in the following sections. \subsubsection{Prediction algorithms with direct trajectory generation} \label{subsection:prediction_algorithm} The prediction models above usually model ego-environment behavior under the assumption that obstacles behave independently of each other. Inspired by the successful applications of deep learning in computer vision and natural language processing, different deep learning approaches have been proposed to model both the ego-environment and obstacle-obstacle interactions within the environment. Researchers either implicitly model the environment and these interactions via encoding, as~\cite{bansal2018chauffeurnet} and~\cite{chou2019predicting} do, or explicitly model the social interactions between obstacles by adding both spatial and temporal information to LSTM-based deep learning model structures~\cite{alahi2016social}\cite{vemula2018social}\cite{mohamed2020social}. Multi-modal predicted trajectories and their probabilities can also be obtained with softmax normalization~\cite{chai2019multipath}, Variational Auto-Encoders (VAE)~\cite{hong2019rules}, or generative adversarial approaches~\cite{li2019coordination}. We call these \emph{direct prediction trajectory generation} in the following sections. \subsection{Contributions} Our main contributions are as follows: \begin{enumerate} \item \textbf{A data-driven prediction architecture supporting large-scale operation}: This includes two major parts. \begin{enumerate} \item \emph{Onboard part}: the part that actually runs on the ADV, comprising three major components: Message Pre-Processing, Model Inference and Trajectory Post-Processing. \item \emph{Offboard part}: the part that does not run on the ADV but instead runs in the data pipeline, comprising five components: Automatic Data Annotation, Feature Extraction, Model Training, Hyper-Parameter Auto-Tuning and Result Evaluation.
\end{enumerate} \item \textbf{Indirect/direct prediction trajectory generation support}: We use two models to show that our pipeline architecture is flexible and fully automatic, supporting and accelerating scenario adaptation for both indirect and direct prediction generation. In Section~\ref{sec:semantic_lstm}, we use~\emph{semantic map + LSTM} to show a complete pipeline for direct trajectory prediction through automatic data annotation and model structure improvement without any human intervention, yet achieving performance similar to a model trained on manually annotated data. In Section~\ref{sec:intention_prediction_and_post_trajectory_generation} we use~\emph{intersection exit prediction model + Siamese auto-tuning + post trajectory generation} as an example of the indirect trajectory prediction pipeline. Siamese auto-tuning increases efficiency by roughly 400\%, avoiding the manual parameter tuning process. \item \textbf{Real-world application and open capability}: Following extensive onboard and offboard testing, this system has been deployed to several fleets of self-driving vehicles of different types in both China and the US. The Apollo platform may open the prediction data pipeline and model training service, as we have done for other services. \end{enumerate} This paper is organized as follows: Section~\ref{sec:prediction_architecture} introduces the onboard and offboard prediction components, Sections~\ref{sec:semantic_lstm} and~\ref{sec:intention_prediction_and_post_trajectory_generation} give two examples of the direct and indirect prediction trajectory generation processes and how they are improved by our pipeline, and Section~\ref{sec:conclusion} gives concluding remarks and future work. \section{Intention Prediction and Post Trajectory Generation} \label{sec:intention_prediction_and_post_trajectory_generation} We support different intention prediction models in the Apollo platform, and in this section we use the intersection MLP model (shown in Figure~\ref{fig:siamese}) as an example of how scenario adaptation can be done efficiently in our prediction pipeline. \label{subsec:siamese_network} \begin{figure*}[!htb] \centering \includegraphics[width=12cm, height=10cm]{images/siamese_new.png} \caption{Workflow of intention prediction and post trajectory generation.} \label{fig:siamese} \end{figure*} \subsection{MLP based Intention Prediction Model for Intersection Exit} \label{subsec:intention_prediction} The upper right part of Figure \ref{fig:siamese} shows the intersection exit model for the red vehicle. The model takes the obstacle's historical states (positions, headings, velocities) and all intersection exits' features (positions and headings) as inputs, and outputs three intentions (going straight, turning left and turning right, with probabilities of 0.4, 0.4 and 0.2), which serve as priors in the post trajectory generation stage. \subsection{Post Trajectory Generation} \label{subsec:post_trajectory_generation} Once we get the intention output, the~\emph{post trajectory generation} submodule completes the prediction trajectories using a lattice-like sampling method: \begin{enumerate} \item Generate prediction paths for each intention via lane sequence search. \item Along each path, sample different temporal profiles within the vehicle's physical limits. \item Combine paths and temporal profiles to generate trajectories, and select the best trajectory as output according to the trajectory posterior distribution defined in Equation~\ref{eq:posterior}.
\end{enumerate} Equation~\ref{eq:posterior} gives the trajectory posterior calculation, where the~\emph{prior} is the output of the MLP model, $Z$ is the normalization factor and $C$ is the total trajectory cost calculated in Equation~\ref{eq:cost_total}. \begin{equation} \textrm{Posterior} = \frac{1}{Z} \cdot \textrm{prior} \cdot e^{-C} \label{eq:posterior} \end{equation} The total cost in Equation~\ref{eq:cost_total} is a weighted sum of costs from different trajectory evaluation metrics, such as acceleration, centripetal acceleration and collision cost. $Z_1$ and $Z_2$ are normalization terms, and $d_i$ is the point-wise distance error between the ego vehicle's previously planned trajectory and the obstacle's predicted position. Note that $\theta_1$ to $\theta_3$ in this equation are weights reflecting the driving patterns of road entities, and these differ dramatically across geo-fenced areas and operation times. \begin{subequations} \label{eq:cost_total} \begin{align} & C = \theta_1 C_{acc} + \theta_2 C_{centripetal\_acc} + \theta_3 C_{collision} \\ & \text{where: } \\ & \hspace{2.0em} C_{acc} = \sum\limits_{i}a_i^2 \label{cost:acc}, \\ & \hspace{2.0em} C_{centripetal\_acc} = \frac{1}{Z_1}\sum\limits_{i}(v_i^2\kappa_i)^2 \label{cost:centripetal_acc}, \\ & \hspace{2.0em} C_{collision} = \frac{1}{Z_2}\sum\limits_{i}e^{-d_i^2} \end{align} \end{subequations} \subsection{Siamese Network for Efficiency Improvement} Manually tuning the weights in Equation~\ref{eq:cost_total} is inefficient and makes large-scale deployment impractical. To support fleet deployment at scale in different geo-fenced areas, we introduce an auto-tuning submodule to automatically find the optimal weights for different scenarios. We use a Siamese network here as an example, but IRL- or Bayesian-based approaches could be used as well. The key point is that these methods share the basic assumption that human trajectories (excluding unsafe ones) are optimal in a statistical manner. The inputs of the Siamese network are the sub-costs of the sampled trajectories and of the ground-truth real trajectories. After training the network, we use the learned weights for the different costs in post trajectory generation. We use $\xi$ to denote a sampled candidate trajectory, and $\hat\xi$ to denote a ground-truth real trajectory, and write their cost functions as $C(\theta;\xi)$ and $C(\theta;\hat\xi)$. The objective function for the Siamese network is designed as in Equation~\eqref{eq:objective}: \begin{equation} L = \sum\limits_{i=0}^{N}\sum\limits_{j=0}^M|C(\theta;\hat\xi_i)-C(\theta;\xi_j) + \delta|_+ \label{eq:objective} \end{equation} where $|\cdot|_+$ is the maximum of its argument and 0, $N$ is the number of data points (an obstacle at a timestamp), $M$ is the number of sampled candidate trajectories for each data point, and $\delta$ is a marginal factor, a small positive constant. We solve the optimization problem of minimizing the objective function defined in Equation~\eqref{eq:objective}. The marginal constant $\delta$ is necessary to avoid obtaining all-zero weights. Also, the relatively small value of $\delta$ suppresses the effect of sampled trajectories with costs much larger than the ground-truth trajectory cost, in which case $|C(\theta;\hat\xi_i)-C(\theta;\xi_j) + \delta|_+ = 0$. Once we get the optimal weights, the onboard~\emph{post trajectory generation} submodule can use them to rank and choose the optimal predicted trajectories for different geo-fenced areas with different driving patterns, traffic rules, etc.
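As a minimal sketch of how the tuned weights enter the onboard ranking (function and variable names are hypothetical; the per-trajectory sub-costs of Equation~\ref{eq:cost_total} are assumed precomputed):
\begin{verbatim}
import numpy as np

# Hypothetical sketch: rank sampled candidate trajectories by the
# posterior = prior * exp(-C) / Z, with C a weighted sum of sub-costs.
# sub_costs is an (M, 3) array of [C_acc, C_centripetal_acc,
# C_collision] per candidate, priors is an (M,) array of intention
# priors, theta is the (3,) auto-tuned weight vector.
def rank_trajectories(sub_costs, priors, theta):
    total_cost = sub_costs @ theta                 # C = sum_k theta_k * C_k
    unnormalised = priors * np.exp(-total_cost)    # prior * exp(-C)
    posterior = unnormalised / unnormalised.sum()  # divide by Z
    return int(np.argmax(posterior))               # index of best candidate
\end{verbatim}
The same sub-cost vectors, computed for both ground-truth and sampled trajectories, are what the Siamese objective above consumes during offboard tuning.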
We show a roughly 400\% efficiency increase compared with manual parameter tuning when deployed in a new geo-fenced area. \section{Prediction Module Architecture in Apollo Platform} \label{sec:prediction_architecture} This section introduces the architecture of the prediction module in the Apollo platform, including both the onboard (on-vehicle) components and the offboard (data pipeline) components. \subsection{Onboard Architecture} \label{sec:onboard_architecture} In this section, we introduce the onboard architecture of the prediction module on the Apollo autonomous driving open-source platform. This architecture supports multiple obstacle categories, multiple scenarios and multiple levels of obstacle attention priority. The structure of the onboard workflow is shown in Figure \ref{fig:online_architecture}. \begin{figure*}[!htb] \centering \includegraphics[width=16cm, height=10cm]{images/onboard_architecture_new.png} \caption{Onboard architecture of the prediction module on the Apollo autonomous driving open-source platform.} \label{fig:online_architecture} \end{figure*} \subsubsection{Message Pre-Processing} \label{subsec:message_preprocessing} As shown in Figure~\ref{fig:online_architecture}, \emph{message pre-processing} has two objectives: \begin{itemize} \item Merge localization/perception output to prepare the environment context for the machine learning models, and dump this information for future training/evaluation purposes. \item Perform scenario selection (intersection or regular road) and obstacle prioritization (caution or normal) based on the environment context and previously planned trajectories. \end{itemize} \subsubsection{Model Inference} \label{subsec:model_inference} Once the scenario, obstacle type and priority have been determined in~\emph{message pre-processing}, this sub-module selects the corresponding model for inference. We take pedestrians and vehicles as examples. For a pedestrian, we apply a social-attention model to predict the pedestrian's future trajectory for the next four seconds. For a vehicle, if it is prioritized at the caution level, we apply a \emph{semantic map + LSTM} model with direct prediction trajectory output and uncertainty information. If it is prioritized at the normal level, we apply the~\emph{lane sequence model} or the~\emph{intersection exit model} to predict the vehicle's intention, such as which lane or exit the obstacle vehicle will choose next, as shown in Figure~\ref{fig:online_architecture}. \subsubsection{Trajectory Generation or Trajectory Extension} \label{subsec:trajectory_generation_and_extension} This sub-module serves as model post-processing, with two main objectives: \begin{itemize} \item For pedestrians or caution-prioritized vehicles, we~\emph{extend} the directly generated trajectories up to 8\,s with a KF under different kinodynamic models. \item For normal-prioritized vehicles, we take the predicted intention and~\emph{generate} the 8\,s prediction trajectories via a sampling-based method whose costs are auto-tuned for different scenarios. More details are discussed in Section~\ref{sec:intention_prediction_and_post_trajectory_generation}.
\subsection{Offboard Architecture} \label{sec:offboard_architecture} \begin{figure*}[!htb] \centering \includegraphics[width=16cm, height=10cm]{images/offboard_architecture_new.png} \caption{Offboard architecture of the prediction module on Apollo autonomous driving open-source platform.} \label{fig:offline_architecture} \end{figure*} In this section, we introduce the offboard architecture that supports data processing and model training. As shown in Figure \ref{fig:offline_architecture}, this architecture includes 1) automatic data annotation, 2) feature extraction and model training, 3) auto tuning and 4) results evaluation. \subsubsection{Automatic Data Annotation} \label{subsec:data_labeling} This sub-module takes the timestamped perception and localization information, map topology information and traffic rules dumped from onboard Database 1 to generate ``ground truth'' labels. Depending on the needs of the specific task/ML model, the annotation labels can include, but are not limited to: future positions, lane sequence labels and intersection exit labels. \subsubsection{Feature Extraction and Model Training} \label{subsec:feature_extraction} This sub-module takes the features for different models from Database 2, keyed by the combination of road test ID, obstacle ID and timestamp. We also use a sub-key to distinguish the features for different models, including intersection features, semantic maps, lane sequence features and pedestrian historical movement features. This key-value system makes feature querying and combination easier for different model training tasks. \subsubsection{Auto Tuning} \label{subsec:auto_tuning} The auto-tuning sub-module is only applicable to~\emph{indirect prediction trajectory generation}, where, after we get the intention prediction results, we use a graphical search, curve fitting or optimization based method to generate the prediction trajectory. In all three methods, cost tuning is needed for different scenarios. This cost tuning can range from simple logistic regression to an Inverse Reinforcement Learning (IRL) framework~\cite{kober2013reinforcement} or Bayesian based methods~\cite{neumann2019data}; in Section~\ref{sec:intention_prediction_and_post_trajectory_generation}, we use a Siamese network as an example to search for the optimal costs in post trajectory generation. \subsubsection{Results Evaluation} \label{subsec:result_evaluation} The results evaluation task measures how accurate an obstacle's predicted trajectories are compared with its actual future trajectory. Common trajectory evaluation metrics include Average Displacement Error (ADE) and Final Displacement Error (FDE); other evaluation metrics include Dynamic Time Warping (DTW)~\cite{senin2008dynamic}, CLEAR-MOTA~\cite{bernardin2008evaluating} and the Weighted Brier Score~\cite{zhan2018towards}. In the following sections, we use ADE and FDE for result comparison with peers. \section{Prediction Model based on Semantic Map and LSTM} \label{sec:semantic_lstm} In this section, we use our semantic map encoding + LSTM as an example to show the efficiency increase for a typical direct prediction trajectory generation workflow, including both the offboard pipeline and the onboard components, such that: \begin{enumerate} \item \emph{Efficiency improvement}: We give an example usage of the completely automatic pipeline for semantic map encoding, avoiding any human intervention and thus supporting easy scenario extension and scaled ADV fleet deployment.
\item \emph{Comparable performance}: We show that, through prediction model structure redesign, we achieve performance comparable to the state of the art, as shown in Table~\ref{table:model_comparison}. \end{enumerate} \subsection{Automatic Annotation and Semantic Map Encoding} \label{subsec:semantic_map} We use a semantic map similar to~\cite{DBLP:journals/corr/abs-1808-05819} to encode the dynamic context of the environment and its past history, as shown in Figure~\ref{fig:semantic_map_target}. The semantic map encodes a 40$\times$40 square meter environment (including all road entities, map topology and traffic rules) into a 400$\times$400 pixel image centered on the target vehicle. The target vehicle is marked red, while other entities (vehicles, bicycles, pedestrians) are marked yellow, with their historical trajectories marked in a darker color. Note that the ``target vehicle'' here is not necessarily our own ADV; it can also be an obstacle vehicle on the road. We use timestamped perception results as automatic annotation for all road entities' histories and create a semantic map for each vehicle. \begin{figure}[!htb] \centering \includegraphics[width=4cm, height=4cm]{images/semantic_map_target.png} \caption{Semantic map for the target vehicle.} \label{fig:semantic_map_target} \end{figure} \subsection{Model Structure and Loss Function} \label{subsec:semantic_lstm_method} \begin{figure*}[!htb] \centering \includegraphics[width=14cm, height=6cm]{images/model_structure.png} \caption{Structure of semantic map + LSTM model.} \label{fig:model_structure} \end{figure*} The model combines an LSTM, a CNN and an MLP. The input of the CNN part is the semantic map described in Section \ref{subsec:semantic_map}, and its output is a feature vector. In the final version of the model, MobileNet-v2 is chosen as the CNN part, and the output is the flattened vector from the second-to-last layer. The LSTM sequence consists of a historical part and a prediction part. In the historical part, we embed the coordinates relative to the current position and then use the embedded feature to update the LSTM hidden state. In the prediction part, we concatenate the LSTM hidden state and the output feature vector of the CNN, then pass the concatenation into an MLP to get a predicted position relative to the current position. We then use the predicted position to compute the next embedding feature, updating the LSTM hidden state, which is used to predict the next future relative position. The procedure is shown in Figure \ref{fig:model_structure}. We continue this procedure until we get 30 future relative positions, which stand for a 3-second predicted trajectory since the time resolution of the predicted trajectory is 0.1 second. We have two loss functions. One is the mean squared error (MSE) described in Equation \eqref{eq:mse}. \begin{equation} \text{MSE} = \frac{1}{N}\sum\limits_{i = 1}^{N} [(x_i^{pred} - x_i^{true})^2 + (y_i^{pred} - y_i^{true})^2] \label{eq:mse} \end{equation} where $N$ is the number of predicted trajectory points. With the MSE loss, the model outputs trajectory points for three seconds. The other loss function is the negative log likelihood (NLL) based on a bivariate Gaussian distribution, as in Equation \eqref{eq:gaussian_loss} \begin{equation} \text{Loss} = -\frac{1}{N}\sum\limits_{i = 1}^{N}\log{P} \label{eq:gaussian_loss} \end{equation} where $P$ is a bivariate probability density function with mean $(\mu_x, \mu_y)$ and covariance matrix $\begin{pmatrix} \sigma_x^2 & \rho\sigma_x\sigma_y\\ \rho\sigma_x\sigma_y & \sigma_y^2 \end{pmatrix}$. With the NLL loss, the model outputs the Gaussian distributions of the future trajectory points for three seconds.
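The decode loop and the bivariate-Gaussian NLL can be written down compactly. The sketch below is a minimal PyTorch rendering under our own assumptions (tensor shapes, layer sizes and the 1280-dimensional MobileNet-v2 feature are illustrative, not the production configuration):
\begin{verbatim}
import math
import torch
import torch.nn as nn

class SemanticMapLSTM(nn.Module):
    def __init__(self, cnn_dim=1280, embed_dim=64, hidden_dim=128, horizon=30):
        super().__init__()
        self.hidden_dim, self.horizon = hidden_dim, horizon  # 30 x 0.1 s = 3 s
        self.embed = nn.Linear(2, embed_dim)         # embed relative (x, y)
        self.cell = nn.LSTMCell(embed_dim, hidden_dim)
        self.head = nn.Sequential(                   # MLP decoding head
            nn.Linear(hidden_dim + cnn_dim, 128), nn.ReLU(),
            nn.Linear(128, 5))        # mu_x, mu_y, log s_x, log s_y, rho

    def forward(self, history, cnn_feat):            # history: (B, T, 2)
        h = history.new_zeros(history.size(0), self.hidden_dim)
        c = torch.zeros_like(h)
        for t in range(history.size(1)):             # encode observed history
            h, c = self.cell(self.embed(history[:, t]), (h, c))
        out, pos = [], history[:, -1]
        for _ in range(self.horizon):                # autoregressive decode
            params = self.head(torch.cat([h, cnn_feat], dim=-1))
            out.append(params)
            pos = params[:, :2]                      # feed mean back as input
            h, c = self.cell(self.embed(pos), (h, c))
        return torch.stack(out, dim=1)               # (B, horizon, 5)

def bivariate_nll(params, target):                   # the NLL loss above
    mu, log_sx, log_sy = params[..., :2], params[..., 2], params[..., 3]
    rho = torch.tanh(params[..., 4])                 # keep rho in (-1, 1)
    dx = (target[..., 0] - mu[..., 0]) / log_sx.exp()
    dy = (target[..., 1] - mu[..., 1]) / log_sy.exp()
    z = dx**2 + dy**2 - 2 * rho * dx * dy
    log_p = -(z / (2 * (1 - rho**2)) + log_sx + log_sy
              + 0.5 * torch.log(1 - rho**2) + math.log(2 * math.pi))
    return -log_p.mean()
\end{verbatim}
With the MSE variant, the head would output two values per step and the loss would reduce to Equation~\eqref{eq:mse}.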
The model is trained on 1000 hours of urban driving traffic data collected by Lincoln MKZs equipped with Velodyne HDL-64E LiDARs. The software used for obstacle detection and localization was Apollo 5.0 (https://github.com/ApolloAuto/apollo/tree/r5.0.0). For each surrounding vehicle, we crop a small image which stands for its local area, as shown in Figure \ref{fig:crop}. In each small image, the corresponding obstacle's heading points upward, and its current and historical polygons are marked in red. This small image is the input of the CNN part of the model shown in Figure \ref{fig:model_structure}. \subsection{Evaluation and Result Comparison} \label{subsec:semantic_evaluation} We compared our model performance with the state of the art from industry and show the results in Table \ref{table:model_comparison}. We compared the results of our models with peers' on the auto-annotated Apollo dataset and showed that our model outperformed them in ADE and FDE for both 1s and 3s, and achieved a similar state-of-the-art 3s ADE result (0.77m vs 0.71m) relative to the peer's best performance on their internal dataset. Due to commercial restrictions, we can only selectively present results in Sunnyvale, CA and San Mateo, CA as a demonstration that our model and pipeline achieve similar results regardless of different driving patterns; the system performance has, however, been validated in different geo-fenced areas across countries (China and the United States). We also investigated the robustness of the \emph{semantic map + LSTM + uncertainty} model in different environments. Table \ref{table:semantic_lstm_stability} shows robust performance across different maneuvers, including going straight, turning left, turning right and changing lanes.
\begin{table*}[!htb] \centering \begin{tabular}{lclccccccc} \hline \hline & & & \multicolumn{4}{c}{Apollo Data} & \multicolumn{3}{c}{Others' Internal Data} \\ Team & Scenario & Model & ADE(1s) & FDE(1s) & ADE(3s) & FDE(3s) & ADE(1s) & ADE(3s) & ADE(5s) \\ \hline \multirow{6}{*}{Apollo} & \multirow{3}{*}{Sunnyvale} & LSTM & 0.26m & 0.48m & 1.33m & 3.34m & - & - & - \\ & & Semantic map + LSTM & 0.23m & \textbf{0.37m} & \textbf{0.77m} & \textbf{1.85m} & - & - & - \\ & & Semantic map + LSTM + uncertainty & \textbf{0.22m} & 0.38m & 0.79m & 1.93m & - & - & - \\ \cline{2-10} & \multirow{3}{*}{San Mateo} & LSTM & 0.26m & 0.51m & 1.35m & 3.41m & - & - & - \\ & & Semantic map + LSTM & 0.24m & \textbf{0.39m} & \textbf{0.79m} & \textbf{1.91m} & - & - & - \\ & & Semantic map + LSTM + uncertainty & \textbf{0.21m} & 0.40m & 0.80m & 1.98m & - & - & - \\ \hline Uber & - & Semantic map + MLP \cite{DBLP:journals/corr/abs-1808-05819} & 0.29m & 0.51m & 0.97m & 2.33m & - & \textbf{0.71m} & - \\ ZooX & - & Semantic map + GMM + CVAE \cite{DBLP:journals/corr/abs-1906-08945} & - & - & - & - & 0.44m & - & 2.99m \\ \hline\hline \end{tabular} \caption{Model performance comparison.} \label{table:model_comparison} \end{table*} \begin{table}[!htb] \begin{tabular}{lllll} \hline \hline Behavior & ADE(1s) & FDE(1s) & ADE(3s) & FDE(3s) \\ \hline Straight & 0.229m & 0.371m & 0.776m & 1.894m \\ Turn Left & 0.248m & 0.385m & 0.744m & 1.718m \\ Turn Right & 0.299m & 0.432m & 0.867m & 2.049m \\ Change Lane & 0.261m & 0.412m & 0.787m & 1.813m \\ \hline\hline \end{tabular} \caption{Stability analysis of semantic map + LSTM model.} \label{table:semantic_lstm_stability} \end{table} \begin{figure}[!htb] \centering \includegraphics[width=0.45\textwidth]{images/crop.png} \caption{Process to crop a small image for each obstacle.} \label{fig:crop} \end{figure} \section{Post Trajectory Generation} \label{sec:post_trajectory_generation} This section introduces a data-driven, learning-based method to tune the parameters of the trajectory generation stage in the prediction module. This method takes the predicted paths with their probabilities from the deep learning models as priors; then, for each path, we sample candidate predicted trajectories based on different speed profiles. We assign cost values to all candidate trajectories and compute the likelihood of the associated path from the smallest cost value. The candidate trajectory with that smallest cost value is the predicted trajectory for its associated path. The posterior of each path with its predicted trajectory is computed from the prior and the likelihood. The cost of each candidate trajectory is computed as the weighted sum of three cost functions, and we apply a Siamese network to tune the weights from real-world road test data. The following subsections introduce the details of the method. \subsection{Prior} \label{subsec:prior} In Apollo, we have several deep learning behavior prediction models. One predicts the lane sequences the target obstacle will follow. Another predicts the intersection exits for vehicles running within the area of an intersection. Both models have the same output format: possible future paths with their probabilities. As shown in Figure \ref{fig:autotuning_example}(a), the red vehicle is the target vehicle for which we want to make a prediction, and the blue vehicle is the autonomous driving ego vehicle. The deep learning behavioral model outputs three possible paths with their probabilities, shown as the three red arrows.
Their probabilities are 0.2, 0.4 and 0.4 for turning right, going straight and turning left, respectively. For each path, its probability from the deep learning behavioral model is used as the prior in this post-processing method. \begin{figure} \centering \includegraphics[width=8cm, height=8cm]{images/autotuning_example.png} \caption{The procedure of trajectory posterior processing.} \label{fig:autotuning_example} \end{figure} \subsection{Sampling} \label{subsec:sampling} To get candidate trajectories with different speed profiles, we sample multiple acceleration values and generate trajectories according to the sampled accelerations under constant-acceleration motion. Each sampled trajectory goes along the central curve of the associated path. Figure \ref{fig:autotuning_example}(b) illustrates the sampling process. \subsection{Cost Functions} \label{subsec:cost_functions} For each candidate trajectory, we designed three cost functions related to longitudinal acceleration, centripetal acceleration and collision with the ego vehicle. They represent longitudinal comfort, lateral comfort and safety, respectively. The formulas of these three cost functions are shown in Eq.~\eqref{cost:acc2} to Eq.~\eqref{cost:collision}. \begin{eqnarray} C_{acc} &=& \sum\limits_{i}a_i^2 \label{cost:acc2} \\ C_{centripetal\_acc} &=& \frac{1}{z_1}\sum\limits_{i}(v_i^2\kappa_i)^2 \label{cost:centripetal_acc2} \\ C_{collision} &=& \frac{1}{z_2}\sum\limits_{i}e^{-d_i^2} \label{cost:collision} \end{eqnarray} where $z_1$ and $z_2$ are normalization terms, and $d_i$ in Eq.~\eqref{cost:collision} is the distance between the predicted trajectory point and the ego vehicle at the timestamp of the $i$th predicted trajectory point. The total cost of a candidate trajectory, shown in Eq.~\eqref{cost:total}, is the weighted sum of Eq.~\eqref{cost:acc2}, Eq.~\eqref{cost:centripetal_acc2} and Eq.~\eqref{cost:collision}. \begin{equation} C = w_1 C_{acc} + w_2 C_{centripetal\_acc} + w_3 C_{collision} \label{cost:total} \end{equation} where $w_1$, $w_2$ and $w_3$ are the weights. In Subsection \ref{subsec:siamese_network} we discuss how to tune the weights automatically from data. \subsection{Likelihood} \label{subsec:likelihood} The likelihood of a candidate trajectory can be computed by Eq.~\eqref{eq:likelihood}: \begin{equation} \textrm{likelihood} = \exp(-C) \label{eq:likelihood} \end{equation} where $C$ is the total cost described in Eq.~\eqref{cost:total}; this choice makes lower-cost candidate trajectories have higher likelihood. With the likelihood computed by Eq.~\eqref{eq:likelihood}, we can get the posterior probability by Eq.~\eqref{eq:posterior2}. \begin{equation} \textrm{Posterior} = \frac{1}{Z} \cdot \textrm{prior} \cdot \textrm{likelihood} \label{eq:posterior2} \end{equation} where $Z$ is the normalization term that makes the posteriors of an obstacle sum to one. As in the example shown in Figure \ref{fig:autotuning_example}(d), the posterior probabilities are computed to correct the prior probabilities based on the likelihood, which involves comfort and safety.
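The sampling--cost--posterior pipeline of Subsections~\ref{subsec:sampling}--\ref{subsec:likelihood} fits in a short numerical sketch. Everything below is a toy setting of our own (1-D motion along each path's central curve, hand-picked weights, made-up priors, curvatures and distances), not the production logic:
\begin{verbatim}
import numpy as np

DT, HORIZON = 0.1, 80                        # 0.1 s resolution, 8 s horizon
W = np.array([1.0, 1.0, 1.0])                # weights w1..w3 (placeholders)

def rollout(v0, acc):
    """Speed and arc-length position under constant acceleration."""
    t = DT * np.arange(1, HORIZON + 1)
    return v0 + acc * t, v0 * t + 0.5 * acc * t**2

def total_cost(acc, v, kappa, d, z1=1.0, z2=1.0):
    costs = np.array([
        HORIZON * acc**2,                    # sum of a_i^2 (a_i constant here)
        np.sum((v**2 * kappa)**2) / z1,      # centripetal acceleration cost
        np.sum(np.exp(-d**2)) / z2,          # collision cost
    ])
    return float(W @ costs)

priors = {"right": 0.2, "straight": 0.4, "left": 0.4}
curvature = {"right": 0.03, "straight": 0.0, "left": 0.03}
posterior = {}
for path, prior in priors.items():
    kappa = np.full(HORIZON, curvature[path])
    best = np.inf                            # smallest cost over speed profiles
    for acc in np.linspace(-0.5, 0.5, 9):    # sampled acceleration values
        v, s = rollout(v0=5.0, acc=acc)
        d = np.abs(s - 30.0)                 # toy distance to the ego vehicle
        best = min(best, total_cost(acc, v, kappa, d))
    posterior[path] = prior * np.exp(-best)  # prior x likelihood, unnormalized
Z = sum(posterior.values())
print({k: p / Z for k, p in posterior.items()})
\end{verbatim}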
\subsection{Siamese Network} \label{subsec:siamese_network} In this subsection, we introduce the method used to tune the weights. The assumption of the model is that the ground-truth real trajectories are the optimal ones, with the least cost values, and we try to tune the weights so that the minimal cost of the sampled candidate trajectories is close to the cost value of the ground-truth trajectory. We use $\xi$ to denote a sampled candidate trajectory, and $\hat\xi$ to denote a ground-truth real trajectory. We write the cost functions as $C(\theta;\xi)$ and $C(\theta;\hat\xi)$. The objective function is designed as Eq.~\eqref{eq:objective2}. \begin{equation} L = \sum\limits_{i=0}^{N}\sum\limits_{j=0}^M|C(\theta;\hat\xi_i)-C(\theta;\xi_j) + \delta|_+ \label{eq:objective2} \end{equation} where $|\cdot|_+$ is the maximum of $\cdot$ and 0, $N$ is the number of data points (an obstacle at a timestamp), $M$ is the number of sampled candidate trajectories for each data point, and $\delta$ is a marginal factor, a small positive constant. We solve the optimization problem of minimizing the objective function defined in Eq.~\eqref{eq:objective2}. The marginal constant $\delta$ is necessary to avoid the solution in which all weights are zero. On the other hand, it filters out the sampled candidate trajectories whose cost values are much larger than the ground-truth real trajectory's cost. To solve the optimization defined in Eq.~\eqref{eq:objective2}, we apply the neural network shown in Figure \ref{fig:siamese}. The input layer consists of the three cost differences and $\delta$, and is followed by a ReLU activation function; the labels are all zero. We train this model on the real-world road test data and apply the trained weights $w_1$, $w_2$, $w_3$ in the cost function. This autotuning mechanism learns the weights according to the local driving behavior, avoiding manual parameter tuning and better fitting the traffic in a specific area. \begin{figure*} \centering \includegraphics[width=12cm, height=4.8cm]{images/siamese.png} \caption{Siamese network for autotuning.} \label{fig:siamese} \end{figure*}
\section{Introduction} Let $n$ be a positive integer, $I=[a,b]$ a bounded segment of the real line, of length $L=b-a$. Define ${\mathcal D}^n(I)$ as the set of real functions $f$ defined on $I$, with successive derivatives $f^{(k)}$ defined and continuous on $I$ for $0 \leqslant k \leqslant n-1$, and $f^{(n)}$ defined on $\mathring{I}=\,]a,b[$. We will use the notation \[ m_n(f)=\inf_{a<t<b} \vert f^{(n)}(t)\vert. \] Let $p$ be a positive real number, or $\infty$. The problem addressed in this article is that of determining the best constant $C^*=C^*(n,p,I)$ in the inequality \[ m_n(f) \leqslant C^*\norm{f}_p \quad (f \in {\mathcal D}^n(I)), \] where \[ \norm{f}_p =\Big(\int_a^b \abs{f(t)} ^p \, dt \Big)^{1/p}, \] with the usual convention when $p=\infty$, here: $\norm{f}_{\infty}=\max \abs{f}$. \smallskip This problem has been posed by Kwong and Zettl in their 1992 Lecture Notes \cite{zbMATH00205893} (see Lemma~1.1, p. 6). They give upper bounds for $C^*(n,p,I)$, but their reasoning and results are erroneous. In her 1993 PhD Thesis \cite{MR2689365}, Huang has pointed out that this problem is equivalent to a classical problem in the theory of polynomial approximation: that of determining the minimal~$L^p$-norm of a monic polynomial of given degree on a given bounded interval. Our purpose in this text is to give a new proof of the equivalence, and to list the consequences of the known results about this extremal problem for the evaluation of~$C^*(n,p,I)$. \section{First observations} \subsection{Homogeneity} Defining $g(u)=f(a+uL)$ for $f \in {\mathcal D}^n(I)$ and $0\leqslant u \leqslant 1$, one has \[ g \in {\mathcal D}^n([0,1]) \quad ; \quad g^{(n)}(u)=L^n f^{(n)}(a+uL) \quad (0 <u<1) \quad ; \quad \norm{g}_p=L^{-1/p}\norm{f}_p. \] Hence, \begin{equation}\label{220817b} C^*(n,p,I)=C^*(n,p,[0,1])\,L^{-n-1/p}, \end{equation} and one is left with determining $C^*(n,p,[0,1])=C(n,p)$, or in fact $C^*(n,p,I)$ for any fixed, chosen segment $I$. We will see that $I=[-1,1]$ is particularly convenient. \subsection{An extremal problem} One has \begin{align*} C^*(n,p,I)&=\sup\set{m_n(f)/\norm{f}_p, \; f\in {\mathcal D}^n(I), \; m_n(f) \neq 0}\\ &=\sup\set{m_n(f)/\norm{f}_p, \; f\in {\mathcal D}^n(I), \; m_n(f)= \lambda}\expli{for every $\lambda >0$}\\ &=\lambda/D^*(n,p,\lambda, I), \end{align*} where \begin{align*} D^*(n,p,\lambda, I)&=\inf\set{\norm{f}_p, \; f\in {\mathcal D}^n(I), \; m_n(f)=\lambda}\\ &=\inf\set{\norm{f}_p, \; f\in {\mathcal D}^n(I), \; m_n(f)\geqslant\lambda}, \end{align*} the last equality being true since $D^*(n,p,\mu, I)=\frac{\mu}{\lambda} D^*(n,p,\lambda, I)\geqslant D^*(n,p,\lambda, I)$ if $\mu \geqslant \lambda$. \smallskip Also, since a derivative has the intermediate value property (cf. \cite{zbMATH02715424}, pp. 109-110), the inequality $m_n(f)\geqslant \lambda>0$ implies that $f^{(n)}$ has constant sign on $I$, so that \begin{equation*} D^*(n,p,\lambda, I)=\inf\set{\norm{f}_p, \; f\in {\mathcal D}^n(I), \; f^{(n)}(t) \geqslant\lambda \text{ for } a<t<b}. \end{equation*} Thus, determining $C^*(n,p,I)$ is equivalent to minimizing $\norm{f}_p$ for $f \in {\mathcal D}^n(I)$ with the constraint $f^{(n)}(t) \geqslant\lambda>0$ for $a<t<b$. We will denote this extremal problem by ${\mathcal E}^*(n,p,\lambda, I)$. \section{The relevance of monic polynomials} Let ${\mathcal P}_n$ be the set of monic polynomials of degree $n$, with real coefficients, identified with the set of the corresponding polynomial functions on $I$, which is a subset of ${\mathcal D}^n(I)$. 
Since $m_n(f)=n!$ for $f \in {\mathcal P}_n$, one has \begin{equation}\label{220809a} D^*(n,p,n!,I)\leqslant D^{**}(n,p,I), \end{equation} where \[ D^{**}(n,p,I)=\inf\set{\norm{Q}_p, \; Q\in {\mathcal P}_n}. \] \smallskip A basic fact in the study of the extremal problem ${\mathcal E}^*(n,p,\lambda, I)$ is that \eqref{220809a} is in fact an equality. \begin{prop}\label{220809b} For all $n,p,I$, one has $D^*(n,p,n!,I) = D^{**}(n,p,I)$. \end{prop} It follows from this proposition that $C^*(n,p,I)=n!/D^{**}(n,p,I)$ and, by \eqref{220817b}, \begin{equation}\label{220817c} C(n,p)=L^{n+1/p}n!/D^{**}(n,p,I). \end{equation} \smallskip Let us review the history of Proposition \ref{220809b}. For $p=\infty$, it is a corollary to a theorem of S. N. Bernstein from 1937. Denoting by $E_k(f)$ the distance (for the uniform norm on $I$) between $f$ and the set of polynomials of degree at most~$k$, he proved in particular that \[ E_{n-1}(f_0)>E_{n-1}(f_1) \quad (f_0,f_1 \in {\mathcal D}^n(I)), \] provided that the inequality $f_0^{(n)}(\xi) > \vert f_1^{(n)}(\xi)\vert$ is valid for every $\xi \in \mathring{I}$ (cf. \cite{bernstein1937}, p. 48, inequalities~(47bis)-(48bis)). Proposition \ref{220809b} follows by taking $f_1(x)=x^n$ and $f_0(x)=\lambda f(x)$, where $f$ is a generic element of~${\mathcal D}^n(I)$ such that $f^{(n)}(t) \geqslant n!$ for $a<t<b$, and $\lambda >1$, then letting $\lambda \rightarrow 1$. This theorem of Bernstein was generalized by Tsenov in 1951 to the case of the $L^p$-norm on~$I$, where $p\geqslant 1$ (cf. \cite{zbMATH03064306}, Theorem 4, p. 477), thus providing a proof of Proposition \ref{220809b} for $p\geqslant 1$. The case $0<p<1$ was left open by Tsenov. The study of the extremal problem ${\mathcal E}^*(n,p,\lambda, I)$ was one of the themes of the 1993 PhD thesis of Xiaoming Huang \cite{MR2689365}. In Lemma 2.0.7, pp. 9-10, she gave another proof (due to Saff) of Proposition~\ref{220809b} in the case $p=\infty$. For $1\leqslant p < \infty$, she gave a proof of Proposition~\ref{220809b} which is unfortunately incomplete (cf. \cite{MR2689365}, pp. 28-30). Again, the case $0<p<1$ was left open. \medskip We present now a self-contained proof of Proposition~\ref{220809b}, valid for $0<p\leqslant \infty$. As it proceeds by induction on $n$, we will need the following classical-looking division lemma, for which we could not locate a reference (compare with \cite{zbMATH03108045} or \cite{zbMATH03316213}). \begin{prop}\label{220811a} Let $n\geqslant 2$ and $f \in {\mathcal D}^n(I)$. Let $c \in \, [a,b]$. Put \begin{equation}\label{220811adef} g(x)= \begin{dcases} \frac{f(x)-f(c)}{x-c} & (x \in I, \, x \neq c)\\ f'(c) & (x=c). \end{dcases} \end{equation} Then $g\in {\mathcal D}^{n-1}(I)$. For every $x \in \,]a,b[\,$, one has \[ g^{(n-1)}(x)=\frac{f^{(n)}(\xi)}n\raisebox{.7mm}{,} \] where $\xi \in \,]a,b[$. \end{prop} \noindent {\bf Proof\ } Since $f'$ is continuous, one has \[ g(x)=\int_0^1f'\big(c+t(x-c) \big) \, dt \quad (x \in I). \] Using the rule of differentiation under the integral sign, one sees that $g$ is $n-2$ times differentiable on $I$, with \[ g^{(n-2)}(x)=\int_0^1t^{n-2}f^{(n-1)}\big(c+t(x-c) \big) \, dt \quad (x \in I). \] As $f^{(n-1)}$ is continuous on $I$, this formula yields the continuity of $g^{(n-2)}$ on $I$. \smallskip The function $g$ is $n$ times differentiable on $\mathring{I} \setminus\set{c}$ (this set is just $\mathring{I}$ if $c=a$ or $c=b$), being a quotient of $n$ times differentiable functions, with non-vanishing denominator.
In the case~$a<c<b$, we have now to check that $g$ is $n-1$ times differentiable at the point $c$. The function $f^{(n-1)}$ being continuous on $I$ and differentiable at the point $c$, there exists a function $\varepsilon(h)$, defined and continuous on the segment $[a-c,b-c]$ (the interior of which contains~$0$), vanishing for $h=0$, such that \[ f^{(n-1)}(c+h)=f^{(n-1)}(c)+hf^{(n)}(c)+h\varepsilon(h) \quad (a \leqslant c+h \leqslant b). \] Hence, \begin{align*} g^{(n-2)}(x)&=\int_0^1t^{n-2}f^{(n-1)}\big(c+t(x-c) \big) \, dt \\ &= \int_0^1t^{n-2}\Big(f^{(n-1)}(c)+t(x-c)f^{(n)}(c)+t(x-c) \varepsilon\big(t(x-c)\big) \Big)\, dt \\ &=\frac{f^{(n-1)}(c)}{n-1}+\frac{f^{(n)}(c)}{n}(x-c)+(x-c)\int_0^1t^{n-1}\varepsilon\big(t(x-c)\big)\, dt. \end{align*} When $x$ tends to $c$, the last integral tends to $0$, so that the function $g^{(n-2)}$ is differentiable at the point $c$, with \[ g^{(n-1)}(c)=\frac{f^{(n)}(c)}{n}\cdotp \] \smallskip If $x \in \mathring{I} \setminus\set{c}$, one may use the general Leibniz rule and Taylor's theorem with the Lagrange form of the remainder in order to compute $g^{(n-1)}(x)$: \begin{align*} g^{(n-1)}(x) &= \frac{d^{n-1}}{dx^{n-1}} \Big(\big(f(x)-f(c)\big)\cdot \frac{1}{x-c}\Big)\\ &=\big(f(x)-f(c)\big)\cdot \frac{(-1)^{n-1}(n-1)!}{(x-c)^n}+\sum_{k=1}^{n-1}\binom{n-1}{k}f^{(k)}(x)\cdot \frac{(-1)^{n-1-k}(n-1-k)!}{(x-c)^{n-k}}\\ &=\frac{(n-1)!}{(c-x)^n}\Big(f(c)-f(x)-\sum_{k=1}^{n-1} \frac{f^{(k)}(x)}{k!}(c-x)^k\Big)\\ &=\frac{(n-1)!}{(c-x)^n}\cdot \frac{f^{(n)}(\xi)}{n!}(c-x)^n\expli{where $\xi$ belongs to the open interval bounded by $c$ and $x$}\\ &=\frac{f^{(n)}(\xi)}n\cdotp\tag*{\mbox{$\Box$}} \end{align*} \smallskip In the next proposition, we stress the main element of our proof of Proposition \ref{220809b}, namely the fact that the condition $f^{(n)} \geqslant n!$, for some $f \in {\mathcal D}^n(I)$, implies that the absolute value of $f$ dominates the absolute value of some monic polynomial of degree $n$. \begin{prop}\label{220811b} Let $n\geqslant 1$ and $f \in {\mathcal D}^n(I)$ such that $f^{(n)}(x) \geqslant n!$ for every $x \in \, ]a,b[\,$. \smallskip Then there exists a monic polynomial $P$ of degree $n$, with all its zeros in $I$, such that the inequality $\abs{f(x)}\geqslant \abs{P(x)}$ is valid for every $x \in I$. \smallskip Moreover, if $\abs{f(x)} = \abs{Q(x)}$ for every $x \in I$, where $Q$ is a monic polynomial of degree $n$ with real coefficients, then $f(x)=Q(x)$ for every $x \in I$. \end{prop} \noindent {\bf Proof\ } The assertion about the zeros may be obtained \emph{a posteriori}, by replacing the zeros of $P$ by their projections on $I$. The following proof leads directly to a polynomial $P$ with all zeros in $I$. We use induction on $n$. \smallskip For $n=1$, the function $f$ is continuous on $[a,b]$, differentiable on $]a,b[\,$, with $f'(x)\geqslant 1$ for~$a<x<b$. If $f(a)\geqslant 0$, one has, for $a<x\leqslant b$, $f(x)=f(a)+(x-a)f'(\xi)$ (where $a<\xi <x$), thus~$f(x) \geqslant x-a$. Hence, one has $\abs{f(x)}\geqslant \abs{x-a}$ for every $x \in I$. If $f(b) \leqslant 0$, one proves similarly that $\abs{f(x)}\geqslant \abs{x-b}$ for every $x \in I$. If $f(a)< 0<f(b)$, there exists $c \in \, ]a,b[$ such that $f(c)=0$. One has then, for every $x \in I$, \[ f(x)=f(x)-f(c)=(x-c)f'(\xi) \quad (\text{where }a < \xi< b). \] Hence $\abs{f(x)} \geqslant \abs{x-c}$ for every $x \in I$, and the result is proven for $n=1$. \smallskip Let now $n \geqslant 2$, and suppose that the result is valid with $n-1$ instead of $n$.
Let~$f \in {\mathcal D}^n(I)$ be such that $f^{(n)}(x) \geqslant n!$ for every $x \in \, ]a,b[\,$. If $f$ vanishes at some point $c\in I$, it follows from Proposition \ref{220811a} that the function $g$ defined on $I$ by \[ g(x)= \begin{dcases} \frac{f(x)}{x-c} & (x \in I, \, x \neq c)\\ f'(c) & (x=c), \end{dcases} \] belongs to ${\mathcal D}^{n-1}(I)$ and that, for every $x \in \,]a,b[\,$, one has \[ g^{(n-1)}(x)=\frac{f^{(n)}(\xi)}n\raisebox{.7mm}{,} \] where $\xi \in \,]a,b[$, thus $g^{(n-1)}(x) \geqslant (n-1)!$. By the induction hypothesis, there exists a monic polynomial~$Q$ of degree $n-1$, with all its roots in $I$, such that $\abs{g(x)}\geqslant \abs{Q(x)}$ for every $x\in I$. Hence, one has the inequality $\abs{f(x)}\geqslant \abs{P(x)}$ for every $x \in I$, where $P(x)=(x-c)Q(x)$ is a monic polynomial of degree $n$, with all its roots in $I$. If $f>0$, it reaches a minimum at some point $c\in I$. Again, it follows from Proposition \ref{220811a} that the function $g$ defined on $I$ by \eqref{220811adef} satisfies the required hypothesis for degree $n-1$. Thus there exists a monic polynomial~$Q$ of degree $n-1$, with all its roots in $I$, such that $\abs{g(x)}\geqslant \abs{Q(x)}$ for every $x\in I$. Hence, one has the inequality \[ f(x)-f(c)=\abs{f(x)-f(c)} \geqslant \abs{P(x)} \quad (x \in I), \] where $P(x)=(x-c)Q(x)$. It follows that \[ \abs{f(x)}=f(x)\geqslant f(c) + \abs{P(x)} > \abs{P(x)} \quad (x \in I). \] If $f<0$, the reasoning is similar by considering a point $c\in I$ where $f$ reaches a maximum. \smallskip Let us prove the last assertion. The hypothesis $\abs{f}=\abs{Q}$ is equivalent to the equality~$f^2=Q^2$, that is $(f-Q)(f+Q)=~0$. The set $E=\set{x\in I, \; f(x)+Q(x)=0}$ has empty interior, since~$f^{(n)}(x)+Q^{(n)}(x)=0$ on every open subinterval of $E$, whereas $f^{(n)}(x)+Q^{(n)}(x)\geqslant 2\cdot n!$ on~$\mathring{I}$. The set $I\setminus E$ is therefore dense in $I$; its elements $x$ all verify $f(x)=Q(x)$, hence $f=Q$ on $I$ by continuity.\hfill$\Box$ \medskip Proposition \ref{220809b} is an immediate corollary of Proposition \ref{220811b}: by taking $f$ and $P$ as stated there, one has $\abs{f(x)}\geqslant \abs{P(x)}$ for every $x \in I$, so that \begin{equation}\label{220811c} \int_a^b \abs{f(x)}^p \, dx \geqslant \int_a^b \abs{P(x)}^p \, dx, \end{equation} for every $p>0$ (for $p=\infty$: $\max\abs{f} \geqslant \max\abs{P}$). Moreover, if $p<\infty$, equality in \eqref{220811c} implies that $\abs{f}= \abs{P}$ on $I$, hence $f=P$. In other words, if $0<p<\infty$, the extremal problem ${\mathcal E}^*(n,p,n!, I)$ has exactly the same solutions (value of the infimum and extremal functions) as the problem ${\mathcal E}^{**}(n,p,I)$ obtained by considering only monic polynomials of degree $n$, which one may even take with all their roots in~$I$. For $p=\infty$, our reasoning does not prove that an extremal function for ${\mathcal E}^*(n,p,n!, I)$ (if it exists) must be a polynomial. This is true anyway, as proved by Huang in \cite{MR2689365}, pp. 10-13. \section{Extremal polynomials} One may now use the results of the well developed theory of the extremal problem ${\mathcal E}^{**}(n,p,I)$ for polynomials. Thus, since the integral \[ \int_a^b \abs{(x-x_1)\cdots(x-x_n)}^p \, dx \quad (x_1,\dots,x_n \in I) \] (or the value $\max_{x\in I}\abs{(x-x_1)\cdots(x-x_n)}$) is a continuous function of $(x_1,\dots,x_n)$, the compactness of $I^n$ yields the existence of an extremal (polynomial) function for ${\mathcal E}^{**}(n,p,I)$, hence for ${\mathcal E}^{*}(n,p,n!, I)$.
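The extremal roots can also be approximated numerically. The following sketch (assuming NumPy/SciPy; the discretized quadrature and the generic optimizer are crude, with no claim of high accuracy) minimizes the $L^1$-norm for $n=4$ on $[-1,1]$ and recovers the known minimum $2^{1-n}$ of \S\ref{220830a}:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def lp_norm_monic(roots, p, a=-1.0, b=1.0, m=4001):
    """Trapezoidal approximation of the L^p norm on [a,b] of prod (x - r_i)."""
    x = np.linspace(a, b, m)
    vals = np.abs(np.prod(x[:, None] - np.asarray(roots)[None, :], axis=1)) ** p
    integral = np.sum((vals[1:] + vals[:-1]) / 2.0) * (x[1] - x[0])
    return integral ** (1.0 / p)

n, p = 4, 1.0
start = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))  # Chebyshev nodes
res = minimize(lambda r: lp_norm_monic(np.clip(r, -1, 1), p), start,
               method="Nelder-Mead", options={"xatol": 1e-10, "fatol": 1e-12})
# For p = 1 the minimum is 2^{1-n}, attained by 2^{-n} U_n (Korkine-Zolotareff)
print(res.fun, 2.0 ** (1 - n))
\end{verbatim}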
It is a known fact that the polynomial extremal problem ${\mathcal E}^{**}(n,p,I)$ has a unique solution for all~$p \in\, ]0,\infty]$, but there is no proof valid uniformly for all values of $p$. $\bullet$ For $p=\infty$, uniqueness was proved by Young in 1907 (cf. \cite{zbMATH02644168}, Theorem 5, p. 340) and follows from the general theory of uniform approximation (cf. \cite{zbMATH03770219}, Theorem 1.8, p. 28). $\bullet$ For $1<p<\infty$, as proved by Jackson in 1921 (cf. \cite{zbMATH02601208}, \S6, pp. 121-122), this is a consequence of the strict convexity of the space $L^p(I)$. $\bullet$ For $p=1$, this is also due to Jackson in 1921 (cf. \cite{zbMATH02601209}, \S 4, pp. 323-326). $\bullet$ For $0<p<1$, the uniqueness of the extremal polynomial was proved in 1988 by Kro\'o and Saff (cf. \cite{zbMATH04032294}, Theorem 2, p. 184). Their proof uses the uniqueness property for $p=1$ and the implicit function theorem. \medskip We will denote by $T_{n,p,I}$ the unique solution of the extremal problem ${\mathcal E}^{**}(n,p,I)$. Uniqueness gives immediately the relation \[ T_{n,p,I}(a+b-x)=(-1)^nT_{n,p,I}(x) \quad (x \in {\mathbb R}). \] Another property of these polynomials is the fact that all their roots are simple. For $p=1$, this fact was proved by Korkine and Zolotareff in 1873 (cf. \cite{zbMATH02717982}, pp. 339-340), before their explicit determination of the extremal polynomial (see \S\ref{220830a} below), and their proof extends, \emph{mutatis mutandis}, to the case $1<p<\infty$. For $p=\infty$, this is a property of the Chebyshev polynomials of the first kind (see \S\ref{221108a} below). Lastly, for $0<p<1$, this was proved by Kro\'o and Saff in~\cite{zbMATH04032294},~p.~187. Define $T_{n,p}=T_{n,p,[-1,1]}$, and write $n=2k+\varepsilon$, where $k \in {\mathbb N}$ and $\varepsilon\in\set{0,1}$. It follows from the mentioned results that \begin{equation}\label{220817a} T_{n,p}(x)=x^{\varepsilon} (x^2-x_{n,1}(p)^2) \cdots (x^2-x_{n,k}(p)^2) \quad (x \in {\mathbb R}), \end{equation} where \[ 0<x_{n,1}(p)< \dots < x_{n,k}(p) \leqslant 1. \] Kro\'o, Peherstorfer and Saff have conjectured that all the $x_{n,j}(p)$, $1\leqslant j \leqslant k$, are increasing functions of $p$ (cf. \cite{zbMATH04023942}, p. 656, and \cite{zbMATH04032294}, p. 192). \section{Results on $C(n,p)$} \subsection{The case $n=1$} The value $n=1$ is the only one for which $C(n,p)$ is explicitly known for all $p$. \begin{prop}\label{220801a} One has $C(1,p)=2(p+1)^{1/p}$ for $0<p<\infty$, and $C(1,\infty)=2$. \end{prop} \noindent {\bf Proof\ } By \eqref{220817a}, one has $T_{1,p}(x)=x$, so that, for $0<p<\infty$, \[ D^{**}(1,p,[-1,1])=\Big(\int_{-1}^1\abs{t}^p\, dt \Big)^{1/p} =\big(2/(p+1)\big)^{1/p}, \] and, by \eqref{220817c}, \begin{equation*} C(1,p)=2^{1+1/p}/D^{**}(1,p,[-1,1])=2(p+1)^{1/p}.\tag*{\mbox{$\Box$}} \end{equation*} \smallskip Note that Lemma~1.1, p.~6 of \cite{zbMATH00205893} asserts that $C(1,p) \leqslant 2\cdot 3^{1/p}$ for $p \geqslant 2$; that bound is~$< 2(p+1)^{1/p}$ for $p>2$, hence smaller than the exact value, so it cannot hold. \subsection{The case $p=\infty$}\label{221108a} This is the classical case, solved by Chebyshev in 1853 by introducing the polynomials $T_n$ defined by the relation $T_n(\cos t)=\cos nt$ (now called Chebyshev polynomials of the first kind): the unique solution of the extremal problem ${\mathcal E}^{**}(n,\infty,[-1,1])$ is $2^{1-n}T_n$. Let us record a short proof of this fact.
\smallskip Take $I=[-1,1]$ and suppose that $P$ is a monic polynomial of degree $n$ satisfying the inequality $\norm{P}_{\infty}\leqslant \norm{2^{1-n}T_n}_{\infty}=2^{1-n}$. Then, for $\lambda>1$ the polynomial \[ Q_{\lambda}=\lambda 2^{1-n}T_n-P \] is of degree $n$, with leading coefficient~$\lambda -1$. Moreover, it satisfies \[ (-1)^kQ_{\lambda}(\cos k\pi/n)=\lambda 2^{1-n} -(-1)^kP(\cos k\pi/n)>0 \quad (k=0,\dots,n). \] By the intermediate value property, $Q_{\lambda}$ has at least $n$ distinct roots, hence exactly $n$, and these roots, say $x_1,\dots, x_n$, have absolute value not larger than $1$. Hence, \[ \abs{Q_{\lambda}(x)}= (\lambda-1)\abs{(x-x_1)\cdots(x-x_n)}\leqslant (\lambda-1)(1+\abs{x})^n \quad (x \in {\mathbb R}). \] When $\lambda \rightarrow 1$, $Q_{\lambda}(x)$ tends to $0$ for every real $x$, which means that $P=2^{1-n}T_n$. \medskip One deduces from this theorem the value of $C(n,\infty)$. One has \[ D^{**}(n,\infty,[-1,1])=\max_{\abs{x} \leqslant 1}\abs{2^{1-n}T_n(x)}=2^{1-n}, \] hence \begin{equation}\label{220830b} C(n,\infty)=2^n\cdot n!/D^{**}(n,\infty,[-1,1])=2^{2n-1}n! \end{equation} (compare with the upper bound $C(n,\infty) \leqslant 2^{n(n+1)/2}n^n$ of \cite{zbMATH03280851}, 3 (a), p. 185). This result is essentially due to Bernstein (1912, cf. \cite{zbMATH02616863}, p. 65). \smallskip Qualitatively, the result expressed by \eqref{220830b} was nicely described by Soula in \cite{zbMATH03006312}, p. 86, as follows. \begin{quote} Bernstein's principle: the minimum of the absolute value of the $n$-th derivative of an $n$ times differentiable function and the maximum of the absolute value of the $n$-th derivative of an analytic function have similar orders of magnitude. \end{quote} \subsection{The case $p=2$} In this case, the extremal problem ${\mathcal E}^{**}(n,2,[-1,1])$ is an instance of the general problem of computing the orthogonal projection of an element of a Hilbert space onto a finite dimensional subspace. Here, the Hilbert space is $L^2(-1,1)$, the element is the monomial function $x^n$, and the subspace is the set of polynomial functions of degree less than $n$. The solution follows from the theory of orthogonal polynomials: the extremal polynomial for ${\mathcal E}^{**}(n,2,[-1,1])$ is \[ \frac{2^n(n!)^2}{(2n)!}P_n(x) \quad (\abs{x}\leqslant 1), \] where $P_n$ is the $n$-th Legendre polynomial, defined by \[ P_n(x)=\frac{1}{2^nn!}\frac{d^n}{dx^n}(x^2-1)^{n}. \] Hence, \[ D^{**}(n,2,[-1,1])=\frac{2^n(n!)^2}{(2n)!}\norm{P_n}_2=\frac{2^n(n!)^2}{(2n)!}\sqrt{\frac{2}{2n+1}}\raisebox{.7mm}{,} \] (see \cite{zbMATH02581422}, \S15$\cdot$14, p. 305) and \begin{equation}\label{220830c} C(n,2)=2^{n+{\frac{1}{2}}}\cdot n!/D^{**}(n,2,[-1,1])=\frac{(2n)!}{n!}\sqrt{2n+1}, \end{equation} a result given by Soula in 1932 (cf. \cite{zbMATH03006312}, pp. 87-88). \subsection{The case $p=1$}\label{220830a} The problem ${\mathcal E}^{**}(n,1,[-1,1])$ was solved by Korkine and Zolotareff in \cite{zbMATH02717982}: the extremal polynomial is $2^{-n}U_n(x)$, where $U_n$ is the $n$-th Chebyshev polynomial of the second kind, defined by the relation $U_n(\cos t)=\sin (n+1)t/\sin t$. Therefore, one has \begin{align*} D^{**}(n,1,[-1,1]) &=2^{-n}\int_{-1}^1\abs{U_n(x)} \, dx =2^{-n}\int_{0}^{\pi}\abs{U_n(\cos t)} \, \sin t \,dt\\ &=2^{-n}\int_{0}^{\pi}\abs{\sin (n+1)t}\,dt =2^{-n}\int_{0}^{\pi}\sin u\,du\\ &=2^{1-n}, \end{align*} and \begin{equation}\label{220830d} C(n,1)=2^{n+1}\cdot n!/D^{**}(n,1,[-1,1])=2^{2n} n!. \end{equation}
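These closed forms are easy to check numerically. The sketch below (assuming NumPy/SciPy, default quadrature tolerances) evaluates $2^{n+1/p}\,n!/D^{**}(n,p,[-1,1])$ for $p=\infty,2,1$ using the known extremal polynomials, and compares with \eqref{220830b}, \eqref{220830c} and \eqref{220830d}:
\begin{verbatim}
import numpy as np
from math import factorial, sqrt, pi
from scipy.integrate import quad
from scipy.special import eval_legendre

n = 5

# p = infinity: D** = max |2^{1-n} T_n| = 2^{1-n}
x = np.linspace(-1, 1, 200001)
d_inf = np.max(np.abs(2.0**(1 - n) * np.cos(n * np.arccos(x))))
print(2.0**n * factorial(n) / d_inf, 2.0**(2*n - 1) * factorial(n))

# p = 2: the extremal polynomial is the monic multiple of Legendre's P_n
c = 2.0**n * factorial(n)**2 / factorial(2*n)
d_2 = sqrt(quad(lambda t: (c * eval_legendre(n, t))**2, -1, 1)[0])
print(2.0**(n + 0.5) * factorial(n) / d_2,
      factorial(2*n) / factorial(n) * sqrt(2*n + 1))

# p = 1: the extremal polynomial is 2^{-n} U_n, and D** = 2^{1-n}
u = lambda t: np.sin((n + 1) * np.arccos(t)) / np.sqrt(1 - t**2)
kinks = np.cos(np.arange(1, n + 1) * pi / (n + 1))   # roots of U_n
d_1 = quad(lambda t: abs(2.0**(-n) * u(t)), -1, 1, points=kinks)[0]
print(2.0**(n + 1) * factorial(n) / d_1, 2.0**(2*n) * factorial(n))
\end{verbatim}
Each printed pair agrees, up to the discretization error of the first computation.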
\subsection{Bounds for $C(n,p)$} We begin with a simple monotonicity result. \begin{prop} For every positive integer $n$, the function $p \mapsto C(n,p)$ is decreasing on the interval $0<p\leqslant \infty$. \end{prop} \noindent {\bf Proof\ } Let $I=[0,1]$. Equivalently, we will see that the function $p \mapsto D^{**}(n,p,I)$ is increasing. This is due to the fact that, for a fixed $f \in L^{\infty}(I)$ such that $\abs{f}$ is not equal almost everywhere to a constant, the function $p \mapsto \norm{f}_p$ is increasing (a consequence of Hölder's inequality). Thus, for every $Q \in {\mathcal P}_n$ and $0<p < p'\leqslant \infty$, \[ \norm{Q}_{p'} > \norm{Q}_p \geqslant D^{**}(n,p,I), \] which implies that $D^{**}(n,p',I) > D^{**}(n,p,I)$.\hfill$\Box$ \smallskip In particular, \eqref{220830b} and \eqref{220830d} yield the inequalities \[ 2^{2n-1}n! < C(n,p) < 2^{2n}n! \quad (1 < p < \infty). \] \smallskip The next proposition implies that the limit of $C(n,p)$ when $p$ tends to $0$ is $(2e)^nn!$. \begin{prop} For every positive integer $n$ and every positive real number $p$, one has \[ 2^n(1+np)^{1/p}n!\leqslant C(n,p) \leqslant (2e)^nn!. \] \end{prop} \noindent {\bf Proof\ } Equivalently, we will prove that \begin{equation}\label{220903a} (2e)^{-n} \leqslant D^{**}(n,p,I) \leqslant 2^{-n}(1+np)^{-1/p}, \end{equation} where $I=[0,1]$. \smallskip Let $Q(t)=(t-x_1)\cdots(t-x_n)$, where $0 \leqslant x_1,\dots, x_n \leqslant 1$. One has \begin{align*} \ln\norm{Q}_p &=\frac 1p \ln\int_{0}^1 \abs{Q(t)}^p \, dt\\ &\geqslant \frac 1p\int_{0}^1 \ln \big(\abs{Q(t)}^p\big) \, dt\expli{by Jensen's inequality}\\ &=\int_{0}^1 \ln \abs{Q(t)} \, dt\\ &=\sum_{k=1}^n\int_{0}^1 \ln \abs{t-x_k} \, dt. \end{align*} Now, \[ \int_{0}^1 \ln \abs{t-x} \, dt=(1-x)\ln(1-x)+x\ln x-1 \quad (0 \leqslant x \leqslant 1), \] attains its minimal value, namely $-1-\ln 2$, when $x=1/2$. This implies the first inequality of~\eqref{220903a}. \smallskip To prove the second inequality of~\eqref{220903a}, we just compute $\norm{Q}_p^p $ when $Q(t)=(t-1/2)^n$: \begin{equation*} \int_0^1\abs{t-1/2}^{np}\, dt= 2\frac{(1/2)^{np+1}}{np+1}\cdotp\tag*{\mbox{$\Box$}} \end{equation*} \smallskip For $0<p<1$, we can also prove the following result. \begin{prop} Let $n$ be a positive integer, and $p$ such that $0<p<1$. One has \[ 1 \leqslant \frac{C(n,p)}{ 2^{2n}n! }\leqslant {\frac{1}{2}}(8/\pi)^{1/p}. \] \end{prop} \noindent {\bf Proof\ } The first inequality is just $C(n,1) \leqslant C(n,p)$. To prove the second inequality, let $r$ and $s$ be such that $1<s<2$ and $r^{-1}+s^{-1}=1$. Define \begin{align*} I_1(s) &=\int_{-1}^1 \frac{dt}{(1-t^2)^{s/2}}\\ I_2(s) &= \int_{-1}^1 \abs{t}^{(s-1)/s} \, \frac{dt}{\sqrt{1-t^2}}\cdotp \end{align*} The integrals $I_1(s)$ and $I_2(s)$ may be computed using the Eulerian identity \[ \int_0^1t^{x-1}(1-t)^{y-1} \, dt= {\mathrm B}(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}\quad (x>0, \; y>0). \] The results are \begin{align*} I_1(s) &=2^{1-s}\frac{\Gamma(1-\frac s2)^2}{\Gamma(2-s)}\\ I_2(s)&=\frac{\Gamma\big(1-\frac{1}{2s}\big)\Gamma({\frac{1}{2}})}{\Gamma(\frac 32-\frac{1}{2s})}\cdotp \end{align*} Now, let~$Q \in {\mathcal P}_n$ and put~$p'=p/r$. By Hölder's inequality, one has \begin{align*} \int_{-1}^1 \abs{Q(t)}^{p'} \, \frac{dt}{\sqrt{1-t^2}} &\leqslant \Big(\int_{-1}^1 \abs{Q(t)}^{p'r} \, dt \Big)^{1/r}\Big(\int_{-1}^1 \frac{dt}{(1-t^2)^{s/2}}\Big)^{1/s}\\ &=\norm{Q}_p^{p'} I_1(s)^{1/s}. \end{align*} It was proved by Kro\'o and Saff (cf. \cite{zbMATH04032294}, pp.
182-183) that \begin{align*} 2^{(n-1)p'}\int_{-1}^1 \abs{Q(t)}^{p'} \, \frac{dt}{\sqrt{1-t^2}} &\geqslant \int_{-1}^1 \abs{T_n(t)}^{p'} \, \frac{dt}{\sqrt{1-t^2}} \\ &=\int_0^{\pi} \abs{\cos nu}^{p'} \, du=\int_0^{\pi} \abs{\cos u}^{p'} \, du\\ &=\int_{-1}^1 \abs{t}^{p'} \, \frac{dt}{\sqrt{1-t^2}} \\ &\geqslant \int_{-1}^1 \abs{t}^{1/r} \, \frac{dt}{\sqrt{1-t^2}} \expli{one has $p'=p/r<1/r$}\\ &=I_2(s). \end{align*} Therefore, with $I=[-1,1]$, \begin{equation}\label{220903b} \norm{Q}_p \geqslant 2^{1-n}I_2(s)^{1/p'}I_1(s)^{-1/p's}=2^{1-n}A(s)^{1/p} \quad (1<s<2), \end{equation} where \[ A(s)=I_2(s)^{s/(s-1)}I_1(s)^{-1/(s-1)}. \] Hence \[ A(s)=2\Big(\frac{\Gamma\big(1-\frac{1}{2s}\big)^s\Gamma({\frac{1}{2}})^s\Gamma(2-s)}{\Gamma(1-\frac s2)^2\Gamma\big(\frac 32-\frac{1}{2s}\big)^s}\Big)^{1/(s-1)} \quad (1<s <2). \] Putting $f(s)=\ln\Gamma(s)$, one has \[ \ln A(s)=\ln 2+\frac{sf(1-1/2s)+sf(1/2)+f(2-s)-2f(1-s/2)-sf(3/2-1/2s)}{s-1}\cdotp \] When $s$ tends to $1$, the last fraction tends to \begin{equation*} \ln \pi +\frac 32 \psi(1/2) -\frac 32 \psi(1)=\ln \pi -3\ln 2, \end{equation*} with the usual notation $f'=\Gamma'/\Gamma=\psi$. It follows that \[ A(s) \rightarrow \frac {\pi}4 \quad (s \rightarrow 1). \] Together with \eqref{220903b}, this gives the inequality \[ D^{**}(n,p,[-1,1])\geqslant 2^{1-n}(\pi/4)^{1/p} \] and \eqref{220817c} now implies \begin{equation*} C(n,p) \leqslant 2^{2n-1}n!(8/\pi)^{1/p}.\tag*{\mbox{$\Box$}} \end{equation*} \medskip We now prove an inequality involving three values of the function $C$. \begin{prop} Let $p,q,r$ be positive real numbers such that \[ \frac 1p =\frac 1q + \frac 1r\cdotp \] Let $m$ and $n$ be positive integers. Then, \[ \frac{C(m+n,p)}{(m+n)!} \geqslant \frac{C(m,q)}{m!}\cdot \frac{C(n,r)}{n!}\cdotp \] \end{prop} \noindent {\bf Proof\ } Equivalently, by \eqref{220817c}, one has to prove that \[ D^{**}(m+n,p,I)\leqslant D^{**}(m,q,I)\cdot D^{**}(n,r,I), \] where $I$ is a segment of the real line. In fact, if $P\in {\mathcal P}_{m}$ and $Q\in {\mathcal P}_{n}$, then $PQ \in {\mathcal P}_{m+n}$, hence \[ D^{**}(m+n,p,I)^p\leqslant \int_{I} \abs{P(t)Q(t)}^{p} \, dt\leqslant \Big(\int_{I} \abs{P(t)}^{q} \, dt \Big)^{p/q} \cdot \Big(\int_{I} \abs{Q(t)}^{r} \, dt \Big)^{p/r} \] by the definition of $D^{**}(m+n,p,I)$ and Hölder's inequality. The greatest lower bound of the last term, when $P$ runs over ${\mathcal P}_m$ and $Q$ runs over ${\mathcal P}_n$, is \[ D^{**}(m,q,I)^p\cdot D^{**}(n,r,I)^p. \] The result follows.\hfill$\Box$ \subsection{An open question} Finally, observing that \[ C(n,2) \sim \sqrt{\frac{2}{\pi}}\cdot 2^{2n}n! \quad ( n \rightarrow \infty), \] (an exercise on Stirling's formula from \eqref{220830c}), we ask the following question. \begin{quote} Is it true that, for every $p>0$, the quantity $2^{-2n}C(n,p)/n!$ tends to a limit when $n$ tends to infinity? \end{quote} \bibliographystyle{smfplain}
\section{Introduction} \label{Introduction} In \cite{Cohl12pow} and~\cite{CTRS} (see also~\cite{Cohlerratum12}), we present some def\/inite integral and inf\/inite series addition theorems which arise from expanding fundamental solutions of elliptic equations on ${\mathbb R}^d$ in axisymmetric coordinate systems which separate Laplace's equation. We utilize orthogonality and integral transforms to obtain new def\/inite integrals from some of these addition theorems. \section{Def\/inite integrals from integral transforms} \subsection{Application of Hankel's transform} We use the following result where for $x\in(0,\infty)$ we def\/ine \[ F(r\pm 0):=\lim_{x\to r\pm}F(x); \] see \cite[p.~456]{Watson}: \begin{theorem} \label{2:Hankel} Let $F:(0,\infty)\to{\mathbb C}$ be such that \begin{gather}\label{2:cond} \int_0^\infty \sqrt{x}\,|F(x)|\,dx<\infty, \end{gather} and let $\nu \ge -\frac12$. Then{\samepage \begin{gather}\label{2:Hankel2} \frac12(F(r+0)+F(r-0))=\int_0^\infty uJ_\nu(ur)\int_0^\infty xF(x)J_\nu(ux)\,dx\,du \end{gather} provided that the positive number $r$ lies inside an interval in which $F(x)$ has finite variation.} \end{theorem} As an illustration of the method of integral transforms, we give the following example. According to \cite[(13.22.2)]{Watson} (see also \cite[(6.612.3)]{Grad}), we have, for $\operatorname{Re} a>0$, $b,c>0$, $\operatorname{Re} \nu>-\frac12$, \begin{gather} \int_0^\infty e^{-ka}J_\nu(kb) J_\nu(kc) dk= \frac{1}{\pi\sqrt{bc}} Q_{\nu-1/2}\left( \frac{a^2+b^2+{c}^2}{2bc}\right), \label{QJ} \end{gather} where $J_\nu:{\mathbb C}\setminus(-\infty,0]\to{\mathbb C}$, for order $\nu\in{\mathbb C}$ is the Bessel function of the f\/irst kind def\/ined in \cite[(10.2.2)]{NIST} and $Q_\nu^\mu:{\mathbb C}\setminus(-\infty,1]\to{\mathbb C}$ for $\nu+\mu\notin -{\mathbb N}$ with degree $\nu$ and order $\mu$, is the associated Legendre function of the second kind def\/ined in \cite[(14.3.7), \S~14.21]{NIST}. The Legendre function of the second kind $Q_\nu:{\mathbb C}\setminus(-\infty,1]\to{\mathbb C}$ for $\nu\notin -{\mathbb N}$ is def\/ined as the zero-order associated Legendre function of the second kind, $Q_\nu(z):=Q_\nu^0(z)$. If we apply Theorem~\ref{2:Hankel} to the function $F:(0,\infty)\to{\mathbb C}$ def\/ined by \begin{gather} F(k):=\frac{\pi\sqrt{c}}{k}e^{-ka}J_\nu(kc), \label{FQJE} \end{gather} then condition~\eqref{2:cond} is satisf\/ied. If we use~(\ref{FQJE}) in~\eqref{2:Hankel2} then we obtain the following result. If $\operatorname{Re} a>0$, $c,k>0$, $\operatorname{Re} \nu>-\frac12$, then \[ \int_0^\infty J_\nu(kb)\, Q_{\nu-1/2}\left(\frac{a^2+b^2+c^2}{2bc}\right)\! \sqrt{b}\,db=\frac{\pi\sqrt{c}}{k}e^{-ka}J_\nu(kc), \] which is actually given in \cite[(2.18.8.11)]{Prud90}. Hardy \cite[(33.16)]{Hardy08} derives an interesting extension of (\ref{QJ}) (see also \cite[p.~389]{Watson} and \cite[p.~17]{Askey75}). We apply the Whipple formula \cite[(14.9.17), \S~14.21]{NIST} to Hardy's extension to obtain \begin{gather} \int_0^\infty k e^{-ka}J_\nu(kb)J_\nu(kc)dk\nonumber\\[0.10cm] \qquad=\frac{-2a}{\pi\sqrt{bc} \left(a^2+(b+c)^2\right)^{1/2} \left(a^2+(b-c)^2\right)^{1/2}} Q_{\nu-1/2}^1\left(\frac{a^2+b^2+c^2}{2bc}\right), \label{Qnu1kint} \end{gather} for $\operatorname{Re} a>0$, $b,c>0$, $\operatorname{Re} \nu>-1$. It is mentioned in \cite[p.~389]{Watson} that~(\ref{Qnu1kint}) can be derived from~(\ref{QJ}) by dif\/ferentiation with respect to~$a$.
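Identity \eqref{QJ} is easy to check numerically. The sketch below assumes the \texttt{mpmath} library (its \texttt{legenq} with \texttt{type=3} evaluates $Q_\nu^\mu$ on $(1,\infty)$); the parameter values are arbitrary:
\begin{verbatim}
from mpmath import mp, besselj, legenq, quad, exp, sqrt, pi, inf

mp.dps = 25
a, b, c, nu = mp.mpf(2), mp.mpf(1), mp.mpf('1.5'), mp.mpf(1)

lhs = quad(lambda k: exp(-k*a) * besselj(nu, k*b) * besselj(nu, k*c),
           [0, inf])
chi = (a**2 + b**2 + c**2) / (2*b*c)
rhs = legenq(nu - mp.mpf(1)/2, 0, chi, type=3) / (pi * sqrt(b*c))
print(lhs, rhs)   # the two values agree to working precision
\end{verbatim}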
Using this integral and Theorem~\ref{2:Hankel}, we prove the following theorem. \begin{theorem} Let $\operatorname{Re} a>0$, $c,k>0$, $\operatorname{Re} \nu>-1$. Then \begin{gather*} \int_0^\infty \frac{J_\nu(kb)} { \left(a^2+(b+c)^2\right)^{1/2} \left(a^2+(b-c)^2\right)^{1/2} } Q_{\nu-1/2}^1\!\left(\frac{a^2+b^2+c^2}{2bc}\right)\sqrt{b}\,db =-\frac{\pi\sqrt{c}}{2a}e^{-ka}J_\nu(kc). \end{gather*} \end{theorem} \begin{proof} By applying Theorem~\ref{2:Hankel} to the function $F:(0,\infty)\to{\mathbb C}$ def\/ined by \[ F(k):=-\frac{\pi\sqrt{c}}{2a}e^{-ka}J_\nu(kc), \] using (\ref{Qnu1kint}), we obtain the desired result. \end{proof} Now, we give another example of how an integral expansion for a fundamental solution of Laplace's equation on~${\mathbb R}^3$ in parabolic coordinates can be used to prove a new def\/inite integral. \begin{theorem} Let $m\in{\mathbb N}_0$, $\lambda'\in(0,\infty)$, $\mu,\mu'\in(0,\infty)$, $\mu\ne \mu'$, $k\in(0,\infty)$. Then \[ \int_0^\infty Q_{m-1/2}(\chi) J_m(k\lambda)\sqrt{\lambda}\,d\lambda= 2\pi\sqrt{\lambda'\mu\mu'} J_m(k\lambda') I_m(k\mu_<) K_m(k\mu_>), \] where \[ \chi=\frac{4\lambda^2\mu^2+4{\lambda'}^2{\mu'}^2+(\lambda^2-{\lambda'}^2+{\mu'}^2-\mu^2)^2} {8\lambda\lambda'\mu\mu'}>1 \] and $\mu_\lessgtr:={\min \atop \max}\{\mu,\mu'\}$. \end{theorem} \begin{proof} We apply Theorem \ref{2:Hankel} to the function $F:(0,\infty)\to{\mathbb C}$ def\/ined by \[ F(k):=2\pi\sqrt{\lambda'\mu\mu'} J_m(k\lambda') I_m(k\mu_<) K_m(k\mu_>), \] where $I_\nu:{\mathbb C}\setminus(-\infty,0]\to{\mathbb C},$ for $\nu\in{\mathbb C}$, is the modif\/ied Bessel function of the f\/irst kind def\/ined in \cite[(10.25.2)]{NIST} and $K_\nu:{\mathbb C}\setminus(-\infty,0]\to{\mathbb C}$, for $\nu\in{\mathbb C}$, is the modif\/ied Bessel function of the second kind def\/ined in~\cite[(10.27.4)]{NIST}. Again, we see that condition~\eqref{2:cond} is satisf\/ied. We obtain the desired result from \cite[(41)]{CTRS}, namely, for $\lambda\in(0,\infty)$, \begin{gather*} \int_0^\infty J_m(k\lambda) J_m(k\lambda') I_m(k\mu_<) K_m(k\mu_>) k\,dk= \frac{Q_{m-1/2}(\chi)}{2\pi\sqrt{\lambda\lambda'\mu\mu'}}.\tag*{\qed} \end{gather*} \renewcommand{\qed}{} \end{proof} \subsection{Application of Fourier cosine transform} \begin{theorem} Let $a,b\in(0,\infty)$ with $b\le a$, $k\in(0,\infty)$, $\operatorname{Re} \nu>-\frac12$. Then \begin{gather} \int_0^\infty Q_{\nu-1/2}\left( \frac{a^2+{b}^2+z^2}{2ab}\right) \cos(kz)\,dz =\pi\sqrt{ab}\,I_\nu(kb)K_\nu(ka). \label{Qmumhalfcoskz} \end{gather} \end{theorem} \begin{proof} According to \cite[(6.672.4)]{Grad} we have the integral relation \[ \int_0^\infty I_\nu(kb) K_\nu(ka) \cos(kz) dk=\frac{1}{2\sqrt{ab}} Q_{\nu-1/2}\left(\frac{a^2+{b}^2+z^2}{2ab}\right), \] where $a,b\in(0,\infty)$ with $b<a,$ $z>0$, $\operatorname{Re} \nu>-\frac12$. We obtain the desired result from Theorem~\ref{2:Hankel} with $F:(0,\infty)\to{\mathbb C}$ def\/ined such that \[ F(k):= \pi \sqrt{\frac{ab}{k}}\,I_\nu(kb)K_\nu(ka) \] and $\nu=-\frac12$. Furthermore, if one makes the replacement $z\mapsto z/(\sqrt{2}a)$, $k\mapsto\sqrt{2}ka$ in \cite[(7.162.6)]{Grad}, namely \[ \int_0^\infty Q_{\nu-1/2}(1+z^2)\cos(kz)dz=\frac{\pi}{\sqrt{2}} I_{\nu} \left(\frac{k}{\sqrt{2}}\right) K_{\nu} \left(\frac{k}{\sqrt{2}}\right), \] where $k\in(0,\infty)$ and $\operatorname{Re} \nu>-\frac12,$ then we see that~(\ref{Qmumhalfcoskz}) holds also for any $a,b\in(0,\infty)$ with $a=b$. \end{proof}
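Identity \eqref{Qmumhalfcoskz} can be checked the same way; since the integrand oscillates, the sketch below uses \texttt{mpmath}'s \texttt{quadosc} (again with arbitrary parameter values):
\begin{verbatim}
from mpmath import mp, besseli, besselk, legenq, quadosc, cos, sqrt, pi, inf

mp.dps = 20
a, b, k, nu = mp.mpf(2), mp.mpf(1), mp.mpf(1), mp.mpf(1)

def integrand(z):
    chi = (a**2 + b**2 + z**2) / (2*a*b)
    return legenq(nu - mp.mpf(1)/2, 0, chi, type=3) * cos(k*z)

lhs = quadosc(integrand, [0, inf], period=2*pi/k)
rhs = pi * sqrt(a*b) * besseli(nu, k*b) * besselk(nu, k*a)
print(lhs, rhs)   # the two values agree to working precision
\end{verbatim}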
\section{Def\/inite integrals from orthogonality relations} \label{Definiteintegralsfromorthogonalityrelations} \subsection{Degree orthogonality for associated Legendre functions\\ with integer degree and order} \label{DegreeorthogonalityforassociatedLegendrefunctions} We take advantage of the degree orthogonality relation for the Ferrers function of the f\/irst kind with integer degree and order, namely (cf.\ \cite[(7.112.1)]{Grad}) \begin{gather} \int_0^\pi {\mathrm P}_n^m(\cos\theta){\mathrm P}_{n'}^m(\cos\theta)\sin\theta d\theta =\frac{2}{2n+1}\frac{(n+m)!}{(n-m)!}\delta_{n,n'}, \label{orthoglegendrePn} \end{gather} where $m,n,n'\in{\mathbb N}_0$, and $m\le n$, $m\le n'$. Here ${\mathrm P}_\nu^\mu:(-1,1)\to{\mathbb C}$, for $\nu,\mu\in{\mathbb C}$, denotes the associated Legendre function of the f\/irst kind on-the-cut (the Ferrers function of the f\/irst kind), def\/ined in \cite[(14.3.1)]{NIST}. The following estimates for the Ferrers function of the f\/irst kind with integer degree and order will be useful. If $\theta\in[0,\pi]$ and $m,n\in{\mathbb N}_0$ then \cite[\S~5.3, (19)]{Schafke63} \begin{gather}\label{3:est1} |{\mathrm P}_n^m(\cos\theta)|\le \frac{(m+n)!}{n!} \end{gather} and if $\theta\in(0,\pi)$ then \cite[p.~203]{MOS} \begin{gather} \label{3:est2} |{\mathrm P}_n^m(\cos\theta)|< 2\frac{(n+m)!}{n!} \left({\pi n}\right)^{-1/2}(\csc \theta)^{m+1/2}. \end{gather} If $\mu\in{\mathbb C}$, $\xi>0$ are f\/ixed and $0\le \nu\to+\infty$, we also have the following asymptotic formulas for the associated Legendre functions \begin{gather} P_\nu^\mu(\cosh \xi) = (2\pi\sinh\xi)^{-1/2} \frac{\Gamma(\nu+\mu+1)}{\Gamma(\nu+\frac32)} e^{(\nu+\frac12)\xi}\big(1+O\big(\nu^{-1}\big)\big) \label{3:asyP} ,\\ Q_\nu^\mu(\cosh \xi) = \left(\frac{\pi}{2\sinh\xi}\right)^{1/2} \frac{\Gamma(\nu+\mu+1)}{\Gamma(\nu+\frac32)} e^{-(\nu+\frac12)\xi+i\pi\mu} \big(1+O\big(\nu^{-1}\big)\big), \label{3:asyQ} \\ P_\nu^\mu(i\sinh \xi) = (2\pi\cosh\xi)^{-1/2} \frac{\Gamma(\nu+\mu+1)}{\Gamma(\nu+\frac32)} e^{(\nu+\frac12)\xi+i\pi\nu/2}\big(1+O\big(\nu^{-1}\big)\big) \label{3:asyPi} ,\\ Q_\nu^\mu(i\sinh \xi) = \left(\frac{\pi}{2\cosh\xi}\right)^{1/2} \frac{\Gamma(\nu+\mu+1)}{\Gamma(\nu+\frac32)} e^{-(\nu+\frac12)\xi -i\pi(\nu+1)/2 +i\pi\mu } \big(1+O\big(\nu^{-1}\big)\big). \label{3:asyQi} \end{gather} These asymptotic formulae follow from representations of Legendre functions by Gauss hypergeometric functions; see \cite[(8.1.1), (8.10.4--5)]{Abra}.
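The orthogonality relation \eqref{orthoglegendrePn} is easy to verify numerically; in the sketch below (assuming SciPy) \texttt{lpmv} evaluates the Ferrers function ${\mathrm P}_\nu^\mu$:
\begin{verbatim}
from math import factorial, cos, sin, pi
from scipy.integrate import quad
from scipy.special import lpmv

m = 2
for n, n2 in [(5, 5), (5, 7)]:
    val, _ = quad(lambda t: lpmv(m, n, cos(t)) * lpmv(m, n2, cos(t)) * sin(t),
                  0.0, pi)
    expected = (2.0 / (2*n + 1)) * factorial(n + m) / factorial(n - m) \
        if n == n2 else 0.0
    print(n, n2, val, expected)
\end{verbatim}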
\begin{theorem} Let $n,m\in{\mathbb N}_0$ with $n\ge m$, $\nu\in{\mathbb C}\setminus\{2m,2m+2,2m+4,\ldots\}$, $r,r'\in(0,\infty)$, $r\ne r'$, $\theta'\in(0,\pi)$. Then \begin{gather} \int_0^\pi \big(\chi^2-1\big)^{(\nu+1)/4}Q_{m-1/2}^{-(\nu+1)/2}(\chi){\mathrm P}_n^m(\cos\theta) (\sin\theta)^{(\nu+2)/2}d\theta\nonumber\\ \qquad{} = \frac{i\sqrt{\pi}}{2^{(\nu+1)/2}(\sin\theta')^{\nu/2}} \left(\frac{r_>^2-r_<^2}{rr'}\right)^{(\nu+2)/2} Q_n^{-(\nu+2)/2}\left(\frac{r^2+{r'}^2}{2rr'}\right){\mathrm P}_n^m(\cos\theta'), \label{bignuresult} \end{gather} where \begin{gather} \chi=\frac{r^2+{r'}^2-2rr'\cos\theta\cos\theta'}{2rr'\sin\theta\sin\theta'} \label{chisphR3} \end{gather} and $r_{\lessgtr}:={\min \atop \max}\{r,r^\prime\}.$ \end{theorem} \begin{proof} We start with the following addition theorem for the associated Legendre function of the second kind (see \cite{Cohl12pow}), namely for $\theta\in(0,\pi)$, \begin{gather} \big(\chi^2-1\big)^{(\nu+1)/4}(\sin\theta)^{\nu/2} Q_{m-1/2}^{-(\nu+1)/2}(\chi) =\frac{i\sqrt{\pi}}{2^{(\nu+3)/2}}(\sin\theta^\prime)^{-\nu/2} \left(\frac{r_>^2-r_<^2}{rr^\prime}\right)^{(\nu+2)/2} \nonumber\\ \qquad{} \times\sum_{n=m}^\infty (2n+1)\frac{(n-m)!}{(n+m)!} Q_n^{-(\nu+2)/2}\left(\frac{r^2+{r^\prime}^2}{2rr^\prime} \right) {\mathrm P}_n^m(\cos\theta) {\mathrm P}_n^m(\cos\theta^\prime), \label{addtheorem3ba} \end{gather} where $\chi>1$ is given by~(\ref{chisphR3}). By~\eqref{3:est1} and~\eqref{3:asyQ} the inf\/inite series is uniformly convergent for $\theta\in[0,\pi]$. Therefore, if we multiply both sides of~(\ref{addtheorem3ba}) by $\sin\theta\, {\mathrm P}_{n'}^m(\cos\theta)$, where $n'\in{\mathbb N}_0$, and integrate over $\theta\in(0,\pi)$, we obtain~(\ref{bignuresult}). \end{proof} \begin{corollary} Let $n,m\in{\mathbb N}_0$ with $n\ge m$, $r,r'\in(0,\infty)$, $r\ne r'$, $\theta'\in(0,\pi)$. Then \[ \int_0^\pi Q_{m-1/2}(\chi){\mathrm P}_n^m(\cos\theta)\sqrt{\sin\theta}\,d\theta= \frac{2\pi \sqrt{\sin\theta'} }{2n+1} {\mathrm P}_n^m(\cos\theta')\left(\frac{r_<}{r_>}\right)^{n+1/2}, \] where $\chi>1$ is given by~\eqref{chisphR3}. \end{corollary} \begin{proof} Substitute $\nu=-1$ in (\ref{bignuresult}) and use \cite[(14.5.17)]{NIST}. \end{proof} \begin{theorem}\label{prolate} Let $m,n\in{\mathbb N}_0$ with $n\ge m$, $\sigma,\sigma'\in(0,\infty)$, $\theta'\in(0,\pi)$. Then \begin{gather} \int_0^\pi Q_{m-1/2}(\chi) {\mathrm P}_n^m(\cos\theta) \sqrt{\sin\theta}\,d\theta =2\pi (-1)^m\frac{(n-m)!}{(n+m)!}\nonumber\\ \qquad {} \times\sqrt{\sinh\sigma\sinh\sigma'\sin\theta'}\, {\mathrm P}_n^m(\cos\theta') P_n^m(\cosh\sigma_<)Q_n^m(\cosh\sigma_>), \label{prolateint} \end{gather} where \begin{gather} \chi=\frac{\cosh^2\sigma+\cosh^2\sigma'-\sin^2\theta-\sin^2\theta' -2\cosh\sigma\cosh\sigma'\cos\theta\cos\theta'} {2\sinh\sigma\sinh\sigma'\sin\theta\sin\theta'} \label{chiprolate} \end{gather} and $\sigma_{\lessgtr}:={\min \atop \max}\{\sigma,\sigma^\prime\}$. \end{theorem} \begin{proof} We start with the following addition theorem for the associated Legendre function of the second kind (see \cite[(37)]{CTRS}), namely \begin{gather} Q_{m-1/2}(\chi)=\pi(-1)^m \sqrt{\sinh\sigma\sinh\sigma'\sin\theta\sin\theta'} \sum_{n=m}^\infty (2n+1) \left[\frac{(n-m)!}{(n+m)!}\right]^2 \nonumber\\ \hphantom{Q_{m-1/2}(\chi)=}{} \times {\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta')P_n^m(\cosh\sigma_<)Q_n^m(\cosh\sigma_>), \label{addtheoremprolate} \end{gather} where $\chi\ge 1$ is def\/ined by (\ref{chiprolate}). Note that $\chi=1$ only if $\sigma=\sigma'$ and $\theta=\theta'$, and in that case $Q_{m-1/2}(\chi)$ has a logarithmic singularity.
If $\sigma\ne \sigma'$ then \eqref{3:est1}, \eqref{3:asyP}, \eqref{3:asyQ} show that the series in \eqref{addtheoremprolate} is uniformly convergent for $\theta\in[0,\pi]$. Therefore, if we multiply both sides of (\ref{addtheoremprolate}) by $\sqrt{\sin\theta}\,{\mathrm P}_{n'}^m(\cos\theta)$ and integrate over $\theta\in[0,\pi]$, then by~(\ref{orthoglegendrePn}) we have obtained (\ref{prolateint}). If $\sigma=\sigma'$ then one may use \eqref{3:est2}, \eqref{3:asyP}, \eqref{3:asyQ} and the orthogonality relation \eqref{orthoglegendrePn} to show that the series in \eqref{addtheoremprolate} (as a series of functions in the variable $\theta\in(0,\pi)$) converges in $L^2(0,\pi)$. That is \begin{gather}\label{L2} \sum_{n=m}^\infty \left\{\frac{(n-m)!}{(n+m)!}\phi_n(\theta') P_n^m(\cosh \sigma_<) Q_n^m(\cosh \sigma_>)\right\}^2 <\infty, \end{gather} where $\phi_n\in L^2(0,\pi)$ forms an orthonormal basis and is def\/ined as \[ \phi_n(\theta) :=\sqrt{\sin\theta} \sqrt{\frac{2n+1}{2} \frac{(n-m)!}{(n+m)!}}{\rm P}_n^m(\cos\theta), \] for $n=m,m+1,\dots$. Then by the asymptotics of $P_n^m$ and $Q_n^m$ (cf.~(\ref{3:asyP}), (\ref{3:asyQ})) \[ P_n^m(\cosh \sigma_<) Q_n^m(\cosh \sigma_>)= O\left(n^{2m-1}\right)\quad\text{as $n\to\infty$}. \] Also from the estimate of ${\rm P}_n^m$ (\ref{3:est2}), $\phi_n(\theta')=O\left(1\right).$ Therefore, \[ \frac{(n-m)!}{(n+m)!} \phi_n(\theta')P_n^m(\cosh \sigma_<) Q_n^m(\cosh \sigma_>)=O\big(n^{-1}\big), \] and this implies \eqref{L2} because $\sum\limits_{n=1}^\infty \frac{1}{n^2} <\infty.$ Therefore, we again obtain (\ref{prolateint}). \end{proof} \begin{theorem}\label{oblate} Let $m,n\in{\mathbb N}_0$ with $0\le m\le n$, $\sigma,\sigma'\in(0,\infty)$, $\theta'\in[0,\pi].$ Then \begin{gather} \int_0^\pi Q_{m-1/2}(\chi) {\mathrm P}_n^m(\cos\theta)\sqrt{\sin\theta}d\theta =2\pi i(-1)^m\frac{(n-m)!}{(n+m)!}\nonumber\\ \qquad{} \times\sqrt{\cosh\sigma\cosh\sigma'\sin\theta'} {\mathrm P}_n^m(\cos\theta') P_n^m(i\sinh\sigma_<)Q_n^m(i\sinh\sigma_>), \label{oblateint} \end{gather} where \begin{gather} \chi=\frac{\sinh^2\sigma+\sinh^2\sigma'+\sin^2\theta+\sin^2\theta' -2\sinh\sigma\sinh\sigma'\cos\theta\cos\theta'} {2\cosh\sigma\cosh\sigma'\sin\theta\sin\theta'}. \label{chioblate} \end{gather} \end{theorem} \begin{proof} We start with oblate spheroidal coordinates on ${\mathbb R}^3$, namely \[ x = a\cosh\sigma\sin\theta\cos\phi,\qquad y = a\cosh\sigma\sin\theta\sin\phi,\qquad z = a\sinh\sigma\cos\theta, \] where $a>0$, $\sigma\in [0,\infty)$, $\theta\in [0,\pi]$, $\phi\in[0,2\pi)$. The reciprocal distance between two points ${\bf x},{{\bf x}^\prime}\in{\mathbb R}^3$ expanded in terms of the separable harmonics in this coordinate system is given in \cite[(41), p.~218]{MacRobert47}, namely \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{i}{a} \sum_{n=0}^\infty (2n+1)\sum_{m=0}^n(-1)^m\epsilon_m \left[\frac{(n-m)!}{(n+m)!}\right]^2\cos(m(\phi-\phi')) \\ \hphantom{\frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=}{} \times {\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta')P_n^m(i\sinh\sigma_<)Q_n^m(i\sinh\sigma_>), \end{gather*} where $\epsilon_m:=2-\delta_{m,0}$ is the Neumann factor \cite[p.~744]{MorseFesh} commonly occurring in Fourier cosine series, with $\sigma'\in[0,\infty),$ $\theta'\in[0,\pi],$ $\phi'\in[0,2\pi)$. Note that the corresponding expression given in \cite[\S~5.2]{CTRS} is given incorrectly (see \cite{Cohlerratum12}). 
By reversing the order of summations in the above expression and comparing with the Fourier cosine expansion for the reciprocal distance between two points, namely \[ \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{1}{\pi a \sqrt{\cosh\sigma\cosh\sigma'\sin\theta\sin\theta'}} \sum_{m=0}^\infty \epsilon_m \cos(m(\phi-\phi')) Q_{m-1/2}(\chi), \] where $\chi>1$ is given by (\ref{chioblate}), we obtain the following addition theorem for the associated Legendre function of the second kind \begin{gather} Q_{m-1/2}(\chi)=i\pi(-1)^m\sqrt{\cosh\sigma\cosh\sigma'\sin\theta\sin\theta'} \sum_{n=m}^\infty (2n+1) \left[\frac{(n-m)!}{(n+m)!}\right]^2 \nonumber\\ \hphantom{Q_{m-1/2}(\chi)=}{} \times {\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta')P_n^m(i\sinh\sigma_<)Q_n^m(i\sinh\sigma_>). \label{addtheoremoblate} \end{gather} If we multiply both sides of (\ref{addtheoremoblate}) by $\sqrt{\sin\theta}\,{\mathrm P}_{n'}^m(\cos\theta)$ and integrate over $\theta\in[0,\pi]$, then by (\ref{orthoglegendrePn}) we have obtained~(\ref{oblateint}). We justify the interchange of integral and inf\/inite sum as before by using the asymptotic formulas \eqref{3:asyPi}, \eqref{3:asyQi}. \end{proof} \begin{theorem} Let $m,n\in{\mathbb N}_0$ with $0\le m\le n$, $\sigma,\sigma'\in(0,\infty)$, $\theta'\in(0,\pi)$. Then \begin{gather} \int_0^\pi Q_{m-1/2}(\chi) {\mathrm P}_n^m(\cos\theta) \sqrt{\sin\theta}d\theta =\frac{2\pi\sqrt{\sin\theta'}}{2n+1} {\mathrm P}_n^m(\cos\theta') e^{-(n+1/2)(\sigma_>-\sigma_<)}, \label{bisphereint} \end{gather} where if we define $s=\cosh\sigma$, $s'=\cosh\sigma'$, $\tau=\cos\theta$, $\tau^\prime=\cos\theta'$, then \begin{gather} \chi=\frac{ \sin^2\theta(s^\prime-\tau^\prime)^2+\sin^2\theta^\prime(s-\tau)^2 +\bigl[(s^\prime-\tau^\prime)\sinh\sigma-(s-\tau)\sinh\sigma^\prime\bigr]^2} {2\sin\theta\sin\theta^\prime(s-\tau)(s^\prime-\tau^\prime)}. \label{chibisphere} \end{gather} \end{theorem} \begin{proof} We start with bispherical coordinates on ${\mathbb R}^3$, namely \[ x = \frac{a\sin\theta\cos\phi}{\cosh\sigma-\cos\theta},\qquad y = \frac{a\sin\theta\sin\phi}{\cosh\sigma-\cos\theta},\qquad z = \frac{a\sinh\sigma}{\cosh\sigma-\cos\theta}, \] where $a>0$, $\sigma\in [0,\infty)$, $\theta\in [0,\pi]$, $\phi\in[0,2\pi)$. The reciprocal distance between two points ${\bf x},{{\bf x}^\prime}\in{\mathbb R}^3$ expanded in terms of the separable harmonics in this coordinate system is given in \cite[(9), p.~222]{MacRobert47}, namely \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{1}{a} \sqrt{(\cosh\sigma-\cos\theta)(\cosh\sigma'-\cos\theta')} \sum_{n=0}^\infty e^{-(n+1/2)(\sigma_>-\sigma_<)} \\ \hphantom{\frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=}{} \times\sum_{m=0}^n\epsilon_m \frac{(n-m)!}{(n+m)!}{\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta') \cos(m(\phi-\phi')) , \end{gather*} where $\sigma'\in[0,\infty)$, $\theta'\in[0,\pi]$, $\phi'\in[0,2\pi)$. 
By reversing the order of summations in the above expression and comparing with the Fourier cosine expansion for the reciprocal distance between two points, namely \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}= \frac{\sqrt{(\cosh\sigma-\cos\theta)(\cosh\sigma'-\cos\theta')}} {\pi a \sqrt{\sin\theta\sin\theta'}} \sum_{m=0}^\infty \epsilon_m \cos(m(\phi-\phi')) Q_{m-1/2}(\chi), \end{gather*} where $\chi\ge 1$ is given by (\ref{chibisphere}), we obtain the following addition theorem for the associated Legendre function of the second kind \begin{gather} Q_{m-1/2}(\chi)=\pi\sqrt{\sin\theta\sin\theta'} \sum_{n=m}^\infty \frac{(n-m)!}{(n+m)!}e^{-(n+1/2)(\sigma_>-\sigma_<)}{\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta'). \label{addtheorembisphere} \end{gather} If $\sigma=\sigma'$ and $\theta=\theta'$, then $\chi=1,$ and $Q_{m-1/2}(\chi)$ has a logarithmic singularity. Note that the corresponding expression given in \cite[\S~6.1, (45)]{CTRS} is given incorrectly (see \cite{Cohlerratum12}). If we multiply both sides of~(\ref{addtheorembisphere}) by $\sqrt{\sin\theta}\,{\mathrm P}_{n'}^m(\cos\theta)$ and integrate over $\theta\in[0,\pi]$, then by~(\ref{orthoglegendrePn}) we have obtained~(\ref{bisphereint}). We justify the interchange of integral and inf\/inite sum in the same way as in the proof of Theorem~\ref{prolate}. \end{proof} \subsection{Order orthogonality for associated Legendre functions\\ with integer degree and order} \label{OrderorthogonalityforassociatedLegendrefunctions} In this subsection we take advantage of the order orthogonality relation for the Ferrers function of the f\/irst kind with integer degree and order (cf. \cite[(14.17.8)]{NIST}) \begin{gather} \int_0^\pi {\mathrm P}_n^m(\cos\theta){\mathrm P}_n^{m'}(\cos\theta)\frac{1}{\sin\theta}d\theta =\frac{1}{m}\frac{(n+m)!}{(n-m)!}\delta_{m,m'}, \label{orderPorthogonality} \end{gather} with $m\ge 1$. \begin{theorem}\label{orderm} Let $m\in{\mathbb N}$, $n\in{\mathbb N}_0$ with $1\le m\le n$, $\theta'\in[0,\pi]$, $\phi,\phi'\in[0,2\pi)$. Then \[ \int_0^\pi P_n(\cos\gamma){\mathrm P}_n^m(\cos\theta)\frac{1}{\sin\theta}d\theta= \frac{2}{m}{\mathrm P}_n^m(\cos\theta')\cos(m(\phi-\phi')), \] where \[ \cos\gamma=\cos\theta\cos\theta'+\sin\theta\sin\theta'\cos(\phi-\phi'). \] \end{theorem} \begin{proof} We start with the addition theorem for spherical harmonics (cf. \cite[(14.18.1)]{NIST}), namely \begin{gather} P_n(\cos\gamma)=\sum_{m=-n}^n\frac{(n-m)!}{(n+m)!}{\mathrm P}_n^m(\cos\theta){\mathrm P}_n^m(\cos\theta') e^{im(\phi-\phi')}, \label{addtheoremsph} \end{gather} where $P_n:{\mathbb C}\to{\mathbb C}$, for $n\in{\mathbb N}_0$, is the Legendre polynomial which can be def\/ined in terms of the terminating Gauss hypergeometric series (see for instance \cite[Chapters~15,~18]{NIST}) as follows \[ P_n(z):={}_2F_1\left(\!\begin{array}{c}-n,n+1\\ 1\end{array};\frac{1-z}{2}\right). \] We then take advantage of the order orthogonality relation for the Ferrers functions of the f\/irst kind with integer degree and order. If we multiply both sides of~(\ref{addtheoremsph}) by $(\sin\theta)^{-1}\,{\mathrm P}_n^{m'}(\cos\theta)$ and integrate over $\theta\in(0,\pi),$ by using~(\ref{orderPorthogonality}) we obtain the desired result. \end{proof} Theorem~\ref{orderm}, originating from~(\ref{addtheoremsph}), is the only example of a def\/inite integral that we could f\/ind using the order orthogonality relation for the Ferrers functions of the f\/irst kind~(\ref{orderPorthogonality}). 
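As a numerical sanity check (not part of the derivation), the identity of Theorem~\ref{orderm} is easy to verify with standard software. A minimal sketch in Python follows, with illustrative values $n=5$, $m=2$, $\theta'=0.7$, $\phi-\phi'=0.3$; SciPy's \texttt{lpmv} implements the Ferrers function of the f\/irst kind including the Condon--Shortley phase, under which the identity is invariant since each side carries exactly one factor of ${\mathrm P}_n^m$: \begin{verbatim}
import numpy as np
from scipy.special import lpmv, eval_legendre
from scipy.integrate import quad

n, m = 5, 2              # illustrative degree and order, 1 <= m <= n
tp, dphi = 0.7, 0.3      # theta' and phi - phi'

def integrand(t):
    # cos(gamma) from the spherical law of cosines
    cg = np.cos(t)*np.cos(tp) + np.sin(t)*np.sin(tp)*np.cos(dphi)
    return eval_legendre(n, cg)*lpmv(m, n, np.cos(t))/np.sin(t)

lhs, _ = quad(integrand, 0.0, np.pi)
rhs = (2.0/m)*lpmv(m, n, np.cos(tp))*np.cos(m*dphi)
print(lhs, rhs)          # the two values agree to quadrature accuracy
\end{verbatim}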
Since it originates directly from the well-known addition theorem~(\ref{addtheoremsph}), we highly suspect that this result is previously known, and we include it mainly for completeness' sake. It would, however, be very interesting to f\/ind another example using this orthogonality relation. \subsection{Orthogonality for Chebyshev polynomials of the f\/irst kind} \label{OrthogonalityfromChebyshevpolynomialsofthefirstkind} Here we take advantage of orthogonality from Chebyshev polynomials of the f\/irst kind (cf. \cite[\S~18.3]{NIST}) \begin{gather} \int_0^\pi T_m(\cos\theta) T_n(\cos\theta) d\theta=\frac{\pi}{\epsilon_n} \delta_{m,n}, \label{orthogcheby1st} \end{gather} where $T_n:{\mathbb C}\to{\mathbb C}$, for $n\in{\mathbb N}_0,$ is the Chebyshev polynomial of the f\/irst kind, which can be def\/ined in terms of the terminating Gauss hypergeometric series (see \cite[Chapter~18]{NIST}) \[ T_n(z)= {}_2F_1\left(\!\begin{array}{c}-n,n\\[1mm]\frac12\end{array};\frac{1-z}{2}\right). \] The Chebyshev polynomials of the f\/irst kind satisfy the identity \cite[(18.5.1)]{NIST} \begin{gather*} T_n(\cos\theta)=\cos(n\theta). \end{gather*} \begin{theorem} Let $m,n\in{\mathbb Z}$, $\sigma,\sigma'\in(0,\infty)$. Then \begin{gather} \int_0^\pi Q_{m-1/2}(\chi) \cos(n\psi)d\psi =\pi(-1)^m\sqrt{\sinh\sigma\sinh\sigma'} \nonumber\\ \hphantom{\int_0^\pi Q_{m-1/2}(\chi) \cos(n\psi)d\psi=}{} \times\frac{\Gamma\left(n-m+\frac12\right)} {\Gamma\left(n+m+\frac12\right)} P_{n-1/2}^m(\cosh\sigma_<) Q_{n-1/2}^m(\cosh\sigma_>), \label{toroidint} \end{gather} where \begin{gather} \chi=\coth\sigma\coth\sigma'-\operatorname{csch} \sigma \operatorname{csch} \sigma'\cos\psi. \label{chitor} \end{gather} \end{theorem} \begin{proof} We start with toroidal coordinates on ${\mathbb R}^3$, namely \[ x = \frac{a\sinh\sigma\cos\phi}{\cosh\sigma-\cos\psi} , \qquad y = \frac{a\sinh\sigma\sin\phi}{\cosh\sigma-\cos\psi} ,\qquad z = \frac{a\sin\psi}{\cosh\sigma-\cos\psi}, \] where $a>0$, $\sigma\in (0,\infty)$, $\psi,\phi\in[0,2\pi)$. The reciprocal distance between two points ${\bf x},{{\bf x}^\prime}\in{\mathbb R}^3$ is given algebraically by \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{1}{a} \sqrt{ \frac{(\cosh\sigma-\cos\psi)(\cosh\sigma'-\cos\psi')}{2\sinh\sigma\sinh\sigma'} }\\ \hphantom{\frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=}{} \times\left[ \frac{\cosh\sigma\cosh\sigma'-\cos(\psi-\psi')}{\sinh\sigma\sinh\sigma'} -\cos(\phi-\phi') \right]^{-1/2}, \end{gather*} where $(\sigma',\psi',\phi')$ are the toroidal coordinates corresponding to the point ${{\bf x}^\prime}$. Using Heine's reciprocal square root identity (see for instance \cite[(3.11)]{CohlDominici}) \[ \frac{1}{\sqrt{z-x}}=\frac{\sqrt{2}}{\pi} \sum_{m=0}^\infty \epsilon_m Q_{m-1/2}(z) T_m(x), \] where $z>1$ and $x\in[-1,1]$, we can obtain a Fourier cosine series representation for the reciprocal distance between two points in toroidal coordinates on ${\mathbb R}^3$, namely \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{1}{\pi a} \sqrt{\frac{(\cosh\sigma-\cos\psi)(\cosh\sigma'-\cos\psi')} {\sinh\sigma\sinh\sigma'}} \sum_{m=0}^\infty\epsilon_m\cos(m(\phi-\phi')) Q_{m-1/2}(\chi), \end{gather*} where $\chi>1$ is given by (\ref{chitor}).
We can further expand the associated Legendre function of the second kind using the following addition theorem (cf.~\cite[(8.795.2)]{Grad}) \begin{gather} Q_{m-1/2}(\chi)=(-1)^m\sqrt{\sinh\sigma\sinh\sigma'} \sum_{n=0}^\infty\epsilon_n\cos(n(\psi-\psi')) \nonumber\\ \hphantom{Q_{m-1/2}(\chi)=}{} \times \frac{\Gamma\left(n-m+\frac12\right)}{\Gamma\left(n+m+\frac12\right)} P_{n-1/2}^m(\cosh\sigma_<) Q_{n-1/2}^m(\cosh\sigma_>). \label{addtheoremtoroidal} \end{gather} Note that with the above addition theorem, we have the expansion of the reciprocal distance between two points in terms of the separable harmonics in toroidal coordinates \begin{gather*} \frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=\frac{1}{\pi a} \sqrt{(\cosh\sigma-\cos\psi)(\cosh\sigma'-\cos\psi')} \sum_{m=0}^\infty (-1)^m\epsilon_m\cos(m(\phi-\phi')) \\ \hphantom{\frac{1}{\|{\bf x}-{{\bf x}^\prime}\|}=}{} \times \sum_{n=0}^\infty\epsilon_n\cos(n(\psi-\psi')) \frac{\Gamma\left(n-m+\frac12\right)}{\Gamma\left(n+m+\frac12\right)} P_{n-1/2}^m(\cosh\sigma_<)Q_{n-1/2}^m(\cosh\sigma_>) \end{gather*} (see also \cite[\S~6.2]{CTRS} and \cite{Cohlerratum12}). If we relabel $\psi-\psi'\mapsto\psi$ and multiply both sides of (\ref{addtheoremtoroidal}) by $\cos(n\psi)$ and integrate over $\psi\in[0,\pi]$, then by~(\ref{orthogcheby1st}) we have obtained~(\ref{toroidint}). The interchange of inf\/inite sum and integral is justif\/ied by~(\ref{3:asyP}),~(\ref{3:asyQ}). \end{proof} \subsection*{Acknowledgements} This work was conducted while H.S.~Cohl was a National Research Council Research Postdoctoral Associate in the Information Technology Laboratory at the National Institute of Standards and Technology, Gaithersburg, Maryland, USA. The authors would also like to acknowledge two anonymous referees whose comments helped improve this paper.
\section{Introduction} Several unification schemes merging gravity and the standard model of strong and electroweak interactions predict the existence of short-range forces with coupling strength of the order of Newtonian gravity \cite{Giudice}. Efforts to evidence a {\sl fifth force}, regardless of any concrete unification scheme, have been under way for several decades \cite{Fujii,Fischbach}, and there are compelling reasons to improve our limits especially in the largely unexplored submillimeter range. Constraints in both coupling and range for these interactions have been obtained with various experimental setups, including the recent configurations using a disk-shaped torsional balance parallel to a rotating flat surface \cite{Adelberger1,Adelberger2,Adelberger3,Adelberger4}, or micromechanical resonators in a parallel plane geometry \cite{Price1,Carugno,Price2,Kapitulnik1,Kapitulnik2,Kapitulnik3}. Due to the surge of activity in the study of Casimir forces, limits have also been given in the submicrometer range, based on the level of agreement between Casimir theory and experiment. However, unlike the case of experiments performed between bodies kept at larger distances, the use of the parallel plane geometry on such small length scales has proven to be challenging in terms of parallelism \cite{Bressi1,Bressi2,Bressi3}, and therefore the attention has been focused on the analysis of the residuals in the Casimir theory-experiment comparison involving the sphere-plane configuration. Dedicated efforts to obtain limits from sphere-plane Casimir experiments have involved the use of the so-called Proximity Force Approximation (PFA) \cite{Fischbach1,Decca2003,Decca2005,Klim,Decca2007,Decca2007bis,MostepanenkoJPA}, which allows one to map the force $F_{\rm sp}$ between a sphere of radius $R$ and a plane located at a distance $a$ from the sphere into the energy per unit area $E_{\rm pp}$ of the parallel plate configuration, namely $F_{\rm sp}(a)=2 \pi R E_{\rm pp}(a)$ \cite{Derjaguin}. This approximation is believed to be valid in the limit $a \ll R$ and to hold with a high degree of accuracy for forces between entities concentrated on the surfaces, such as electrostatic or Casimir forces between conductors \cite{Gies,Bordag,Krause}. Obviously, in order to test how well PFA approximates the exact force, one needs either to compute the interaction exactly or at least to assess reliable bounds. For the electrostatic sphere-plane interaction, the exact analytical result for the force is well-known and has a closed form \cite{Smythe}, such that deviations from PFA can be readily analyzed. For the Casimir sphere-plane interaction, the exact force has been computed only very recently, both for ideal \cite{Emig1,Emig2,Paulo2008} and real metallic plates \cite{Canaguier}. Available analytical and numerical results seem to indicate that, at least for zero temperature and within the plasma model used, deviations from PFA applied to the sphere-plane Casimir interaction are small, of the order of $0.1\%$ or larger, in recent Casimir experiments aiming to put limits on Yukawa interactions. It has been argued in \cite{ReplyPRARC} that the application of the PFA to forces acting between entities embedded in volumetric manifolds, such as gravitational forces or their putative short-range components, is in general {\sl invalid} and has to be carefully scrutinized in each specific configuration.
Based on this suggestion, a recent reanalysis of the PFA in the case of gravitational and Yukawian forces has been discussed in \cite{DeccaPFA}. The main conclusion of this reanalysis is that ``a confusion with different formulations of the PFA'' existed in the previous literature, and that ``care is required in the application of the PFA to gravitational forces''. This confusion is stated to originate from a specific form of the PFA used so far, to be contrasted with a more general formulation of the PFA. In \cite{DeccaPFA} it is also claimed that the difference between the two PFAs is negligible in the actual configuration used to give the allegedly best limits obtained in the 100 nm range \cite{Decca2007,Decca2007bis}. In this paper we further discuss the meaning of the PFA in the case of volumetric forces. We argue that the discrepancy between the two forms of the PFA is a significant source of error in the determination of bounds on the parameters of Yukawian forces from force residuals in the sphere-plane Casimir experiments performed so far, which used the PFA to model such non-Newtonian forces. We then show that the general form of the PFA discussed in \cite{DeccaPFA} is simply a different choice of the infinitesimal volume for integrating the force due to an extended object, and coincides with the exact result only in the case when one of the two surfaces is an infinite planar slab (or half-space). The level of approximation in using the two PFAs for Yukawa forces in the sphere-plane geometry is of the same order of magnitude as that of the Casimir theory-experiment comparison, which uses the PFA to compute the sphere-plane Casimir force (as already noticed in \cite{DeccaPFA}). Therefore, since such a comparison provides force {\sl residuals} that are in turn compared against the theory of Yukawa forces to obtain limits on its $\alpha-\lambda$ parameter space, the use of these {\sl subsequent} PFA approximations of comparable level of approximation provides a possible source of systematic error, not carefully accounted for so far. We also argue that other volumetric effects not directly related to the PFA, such as the finite size of the planar surface used in the actual experiments, may provide a source of systematic error not taken into account so far, which strongly affects the limits to power-law forces \cite{Buisseret2007, MostepanenkoJPA}, but should not be a major source of concern for limits to Yukawa forces. We believe that, considering the various complications related to the sphere-plane geometry, upgraded versions of parallel-plate experiments such as the ones discussed in \cite{Price1,Carugno,Price2,Kapitulnik1,Kapitulnik2,Kapitulnik3} could provide limits on Yukawian and power-law forces in the submicrometer range more immune to a set of systematic errors characteristic of the sphere-plane configuration. \section{Proximity Force Approximations and volumetric forces between extended objects} In order to introduce the notation and as a prelude to our discussion, we briefly summarize the results contained in \cite{DeccaPFA}.
The actual experimental configuration used in \cite{Decca2007} is not a parallel plate geometry, rather it is a sphere-plane geometry, and the PFA is used to map the force between a sphere and a plane $F_\mathrm{sp}$ into the energy per unit area of the parallel plate configuration $E_\mathrm{pp}$ \begin{equation} F_\mathrm{sp}(a)=2 \pi \bar{R} E_\mathrm{pp}(a) , \label{APPROXPFA} \end{equation} where $\bar{R}=\sqrt{R_x R_y}$ is the geometrical average of the principal radii of curvature of the spherical surface evaluated at its point of minimum distance from the plane. In the experiment reported in \cite{Decca2007}, the force is measured by looking at the frequency shift of a mechanical resonator, as customary in atomic force microscopy \cite{Giessbl}, and as first reported in the context of Casimir force measurements in \cite{Puppo}. The frequency shift is proportional to the gradient of the force, and therefore \begin{equation} \Delta \nu^2= \frac{1}{4 \pi^2 m} \frac{\partial F_\mathrm{sp}}{\partial a}=\frac{\bar{R}}{2 \pi m} \frac{\partial E_\mathrm{pp}}{\partial a}= \frac{\bar{R}}{2 \pi m}P_\mathrm{pp}, \end{equation} where $P_\mathrm{pp}$ is the plane-plane pressure, and $m$ is the mass of the resonator. The measurement of the frequency shift can then be mapped, via use of Eq. (\ref{APPROXPFA}), into the equivalent pressure exerted between two fictitious parallel plates mimicking the actual sphere-plane geometry. Within the validity of Eq. (\ref{APPROXPFA}), this is a valid assumption for the case of forces acting between surfaces, such as electrostatic forces between conductors or Casimir forces. A first sign that there can be issues with the PFA in dealing with volumetric forces, such as the hypothetical Yukawian forces of gravitational origin, is the fact that the exact formula for the Yukawa force between two infinite parallel slabs depends on the thicknesses of both slabs, which implies that the PFA formula applied to the volumetric Yukawa force in the sphere-slab configuration also depends on {\sl both} thicknesses (and on the sphere radius). However, the exact sphere-slab force obviously depends only on the slab thickness and on the sphere radius - it does not, and cannot, depend on the thickness of the {\sl metaphysical} slab introduced in the virtual mapping to the parallel geometry. Indeed, consider the Yukawa potential energy for two pointlike masses $m_1$ and $m_2$, located at positions ${\bf r}_1$ and ${\bf r}_2$ respectively, \begin{equation} U_{\rm Yu}({\bf r}_1, {\bf r}_2)=- \alpha G m_1 m_2 \; \frac{e^{-| {\bf r}_2- {\bf r}_1|/\lambda}}{| {\bf r}_2- {\bf r}_1|}, \end{equation} where, as usual, the strength of the Yukawa interaction is parameterized in terms of Newton's gravitational constant $G$ through a dimensionless quantity $\alpha$, and $\lambda$ is its range. Assuming that the Yukawa interaction is additive, once integrated over two infinite, homogeneous parallel slabs separated by a distance $a$, one derives the corresponding pressure $P_\mathrm{Yu}$, \begin{equation} P_\mathrm{Yu}(a) = - 2 \pi \alpha G \rho_1 \rho_2 \lambda^2 e^{-a/\lambda} (1-e^{-D_1/\lambda})(1-e^{-D_2/\lambda}) , \label{pressure} \end{equation} where $D_1$ and $D_2$ indicate the thicknesses of the two slabs, and $\rho_1$ and $\rho_2$ their densities. The exact Yukawa interaction in the sphere-slab geometry can be readily computed, assuming additivity.
The result is \cite{DeccaPFA} \begin{equation} F^\mathrm{exact}_{\mathrm{Yu}}(a) = -4 \pi^2 \alpha G \rho_1 \rho_2 \lambda^3 R e^{-a/\lambda} (1-e^{-D_1/\lambda})(1-\lambda/R+e^{-2R/\lambda}+ e^{-2R/\lambda} \lambda/R ). \label{YukExact} \end{equation} As we mentioned above, most recent experimental works on limits to extra-gravitational forces from sphere-plane Casimir measurements used the usual PFA. In this approximation, the Yukawa force between a homogeneous sphere and an infinite homogeneous slab of thickness $D_1$ is \begin{equation} F^\mathrm{PFA}_{\mathrm{Yu}}(a)= 2 \pi R P_{\rm Yu}(a) = -4 \pi^2 \alpha G \rho_1 \rho_2 \lambda^3 R e^{-a/\lambda}(1-e^{-D_1/\lambda})(1-e^{-D_2/\lambda}). \label{YukPFA} \end{equation} In this case, in order to map the actual sphere-plane configuration into a parallel plate geometry, one needs to consider a fictitious upper plate of thickness $D_2$ large enough, {\it i.e.} much larger than the explored Yukawa range ($D_2 \gg \lambda$). Again, this situation may appear disturbing to anyone who believes that any experiment-theory comparison should not rely on the introduction of arbitrary parameters not having a tangible, measurable counterpart in the concrete experimental setup. Clearly the PFA prediction Eq. (\ref{YukPFA}) fails to give the exact result Eq. (\ref{YukExact}) in the range of its supposed validity, $a \ll R$, and it is necessary to assume $\lambda \ll R, D_2$ in order for PFA to tend to the exact result. But in this limit the volumetric nature of the interaction is lost, since the atoms in the ``bulk'' no longer contribute appreciably to the total force. Likewise, PFA fails to give the exact Newtonian interaction between the sphere and the slab, even in the range of its supposed validity $a \ll R$. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{pfayuk.fig1a.eps} \includegraphics[width=0.45\textwidth]{pfayuk.fig1b.eps} \end{center} \caption{Schematics of the integration for the sphere-slab configuration according to the usual slicing along horizontal infinitesimal slabs used for the exact calculation (left), and slicing using vertical columns (right) as used in the EPFA calculation. As long as the surface facing the sphere is an infinite plane (and therefore translational invariance of the potential $V$ due to the plane is satisfied), these merely correspond to two different and equivalent choices for the integration volume.} \label{pfayuk.fig1} \end{figure} The authors of \cite{DeccaPFA} consider the most general formulation of the PFA \cite{Derjaguin1934}, which we will call the ``exact'' PFA formula (EPFA) to distinguish it from the usual PFA described above. In the EPFA the force between two compact bodies is expressed as the sum of forces between plane-parallel surface elements $dx dy$. The $z$ component of the force is \begin{equation} F_z^\mathrm{EPFA}(a) = \int \int_\sigma dx dy \; P(x,y,z(x,y)) , \end{equation} where $P(x,y,z(x,y))$ is the pressure between two parallel plates at a local distance $z(x,y)=z_2(x,y)-z_1(x,y)>0$ (where $z_i(x,y)$ describe the surfaces of the two bodies), $a$ is the distance between them (smallest value of $z(x,y)$), and $\sigma$ is the part of the $(x,y)$-plane where both surfaces are defined (see Fig. 1).
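To make this prescription concrete for a volumetric interaction, the column slicing can be carried out numerically: for a sphere of radius $R$ at distance $a$ from the slab, the column above the point $(x,y)$, at radial coordinate $r=\sqrt{x^2+y^2}\le R$, faces the slab across a local gap $a+R-\sqrt{R^2-r^2}$ and has height $2\sqrt{R^2-r^2}$, which plays the role of the upper-plate thickness in the pressure of Eq.~(\ref{pressure}). The following minimal sketch (in Python, with illustrative parameters in $\mu$m and the prefactor $\alpha G \rho_1\rho_2$ set to unity; these are not the parameters of any specific experiment) anticipates the closed form quoted below, and also returns the ratio $\eta$ to the usual PFA introduced afterwards: \begin{verbatim}
import numpy as np
from scipy.integrate import quad

# illustrative parameters (microns), with alpha*G*rho1*rho2 = 1
R, a, D1, lam = 150.0, 0.16, 3.5, 1.0

def P_pp(gap, t1, t2):
    # plane-plane Yukawa pressure of Eq. (pressure), up to the prefactor
    return (-2.0*np.pi*lam**2*np.exp(-gap/lam)
            *(1.0 - np.exp(-t1/lam))*(1.0 - np.exp(-t2/lam)))

def integrand(r):
    w = np.sqrt(R**2 - r**2)       # half-height of the column at radius r
    return 2.0*np.pi*r*P_pp(a + R - w, D1, 2.0*w)

F_epfa, _ = quad(integrand, 0.0, R)   # EPFA force via column slicing

# closed forms: usual PFA (with D2 -> infinity) and the exact result
F_pfa = -4.0*np.pi**2*lam**3*R*np.exp(-a/lam)*(1.0 - np.exp(-D1/lam))
F_exact = F_pfa*(1.0 - lam/R + np.exp(-2.0*R/lam)*(1.0 + lam/R))

print(F_epfa/F_exact)  # -> 1: column slicing reproduces the exact force
print(F_epfa/F_pfa)    # -> eta < 1 for D2 >> lambda
\end{verbatim}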
The EPFA prediction for the Yukawa force between a sphere and an infinite planar slab is \cite{DeccaPFA} \begin{equation} F^\mathrm{EPFA}_{\mathrm{Yu}}(a) = -4 \pi^2 \alpha G \rho_1 \rho_2 \lambda^3 R e^{-a/\lambda} (1-e^{-D_1/\lambda})(1-\lambda/R+e^{-2R/\lambda}+ e^{-2R/\lambda} \lambda/R ) , \label{YukEPFA} \end{equation} which coincides with the exact result of Eq.(\ref{YukExact}). In Fig.~2 we plot the ratio $\eta$ \begin{gather} \eta =\frac{F^{\rm EPFA}_{\rm Yu}}{F^{\rm PFA}_{\rm Yu}}= \frac{1-\lambda/R+e^{-2R/\lambda}+e^{-2R/\lambda} \lambda/R}{1-e^{-D_2/\lambda}} \label{equeta} \end{gather} as a function of the Yukawa parameter $\lambda$, for different values of the sphere radius in the limit $D_2 \rightarrow \infty$ (left plot), and for different values of $D_2$ keeping the sphere radius fixed at $R=150 \mu$m (right plot). Note that $\eta$ is independent of the sphere-slab separation $a$. When $D_2 \gg \lambda$, as is certainly the case in the left plot, PFA always {\sl overestimates} the EPFA result, i.e. $\eta<1$ (similarly to how the PFA overestimates the exact Casimir force in the sphere-plane geometry). Note also that when $D_2 \gg \lambda$ the atoms in the ``bulk'' of the two bodies do not contribute appreciably to the Yukawa force, thereby making it effectively of a surface character (i.e., non-volumetric), as in the case of Casimir or electrostatic forces. Instead, for values of $D_2 \simeq 10 \lambda$ (or smaller) the volumetric character of the Yukawa interaction is manifest, and $\eta$ is no longer less than one (right plot). In this case the PFA applied to Yukawa forces is invalid, and in the data analysis one should at least declare the value of $D_2$ assumed in assessing the limits. Overestimating the Yukawa force leads to stronger limits for the coupling constant $\alpha$ for a given $\lambda$ with respect to the proper use of the EPFA. Considering the relatively small margins of improvement reported recently (see for instance Fig.~3 in \cite{Decca2007}), a systematic shift due to the use of the PFA instead of the EPFA may lead to significant changes of the exclusion region in the $\alpha-\lambda$ plane. Both plots show that the use of PFA instead of EPFA is unreliable especially in the region near or above $\lambda$=100 nm. Unfortunately, the region between 100 nm and a few $\mu$m is also the one {\sl directly} explored with the Casimir force experiments, since actual measurements take place in this range of distance between the involved objects. It is known that the best limits on Yukawa interactions can be set for $\lambda$ of the order of the actually explored distance between the two bodies $\simeq a$, and the extrapolation of the measurements to smaller $\lambda$ is affected by the fast growth of the bounds as $\alpha \propto \exp(a/\lambda)$. The fact that the EPFA Eq. (\ref{YukEPFA}) gives the correct exact results for the Yukawa (and also gravitational) force in the sphere-plane configuration is in fact a trivial consequence of the additivity of these interactions and of the translational invariance of the infinite plane surface, the shape of the second surface (in this case a sphere) being irrelevant. Indeed, the EPFA is just a {\sl different} parameterization of the exact formula of addition of forces between particles. On one hand, the exact interaction energy between a test mass and an infinite slab (or half-space) depends only on the normal coordinate $z$, being independent of the in-plane coordinates $x, y$ by symmetry.
Hence, the potential due to the infinite slab is $V(x,y,z)=V(z)$. For a body ({\it e.g.} a sphere) of mass density $\rho(x,y,z)$, additivity implies that the total interaction energy can be obtained as \begin{equation} U_{\mathrm{body}}= \int dx dy dz \; \rho(x,y,z) \; V(z), \label{body} \end{equation} and similarly for the force. Since $V$ depends only on $z$, it is convenient to compute the integral by adding contributions from slices at constant $z$, {\it i.e.} considering infinitesimal slices in $z$: one first evaluates the potential energy of a slice of the body parallel to the plane at a distance $z$ \begin{equation} W(z)=\int dx dy \; \rho(x,y,z) \; V(z), \end{equation} and then, integrating the quantity $W(z)$ along $z$, one obtains the exact expression for the body-plane interaction $U_{\mathrm{body}}$ (see Fig. 1 left). On the other hand, the EPFA states that the interaction energy between the body and the plane is obtained from slicing the body into cylinders perpendicular to the plane (see Fig. 1 right) and integrating the cylinder-plane interaction energy along the portion of the body that faces the plane (i.e., one must integrate over the surface $\sigma$ on the plane that is the normal shadow of the body). The potential energy of this column of the body centered around $(x,y)$ is \begin{equation} G(x,y)=\int dz \; \rho(x,y,z) \; V(z), \end{equation} and then, integrating the quantity $G(x,y)$ along $x,y$, one gets the EPFA expression for the body-plane interaction $U_{\mathrm{body}}$ (see Fig. 1 right). Again, since the interaction is additive and it does not depend on the $x,y$ coordinates, this integral is exactly equal to the previous one: we are simply integrating the same function $V(z)$, weighted by the density, over the body using a different parameterization of the volumetric integral. The same holds for the force between any body (not necessarily a sphere) and an infinite slab (see also \cite{Emig}). \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{pfayuk.fig2a.eps} \includegraphics[width=0.45\textwidth]{pfayuk.fig2b.eps} \end{center} \caption{(Color online) Comparison between the EPFA and the PFA versus the range of the Yukawian force for a homogeneous sphere above an infinite homogeneous slab. (Left) Plot of $\eta$ versus $\lambda$ in the case of spheres of radius $R=$150, 100, and 50$\mu$m, in the limit of $D_2 \to \infty$. (Right) Same plot but for a sphere of radius $150 \mu$m and finite values of the metaphysical parameter $D_2$ representing the thickness of the upper slab introduced in \cite{DeccaPFA} for the comparison of the actual sphere-plane geometry to the parallel plate case using the PFA.} \label{pfayuk.fig2} \end{figure} However, when none of the two bodies is an infinite slab (e.g., two spheres of radii $R_i$, mass densities $\rho_i(x,y,z)$, separated by a distance $a$ along the $z$ direction), translational invariance along $x-y$ is obviously broken, and the EPFA does not coincide with the exact formula. The exact interaction energy $U(x,y,z)$ between a source body and a test mass at position $(x,y,z)$ can be easily computed. For instance, for a spherical source body $U$ depends only on $r=(x^2+y^2+z^2)^{1/2}$, with the origin of coordinates at the center of the sphere. Integrating $U(x,y,z)$ over the volume of the second body one gets the exact result. For example, for the two-spheres case the gravitational energy depends only on the center-to-center distance and scales as $1/a$. Let us compare this known exact result with the EPFA prediction.
One slices each body into cylindrical slabs, calculates the slab-slab interaction energy $U_{ss}(z)$ that depends on the local distance $z$ between the slabs, and finally adds up these contributions over the shadow of one of the bodies on the other one. It is clear that EPFA cannot give the exact result since $U_{ss}$ is translationally invariant but the exact $U$ is not, and EPFA fails to predict the exact $1/a$ dependence. Therefore the EPFA formula for the energy and force of additive two-body interactions is a trivial reparameterization of the exact result when one of the bodies is an infinite plane (or slab). For other geometries, EPFA fails to give the correct result as a consequence of the broken translational invariance. In particular, this is the case of a sphere above a finite-size slab, as in actual experiments, especially those involving slabs of typical sizes comparable to those of the sphere. In the experimental configuration used in \cite{Decca2007} various substrates are present on the sphere and on the slab. Imagining that the layered slab is infinitely extended, the Yukawa potential at a distance $z$ from the top layer due to the slab is % \begin{eqnarray} V^{\Delta}_{\rm Yu}(z) &= & - 2 \pi \alpha G \lambda^2 e^{-z/\lambda} \left[ \rho''_1 e^{- \Delta''_1/\lambda} (e^{\Delta''_1/\lambda}-1) + \rho'_1 e^{- (\Delta''_1+\Delta'_1)/\lambda} (e^{\Delta'_1/\lambda}-1) + \right. \nonumber \\ && ~~~~~~~~~~~~~~~~~~~~~~~~ \left. \rho_1 e^{- (\Delta''_1+\Delta'_1+D_1)/\lambda} (e^{D_1/\lambda}-1) \right] . \label{testmass} \end{eqnarray} % Here $\Delta''_1$ and $\rho''_1$ are the thickness and density of the top layer, $\Delta'_1$ and $\rho'_1$ are the thickness and density of the middle layer, and $D_1$ and $\rho_1$ are the thickness and density of the lower part of the layered slab. The bracketed factor in Eq.(\ref{testmass}) can be considered a sort of {\sl effective} density of the planar surface, in which the various densities are weighted by their thicknesses in units of $\lambda$ (indeed reducing to the thickness-weighted sum $(\rho''_1\Delta''_1+\rho'_1\Delta'_1+\rho_1 D_1)/\lambda$ in the case $\Delta'_1, \Delta''_1, D_1 \ll \lambda$). We can compute the exact expression for the Yukawa interaction energy between the layered infinite slab and a layered sphere of mass density $\rho_2(x,y,z)$ using Eq. (\ref{body}). As discussed above, this exact computation will trivially coincide with the EPFA expression. Let $R$ and $\rho_2$ be the radius and density of the sphere, $\Delta'_2$ and $\rho'_2$ the width and density of the inner layer on the sphere, $\Delta''_2$ and $\rho''_2$ the width and density of the outer layer, and $a$ the distance from the outer layer of the sphere to the top of the layered slab. The total EPFA Yukawa interaction energy can be written as a sum of contributions from each layer on the sphere, $U^{\Delta, \rm EPFA}_{\rm Yu}= U^{\Delta}_2 + U'^{\Delta}_2 + U''^{\Delta}_2$, where \begin{eqnarray} U^{\Delta}_2 &=& 2 \pi \rho_2 \int_0^R r^2 dr \int_0^{\pi} d\theta \sin\theta \, V^{\Delta}_{\rm Yu}(z), \nonumber \\ U'^{\Delta}_2 &=& 2 \pi \rho'_2 \int_R^{R+\Delta'_2} r^2 dr \int_0^{\pi} d\theta \sin\theta \, V^{\Delta}_{\rm Yu}(z), \nonumber \\ U''^{\Delta}_2 &=& 2 \pi \rho''_2 \int_{R+\Delta'_2}^{R+\Delta'_2+\Delta''_2} r^2 dr \int_0^{\pi} d\theta \sin\theta \, V^{\Delta}_{\rm Yu}(z). \nonumber \end{eqnarray} Note that, instead of using horizontal or vertical slicings for the volume integration as done in Fig. 1 for the non-layered case, we use spherical slicings more appropriate for the layered sphere case.
Here $z=a+\Delta''_2+\Delta'_2+R - r \cos\theta$ denotes the vertical position of any infinitesimal mass element inside the layered sphere. Computing these integrals we obtain the EPFA expression for the Yukawa interaction energy between the layered infinite slab and the layered sphere % \begin{eqnarray} && U_{\rm Yu}^{\Delta, {\rm EPFA}}(a) = - 4 \pi^2 \alpha G \lambda^4 R \; e^{-(a+\Delta''_2+\Delta'_2)/\lambda} \nonumber \\ && \times \left\{ \rho''_1 e^{- \Delta''_1/\lambda} (e^{\Delta''_1/\lambda}-1) + \rho'_1 e^{- (\Delta''_1+\Delta'_1)/\lambda} (e^{\Delta'_1/\lambda}-1) + \rho_1 e^{- (\Delta''_1+\Delta'_1+D_1)/\lambda} (e^{D_1/\lambda}-1) \right\} \nonumber \\ && \times \left\{ \rho_2 \left[ 1-\frac{\lambda}{R} + e^{-2 R/\lambda} + \frac{\lambda}{R} e^{-2 R/\lambda} \right] \right. + \nonumber \\ && ~~~~~ \rho'_2 \left[ \left( 1-\frac{\lambda}{R} \right) (e^{\Delta'_2/\lambda}-1) + \frac{\Delta'_2}{R} e^{\Delta'_2/\lambda} + e^{-2 R/\lambda} \left( \left(1- \frac{\lambda}{R} \right) (1-e^{-\Delta'_2/\lambda}) + \frac{\Delta'_2}{R} e^{-\Delta'_2/\lambda} \right) \right] + \nonumber \\ && ~~~~~ \rho''_2 \left[ \left( 1-\frac{\lambda-\Delta'_2}{R} \right) e^{\Delta'_2/\lambda} (e^{\Delta''_2/\lambda}-1) + \frac{\Delta''_2}{R} e^{(\Delta'_2+\Delta''_2)/\lambda} + \right. \nonumber \\ && ~~~~~~~~~ \left. \left. e^{-2 R/\lambda} \left( \left( 1+\frac{\lambda+\Delta'_2}{R} \right) e^{-\Delta'_2/\lambda} (e^{-\Delta''_2/\lambda}-1) + \frac{\Delta''_2}{R} e^{-(\Delta'_2+\Delta''_2)/\lambda} \right) \right] \right\} . \label{UdeltaEPFA} \end{eqnarray} The EPFA expression for the corresponding force is $F^{\Delta,\rm EPFA}_{\rm Yu} = - \partial U_{\rm Yu}^{\Delta,\rm EPFA} / \partial a= \lambda^{-1} U_{\rm Yu}^{\Delta, \rm EPFA}$. Note that when there are no layers on the slab ($\Delta'_1=\Delta''_1=0$) and no layers on the sphere ($\Delta'_2=\Delta''_2=0$), then the expression for the force that follows from Eq. (\ref{UdeltaEPFA}) is identical to Eq. (\ref{YukExact}). \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{pfayuk.fig3a.eps} \includegraphics[width=0.45\textwidth]{pfayuk.fig3b.eps} \end{center} \caption{(Color online) Ratio $\eta_{\Delta}/\eta$ for the comparison of the EPFA and PFA schemes for the multilayered and corresponding homogeneous situation (obtainable by using Eqs. (14) and (15) with $\Delta_2'=\Delta_2''=0$, $\rho_2'=\rho_2''=0$, and replacing $R$ with $R+\Delta_2'+\Delta_2''$). (Left) Ratio $\eta_{\Delta}/\eta$ versus the range of the Yukawian force for different values of the radius of the inner sphere. The parameters for the layered sphere are $\Delta_2'=10$ nm, $\Delta_2''=180$ nm, $\rho_2$=4.1 g/cm${}^3$, $\rho_2'$=7.14 g/cm${}^3$, and $\rho_2''$=19.28 g/cm${}^3$. The parameters for the layered slab are $D_1=3.5 \mu$m, $\Delta_1'=10$ nm, $\Delta_1''=210$ nm, $\rho_1$=2.33 g/cm${}^3$, $\rho_1'$=7.14 g/cm${}^3$, and $\rho_1''$=19.28 g/cm${}^3$. In the evaluation of the PFA force $F^{\Delta, \rm PFA}_{\rm Yu}$, a value of the metaphysical parameter $D_2 = 10^8\mu$m is used. 
(Right) Ratio $\eta_{\Delta}/\eta$ versus the range of the Yukawian force for different values of the metaphysical parameter $D_2$ and a radius of curvature of $R=150 \mu$m.} \label{pfayuk.fig3} \end{figure} On the other hand, the PFA expression for the force between the layered infinite slab and the layered sphere is $F^{\Delta, \rm PFA}_{\rm Yu} = 2 \pi R P^{\Delta}_{\rm Yu}$, where $P^{\Delta}_{\rm Yu}$ is the pressure between two parallel layered slabs, one identical to the previous slab, and a metaphysical slab of width $D_2$ and density $\rho_2$, covered by two layers of widths and densities identical to the ones of the layered sphere above. Using Eq. (\ref{pressure}) for the various pairs of layers in the different slabs, we calculate the PFA expression for the layered sphere-slab force \begin{eqnarray} && F^{\Delta, \rm PFA}_{\rm Yu}(a) = - 4 \pi^2 \alpha G \lambda^3 R e^{-a/\lambda} \times \nonumber \\ && \left\{ \rho_1 (1-e^{-D_1/\lambda}) \left[ \rho_2 e^{-(\Delta'_1+\Delta''_1+\Delta'_2+\Delta''_2)/\lambda} (1-e^{-D_2/\lambda}) + \rho'_2 e^{-(\Delta'_1+\Delta''_1+\Delta''_2)/\lambda} (1-e^{-\Delta'_2/\lambda}) + \right. \right. \nonumber \\ && ~~~~~~~~~~~~~~~~~~~~~~~ \left. \left. \rho''_2 e^{-(\Delta'_1+\Delta''_1)/\lambda} (1-e^{-\Delta''_2/\lambda}) \right] + \right. \nonumber \\ && ~~ \rho'_1 (1-e^{-\Delta'_1/\lambda}) \left[ \rho_2 e^{-(\Delta''_1+\Delta'_2+\Delta''_2)/\lambda} (1-e^{-D_2/\lambda}) + \rho'_2 e^{-(\Delta''_1+\Delta''_2)/\lambda} (1-e^{-\Delta'_2/\lambda}) + \rho''_2 e^{-\Delta''_1/\lambda} (1-e^{-\Delta''_2/\lambda}) \right] + \nonumber \\ && ~ \left. \rho''_1 (1-e^{-\Delta''_1/\lambda}) \left[ \rho_2 e^{-(\Delta'_2+\Delta''_2)/\lambda} (1-e^{-D_2/\lambda}) + \rho'_2 e^{-\Delta''_2/\lambda} (1-e^{-\Delta'_2/\lambda}) + \rho''_2 (1-e^{-\Delta''_2/\lambda}) \right] \right\} . \label{UdeltaPFA} \end{eqnarray} To assess the effect of the multilayered structures, we have evaluated the ratio $\eta_{\Delta}/\eta$ with $\eta_{\Delta}= F^{\Delta, \rm EPFA}_{\rm Yu} / F^{\Delta, \rm PFA}_{\rm Yu}$ and $\eta = F^{\rm EPFA}_{\rm Yu} / F^{\rm PFA}_{\rm Yu}$, as a function of $\lambda$ for three radii of curvature of the sphere, assuming for the PFA calculation a value of the metaphysical parameter $D_2= 10^8\mu$m (see Fig. 3 left). The effect of the multilayers is to slightly flatten $\eta_{\Delta}$ as compared to $\eta$ in the homogeneous case (Fig. 2 left). The dependence of the same ratio on $\lambda$ for a fixed value of $R$ and different values of the metaphysical parameter $D_2$ is shown in Fig. 3 right. Note that $\eta_{\Delta}$ is independent of the sphere-slab separation, just as $\eta$ is. As discussed in the Introduction, the PFA used in all recent sphere-plane Casimir experiments for the Casimir theory-experiment comparison is expected to approximate the exact Casimir force within 0.1 $\%$. This expectation comes from recent analytical approaches to the sphere-plane Casimir interaction \cite{Emig1,Emig2,Paulo2008,Canaguier} that, although formally exact, require the evaluation of the determinant of an infinite-dimensional matrix, which becomes a numerically demanding task, especially in the PFA regime, $a \ll R$, where larger and larger matrices are needed for convergence.
Numerical computations of the exact, zero-temperature sphere-plane Casimir force using parameters for metallic spheres ($R=10 \mu$m and optical response modeled by the simple plasma model with plasma wavelength $\lambda_p=136$ nm) show that deviations from PFA can be as large as $20\%$ for the smallest $a/R \approx 0.5$ studied numerically (see Fig.~2 of \cite{Canaguier}). An extrapolation to smaller values of $a/R$ using a cubic polynomial fit of the numerical data is also provided in \cite{Canaguier}. Assuming it can be used for the recent Casimir sphere-plane experiment \cite{Decca2007} (with a radius of curvature $R=151.3 \mu$m), one obtains a deviation from PFA of the order of $0.1\%$ at the smallest value of $a/R \approx 0.001$ reached in the experiment ($a_{\rm min} \approx 160$ nm). Since the limits to non-Newtonian forces are obtained using the {\sl residuals} in the Casimir theory-experiment comparison, in order to meaningfully replace the exact formula of the Yukawa force with its PFA approximation, the level of accuracy between these two should therefore be a small fraction, for instance 10 $\%$, of the accuracy with which the Casimir force is controlled by using PFA rather than the exact expression for the sphere-plane Casimir force. If this condition is not fulfilled, the derived limits could be off by a large correction, of order 100$\%$. However, targeting a $10 \%$ accuracy level with respect to the Casimir theory-experiment accuracy implies deviations from $\eta=1$ of $0.01 \%$, which can be obtained, as seen in Fig. 2, only in the range of $\lambda$ below 100 nm. The presence of substrates with different densities tends to mitigate the discrepancy between the EPFA and the PFA, as seen from the curves in Fig. 3, but there is an irreducible systematic factor even at small $\lambda$. Indeed, in the limit $\lambda \rightarrow 0$, we have $\eta_{\Delta} \approx 1+(\Delta_2'+\Delta_2'')/R$, which, in the case of the experiment reported in \cite{Decca2007}, is equal to 1.00126, {\it i.e.} a correction already equal to 0.126 $\%$. All these systematic sources of uncertainty could be even larger in experiments for which the radius of curvature of the sphere is not adequately optimized. Indeed, the use of spheres with smaller radius of curvature is affected more by this effect, as emphasized in the left plot of Fig. 2 and in Fig. 3 for the cases of $R=50 \mu$m and $100 \mu$m. Moreover, for small spheres the PFA approximation to the Casimir force itself is less accurate. The use of spheres with large radius of curvature is beneficial to reduce these sources of error in the experiment-theory comparison, but may face experimental issues recently identified in \cite{Kim1} and interpreted as due to deviations from an ideal spherical geometry (as proposed in \cite{DeccaComment}) and/or as a consequence of a larger sensitivity to electrostatic patch effects \cite{Kim1,ReplyPRARC}. \section{Breaking the x-y translational invariance} In the previous section we have seen that translational invariance is crucial to make the EPFA reproduce the exact result, but in actual experiments such an invariance is obviously satisfied only approximately, leading to an additional source of systematic error related to the finite size of the surfaces, as we discuss here for both power-law and Yukawa forces.
Instead of tackling the more involved problem of the gravitational force between a sphere and a finite-size slab, we consider here the simpler case of the gravitational force acting on a point-like test mass $m_2$ above the center of a disk of thickness $D_1$, radius $R_d$, and mass density $\rho_1$. We obtain \begin{eqnarray} F_g (z_2) & =& -2 \pi G \rho_1 m_2 \int_{-D_1}^0 dz_1 \int_0^{R_d} r dr \frac{z_2-z_1}{[r^2+(z_2-z_1)^2]^{3/2}} \nonumber \\ & = & -2\pi G \rho_1 m_2 \{ D_1 +{(R_d^2+z_2^2)}^{1/2}-{[R_d^2+(z_2+D_1)^2]}^{1/2} \} , \end{eqnarray} where $z_2$ is the distance between the test mass and the disk. The force becomes independent of $R_d$ only in the limit $R_d \gg D_1, z_2$ (in which case it is also independent of $z_2$). In order to assess the different forces acting on the various parts of a sphere in the presence of a disk of finite radius, we evaluate the ratio between the forces exerted at the point of the sphere closest to the plane ($z_2=a$) and at the farthest point ($z_2=a+2R$). This quantity is simple to evaluate yet provides a practical figure of merit for how much the extended geometry of the sphere is affected by the finite size of the disk. This gives a ratio $\xi_g=F_g(z_2=a)/F_g(z_2=a+2R)$: \begin{equation} \xi_g= \frac{\beta+[\gamma^2+\kappa^2]^{1/2}-[(\gamma+\beta)^2+\kappa^2]^{1/2}} {\beta+[(2+\gamma)^2+\kappa^2]^{1/2}-[(2+\gamma+\beta)^2+\kappa^2]^{1/2}}, \end{equation} where we have defined $\beta \equiv D_1/R$, $\gamma \equiv a/R$, and $\kappa \equiv R_d/R$. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{pfayuk.fig4a.eps} \includegraphics[width=0.45\textwidth]{pfayuk.fig4b.eps} \end{center} \caption{(Color online) (Left) Plot of the ratio $\xi_N$ between the force $F(r) \propto r^{-N}$ exerted at the closest point of the sphere from the disk and the force exerted at the farthest point, versus the radius of the disk $R_d$ constituting the planar surface of finite size, for four different exponents $N$. We assume a radius of the sphere equal to $R=150 \mu$m, a sphere-plane distance of $a$=100 nm, and that the center of the sphere is right above the center of the disk. Only the case of $N=2$ (Newtonian gravitation) gives a ratio of unity in the large $R_d$ limit. (Right) Plot of the ratio $\xi_N$ versus the exponent of the power force law $N$ for various radii of the disk $R_d$. The cross indicates the ratio $\xi=1$ for the case of an exponent $N=2$ and an infinite plane; it is provided as a help to the eye to better show the convergence in the case of $N=2$ of $\xi_N$ to unity with disks of progressively larger radii.} \label{pfayuk.fig4} \end{figure} This is a large correction, of the order of 300$\%$, if one considers a disk of radius equal to twice the radius of the sphere ($R_d=2 R$), a geometrical setting not dissimilar from the one used in \cite{Decca2007}. Since the experiment in \cite{Decca2007} is anyway insensitive to the gravitational force, like any experiment performed in the micrometer range, this is not a major practical concern. However, in the case of a more generic power law such as $F_N=-K \rho_1 m_2/r^N$ (written here per unit volume of the disk material) we get: \begin{equation} F_N(z)= \frac{2 \pi K \rho_1 m_2}{(N-1)(N-3)} \{(z+D_1)^{3-N}-z^{3-N}+(R_d^2+z^2)^{(3-N)/2}-[R_d^2+(z+D_1)^2]^{(3-N)/2} \} , \end{equation} apart from the cases of $N=1$ and $N=3$ in which logarithmic integrations occur.
In these two cases one obtains \begin{equation} F_{N=1}(z)= \frac{\pi K \rho_1 m_2}{2} \{(z^2+R_d^2) \ln(z^2+R_d^2)-[(z+D_1)^2+R_d^2] \ln[(z+D_1)^2+R_d^2]+(z+D_1)^2 \ln (z+D_1)^2-z^2 \ln z^2 \} , \end{equation} which diverges logarithmically with $R_d$ in the limit $R_d \rightarrow \infty$, and \begin{equation} F_{N=3}(z)= - \frac{\pi K \rho_1 m_2}{2} \ln \left[\frac{(z^2+R_d^2)(z+D_1)^2}{z^2 [(z+D_1)^2+R_d^2]}\right] , \end{equation} which in the limit $R_d \rightarrow \infty$ behaves as $2\ln(1 + D_1/z)$. Notice that independence of the force from the distance to an infinite plane is characteristic only of forces scaling with the inverse square of the distance, such as the gravitational force, which makes the integration of the force trivially geometrical. In the general case $N \neq 2$, even in the situation of a sphere in front of an infinite plane, different points of the sphere will feel different forces, with the farthest point feeling a smaller (larger) force for a power-law exponent larger (smaller) than 2, as a consequence of the interplay between the solid angle and the distance scaling of the force, which makes the $N=2$ case peculiar, as expressed by Gauss's law. This is shown in Fig. 4 (left) for the cases of $N=1,2,3,$ and $4$, with the ratio $\xi_N$ between the forces evaluated at the bottom and at the top of the sphere, and in Fig. 4 (right) by showing the same ratio versus the power-law exponent for different values of the radius of the disk. Therefore, when considering power-law forces such as the ones discussed for instance around Eq. 2 in \cite{MostepanenkoJPA}, one should carefully take into account the finite size of the plane in deriving limits to these forces \cite{Note}. \begin{figure}[t] \begin{center} \includegraphics[width=0.5\textwidth]{pfayuk.fig5.eps} \end{center} \caption{(Color online) Plot of the ratio $\xi_{\rm Yu}$ between the Yukawa force exerted at the closest point of the sphere from the disk and the force exerted at the farthest point, versus the radius of the disk $R_d$ constituting the planar surface of finite size, for three different values of the Yukawa range $\lambda$. As before, we assume a radius of the sphere equal to $R=150 \mu$m, a sphere-plane distance of $a$=100 nm, and the center of the sphere right above the center of the disk. The values of $\lambda$ are chosen to be 100 $\mu$m, 500 $\mu$m, and 1000 $\mu$m. In the last case the force may be considered long-range, and the farthest point on the sphere contributes almost as much as the closest one. For $\lambda$ progressively smaller than the radius of the sphere the ratio $\xi_{\rm Yu}$ gets larger and larger and the dependence on $R_d$ is not appreciable.} \label{pfayuk.fig5} \end{figure} Finally, we discuss the effect of the finite size of the planar surface in the case of Yukawa forces.
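Before doing so, we note that the ratio $\xi_N$ above is easy to check by direct numerical integration over the disk, with no need to treat the exponents $N=1$ and $N=3$ separately. A minimal sketch follows (in Python, with illustrative parameters not tied to any specific experiment; the constant prefactor $-2\pi K\rho_1 m_2$ is omitted since it cancels in the ratio): \begin{verbatim}
from scipy.integrate import dblquad

# illustrative parameters (microns): sphere radius, gap, disk radius, slab
R, a, Rd, D1 = 150.0, 0.1, 300.0, 3.5

def F(z, N):
    # z-component of the disk force for the 1/r^N power law,
    # up to the constant prefactor -2*pi*K*rho1*m2
    val, _ = dblquad(
        lambda r, z1: r*(z - z1)/(r**2 + (z - z1)**2)**((N + 1)/2),
        -D1, 0.0,          # outer integral over z1
        lambda z1: 0.0,    # inner integral over r, lower limit
        lambda z1: Rd)     # inner integral over r, upper limit
    return val

for N in (1, 2, 3, 4):
    # xi_N tends to unity as Rd -> infinity only for N=2
    print(N, F(a, N)/F(a + 2.0*R, N))
\end{verbatim}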
The potential energy of a pointlike particle of mass $m_2$ located at height $z$ along the axis of a planar disk surface of radius $R_d$, density $\rho_1$, and thickness $D_1$, is \begin{eqnarray} U_{\rm Yu}(z) & = & - \alpha G \rho_1 m_2 \int_0^{R_d} dr ~ r \int_0^{2\pi} d\phi \int_{-D_1}^0 dz_1 \frac{e^{-\sqrt{(z-z_1)^2+r^2}/\lambda}}{[(z-z_1)^2+r^2]^{1/2}} \nonumber \\ &=& -2 \pi \alpha G \rho_1 m_2 \lambda^2 \left[e^{-z/\lambda}(1-e^{-D_1/\lambda})-e^{-\sqrt{z^2+R_d^2}/\lambda}+e^{-\sqrt{(z+D_1)^2+R_d^2}/\lambda}\right] , \end{eqnarray} and the related force is \begin{eqnarray} F_{\rm Yu}(z) & = & -\frac{\partial U_{\rm Yu}}{\partial z}= -2 \pi \alpha G \rho_1 m_2 \lambda e^{-z/\lambda} (1-e^{-D_1/\lambda}) \nonumber \\ & & \times \left\{1 - \frac{z}{R_d} \frac{e^{-[1+(z/R_d)^2]^{1/2}R_d/\lambda+z/\lambda}}{[1+(z/R_d)^2]^{1/2}(1-e^{-D_1/\lambda})} + \frac{z+D_1}{R_d} \frac{e^{-[1+((z+D_1)/R_d)^2]^{1/2}R_d/\lambda+z/\lambda}}{[1+((z+D_1)/R_d)^2]^{1/2}(1-e^{-D_1/\lambda})} \right\} , \end{eqnarray} \noindent where the finite-size terms appear as corrections to the infinite-plane formula, which originates from the first term alone. As before, we introduce as a figure of merit the ratio $\xi_{\rm Yu}=F_{\rm Yu}(z=a)/F_{\rm Yu}(z=a+2R)$. This ratio is very large for realistic configurations, expressing the short-range nature of the force. Indeed, even in the infinite plane limit we have a ratio of $\xi_{\rm Yu}=e^{2R/\lambda} \simeq e^{3000}$ in the case of a sphere of radius $R=150 \mu$m at $\lambda=0.1 \mu$m. The dependence on the disk radius becomes significant only at values of $\lambda$ comparable to the radius of the sphere, as shown in Fig. 5. The presence of suppression factors of the form $e^{2R/\lambda}$ for the farthest point makes the Yukawa force very insensitive to the finite size of the disk (for previous considerations, see also \cite{Bordag1998}). \section{Conclusions} Our analysis, although confirming some of the results already discussed in \cite{DeccaPFA}, draws quite different conclusions from this common ground. In particular, we argue that the application of the PFA to volumetric forces is not rigorous if considered in its original formulation, applied so far to compute the sphere-plane Yukawa interactions in most of the sphere-plane Casimir force experiments reported in \cite{Decca2003,Decca2005,Klim,Decca2007,Decca2007bis,MostepanenkoJPA}. Its application to volumetric forces is instead of a trivial nature if considered in the exact formulation EPFA discussed in \cite{DeccaPFA}, since the latter is identical to the exact calculation, differing only in the choice of the infinitesimal integration volume. We have shown that the usual PFA is an invalid approximation to compute volumetric forces. In particular, it does not reproduce, in its usual range of validity, known exact expressions for gravitational and Yukawa interactions in non-translationally invariant geometries, such as sphere-finite size slab or sphere-sphere configurations. The ``exact'' PFA is the exact expression for any additive two-body interaction when one of the bodies is translationally invariant, as is the case for a sphere in front of an infinite homogeneous slab or half-space. For non-translationally invariant geometries, EPFA also fails to give the exact result for volumetric interactions, even in the regime of parameters where it is assumed to be valid, therefore also being an invalid approximation for volumetric forces.
\section{Conclusions} Our analysis, although confirming some of the results already discussed in \cite{DeccaPFA}, draws quite different conclusions from the common outcome. In particular, we argue that the application of the PFA to volumetric forces is not rigorous if considered in its original formulation, applied so far to compute the sphere-plane Yukawa interactions in most of the sphere-plane Casimir force experiments reported in \cite{Decca2003,Decca2005,Klim,Decca2007,Decca2007bis,MostepanenkoJPA}. Its application to volumetric forces is instead of trivial nature if considered in the exact formulation EPFA discussed in \cite{DeccaPFA}, since the latter is identical to the exact calculation, differing only in the choice of the infinitesimal integration volume. We have shown that the usual PFA is an invalid approximation to compute volumetric forces. In particular, it does not reproduce, in its usual range of validity, exact known expressions for gravitational and Yukawa interactions in non-translationally invariant geometries, such as sphere-finite-size-slab or sphere-sphere configurations. The ``exact'' PFA is the exact expression for any additive two-body interaction when one of the bodies is translationally invariant, as is the case for a sphere in front of an infinite homogeneous slab or half-space. For non-translationally invariant geometries, EPFA also fails to give the exact result for volumetric interactions, even in the regime of parameters where it is assumed to be valid, therefore also being an invalid approximation for volumetric forces. The difference between the two formulations of the PFA is shown to affect significantly the limits obtained so far, unless one considers a regime of Yukawa range so small, $\lambda \ll R, D_2$, that the approximation of a surface force ({\it i.e.} neglecting the Yukawa force due to the atoms in the ``bulk'' of the two bodies, therefore manifestly of non-volumetric character) holds. By using the PFA the Yukawa force is overestimated, and therefore the limits in the $\alpha-\lambda$ plane become more stringent than by using the exact force estimated via the EPFA. Moreover, for the parameters of the experiment supposed to provide the strongest limits on Yukawian interactions \cite{Decca2007}, the PFA approximates the exact (EPFA) Yukawa force with an accuracy of the same order of magnitude as that with which the corresponding PFA is expected to approximate the exact Casimir force. On one hand, since the Casimir theory-experiment comparison provides force {\sl residuals} that are in turn compared against the theory of Yukawa forces to obtain limits on them, the use of these {\sl subsequent} PFA approximations of comparable level of approximation provides a possible source of systematic error, not carefully accounted for so far. On the other hand, both the Casimir and Yukawa PFAs overestimate the respective exact forces, and therefore the systematic source of error introduced by their use might be less critical than expected on first principles. In any case, it is important to assess both sources of error as much as possible or, alternatively, to use exact expressions for the Yukawa and Casimir forces. Furthermore, a systematic source of error present even in the EPFA scheme for finite-size planar surfaces has been discussed and shown to be significant only for power-law forces. Our analysis suggests that future limits (or reanalyses of experiments already performed) on Yukawian forces should rely upon the use of the exact expression for the Yukawa force, as performed in \cite{Bordag1998,Masuda}. It is also worth pointing out that all the above considerations hold provided that the simple scenario of additive forces is assumed, which is valid in general for weak forces among atoms in the low-density limit, such that correlations leading to fluctuating forces are negligible. However, the hypothetical Yukawa forces should be located in a regime of coupling constants intermediate between the gravitational (additive) force and the Casimir (non-additive) force. It is not clear a priori whether the Yukawa force is weak enough to make the additivity assumption reliable, and this should be kept in mind in future broad-range searches for these forces. Finally, considering the complications emerging in the sphere-plane geometry due to the presence of previously unidentified systematics, such as the sensitivity to deviations from the ideal spherical geometry \cite{Kim1,Iannuzzi,DeccaComment,ReplyPRARC} and possible effects of variability of the contact potential with distance \cite{Speake,Stipe,Kim1,Iannuzzi,Kim2,Kimpatch}, it may be worthwhile to focus future Casimir experiments that set bounds on extra-gravitational forces on the actual parallel-plane geometry, without the drawbacks of a virtual mapping from the sphere-plane geometry made explicit in this paper.
The stronger force signal expected for the same distance between the two surfaces, the reduced sensitivity to distance-dependent contact potentials due to image charges, the absence of deviations from a uniform radius of curvature, the existence of exact mode-summation techniques to compute the Casimir force, the possibility of controlling parallelism using recently developed technology \cite{PPtechnology}, and the possibility of compensating off-line for the lack of parallelism by using the PFA as discussed in \cite{Bordagpp}, all point in the direction of continuing this class of experiments in the actual parallel-plane configuration, extending below the 10 $\mu$m range the results of the experiments described in \cite{Adelberger1,Adelberger2,Adelberger3,Adelberger4,Price1,Carugno, Price2,Kapitulnik1,Kapitulnik2,Kapitulnik3}.
\section{Introduction} In the standard $SU(2) \times U(1)$ gauge model for the electroweak interactions, the Yukawa couplings of the fermions to the unique scalar $SU(2)$ doublet of the model are completely arbitrary---as a matter of fact, those couplings make up almost all the free parameters of the Standard Model (SM). As a consequence of this fact, the SM leaves the quark masses and mixings unpredicted; although the model accounts for the existence of quark masses and mixings, their actual values remain arbitrary in the context of the SM. Several more complex models have tried to overcome this shortcoming of the SM. However, most of those models are in reality {\it Ans\"atze}: instead of {\em deriving} the structure of the Yukawa couplings from some underlying symmetry of a self-consistent gauge theory, they simply {\em assume} the Yukawa couplings to have some aesthetically appealing pattern or texture. A model should instead rely on some flavour (family) symmetry. If the flavour symmetry is Abelian, then all its irreducible representations (irreps) are one-dimensional and the symmetry can at most force some Yukawa couplings to vanish.\footnote{The converse of this statement also holds: any pattern of vanishing Yukawa couplings may be enforced by an Abelian flavour symmetry with an adequate spectrum of scalars~\cite{zeros}.} A non-Abelian flavour symmetry can also force non-vanishing Yukawa couplings to be interrelated among themselves through definite Clebsch--Gordan factors. Since there are three families of quarks, a most desirable non-Abelian flavour symmetry ought to have three-dimensional irreps, in order to achieve full unification of the three generations and, hence, to achieve a minimal number of independent Yukawa couplings. The smallest discrete group with a three-dimensional irrep is $A_4$,\footnote{A useful list of all the discrete non-Abelian groups with 31 or less group elements is provided in~\cite{kephart}.} the group of the even permutations of four objects. The group $A_4$ has 12 group elements, one triplet irrep $\bf{3}$ and three inequivalent singlet irreps $\bf{1}$, $\bf{1^\prime}$, and $\bf{1^{\prime\prime}}$ (the $\bf{1}$ is the trivial representation, the irreps $\bf{1^\prime}$ and $\bf{1^{\prime\prime}}$ are complex-conjugate of each other). This group has, in the last few years, been used in many models for the lepton masses and mixings~\cite{leptons}. It has also been used in models for the quark sector~\cite{quarks}, or for both quarks and leptons simultaneously~\cite{both}. In this paper we suggest a model for the Yukawa couplings of the quarks based on an $A_4$ family symmetry. Our model has three scalar gauge-$SU(2)$ doublets united in a $\bf{3}$ of $A_4$. The left-handed-quark $SU(2)$ doublets are also united in a $\bf{3}$ of $A_4$. In each electric-charge sector, the three right-handed quarks are in a $\bf{1} \oplus \bf{1^\prime} \oplus \bf{1^{\prime\prime}}$ of $A_4$. Thus, our model achieves a high degree of simplicity and, even, uniqueness, because it treats the three families of quarks in the same way and it treats both electric-charge sectors in the same way. Furthermore, our model does not require $A_4$ to be broken anywhere in the Lagrangian, not even through soft terms---the breaking of $A_4$ is solely spontaneous. This, too, adds to the simplicity of the model. 
Surprisingly, our model is able to predict the mixing parameter $\left| V_{ub} / V_{cb} \right|$ (where $V$ is the quark mixing, or CKM, matrix) fully right: it predicts $\left| V_{ub} / V_{cb} \right| \approx 0.088$, in agreement with the usual averages of the various phenomenological analyses. On the other hand, our model also leads to a null violation of the discrete symmetry $CP$ in the charged gauge interactions of the quarks; thus, the observed $CP$ violation, for instance in $K^0$--$\bar K^0$ mixing, or in $B^0_d$ decays, should in the context of our model be explained through scalar-mediated interactions, including flavour-changing neutral Yukawa interactions. The plan of our paper is the following. In section~\ref{yukawas} we derive the form of the quark Yukawa-coupling matrices. In section~\ref{potential} we study the scalar potential and the ensuing vacuum. In section~\ref{matrices} we write down the quark mass matrices and demonstrate that in our model there is no $CP$ violation in the CKM matrix. In section~\ref{chi2} we explain the method that we used in the numerical analysis and give some fits and results. A short summary is provided in section~\ref{summary}. \section{The Yukawa couplings} \label{yukawas} The gauge symmetry of the model is $SU(2) \times U(1)$. There are three scalar $SU(2)$ doublets $\phi_j$ ($j = 1, 2, 3$) with hypercharge $1/2$. They form a triplet $\bf{3}$ of the flavour symmetry $A_4$. There are three left-handed-quark $SU(2)$ doublets $Q_{Lj}$ with hypercharge $1/6$. They are united in another $\bf{3}$ of $A_4$. There are three right-handed-quark $SU(2)$ singlets $n_{Rj}$ with hypercharge $-1/3$ and three right-handed-quark $SU(2)$ singlets $p_{Rj}$ with hypercharge $2/3$. The $n_{R1}$ and $p_{R1}$ are $A_4$-invariant (they are $\bf{1}$'s of $A_4$), the $n_{R2}$ and $p_{R2}$ are $\bf{1^\prime}$'s of $A_4$, and the $n_{R3}$ and $p_{R3}$ are $\bf{1^{\prime\prime}}$'s of $A_4$. This means that there exist two non-commuting transformations $T_1$ and $T_2$, \begin{eqnarray} T_1: & & \left\{ \begin{array}{l} \phi_1 \to \phi_2 \to \phi_3 \to \phi_1, \\ Q_{L1} \to Q_{L2} \to Q_{L3} \to Q_{L1}, \\ n_{R2} \to \omega n_{R2}, \ n_{R3} \to \omega^2 n_{R3}, \\ p_{R2} \to \omega p_{R2}, \ p_{R3} \to \omega^2 p_{R3}, \end{array} \right. \label{t1} \\ T_2: & & \left\{ \begin{array}{l} \phi_2 \to - \phi_2, \ \phi_3 \to - \phi_3, \\ Q_{L2} \to - Q_{L2}, \ Q_{L3} \to - Q_{L3}, \end{array} \right. \label{t2} \end{eqnarray} under which the Lagrangian is invariant. In equation~(\ref{t1}), $\omega \equiv \exp{\left( 2 i \pi / 3 \right)} = \sqrt[3]{1} = \left( - 1 + i\, \sqrt{3} \right) / 2$. The $\bf{3}$ is a real representation of $A_4$. Indeed, the scalar $SU(2)$ doublets with hypercharge $-1/2$ \begin{equation} \tilde \phi_j \equiv i \tau_2 \phi_j^\ast \end{equation} transform under $T_1$ and $T_2$ in exactly the same way as the $\phi_j$, as is obvious from equations~(\ref{t1}) and~(\ref{t2}). 
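As a quick sanity check of the group structure (a minimal numerical sketch of our own, not part of the derivation), one may verify in the triplet representation that $T_1$ and $T_2$ satisfy $T_1^3 = T_2^2 = 1$ while failing to commute:
\begin{verbatim}
import numpy as np

# Triplet representation of the generators in eqs. (t1)-(t2):
# T1 cyclically permutes (phi1, phi2, phi3); T2 flips the sign of phi2, phi3.
T1 = np.array([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]])
T2 = np.diag([1, -1, -1])

assert np.array_equal(np.linalg.matrix_power(T1, 3), np.eye(3, dtype=int))
assert np.array_equal(T2 @ T2, np.eye(3, dtype=int))
print(np.array_equal(T1 @ T2, T2 @ T1))    # False: the generators do not commute
print(abs(np.exp(2j * np.pi / 3)**3 - 1))  # ~0: omega is a cube root of unity
\end{verbatim}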
Given this spectrum of fields and their transformation laws under both the gauge symmetry $SU(2) \times U(1)$ and the flavour symmetry $A_4$, the quark Yukawa Lagrangian is \begin{eqnarray} {\cal L}_{\rm Yukawa} &=& - y_1 \left( \overline{Q_{L1}} \phi_1 + \overline{Q_{L2}} \phi_2 + \overline{Q_{L3}} \phi_3 \right) n_{R1} \nonumber\\ & & - y_2 \left( \overline{Q_{L1}} \phi_1 + \omega \overline{Q_{L2}} \phi_2 + \omega^2 \overline{Q_{L3}} \phi_3 \right) n_{R2} \nonumber\\ & & - y_3 \left( \overline{Q_{L1}} \phi_1 + \omega^2 \overline{Q_{L2}} \phi_2 + \omega \overline{Q_{L3}} \phi_3 \right) n_{R3} \nonumber\\ & & - y_4 \left( \overline{Q_{L1}} \tilde \phi_1 + \overline{Q_{L2}} \tilde \phi_2 + \overline{Q_{L3}} \tilde \phi_3 \right) p_{R1} \nonumber\\ & & - y_5 \left( \overline{Q_{L1}} \tilde \phi_1 + \omega \overline{Q_{L2}} \tilde \phi_2 + \omega^2 \overline{Q_{L3}} \tilde \phi_3 \right) p_{R2} \nonumber\\ & & - y_6 \left( \overline{Q_{L1}} \tilde \phi_1 + \omega^2 \overline{Q_{L2}} \tilde \phi_2 + \omega \overline{Q_{L3}} \tilde \phi_3 \right) p_{R3} + {\rm H.c.}, \end{eqnarray} the six Yukawa couplings $y_{1\mbox{--}6}$ being in general complex. The scalar doublets \begin{equation} \phi_j = \left( \begin{array}{c} \phi_j^+ \\ \phi_j^0 \end{array} \right), \quad \tilde \phi_j = \left( \begin{array}{c} {\phi_j^0}^\ast \\ - \phi_j^- \end{array} \right) \end{equation} are assumed to have vacuum expectation values (VEVs) \begin{equation} \left\langle 0 \left| \phi_1^0 \right| 0 \right\rangle = v_1 e^{- i \alpha / 2}, \quad \left\langle 0 \left| \phi_2^0 \right| 0 \right\rangle = v_2 e^{i \beta / 2}, \quad \left\langle 0 \left| \phi_3^0 \right| 0 \right\rangle = v_3, \end{equation} where $v_{1,2,3}$ are, without loss of generality, real and non-negative. Since $\overline{Q_{Lj}} = \left( \overline{p_{Lj}},\ \overline{n_{Lj}} \right)$, the quark mass matrices, defined through \begin{equation} {\cal L}_{\rm mass} = - \left( \begin{array}{ccc} \overline{n_{L1}} & \overline{n_{L2}} & \overline{n_{L3}} \end{array} \right) M_n \left( \begin{array}{c} n_{R1} \\ n_{R2} \\ n_{R3} \end{array} \right) - \left( \begin{array}{ccc} \overline{p_{L1}} & \overline{p_{L2}} & \overline{p_{L3}} \end{array} \right) M_p \left( \begin{array}{c} p_{R1} \\ p_{R2} \\ p_{R3} \end{array} \right) + {\rm H.c.}, \end{equation} are \begin{eqnarray} \label{M_n} M_n &=& D \left( \begin{array}{ccc} y_1 v_1 & y_2 v_1 & y_3 v_1 \\ y_1 v_2 & \omega y_2 v_2 & \omega^2 y_3 v_2 \\ y_1 v_3 & \omega^2 y_2 v_3 & \omega y_3 v_3 \end{array} \right), \\ \label{M_p} M_p &=& D^\ast \left( \begin{array}{ccc} y_4 v_1 & y_5 v_1 & y_6 v_1 \\ y_4 v_2 & \omega y_5 v_2 & \omega^2 y_6 v_2 \\ y_4 v_3 & \omega^2 y_5 v_3 & \omega y_6 v_3 \end{array} \right), \end{eqnarray} where \begin{equation} D \equiv \mbox{diag} \left( e^{- i \alpha / 2},\ e^{i \beta / 2},\ 1 \right). \end{equation} The quark mass matrices in equations~(\ref{M_n}) and~(\ref{M_p}) are identical to the mass matrix for the charged leptons in the original $A_4$ model of Ma and Rajasekaran (MR)~\cite{raja}. Indeed, we have adopted in our model the same $A_4$ representations for the quarks as MR did in their paper for the leptons. The crucial difference between the two models is that, while MR have assumed a vacuum state characterized by $v_1 = v_2 = v_3$ and $\alpha = \beta = 0$, we shall demonstrate the existence of, and employ, a vacuum state with different features. 
Let the unitary matrices $U_{L,R}^{n,p}$ satisfy \begin{eqnarray} {U_L^n}^\dagger \left( \begin{array}{ccc} y_1 v_1 & y_2 v_1 & y_3 v_1 \\ y_1 v_2 & \omega y_2 v_2 & \omega^2 y_3 v_2 \\ y_1 v_3 & \omega^2 y_2 v_3 & \omega y_3 v_3 \end{array} \right) U_R^n &=& {\rm diag} \left( m_d, m_s, m_b \right), \\ {U_L^p}^\dagger \left( \begin{array}{ccc} y_4 v_1 & y_5 v_1 & y_6 v_1 \\ y_4 v_2 & \omega y_5 v_2 & \omega^2 y_6 v_2 \\ y_4 v_3 & \omega^2 y_5 v_3 & \omega y_6 v_3 \end{array} \right) U_R^p &=& {\rm diag} \left( m_u, m_c, m_t \right). \end{eqnarray} Then, the quark mixing (CKM) matrix is \begin{equation} V = {U_L^p}^\dagger D^2\, U_L^n. \end{equation} One may absorb the phases of $y_{1,2,3}$ in the overall phases of the three rows of $U_R^n$, and similarly absorb the phases of $y_{4,5,6}$ in the matrix $U_R^p$. Those six phases are therefore unphysical. Thus, this model for the quark masses and mixings has ten parameters: the two phases $\alpha$ and $\beta$ in the diagonal matrix $D^2$, and the eight real quantities \begin{equation} |y_1| v_3,\ \left| \frac{y_2}{y_1} \right|,\ \left| \frac{y_3}{y_1} \right|,\ |y_4| v_3,\ \left| \frac{y_5}{y_4} \right|,\ \left| \frac{y_6}{y_4} \right|,\ \frac{v_2}{v_3},\ \frac{v_1}{v_3}. \end{equation} As we shall see in the next section, the $A_4$-symmetric scalar potential is so constrained that these ten parameters reduce to only eight. \section{The scalar potential} \label{potential} The most general renormalizable scalar potential invariant under the symmetry $A_4$ is \begin{eqnarray} V &=& \mu \left( \phi_1^\dagger \phi_1 + \phi_2^\dagger \phi_2 + \phi_3^\dagger \phi_3 \right) + \lambda_1 \left( \phi_1^\dagger \phi_1 + \phi_2^\dagger \phi_2 + \phi_3^\dagger \phi_3 \right)^2 \nonumber\\ & & + \lambda_2 \left[ \left( \phi_1^\dagger \phi_1 \right) \left( \phi_2^\dagger \phi_2 \right) + \left( \phi_2^\dagger \phi_2 \right) \left( \phi_3^\dagger \phi_3 \right) + \left( \phi_3^\dagger \phi_3 \right) \left( \phi_1^\dagger \phi_1 \right) \right] \nonumber\\ & & + \left( \lambda_3 - \lambda_2 \right) \left[ \left( \phi_1^\dagger \phi_2 \right) \left( \phi_2^\dagger \phi_1 \right) + \left( \phi_2^\dagger \phi_3 \right) \left( \phi_3^\dagger \phi_2 \right) + \left( \phi_3^\dagger \phi_1 \right) \left( \phi_1^\dagger \phi_3 \right) \right] \nonumber\\ & & + \frac{\lambda_4}{2} \left\{ e^{i \epsilon} \left[ \left( \phi_1^\dagger \phi_2 \right)^2 + \left( \phi_2^\dagger \phi_3 \right)^2 + \left( \phi_3^\dagger \phi_1 \right)^2 \right] + \mbox{H.c.} \right\}, \label{thepot} \end{eqnarray} where $\mu$ and $\lambda_{1\mbox{--}4}$ are real. The phase $\epsilon$ is arbitrary. We define $v \equiv \sqrt{v_1^2 + v_2^2 + v_3^2}$ and \begin{eqnarray} \theta_1 &\equiv& \epsilon - \beta, \\ \theta_2 &\equiv& \epsilon - \alpha, \\ \theta_3 &\equiv& \epsilon + \alpha + \beta. \end{eqnarray} Then, \begin{eqnarray} V_0 \equiv \left\langle 0 \left| V \right| 0 \right\rangle &=& \mu v^2 + \lambda_1 v^4 + \lambda_3 \left( v_1^2 v_2^2 + v_2^2 v_3^2 + v_3^2 v_1^2 \right) \nonumber\\ & & + \lambda_4 \left( v_1^2 v_2^2 \cos{\theta_3} + v_2^2 v_3^2 \cos{\theta_1} + v_3^2 v_1^2 \cos{\theta_2} \right). 
\end{eqnarray} The equations for vacuum stability are \begin{eqnarray} \frac{\partial V_0}{\partial v_1^2} = 0 &=& \mu + 2 \lambda_1 v^2 + \lambda_3 \left( v_2^2 + v_3^2 \right) + \lambda_4 \left( v_2^2 \cos{\theta_3} + v_3^2 \cos{\theta_2} \right), \label{v1} \\ \frac{\partial V_0}{\partial v_2^2} = 0 &=& \mu + 2 \lambda_1 v^2 + \lambda_3 \left( v_1^2 + v_3^2 \right) + \lambda_4 \left( v_1^2 \cos{\theta_3} + v_3^2 \cos{\theta_1} \right), \label{v2} \\ \frac{\partial V_0}{\partial v_3^2} = 0 &=& \mu + 2 \lambda_1 v^2 + \lambda_3 \left( v_1^2 + v_2^2 \right) + \lambda_4 \left( v_1^2 \cos{\theta_2} + v_2^2 \cos{\theta_1} \right), \label{v3} \\ \frac{\partial V_0}{\partial \alpha} = 0 &=& \lambda_4 v_1^2 \left( - v_2^2 \sin{\theta_3} + v_3^2 \sin{\theta_2} \right), \label{alpha} \\ \frac{\partial V_0}{\partial \beta} = 0 &=& \lambda_4 v_2^2 \left( - v_1^2 \sin{\theta_3} + v_3^2 \sin{\theta_1} \right). \label{beta} \end{eqnarray} We reject possible solutions to these equations in which one of the VEVs vanishes, and also solutions in which $v_1 = v_2 = v_3$.\footnote{Notice that Ma and Rajasekaran opted precisely for $v_1 = v_2 = v_3$ in their model~\cite{raja}, which is otherwise similar to ours.} Then, equations~(\ref{alpha}) and~(\ref{beta}) yield \begin{equation} \sin{\theta_j} = k v_j^2, \label{sines} \end{equation} where $k$ is a real constant with dimension $M^{-2}$. Subtracting equations~(\ref{v2}) and~(\ref{v3}) from equation~(\ref{v1}), one obtains \begin{equation} \begin{array}{rcl} \left( v_2^2 - v_1^2 \right) \lambda_3 + \left[ \left( v_2^2 - v_1^2 \right) \cos{\theta_3} + v_3^2 \left( \cos{\theta_2} - \cos{\theta_1} \right) \right] \lambda_4 &=& 0, \\*[2mm] \left( v_3^2 - v_1^2 \right) \lambda_3 + \left[ \left( v_3^2 - v_1^2 \right) \cos{\theta_2} + v_2^2 \left( \cos{\theta_3} - \cos{\theta_1} \right) \right] \lambda_4 &=& 0. \end{array} \label{syst} \end{equation} Equations~(\ref{syst}) constitute a Cramer system for $\lambda_3$ and $\lambda_4$. The Cramer determinant must vanish and one hence obtains \begin{equation} \sum_{j=1}^3 a_j \cos{\theta_j} = 0, \label{sum} \end{equation} where \begin{eqnarray} a_1 &\equiv& v_3^4 - v_2^4 + v_1^2 v_2^2 - v_1^2 v_3^2, \label{a1} \\ a_2 &\equiv& v_1^4 - v_3^4 + v_2^2 v_3^2 - v_1^2 v_2^2, \label{a2} \\ a_3 &\equiv& v_2^4 - v_1^4 + v_1^2 v_3^2 - v_2^2 v_3^2 \label{a3} \end{eqnarray} satisfy \begin{equation} \sum_{j=1}^3 a_j = 0. \label{sum2} \end{equation} Equation~(\ref{sum}) together with equation~(\ref{sines}) imply \begin{eqnarray} 0 &=& - \lambda \left( a_1 \sqrt{1 - k^2 v_1^4},\ a_2 \sqrt{1 - k^2 v_2^4},\ a_3 \sqrt{1 - k^2 v_3^4} \right) \nonumber\\ &=& 4 k^2 \left[ 4 k^2 v_1^4 v_2^4 v_3^4 - \lambda \left( v_1^2, v_2^2, v_3^2 \right) \right] \left( v_1^2 - v_2^2 \right)^2 \left( v_1^2 - v_3^2 \right)^2 \left( v_2^2 - v_3^2 \right)^2, \label{fin} \end{eqnarray} where~\cite{book} \begin{equation} \lambda \left( a, b, c \right) \equiv - a^4 - b^4 - c^4 + 2 a^2 b^2 + 2 a^2 c^2 + 2 b^2 c^2. \end{equation} Equation~(\ref{fin}) has five solutions: \begin{enumerate} \item $k = 0$, \item $k^2 = \lambda \left( v_1^2, v_2^2, v_3^2 \right) / \left( 4 v_1^4 v_2^4 v_3^4 \right)$, \item $v_1 = v_2$, \item $v_1 = v_3$, \item $v_2 = v_3$. \end{enumerate} It is easy to explore in detail solutions 1\ and~2\ and to show that they require $\lambda_3 = \pm \lambda_4$ (solution 1\ furthermore needs $\epsilon = 0$). 
Thus, those solutions require a non-trivial constraint on the parameters of the potential, and that constraint is in general unstable under renormalization. Those solutions should therefore be discarded. The remaining solutions 3--5 are equivalent, since the three scalar doublets $\phi_j$ form a triplet of $A_4$. We shall use for definiteness solution 3: $v_1 = v_2$ and $\theta_1 = \theta_2$, i.e.~$\alpha = \beta$. Equations~(\ref{v1})--(\ref{beta}) then reduce to only three equations, \begin{eqnarray} 0 &=& \mu + 2 \lambda_1 \left( 2 v_1^2 + v_3^2 \right) + \lambda_3 \left( v_1^2 + v_3^2 \right) + \lambda_4 \left[ v_1^2 \cos{\left( \epsilon + 2 \alpha \right)} + v_3^2 \cos{\left( \epsilon - \alpha \right)} \right], \\ 0 &=& \mu + 2 \lambda_1 \left( 2 v_1^2 + v_3^2 \right) + 2 \lambda_3 v_1^2 + 2 \lambda_4 v_1^2 \cos{\left( \epsilon - \alpha \right)}, \\ 0 &=& v_1^2 \sin{\left( \epsilon + 2 \alpha \right)} - v_3^2 \sin{\left( \epsilon - \alpha \right)}, \end{eqnarray} which determine the three quantities $v_1$, $v_3$, and $\alpha$. \section{The mass matrices and $CP$ conservation} \label{matrices} We saw in the previous section that, just as we had advertised at the end of section~\ref{yukawas}, consideration of the most general $A_4$-invariant scalar potential actually reduces the ten parameters of our model to only eight, since $v_2 = v_1$ and $\beta = \alpha$. The quark mass matrices of our model are therefore \begin{eqnarray} M_n &=& \mbox{diag} \left( e^{- i \alpha / 2}, e^{i \alpha / 2}, 1 \right) \left( \begin{array}{ccc} a & b & c \\ a & \omega b & \omega^2 c \\ r a & \omega^2 r b & \omega r c \end{array} \right), \label{mn} \\ M_p &=& \mbox{diag} \left( e^{i \alpha / 2}, e^{- i \alpha / 2}, 1 \right) \left( \begin{array}{ccc} f & g & h \\ f & \omega g & \omega^2 h \\ r f & \omega^2 r g & \omega r h \end{array} \right), \label{mp} \end{eqnarray} where $r \equiv v_3 / v_1$ and $a$, $b$, $c$, $f$, $g$, and $h$ are real and positive. Then, \begin{eqnarray} H_n \equiv M_n M_n^\dagger &=& \left( \begin{array}{ccc} x & y^\ast e^{- i \alpha} & r y e^{- i \alpha / 2} \\ y e^{i \alpha} & x & r y^\ast e^{i \alpha / 2} \\ r y^\ast e^{i \alpha / 2} & r y e^{- i \alpha / 2} & r^2 x \end{array} \right), \\ H_p \equiv M_p M_p^\dagger &=& \left( \begin{array}{ccc} z & w^\ast e^{i \alpha} & r w e^{i \alpha / 2} \\ w e^{- i \alpha} & z & r w^\ast e^{- i \alpha / 2} \\ r w^\ast e^{- i \alpha / 2} & r w e^{i \alpha / 2} & r^2 z \end{array} \right), \end{eqnarray} where $x \equiv a^2 + b^2 + c^2$ and $z \equiv f^2 + g^2 + h^2$ are real, while $y \equiv a^2 + \omega b^2 + \omega^2 c^2$ and $w \equiv f^2 + \omega g^2 + \omega^2 h^2$ are complex. Now, computing the commutator of $H_p$ and $H_n$ one finds that it is of the form \begin{equation} \left[ H_p, H_n \right] = \left( \begin{array}{ccc} - n & 0 & - m \\ 0 & n & - m^\ast \\ m^\ast & m & 0 \end{array} \right), \end{equation} hence $\det \left[ H_p, H_n \right] = 0$. Therefore, in this model there is no $CP$ violation in the quark mixing matrix, i.e.~the Jarlskog observable $J$~\cite{jarlskog} vanishes. \section{Numerical procedure and results} \label{chi2} We have performed a global $\chi^2$~analysis of the quark mass matrices given in the previous section---equations~(\ref{mn}) and~(\ref{mp})---by employing the downhill simplex method~\cite{downhill}.
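Before turning to the fit, the $CP$-conservation result of the previous section is easy to confirm numerically. The short sketch below (our own illustration, with arbitrary parameter values) builds $M_n$ and $M_p$ from equations~(\ref{mn}) and~(\ref{mp}) and evaluates $\det \left[ H_p, H_n \right]$:
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)
rng = np.random.default_rng(0)

def a4_mass_matrix(y1, y2, y3, r, alpha, sign):
    # Eqs. (mn)/(mp): sign=+1 gives the phase pattern of M_n,
    # sign=-1 that of M_p.
    D = np.diag([np.exp(-1j * sign * alpha / 2),
                 np.exp(1j * sign * alpha / 2), 1.0])
    A = np.array([[y1,     y2,             y3],
                  [y1,     w * y2,         w**2 * y3],
                  [r * y1, w**2 * r * y2,  w * r * y3]])
    return D @ A

# Random positive parameters: the vanishing of det[H_p, H_n] is generic.
a, b, c, f, g, h, r, alpha = rng.uniform(0.1, 2.0, 8)
M_n = a4_mass_matrix(a, b, c, r, alpha, +1)
M_p = a4_mass_matrix(f, g, h, r, alpha, -1)
H_n = M_n @ M_n.conj().T
H_p = M_p @ M_p.conj().T
comm = H_p @ H_n - H_n @ H_p
print(abs(np.linalg.det(comm)))  # vanishes up to rounding: J = 0
\end{verbatim}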
Table~\ref{input} specifies in its first two columns the observable quantities $O_i$ in the form \begin{equation} O_i = \bar{O}_i \pm \sigma_i, \end{equation} where $\bar O_i$ is the experimental mean value of $O_i$ and $\sigma_i$ is the square root of its variance. The index $i = 1, \ldots, 9$ labels the nine observables given in table~\ref{input}. \begin{table}[t] \begin{center} \begin{tabular}{c|c|c|c} \hline\hline Observable & Experimental value & Model prediction & Pull \\ \hline $m_{d}$~[MeV] & $5 \pm 2$ & $4.977$ & $-1.2 \times 10^{-2}$ \\ $m_{s}$~[MeV] & $95 \pm 25$ & $90.545$ & $-1.8 \times 10^{-1}$ \\ $m_{b}$~[MeV] & $4200 \pm 70$ & $4200.79$ & $+1.1 \times 10^{-2}$ \\[1ex] $m_{u}$~[MeV] & $2.25 \pm 0.75$ & $2.250$ & $+1.9 \times 10^{-5}$ \\ $m_{c}$~[MeV] & $1250 \pm 90$ & $1250.498$ & $+5.5 \times 10^{-3}$ \\ $m_{t}$~[GeV] & $172.5 \pm 2.7$ & $172.497$ & $-1.2 \times 10^{-3}$ \\[1ex] $\sin \theta_{12}$ & $0.2243 \pm 0.0016$ & $0.22431$ & $+8.1 \times 10^{-3}$ \\ $\sin \theta_{23}$ & $0.0413 \pm 0.0015$ & $0.04139$ & $+6.0 \times 10^{-2}$ \\ $\sin \theta_{13}$ & $0.0037 \pm 0.0005$ & $0.003627$ & $-1.5 \times 10^{-1}$ \\ \hline\hline \end{tabular} \end{center} \caption{ Experimental data and result of our best fit. The experimental data (average values and error bars) used in our numerical analysis are given in the second column. The data on the quark masses have been taken from~\cite{masses}. The light-quark masses are renormalized at a common scale $\mu \approx$~2~GeV. The charm- and bottom-quark masses are running masses in the $\overline{\mathrm{MS}}$ scheme. The data on the quark mixing angles have been taken from~\cite{mixingangles}. The third column displays the values $P_i$ predicted by our model when the values of its parameters are those in equations~(\ref{param}). The fourth column shows the number of standard deviations from the mean values, $\left( P_i - \bar{O}_i \right) / \sigma_i$, computed using the data from the second column. The value $\chi^2 = 0.057$ is the sum of the squares of the numbers in the fourth column and is dominated by the pulls of $m_s$ and $\sin{\theta_{13}}$.} \label{input} \end{table} Writing $\mathbf{x}$ for the set of the eight parameters of our model ($a$, $b$, $c$, $f$, $g$, $h$, $r$, and $\alpha$), and $P_i \left( \mathbf{x} \right)$ for the resulting predictions for each of the observables, one constructs the $\chi^{2}$ function \begin{equation} \label{chisquare} \chi^{2} \left( \mathbf{x} \right) = \sum_{i=1}^{9} \left[ \frac{P_i \left( \mathbf{x} \right) - \bar{O}_i} {\sigma_i} \right]^2. \end{equation} The global minimum of $\chi^2$ represents the best possible fit of the model predictions to the experimental data. We found an excellent fit, with $\chi^2=0.057$, of our model to the nine input data specified in table~\ref{input}. The input parameters of the fit are \begin{equation} \begin{array}{rcl} a &=& 40.75189\,\textrm{MeV}, \\ b &=& 87.78761\,\textrm{MeV}, \\ c &=& 2.347665\,\textrm{MeV}, \\ f &=& 3941.127\,\textrm{MeV}, \\ g &=& 515.0460\,\textrm{MeV}, \\ h &=& 1.060808\,\textrm{MeV}, \\ r &=& 43.37746, \\ \alpha &=& 0.2251660\ \textrm{radians}. \end{array} \label{param} \end{equation} Other details of the fit are given in the third and fourth columns of table~\ref{input}. Notice that the observables in table~\ref{input} do not include the phase $\delta$ of the quark mixing matrix. Indeed, since we already know that in our model that matrix is real, we are of course unable to fit for any $\delta \neq 0, \pi$. 
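For readers who wish to reproduce these numbers, the sketch below (our own; the singular-value-decomposition conventions spelled out in the comments are our assumptions) evaluates the quark masses and the moduli of the CKM elements at the best-fit point of equations~(\ref{param}):
\begin{verbatim}
import numpy as np

w = np.exp(2j * np.pi / 3)

def block(y1, y2, y3, r):
    # The A4-structured factor of eqs. (mn)/(mp), without the phase matrix;
    # the phases of the y's are unphysical, so real parameters suffice.
    return np.array([[y1,     y2,             y3],
                     [y1,     w * y2,         w**2 * y3],
                     [r * y1, w**2 * r * y2,  w * r * y3]])

def left_diag(A):
    # A = U diag(s) Vh with s descending; reorder so masses come out ascending.
    U, s, _ = np.linalg.svd(A)
    return U[:, ::-1], s[::-1]

a, b, c = 40.75189, 87.78761, 2.347665   # MeV, from eq. (param)
f, g, h = 3941.127, 515.0460, 1.060808   # MeV
r, alpha = 43.37746, 0.2251660

U_n, m_down = left_diag(block(a, b, c, r))
U_p, m_up = left_diag(block(f, g, h, r))
D2 = np.diag([np.exp(-1j * alpha), np.exp(1j * alpha), 1.0])
V = U_p.conj().T @ D2 @ U_n              # V = U_L^p+ D^2 U_L^n
print(m_down)                  # expected close to (4.98, 90.5, 4200.8) MeV
print(m_up)                    # expected close to (2.25, 1250.5, 172497) MeV
print(abs(V[0, 2] / V[1, 2]))  # |V_ub / V_cb|, expected close to 0.088
\end{verbatim}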
In our model the explanation of all the observed $CP$ violation through a sole phase in the quark mixing matrix gets spoiled: other sources of $CP$ violation must be found, and fitted to the observed data. In our model the most likely source of $CP$ violation will be the interactions mediated by both charged and neutral scalars, since ours is a three-Higgs-doublet model featuring, in particular, flavour-changing interactions mediated by neutral scalars. In order to test the variation of $\chi^2$ as a function of the value $\widehat O_i$ of an observable quantity $O_i$, we substitute in the expression for $\chi^2 \left( \mathbf{x} \right)$ the term $\left[ P_i \left( \mathbf{x} \right) - \bar{O}_i \right]^2 / \left( \sigma_i \right)^2$ by a term $\left[ P_i \left( \mathbf{x} \right) - \widehat O_i \right]^2 / \left( 0.01\ \widehat O_i \right)^2$. The small error assigned to $\widehat O_i$ in the denominator of this term guarantees that $O_i$ gets pinned down to the value $\widehat O_i$. Figure~\ref{fig_r} depicts $\chi^2$ as a function of $r$, i.e.~of the ratio of VEVs $v_3/v_1$. We read off from that figure that only for $40 \lesssim r \lesssim 52$ can good fits be obtained; thus, the range of the ratio of VEVs is severely constrained. In figure~\ref{fig_VubVcb}~(left panel), the change of $\chi^2$ under variations of the quark-mixing observable $\left|V_{ub} / V_{cb}\right|$ is shown. There is a pronounced minimum of $\chi^2$ for $0.08 \lesssim \left| V_{ub} / V_{cb} \right| \lesssim 0.09$; this is in excellent agreement with the value obtained for that observable by the phenomenological analyses. This remarkable result of our model provides a clear-cut prediction for $\left|V_{ub} / V_{cb}\right|$. Figure~\ref{fig_VubVcb}~(right panel) gives $\chi^2$ as a function of $\left| V_{td} / V_{ts} \right|$. We find in this case excellent fits whenever $0.14 \lesssim \left| V_{td} / V_{ts} \right| \lesssim 0.15$. Clearly, this result is correlated to our model's prediction for $\left|V_{ub} / V_{cb}\right|$, since in our model $CP$ is conserved in quark mixing and therefore the CKM matrix is determined by only three parameters. \section{Conclusions} \label{summary} In this paper we have proposed a self-consistent model for the quark masses and mixings based on a family symmetry $A_4$. The Yukawa-coupling matrices of our model contain, at face value, ten parameters, but, when one considers the $A_4$-symmetric scalar potential in detail, one sees that two of those parameters actually disappear. In our model the family symmetry $A_4$ is not broken anywhere in the Lagrangian---its breaking is fully spontaneous. The model gives a perfect fit of the observed quark masses and mixing parameters, except for the fact that there is no $CP$ violation at all in the CKM matrix. The observed $CP$ violation should result in our model from scalar-mediated interactions, in particular flavour-changing neutral Yukawa interactions at tree level and also charged-scalar-mediated box diagrams at the one-loop level; a detailed study of those interactions should be the subject of a separate publication. In previous $A_4$ models by other authors a vacuum characterized by equal VEVs $v_1 = v_2 = v_3$ has often been employed. In our $A_4$ model for the quark sector, on the contrary, we use $v_1 = v_2$ while $v_3$ is some 43 times larger. It is not clear to us whether this vacuum may or may not produce a viable model for the lepton sector as well; this will be the subject of future investigation. 
\paragraph{Acknowledgement:} The work of L.L.\ was supported by the Portuguese \textit{Funda\c c\~ao para a Ci\^encia e a Tecnologia} through the project U777--Plurianual.
\section{Introduction} Abstract Meaning Representation (\textsc{amr}) is a semantic representation language to map the meaning of English sentences into directed, cyclic, labeled graphs \cite{banarescu2013abstract}. Graph vertices are concepts inferred from words. The concepts can be represented by the words themselves (e.g. \texttt{dog}), PropBank framesets \cite{palmer2005proposition} (e.g. \texttt{eat-01}), or keywords (like named entities or quantities). The edges denote relations between pairs of concepts (e.g. \texttt{eat-01 :ARG0 dog}). \textsc{amr} parsing integrates tasks that have usually been addressed separately in natural language processing (\textsc{nlp}), such as named entity recognition \cite{nadeau2007survey}, semantic role labeling \cite{palmer2010semantic} or co-reference resolution \cite{ng2002improving,lee2017scaffolding}. Figure \ref{f-amr-example} shows an example of an \textsc{amr} graph. \begin{figure}[hbtp] \centering \includegraphics[width=1\columnwidth]{amr-example} \caption{\label{f-amr-example} \textsc{amr} graph for \qt{When the prince arrived on the Earth, he was surprised not to see any people}. Words can refer to concepts by themselves (green), be mapped to PropBank framesets (red) or be broken down into multiple-term/non-literal concepts (blue). \texttt{Prince} plays different semantic roles.} \end{figure} Several transition-based dependency parsing algorithms have been extended to generate \textsc{amr}. \newcite{wang2015transition} describe a two-stage model, where they first obtain the dependency parse of a sentence and then transform it into a graph. \newcite{damonte-cohen-satta:2017:EACLlong} propose a variant of the \textsc{arc-eager} algorithm to identify labeled edges between concepts. These concepts are identified using a lookup table and a set of rules. A restricted subset of reentrant edges is supported by an additional classifier. A similar configuration is used in \cite{gildea-satta-cl17,peng-aaai18}, but relying on a cache data structure to handle reentrancy, cycles and restricted non-projectivity. A feed-forward network and additional hooks are used to build the concepts. \newcite{ballesteros-alonaizan:2017:EMNLP2017} use a modified \textsc{arc-standard} algorithm, where the oracle is trained using stack-\textsc{lstm}s \cite{dyer-EtAl:2015:ACL-IJCNLP}. Reentrancy is handled through \textsc{swap} \cite{nivre2009non} and they define additional transitions intended to detect concepts, entities and polarity nodes. This paper explores unrestricted non-projective \textsc{amr} parsing and introduces \textsc{amr-covington}, inspired by \newcite{covington2001fundamental}. It handles arbitrary non-projectivity, cycles and reentrancy in a natural way, as there is no need for specific transitions, but just the removal of restrictions from the original algorithm. The algorithm has full coverage and keeps transitions simple, which is a matter of concern in recent studies \cite{peng-aaai18}. \section{Preliminaries and Notation} \paragraph{Notation} We use \texttt{typewriter} font for concepts and their indexes (e.g. \texttt{dog} or \texttt{1}), regular font for raw words (e.g. dog or 1), and a bold style font for vectors and matrices (e.g. $\vec{v}$, $\vec{W}$). \noindent\newcite{covington2001fundamental} describes a fundamental algorithm for unrestricted non-projective dependency parsing. The algorithm can be implemented as a left-to-right transition system \cite{nivre2008algorithms}. The key idea is intuitive.
Given a word to be processed at a particular state, the word is compared against the words that have previously been processed, deciding whether or not to establish a syntactic dependency arc from/to each of them. The process continues until all previous words have been checked or until the algorithm decides that no more connections with previous words need to be built; then the next word is processed. The runtime is $\mathcal{O}(n^2)$ in the worst scenario. To guarantee the single-head and acyclicity conditions that are required in dependency parsing, explicit tests are added to the algorithm to check for transitions that would break the constraints. These are then disallowed, making the implementation less straightforward. \section{The \textsc{amr}-Covington algorithm} The acyclicity and single-head constraints are not needed in \textsc{amr}, as arbitrary graphs are allowed. Cycles and reentrancy are used to model semantic relations between concepts (as shown in Figure \ref{f-amr-example}) and to identify co-references. By removing the constraints from the Covington transition system, we achieve a natural way to deal with them.\footnote{This is roughly equivalent to going back to the naive parser called ESH in \cite{covington2001fundamental}, which has not seen practical use in parsing due to the lack of these constraints.} Also, \textsc{amr} parsing requires words to be transformed into concepts. Dependency parsing operates on a constant-length sequence, but in \textsc{amr} words can be removed, generate a single concept, or generate several concepts. In this paper, additional lookup tables and transitions are defined to create concepts when needed, following the current trend \cite{damonte-cohen-satta:2017:EACLlong,ballesteros-alonaizan:2017:EMNLP2017,gildea-satta-cl17}. \subsection{Formalization}\label{section-formalization} Let $G$=$(V,E)$ be an edge-labeled directed graph where: $V$=$\{\texttt{0},\texttt{1},\texttt{2},\ldots,\texttt{M}\}$ is the set of concepts and $E \subseteq V \times L \times V$ is the set of labeled edges, with $L$ the set of semantic labels. We will denote a connection between a head concept $\texttt{i} \in V$ and a dependent concept $\texttt{j} \in V$ as $\texttt{i} \xrightarrow{l} \texttt{j}$, where \emph{l} is the semantic label connecting them. The parser will process sentences from left to right. Each decision leads to a new parsing configuration, which can be abstracted as a 4-tuple $(\lambda_1,\lambda_2,\beta,E)$ where: \begin{itemize} \item $\beta$ is a buffer that contains unprocessed words. They await being transformed into a concept or a part of a larger concept, or being removed. In $b|\beta$, $b$ represents the head of $\beta$, and it can optionally be a concept. In that case, it will be denoted as \texttt{b}. \item $\lambda_1$ is a list of previously created concepts that are waiting to determine their semantic relation with respect to \texttt{b}. Elements in $\lambda_1$ are concepts. In $\lambda_1|\texttt{i}$, $\texttt{i}$ denotes its last element. \item $\lambda_2$ is a list that contains previously created concepts for which the relation with \texttt{b} has already been determined. Elements in $\lambda_2$ are concepts. In $\texttt{j}|\lambda_2$, $\texttt{j}$ denotes the head of $\lambda_2$. \item $E$ is the set of the created edges.
\end{itemize} \begin{table*}[h] \begin{center} \begin{tabular}{lll} \bf Transitions & \bf Step $t$ & \bf Step $t+1$ \\ {\sc left-arc$_l$} & $(\lambda_1|\texttt{i}, \lambda_2, \texttt{b}|\beta, E)$ & $(\lambda_1, \texttt{i}| \lambda_2, \texttt{b}|\beta, E \cup \{\texttt{b} \xrightarrow{l} \texttt{i}\})$ \\ {\sc right-arc$_l$} & $(\lambda_1|\texttt{i}, \lambda_2,\texttt{b}|\beta,E)$ & $(\lambda_1,\texttt{i} |\lambda_2,\texttt{b}|\beta,E \cup \{\texttt{i} \xrightarrow{l} \texttt{b}\})$ \\ {\sc multiple-arc$_{l_1,l_2}$}&$(\lambda_1|\texttt{i}, \lambda_2, \texttt{b}|\beta, E)$ & $(\lambda_1, \texttt{i}| \lambda_2, \texttt{b}|\beta, E \cup \{\texttt{b} \xrightarrow{l_1} \texttt{i}\} \cup \{\texttt{i} \xrightarrow{l_2} \texttt{b}\})$ \\ {\sc shift} & $(\lambda_1,\lambda_2,\texttt{b}|\beta,E)$ & $(\lambda_1 \cdot \lambda_2|\texttt{b},[],\beta,E)$ \\ {\sc no-arc} & $(\lambda_1|\texttt{i}, \lambda_2,\beta,E)$ & $(\lambda_1, \texttt{i}|\lambda_2, \beta, E)$ \\ {\sc confirm} & $(\lambda_1, \lambda_2, b|\beta,E)$ & $(\lambda_1, \lambda_2, \texttt{b}|\beta, E)$ \\ {\sc breakdown}$_\alpha$ &$(\lambda_1, \lambda_2,b|\beta,E)$&$(\lambda_1, \lambda_2,\texttt{b}|b|\beta, E)$\\ {\sc reduce}&$(\lambda_1, \lambda_2,b|\beta,E)$&$(\lambda_1, \lambda_2,\beta,E)$\\ \end{tabular} \end{center} \caption{\label{covington-transitions} Transitions for \textsc{amr-covington}} \end{table*} Given an input sentence, the parser starts at an initial configuration $c_s$ = $([\texttt{0}],[],1|\beta, \{\})$ and will apply valid transitions until a final configuration $c_f$ is reached, such that $c_f$ = $(\lambda_1, \lambda_2, [], E)$. The set of transitions is formally defined in Table \ref{covington-transitions}: \begin{itemize} \item \textsc{left-arc}$_l$: Creates an edge $\texttt{b} \xrightarrow{l} \texttt{i}$. \texttt{i} is moved to $\lambda_2$. \item \textsc{right-arc}$_l$: Creates an edge $\texttt{i} \xrightarrow{l} \texttt{b}$. $\texttt{i}$ is moved to $\lambda_2$. \item \textsc{shift}: Pops \texttt{b} from $\beta$; $\lambda_2|\texttt{b}$ is concatenated to $\lambda_1$ and $\lambda_2$ is emptied. \item \textsc{no-arc}: It is applied when the algorithm determines that there is no semantic relationship between \texttt{i} and \texttt{b}, but there is a relationship between some other node in $\lambda_1$ and \texttt{b}. \item \textsc{confirm}: Pops $b$ from $\beta$ and puts the concept \texttt{b} in its place. This transition is called to handle words that only need to generate one (more) concept. \item \textsc{breakdown}$_\alpha$: Creates a concept \texttt{b} from $b$, and places it on top of $\beta$, but $b$ is not popped, so the new buffer state is $\texttt{b}|b|\beta$. It is used to handle a word that is going to be mapped to multiple concepts. To guarantee termination, \textsc{breakdown} is parametrized with a constant $\alpha$, banning generation of more than $\alpha$ consecutive concepts by using this operation. Otherwise, concepts could be generated indefinitely without emptying $\beta$. \item \textsc{reduce}: Pops $b$ from $\beta$. It is used to remove words that do not add any meaning to the sentence and are not part of the \textsc{amr} graph. \end{itemize} \textsc{left-arc} and \textsc{right-arc} handle cycles and reentrancy with the exception of cycles of length 2 (which only involve \texttt{i} and \texttt{b}). To assure full coverage, we include an additional transition, \textsc{multiple-arc}$_{(l_{1},l_{2})}$, that creates two edges $\texttt{b} \xrightarrow{l_1} \texttt{i}$ and $\texttt{i} \xrightarrow{l_2} \texttt{b}$.
\texttt{i} is moved to $\lambda_2$. \textsc{multiple-arc}s are marginal and will not be learned in practice. \textsc{amr-covington} can be implemented without \textsc{multiple-arc}, by keeping \texttt{i} in $\lambda_1$ after creating an arc and using \textsc{no-arc} when the parser has finished creating connections between \texttt{i} and \texttt{b}, at a cost to efficiency as transition sequences would be longer. Multiple edges in the same direction between \texttt{i} and \texttt{b} are handled by representing them as a single edge that merges the labels. \paragraph{Example} Table \ref{table-gold-transitions-examples} illustrates a valid transition sequence to obtain the \textsc{amr} graph of Figure \ref{f-amr-example}. \begin{table}[t!] \centering \tabcolsep=0.15cm \small{ \begin{tabular}{rrrr} $\mathbf{\lambda_1}$ & $\mathbf{\lambda_2}$ &$\mathbf{\beta}$&\bf Action (times)\\ &&w, t, p&\textsc{reduce}$\times 2_1$\\ &&p, a, o&\textsc{confirm}$_2$\\ &&\texttt{p}, a, o&\textsc{shift}$_3$\\ \texttt{p}&&a, o, t&\textsc{confirm}$_4$\\ \texttt{p}&&\texttt{a}, o, t&\textsc{left-arc}$_5$\\ &\texttt{p}&\texttt{a}, o, t&\textsc{shift}$_6$\\ \texttt{p}, \texttt{a}&&o, t, E&\textsc{reduce}$\times 2_7$\\ \texttt{p}, \texttt{a}&&E, h, w&\textsc{breakdown}$_8$\\ \texttt{p}, \texttt{a}&&\texttt{\qt{E}}, E, h&\textsc{shift}$_9$\\ \texttt{p}, \texttt{a}, \texttt{\qt{E}}&&E, h, w&\textsc{breakdown}$_{10}$\\ \texttt{p}, \texttt{a}, \texttt{\qt{E}}&&\texttt{\qt{E}}, h, w&\textsc{shift}$_{11}$\\ \texttt{a}, \texttt{\qt{E}}, \texttt{\qt{E}}&&E, h, w&\textsc{breakdown}$_{12}$\\ \texttt{a}, \texttt{\qt{E}}, \texttt{\qt{E}}&&\texttt{n}, h, w&\textsc{left-arc}$_{12}$\\ \texttt{a}, \texttt{\qt{E}}&\texttt{\qt{E}}&\texttt{n}, h, w&\textsc{shift}$_{13}$\\ \texttt{\qt{E}}, \texttt{\qt{E}}, \texttt{n}&&E, h, w&\textsc{confirm}$_{14}$\\ \texttt{\qt{E}}, \texttt{\qt{E}}, \texttt{n}&&\texttt{p2}, h, w&\textsc{left-arc}$_{15}$\\ \texttt{a}, \texttt{\qt{E}}, \texttt{\qt{E}}&\texttt{n}, &\texttt{p2}, h, w&\textsc{no-arc}$_{16}$\\ \texttt{p}, \texttt{a}, \texttt{\qt{E}}&\texttt{\qt{E}}, \texttt{n}, &\texttt{p2}, h, w&\textsc{left-arc}$_{17}$\\ \texttt{p}, \texttt{a}&\texttt{\qt{E}}, \texttt{\qt{E}}, \texttt{n},&\texttt{p2}, h, w&\textsc{shift}$_{18}$\\ \texttt{\qt{E}}, \texttt{n}, \texttt{p2}&&h, w, s&\textsc{reduce}$\times 2_{19}$\\ \texttt{\qt{E}}, \texttt{n}, \texttt{p2}&&s, n, t&\textsc{confirm}$_{20}$\\ \texttt{\qt{E}}, \texttt{n}, \texttt{p2}&&\texttt{s}, n, t&\textsc{left-arc}$_{21}$\\ \texttt{\qt{E}}, \texttt{\qt{E}}, \texttt{n}&\texttt{p2}&\texttt{s}, n, t&\textsc{no-arc}$\times 3_{22}$\\ \texttt{p}, \texttt{a}&\texttt{\qt{E}}, \texttt{\qt{E}}, \texttt{n}&\texttt{s}, n, t&\textsc{left-arc}$\times 2_{23}$\\ &\texttt{p}, \texttt{a}, \texttt{\qt{E}}&\texttt{s}, n, t&\textsc{shift}$_{24}$\\ \texttt{n}, \texttt{p2}, \texttt{s}&&n, t, s2&\textsc{confirm}$_{25}$\\ \texttt{n}, \texttt{p2}, \texttt{s}&&\texttt{-}, t, s2&\textsc{shift}$_{26}$\\ \texttt{p2}, \texttt{s}, \texttt{-}&&t, s2, a2&\textsc{reduce}$_{27}$\\ \texttt{p2}, \texttt{s}, \texttt{-}&&s2, a2, p3&\textsc{confirm}$_{28}$\\ \texttt{p2}, \texttt{s}, \texttt{-}&&\texttt{s2}, a2, p3&\textsc{left-arc}$_{29}$\\ \texttt{n}, \texttt{p2}, \texttt{s}&\texttt{-}&\texttt{s2}, a2, p3&\textsc{no-arc} $\times 5_{30}$\\ \texttt{p}&\texttt{a}, \texttt{\qt{E}}, \texttt{\qt{E}}&\texttt{s2}, a2, p3&\textsc{left-arc}$_{31}$\\ &\texttt{p}, \texttt{a}, \texttt{\qt{E}}&\texttt{s2}, a2, p3&\textsc{shift}$_{32}$\\ \texttt{s}, \texttt{-}, \texttt{s2}&&a2, p3&\textsc{confirm}$_{33}$\\ 
\texttt{s}, \texttt{-}, \texttt{s2}&&\texttt{a2}, p3&\textsc{shift}$_{34}$\\ \texttt{-}, \texttt{s2}, \texttt{a2}&&p3&\textsc{confirm}$_{35}$\\ \texttt{-}, \texttt{s2}, \texttt{a2}&&\texttt{p3}&\textsc{left-arc}$_{36}$\\ \texttt{s}, \texttt{-}, \texttt{s2}&\texttt{a2}&\texttt{p3}&\textsc{right-arc}$_{37}$\\ \texttt{s2}, \texttt{a2}, \texttt{p3}&&&\textsc{shift}$_{38}$\\ \end{tabular}} \caption{\label{table-gold-transitions-examples} Sequence of gold transitions to obtain the \textsc{amr} graph for the sentence \qt{When the prince arrived on the Earth, he was surprised not to see any people}, introduced in Figure \ref{f-amr-example}. For brevity, we represent words (and concepts) by their first character (plus an index if it is duplicated) and we only show the top three words for $\lambda_1$, $\lambda_2$ and $\beta$. Steps from 20 to 23(2) and from 28 to 31 manage the reentrant edges for \texttt{prince} (\texttt{p}) from \texttt{surprise-01} (\texttt{s}) and \texttt{see-01} (\texttt{s2}). } \end{table} \subsection{Training the classifiers} The algorithm relies on three classifiers: (1) a transition classifier, $T_c$, that learns the set of transitions introduced in \S \ref{section-formalization}, (2) a relation classifier, $R_c$, to predict the label(s) of an edge when the selected action is a \textsc{left-arc}, \textsc{right-arc} or \textsc{multiple-arc} and (3) a hybrid process (a concept classifier, $C_c$, and a rule-based system) that determines which concept to create when the selected action is a \textsc{confirm} or \textsc{breakdown}. \paragraph{Preprocessing} Sentences are tokenized and aligned with the concepts using J\textsc{amr} \cite{flanigan2014discriminative}. For lemmatization, tagging and dependency parsing we used UDpipe \cite{straka2016udpipe} and its English pre-trained model \cite{zeman2017conll}. Named Entity Recognition is handled by Stanford CoreNLP \cite{manning-EtAl:2014:P14-5}. \paragraph{Architecture} We use feed-forward neural networks to train the three classifiers. The transition classifier uses 2 hidden layers (400 and 200 neurons) and the relation and concept classifiers use 1 hidden layer (200 neurons). The activation function in hidden layers is a $relu(x)$=$max(0,x)$ and their output is computed as $relu(W_i \cdot \vec{x}_i + b_i)$ where $W_i$ and $b_i$ are the weights and bias tensors to be learned and $\vec{x}_i$ the input at the $i$th hidden layer. The output layer uses a $\mathit{softmax}$ function, computed as $P(y=s|\vec{x}) = \frac{e^{\vec{x^T}\theta_s}} {\sum_{s'=1}^{S} e^{\vec{x^T}\theta_{s'}}}$. All classifiers are trained in mini-batches (size=32), using Adam \cite{kingma2014adam} (learning rate set to $3e^{-4}$), early stopping (no patience) and dropout \cite{srivastava2014dropout} (40\%). The classifiers are fed with features extracted from the preprocessed texts. Depending on the classifier, we use different features. These are summarized in Appendix \ref{appendix-suplemental-material} (Table \ref{table-set-of-features}), which also describes (\ref{appendix-design-decisions}) other design decisions that are not shown here due to space reasons.
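As an illustration of the scale of these networks, a minimal PyTorch sketch of the transition classifier $T_c$ is shown below. The 400/200 hidden sizes, $relu$ activations, 40\% dropout, softmax output and Adam with learning rate $3e^{-4}$ follow the description above; the feature dimension and the number of transition classes are placeholders, not values from the paper.
\begin{verbatim}
import torch
import torch.nn as nn

class TransitionClassifier(nn.Module):
    # Two hidden layers (400 and 200 units) with relu and 40% dropout;
    # the output layer produces logits, and the softmax is applied
    # inside the cross-entropy loss.
    def __init__(self, n_features, n_transitions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 400), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(400, 200), nn.ReLU(), nn.Dropout(0.4),
            nn.Linear(200, n_transitions),
        )

    def forward(self, x):
        return self.net(x)

model = TransitionClassifier(n_features=128, n_transitions=8)  # placeholders
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()  # log-softmax + negative log-likelihood
\end{verbatim}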
\subsection{Running the system} At each parsing configuration, we first try to find a multiword concept or entity that matches the head elements of $\beta$, to reduce the number of \textsc{breakdown}s, which turned out to be a difficult transition to learn (see \S \ref{section-results}). This is done by looking at a lookup table of multiword concepts\footnote{The most frequent subgraph.} seen in the training set and a set of rules, as introduced in \cite{damonte-cohen-satta:2017:EACLlong,gildea-satta-cl17}. We then invoke $T_c$ and call the corresponding subprocess when an additional concept or edge-label identification task is needed. \paragraph{Concept identification} If the word at the top of $\beta$ occurred more than 4 times in the training set, we call a supervised classifier to predict the concept. Otherwise, we first look for a word-to-concept mapping in a lookup table. If none is found, we generate the concept \texttt{lemma-01} if the word is a verb, and \texttt{lemma} otherwise. \paragraph{Edge label identification} The classifier is invoked every time an edge is created. We use the list of valid \texttt{ARG}s allowed in PropBank framesets by \newcite{damonte-cohen-satta:2017:EACLlong}. Also, if \texttt{p} and \texttt{o} are a PropBank and a non-PropBank concept, we restore inverted edges of the form \texttt{o} $\xrightarrow{\texttt{l-of}}$ \texttt{p} as \texttt{o} $\xrightarrow{\texttt{l}}$ \texttt{p}. \section{Methods and Experiments} \paragraph{Corpus} We use the LDC2015E86 corpus and its official splits: 16\,833 graphs for training, 1\,368 for development and 1\,371 for testing. The final model is only trained on the training split. \paragraph{Metrics} We use Smatch \cite{cai2013smatch} and the metrics from \newcite{damonte-cohen-satta:2017:EACLlong}.\footnote{It is worth noting that the calculation of Smatch and metrics derived from it suffers from a random component, as they involve finding an alignment between predicted and gold graphs with an approximate algorithm that can produce a suboptimal solution. Thus, as in previous work, reported Smatch scores may slightly underestimate the real score.} \paragraph{Sources} The code and the pretrained model used in this paper can be found at \url{https://github.com/aghie/tb-amr}. \subsection{Results and discussion}\label{section-results} Table \ref{f-dev-T} shows the accuracy of $T_c$ on the development set. \textsc{confirm} and \textsc{reduce} are the easiest transitions, as local information such as POS-tags and words is discriminative enough to distinguish between content and function words. \textsc{breakdown} is the hardest action.\footnote{This transition was trained/evaluated for non-named-entity words that generated multiple nodes, e.g. father, which maps to \texttt{have-rel-role-91} \texttt{:ARG2} \texttt{father}.} In early stages of this work, we observed that this transition could learn to correctly generate multiple-term concepts for named entities that are not sparse (e.g. countries or people), but failed with sparse entities (e.g. dates or percent quantities). Low performance on identifying them negatively affects the edge metrics, which require both concepts of an edge to be correct. Because of this, and to identify them properly, we use the mentioned complementary rules to handle named entities. \textsc{right-arc}s are harder than \textsc{left-arc}s, although the reason for this remains an open question for us. The performance for \textsc{no-arc}s is high, but it would be interesting to achieve a higher recall at the cost of a lower precision: predicting \textsc{no-arc}s makes the transition sequence longer, but could help identify more distant reentrancies. The accuracy of $T_c$ is $\sim$86\%. The accuracy of $R_c$ is $\sim$79\%. We do not show the detailed results since the number of classes is too high.
$C_c$ was trained on concepts occurring more than once in the training set, obtaining an accuracy of $\sim$83\%. The accuracy on the development set with all concepts was $\sim$77\%. \begin{table}[t] \centering \begin{tabular}{lccc} \bf Action &\bf Prec.&\bf Rec.&\bf F-score\\ \textsc{left-arc} &81.62&87.73&84.57\\ \textsc{right-arc} &75.53&78.71&77.08\\ \textsc{multiple-arc} &00.00&00.00&00.00\\ \textsc{shift} &80.44&81.11&80.77\\ \textsc{no-arc} &89.71&86.71&88.18\\ \textsc{confirm} &84.91&96.11&90.16\\ \textsc{reduce} &96.77&91.53&94.08\\ \textsc{breakdown} &85.09&50.23&63.17\\ \end{tabular} \caption{\label{f-dev-T} $T_c$ scores on the development set.} \end{table} Table \ref{f-test-SOTA} compares the performance of our system with state-of-the-art models on the test set. \textsc{amr-covington} obtains state-of-the-art results for all the standard metrics. It outperforms the rest of the models when handling reentrant edges. It is worth noting that D requires an additional classifier to handle a restricted set of reentrant edges, and P uses up to five classifiers to build the graph. \begin{table}[t] \centering \begin{tabular}{lccc|ccc} \bf Metric & \bf F& \bf W & \bf F' &\bf D& \bf P &\bf Ours \\ Smatch &58&63&67&64&64&64\\ Unlabeled &61&69&69&69&-&68\\ No-WSD &58&64&68&65&-&65\\ NER &75&75&79&83&-&83\\ Wiki&0&0&75&64&-&70\\ Negations &16&18&45&48&-&47\\ Concepts &79&80&83&83&-&83\\ Reentrancy &38&41&42&41&-&44\\ SRL &55&60&60&56&-&57\\ \end{tabular} \caption{\label{f-test-SOTA} F-score comparison with F~\cite{flanigan2014discriminative}, W~\cite{wang2015transition}, F'~\cite{flanigan2016cmu}, D~\cite{damonte-cohen-satta:2017:EACLlong}, P~\cite{peng-aaai18}. D, P and our system are left-to-right transition-based. } \end{table} \paragraph{Discussion} In contrast to related work that relies on \emph{ad-hoc} procedures, the proposed algorithm handles cycles and reentrant edges natively. This is done by just removing the original constraints of the arc transitions in the original \newcite{covington2001fundamental} algorithm. The main drawback of the algorithm is its computational complexity. The transition system is expected to run in $\mathcal{O}(n^2)$, as the original Covington parser. There are also collateral issues that impact the real speed of the system, such as predicting the concepts in a supervised way, given the large number of output classes (even discarding the less frequent concepts, the classifier needs to discriminate among more than 7\,000 concepts). In line with previous discussions \cite{damonte-cohen-satta:2017:EACLlong}, it seems that using a supervised feed-forward network to predict the concepts does not lead to better overall concept identification with respect to the use of simple lookup tables that pick the most common node/subgraph. Currently, every node is kept in $\lambda$, and it is available to be part of new edges. We wonder if only storing in $\lambda$ the head node for words that generate multiple-node subgraphs (e.g. for the word father that maps to \texttt{have-rel-role-91} \texttt{:ARG2} \texttt{father}, keeping in $\lambda$ only the concept \texttt{have-rel-role-91}) could be beneficial for \textsc{amr-covington}. As a side note, current \textsc{amr} evaluation involves elements such as neural network initialization, hooks and the (sub-optimal) alignments of evaluation metrics (e.g. Smatch), which introduce random effects that were difficult to quantify for us.
\section{Conclusion} We introduce \textsc{amr-covington}, a non-projective transition-based parser for unrestricted \textsc{amr}. The set of transitions handles reentrancy natively. Experiments on the LDC2015E86 corpus show that our approach obtains results close to the state of the art and a good behavior on reentrant edges. As future work, the sequences of \textsc{no-arc}s produced by \textsc{amr-covington} could be shortened by using non-local transitions \cite{qi-manning:2017:Short,2017arXiv171009340F}. Sequential models have shown that fewer hooks and lookup tables are needed to deal with the high sparsity of \textsc{amr} \cite{ballesteros-alonaizan:2017:EMNLP2017}. Similarly, \textsc{bist-covington} \cite{vilares2017non} could be adapted for this task. \section*{Acknowledgments} This work has received funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation programme (FASTPARSE, grant agreement No 714150), from the TELEPARES-UDC project (FFI2014-51978-C2-2-R) and the ANSWER-ASAP project (TIN2017-85160-C2-1-R) from MINECO, and from Xunta de Galicia (ED431B 2017/01). We gratefully acknowledge NVIDIA Corporation for the donation of a GTX Titan X GPU. \bibliographystyle{acl_natbib}
\section{Introduction} Object detection is one of the most fundamental tasks in computer vision. The performance of object detection has been significantly improved due to the rapid progress of deep convolutional neural networks~\cite{alexnet,vggnet,googlenet,he2016deep,bath_norm,resnext,senet,shufflenet,squeezenet,mobilenet}. Recent CNN based object detectors can be categorized into one-stage detectors, like YOLO~\cite{yolo,yolo9000}, SSD~\cite{ssd}, and RetinaNet~\cite{focal_loss}, and two-stage detectors, e.g. Faster R-CNN~\cite{faster_rcnn}, R-FCN~\cite{rfcn}, and FPN~\cite{fpn}. Both of them depend on a backbone network pretrained for the ImageNet classification task. However, there is a gap between the image classification and the object detection problem, which not only needs to recognize the category of the object instances but also to spatially localize their bounding-boxes. More specifically, there are two problems with using a classification backbone for object detection tasks. \begin{enumerate*}[label=(\roman*)] \item Recent detectors, e.g., FPN, involve extra stages compared with the backbone network for ImageNet classification in order to detect objects of various sizes. \item A traditional backbone produces a large receptive field through a large downsampling factor, which is beneficial to visual classification; however, the spatial resolution is compromised, which makes it hard to accurately localize large objects and recognize small objects. \end{enumerate*} A well designed detection backbone should tackle all of the problems above. In this paper, we propose DetNet, which is a novel backbone designed for object detection. More specifically, to cope with varying object scales, DetNet involves the additional stages that are utilized in recent object detectors like FPN. Different from traditional pre-trained models for ImageNet classification, we maintain the spatial resolution of the features even though extra stages are included. However, high resolution feature maps bring more challenges in building a deep neural network due to the computational and memory cost. To keep the efficiency of our DetNet, we employ a low-complexity dilated bottleneck structure. By integrating these improvements, our DetNet not only maintains high resolution feature maps but also keeps a large receptive field, both of which are important for the object detection task. To summarize, we have the following contributions: \begin{itemize} \item We are the first to analyze the inherent drawbacks of traditional ImageNet pre-trained models for fine-tuning recent object detectors. \item We propose a novel backbone, called DetNet, which is specifically designed for the object detection task by maintaining the spatial resolution and enlarging the receptive field. \item We achieve new state-of-the-art results on the MSCOCO object detection and instance segmentation tracks based on a low-complexity DetNet59 backbone. \end{itemize} \section{Related Works} Object detection is a heavily researched topic in computer vision. It aims at finding ``where'' and ``what'' each object instance is when given an image. Older detectors extracted image features by using hand-engineered object component descriptors, such as HOG~\cite{hog}, SIFT~\cite{sift}, Selective Search~\cite{selective_search}, and Edge Boxes~\cite{edge_boxes}. For a long time, DPM~\cite{dpm} and its variants were the dominant methods among traditional object detectors.
With the rapid progress of deep convolutional neural networks, CNN-based object detectors have yielded remarkable results and become the new trend in the detection literature. In terms of network structure, recent CNN-based detectors are usually split into two parts: the backbone network and the detection business part. We briefly introduce these two parts as follows. \subsection{Backbone Network} The backbone networks for object detection are usually borrowed from ImageNet~\cite{imagenet2015} classification. In the last few years, ImageNet has been regarded as the most authoritative dataset for evaluating the capability of deep convolutional neural networks, and many novel networks have been designed to achieve higher performance on it. AlexNet~\cite{alexnet} is among the first to increase the depth of CNNs. In order to reduce the network computation and increase the valid receptive field, AlexNet downsamples the feature map with a stride of 32, which became a standard setting for the following works. VGGNet~\cite{vggnet} stacks 3x3 convolutions to build a deeper network, while still using a stride of 32 for the feature maps. Most of the following research adopts the VGG-like structure and designs better components within each stage~(split by stride). GoogleNet~\cite{googlenet} proposes a novel inception block to capture more diverse features. ResNet~\cite{he2016deep} adopts a ``bottleneck'' design with a residual sum operation in each stage, which has proved to be a simple and efficient way to build deeper neural networks. ResNext~\cite{resnext} and Xception~\cite{xception} replace the traditional convolution with group convolutions, which reduces the number of parameters and increases the accuracy simultaneously. DenseNet~\cite{densenet} densely concatenates several layers, further reducing the number of parameters while keeping competitive accuracy. A different line of research is the Dilated Residual Network~\cite{DRN}, which extracts features with smaller strides; DRN achieves notable results on segmentation but offers little discussion of object detection. There is also a lot of research on efficient backbones, such as~\cite{mobilenet,shufflenet,squeezenet}; however, these are usually designed for classification. \subsection{Object Detection Business Part} The detection business part is usually attached to a base model that is designed and trained for the ImageNet classification dataset. There are two different design philosophies for object detection. The first is the one-stage detector, which directly uses the backbone for object instance prediction. For example, YOLO~\cite{yolo,yolo9000} uses the simple and efficient DarkNet backbone~\cite{yolo} and then casts detection as a regression problem. SSD~\cite{ssd} adopts a reduced VGGNet~\cite{vggnet} and extracts features from multiple layers, which makes the network better at handling various object scales. RetinaNet~\cite{focal_loss} uses ResNet as the basic feature extractor and then employs the ``focal'' loss~\cite{focal_loss} to address the class imbalance caused by the extreme foreground-background ratio. The other popular pipeline is the two-stage detector. Specifically, a recent two-stage detector first predicts a large number of proposals based on the backbone, and then an additional classifier performs proposal classification and regression. Faster R-CNN~\cite{faster_rcnn} generates proposals directly from the backbone by using a Region Proposal Network~(RPN).
R-FCN~\cite{rfcn} proposes to generate a position-sensitive feature map from the output of the backbone; a novel pooling method called position-sensitive pooling is then applied to each proposal. Deformable Convolutional Networks~\cite{deformable} enable the convolution operation to model geometric transformations by learning additional offsets without supervision, and are among the first to adapt the backbone for object detection. Feature Pyramid Networks~\cite{fpn} construct feature pyramids by exploiting the inherent multi-scale, pyramidal hierarchy of deep convolutional networks; specifically, FPN combines multi-layer outputs through a U-shaped structure, but still borrows the traditional ResNet without further study. DSOD~\cite{dsod} is the first to propose training detectors from scratch, although its results are lower than those of pretrained methods. In conclusion, traditional backbones are usually designed for ImageNet classification, and what backbone is suitable for object detection is still an unexplored question. Most recent object detectors, whether one-stage or two-stage, follow the pipeline of ImageNet pre-trained models, which is not optimal for detection performance. In this paper, we propose DetNet. The key idea of DetNet is to design a better backbone for object detection. \section{DetNet: A Backbone network for Object Detection} \label{sec:DetNet} \subsection{Motivation} Recent object detectors usually rely on a backbone network that is pretrained on the ImageNet classification dataset. However, the task of ImageNet classification differs from object detection, which not only needs to recognize the categories of objects but also to spatially localize their bounding boxes. The design principles of image classification are not well suited to the localization task, since the spatial resolution of the feature maps is gradually decreased in standard networks like VGG16 and ResNet. A few techniques, such as the Feature Pyramid Network~(FPN)~\cite{fpn} shown in Fig.~\ref{fig:backbone}~A and dilation, are applied to these networks to maintain the spatial resolution. However, the following three problems still exist when training with these backbone networks. \begin{figure}[ht] \begin{center} \includegraphics[clip=true,width=1\linewidth]{backbone.pdf} \end{center} \caption{Comparison of different backbones used in FPN. A feature pyramid network~(FPN) with a traditional backbone is illustrated in~(A). A traditional backbone for image classification is illustrated in~(B). Our proposed backbone is illustrated in~(C); it has a higher spatial resolution and exactly the same stages as FPN. We do not illustrate the stage-1 (stride-2) feature map due to the limited figure size.} \label{fig:backbone} \end{figure} \paragraph{The number of network stages is different.} As shown in Fig.~\ref{fig:backbone}~B, a typical classification network involves 5 stages, each of which downsamples the feature maps by 2x pooling or stride-2 convolution. Thus the spatial size of the output feature map is subsampled by a factor of 32. Unlike traditional classification networks, feature pyramid detectors usually adopt more stages. For example, in Feature Pyramid Networks~(FPN)~\cite{fpn}, an additional stage \emph{P6} is added to handle larger objects, and \emph{P6, P7} are added in RetinaNet~\cite{focal_loss} in a similar way. Obviously, extra stages like \emph{P6} are not pre-trained on the ImageNet dataset.
\paragraph{Weak visibility of large objects.} The feature map with strong semantic information has a stride of 32 with respect to the input image, which brings a large valid receptive field and underlies the success of the ImageNet classification task. However, a large stride is harmful for object localization. In Feature Pyramid Networks, large objects are generated and predicted within the deeper layers, where the boundaries of these objects may be too blurry to obtain an accurate regression. This becomes even worse when more stages are added to the classification network, since more downsampling means an even larger stride with respect to the objects. \paragraph{Invisibility of small objects.} Another drawback of a large stride is the loss of small objects. The information from small objects is easily weakened as the spatial resolution of the feature maps decreases and large context information is integrated. Therefore, Feature Pyramid Networks predict small objects in the shallower layers. However, shallow layers usually carry only low-level semantic information, which may not be sufficient to recognize the categories of the object instances. Therefore, detectors must enhance their classification capability by incorporating context cues from the high-level representations of the deeper layers. As Fig.~\ref{fig:backbone}~A shows, Feature Pyramid Networks relieve this by adopting a top-down pathway. However, if the small objects are missing in the deeper layers, these context cues are missing as well.
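To make the stride arithmetic behind these problems concrete, the short Python sketch below (our own illustration, not code from the paper) computes the per-stage feature-map sizes of an 800-pixel input for a standard 32x-downsampling backbone with an extra \emph{P6} stage, and for a backbone that keeps a 16x stride in its last stages:
\begin{verbatim}
# Illustrative sketch: side length of each stage's feature map.
def stage_sizes(input_size, strides):
    return [input_size // s for s in strides]

# Standard classification backbone plus FPN's extra stage P6:
# strides 2, 4, 8, 16, 32 for stages 1-5 and 64 for P6.
print(stage_sizes(800, [2, 4, 8, 16, 32, 64]))
# -> [400, 200, 100, 50, 25, 12]

# DetNet-style backbone: identical up to stage 4, but stages 5 and 6
# keep the 16x stride instead of downsampling further.
print(stage_sizes(800, [2, 4, 8, 16, 16, 16]))
# -> [400, 200, 100, 50, 50, 50]
\end{verbatim}
On the larger feature maps, an object of, say, 100 pixels still covers several cells at stride 16, whereas at stride 64 it shrinks to one or two cells; this illustrates both the blurred boundaries of large objects and the disappearance of small ones.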
To address these problems, we propose DetNet, which has the following characteristics. \begin{enumerate*}[label=(\roman*)] \item The number of stages is directly designed for object detection. \item Even though we involve more stages~(such as 6 or 7 stages) than traditional classification networks, we maintain a high spatial resolution of the feature maps while keeping a large receptive field. \end{enumerate*} DetNet has several advantages over traditional backbone networks like ResNet for object detection. First, DetNet has exactly the same number of stages as the detector uses; therefore, extra stages like \emph{P6} can be pre-trained on the ImageNet dataset. Second, benefiting from the high-resolution feature maps in the last stage, DetNet is more powerful in locating the boundaries of large objects and finding missing small objects. A more detailed discussion can be found in Section~\ref{sec:experiments}. \subsection{DetNet Design} In this subsection, we present the detailed structure of DetNet. We adopt ResNet-50 as our baseline, which is widely used as the backbone network in many object detectors. To compare fairly with ResNet-50, we keep stages 1, 2, 3, 4 the same as in the original ResNet-50 for our DetNet. There are two challenges in making an efficient and effective backbone for object detection. On one hand, keeping the spatial resolution of a deep neural network costs an extremely large amount of time and memory. On the other hand, reducing the downsampling factor reduces the valid receptive field, which is harmful for many vision tasks, such as image classification and semantic segmentation. DetNet is carefully designed to address these two challenges. Specifically, DetNet follows the same settings as ResNet from the first stage to the fourth stage; the difference starts from the fifth stage, and an overview of our DetNet for image classification can be found in Fig.~\ref{fig:different_bottleneck}~D. Let us discuss the implementation details of DetNet-59, which extends ResNet-50. Similarly, our DetNet can easily be extended with deeper layers, as in ResNet-101. The detailed design of our DetNet-59 is as follows: \begin{itemize} \item We introduce extra stages, e.g., \emph{P6}, in the backbone, which will later be utilized for object detection as in FPN. Meanwhile, we fix the spatial resolution at 16x downsampling even after stage 4. \item Since the spatial size is fixed after stage 4, in order to introduce a new stage we employ a dilated~\cite{wavelet1999,fcn,fcn_crf} bottleneck with a 1x1 convolution projection~(Fig.~\ref{fig:different_bottleneck}~B) at the beginning of each stage. We find that the module in Fig.~\ref{fig:different_bottleneck}~B is important for multi-stage detectors like FPN. \item We apply the bottleneck with dilation as a basic network block to efficiently enlarge the receptive field. Since dilated convolution is still time-consuming, our stage 5 and stage 6 keep the same number of channels as stage 4~(256 input channels for the bottleneck block). This differs from the traditional backbone design, which doubles the channels in each later stage. \end{itemize} \begin{figure}[ht] \begin{center} \includegraphics[clip=true,width=1\linewidth]{DetNet4FPN.pdf} \end{center} \caption{Detailed structure of DetNet~(D) and the DetNet-based Feature Pyramid Network~(E). The different bottleneck blocks used in DetNet are illustrated in~(A, B); the original bottleneck is illustrated in~(C). DetNet follows the same design as ResNet before stage 4, while keeping the spatial size after stage 4~(e.g., stages 5 and 6).} \label{fig:different_bottleneck} \end{figure} It is easy to integrate DetNet with any detector with or without a feature pyramid. Without loss of representativeness, we adopt the prominent FPN detector as our baseline to validate the effectiveness of DetNet. Since DetNet only changes the backbone of FPN, we fix the other structures of FPN. Because we do not reduce the spatial size after stage 4, we simply sum the outputs of these stages in the top-down pathway; a minimal sketch of the two bottleneck variants is given below.
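The following PyTorch-style sketch is our own minimal illustration of the two dilated bottleneck variants~(Fig.~\ref{fig:different_bottleneck}~A and~B); the channel, width, and dilation numbers are placeholders consistent with the description above, not the authors' reference implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class DilatedBottleneck(nn.Module):
    # 1x1 reduce -> 3x3 dilated -> 1x1 expand. With project=False the
    # shortcut is the identity (Fig. A); with project=True a 1x1
    # convolution projection is used on the shortcut (Fig. B), which
    # DetNet places at the beginning of each new stage even though the
    # spatial size is unchanged.
    def __init__(self, channels=256, width=64, dilation=2, project=False):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 1, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=dilation,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True),
            nn.Conv2d(width, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.shortcut = (nn.Sequential(
            nn.Conv2d(channels, channels, 1, bias=False),
            nn.BatchNorm2d(channels)) if project else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + self.shortcut(x))

# A new 16x-stride stage: one projection block, then identity blocks.
stage6 = nn.Sequential(DilatedBottleneck(project=True),
                       DilatedBottleneck(), DilatedBottleneck())
out = stage6(torch.randn(1, 256, 50, 50))  # spatial size is preserved
\end{verbatim}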
\section{Experiments} \label{sec:experiments} In this section, we evaluate our approach on the popular MS COCO benchmark, which has 80 object categories. There are 80k images in the training set and 40k images in the validation set. Following common practice, we further split the 40k validation set into a 35k \emph{large-val} set and a 5k \emph{mini-val} set. All of our validation experiments use the training set together with \emph{large-val} for training~(about 115k images) and test on the 5k \emph{mini-val} set. We also report the final results of our approach on COCO \emph{test-dev}, which has no disclosed labels. We use the standard COCO metrics to evaluate our approach, including AP~(averaged precision over intersection-over-union thresholds), AP$_{50}$ and AP$_{75}$~(AP at different IoU thresholds), and AP$_{S}$, AP$_{M}$, AP$_{L}$~(AP at different scales: small, middle, large). \subsection{Detector training and inference} Following the training strategies provided by the Detectron~\footnote{\url{https://github.com/facebookresearch/Detectron}} repository~\cite{Detectron2018}, our detectors are trained end-to-end on 8 Pascal TITAN XP GPUs, optimized by synchronized SGD with a weight decay of 0.0001 and momentum of 0.9. Each mini-batch has 2 images, so the effective batch size is 16. We resize the shorter edge of the image to 800 pixels; the longer edge is limited to 1333 pixels to avoid excessive memory cost. We pad the images within a mini-batch to the same size by filling zeros to the right and bottom of each image. We use the typical ``2x'' training settings of Detectron~\cite{Detectron2018}. The learning rate is set to 0.02 at the beginning of training, decreased by a factor of 0.1 after 120k and 160k iterations, and training finally terminates at 180k iterations. We also warm up the training by using the smaller learning rate $0.02 \times 0.3$ for the first 500 iterations. All experiments are initialized with ImageNet pre-trained weights. We fix the parameters of stage 1 of the backbone network, and batch normalization is also fixed during detector fine-tuning. We adopt only simple horizontal image flipping for data augmentation. For proposal generation, unless explicitly stated, we first pick the 12000 proposals with the highest scores and then apply non-maximum suppression~(NMS) to obtain at most 2000 RoIs for training. During testing, we use a 6000/1000 setting~(6000 highest-scoring proposals before NMS, 1000 RoIs after NMS). We also adopt the popular RoI-Align technique used in Mask R-CNN~\cite{mask_rcnn}. \subsection{Backbone training and inference} Following most of the hyper-parameters and training settings provided by ResNext~\cite{resnext}, we train the backbones on the ImageNet classification dataset with 8 Pascal TITAN XP GPUs and a total batch size of 256. We use the standard evaluation strategy for testing, reporting the error on a single 224x224 center crop from an image whose shorter side is resized to 256. \subsection{Main Results} We adopt FPN with a ResNet-50 backbone as our baseline because FPN is a prominent detector for many vision tasks, such as instance segmentation and skeleton detection~\cite{mask_rcnn}. To validate the effectiveness of DetNet for FPN, we propose DetNet-59, which involves an additional stage compared with ResNet-50; more design details can be found in Section~\ref{sec:DetNet}. We then replace the ResNet-50 backbone with DetNet-59 and keep the other structures the same as in the original FPN. We first train DetNet-59 on ImageNet classification; the results are shown in Table~\ref{table:fpn_detnet_res50}. DetNet-59 has a 23.5\% top-1 error at the cost of 4.8G FLOPs. We then train FPN with DetNet-59 and compare it with the ResNet-50-based FPN. From Table~\ref{table:fpn_detnet_res50}, we can see that DetNet-59 has superior performance to ResNet-50~(over 2 points of gain in mAP). \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{backbone} & \multicolumn{2}{c|}{Classification} & \multicolumn{6}{c}{FPN results} \\ \cline{2-9} & Top-1 err & FLOPs & mAP & AP$_{50}$ & AP$_{75}$ & $\text{AP}_s$ & $\text{AP}_m$ & $\text{AP}_l$ \\ \hline ResNet-50 & 24.1 & 3.8G & 37.9 & 60.0 & 41.2 & 22.9 & 40.6 & 49.2 \\ \textbf{DetNet-59} & 23.5 & 4.8G & \textbf{40.2} & 61.7 & 43.7 & 23.9 & 43.2 & 52.0 \\ ResNet-101 & 23.0 & 7.6G & 39.8 & 62.0 & 43.5 & 24.1 & 43.4 & 51.7 \\ \hline \end{tabular} \end{center} \caption{Results of different backbones used in FPN. We first report the standard top-1 error on ImageNet classification~(a lower error means better classification accuracy). FLOPs denotes the computational complexity. We also report FPN COCO results to investigate the effectiveness of these backbones for object detection.} \label{table:fpn_detnet_res50} \end{table} Since DetNet-59 has more parameters than ResNet-50~(because of the additional stage for FPN's \emph{P6}), a natural hypothesis is that the improvement is mainly due to more parameters. To check this, we also train FPN with ResNet-101, which has 7.6G FLOPs of complexity; the result is 39.8 mAP.
ResNet-101 has much higher FLOPs than DetNet-59, yet still yields a lower mAP; this result supports that DetNet is more suitable than ResNet as a detection backbone. As DetNet is directly designed for object detection, to further validate its advantage we train FPN based on DetNet-59 and ResNet-50 from scratch; the results are shown in Table~\ref{table:fpn_detnet_res50_scratch}. Note that we use multi-GPU synchronized batch normalization during training, as in~\cite{megdet}, in order to train from scratch. The results show that DetNet-59 still outperforms ResNet-50 by 1.8 points, which further proves that DetNet is more suitable for object detection. \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c} \hline backbone & mAP & AP$_{50}$ & AP$_{75}$ & $\text{AP}_s$ & $\text{AP}_m$ & $\text{AP}_l$ \\ \hline ResNet-50 from scratch & 34.5 & 55.2 & 37.7 & 20.4 & 36.7 & 44.5 \\ \textbf{DetNet-59} from scratch & \textbf{36.3} & 56.5 & 39.3 & 22.0 & 38.4 & 46.9 \\ \hline \end{tabular} \end{center} \caption{FPN results with different backbones trained from scratch. Since no ImageNet pre-trained weights are involved, this directly compares the backbone capability for object detection.} \label{table:fpn_detnet_res50_scratch} \end{table} \subsection{Results analysis} In this subsection, we analyze how DetNet improves object detection. There are two key quantities in object detection evaluation: average precision~(AP) and average recall~(AR). AR measures how many objects we can find, and AP measures how many of the predicted objects are correct~(right label for classification). AP and AR are usually evaluated at different IoU thresholds to validate the regression capability for object location~(a short sketch of the IoU computation is given at the end of this subsection); the larger the IoU threshold, the more accurate the regression needs to be. AP and AR are also evaluated over different ranges of bounding-box areas~(small, middle, and large) to assess the influence on objects of different scales. First, we investigate the impact of DetNet on detection accuracy. We evaluate the performance at different IoU thresholds and object scales, as shown in Table~\ref{table:DetNet_detail_ap}. \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c} Models & scales & mAP & AP$_{50}$ & AP$_{60}$ & AP$_{70}$ & AP$_{80}$ & AP$_{85}$ \\ \hline \emph{ResNet-50} & over all scales & 37.9 & 60.0 & 55.1 & 47.2 & 33.1 & 22.1 \\ & small & 22.9 & 40.1 & 35.5 & 28.0 & 17.5 & 10.4 \\ & middle & 40.6 & 63.9 & 59.0 & 51.2 & 35.7 & 23.3 \\ & large & 49.2 & \cellcolor{red!25} 72.2 & 68.2 & 60.8 & 46.6 & \cellcolor{yellow!25} 34.5 \\ \hline \emph{DetNet-59} & over all scales & 40.2 & 61.7 & 57.0 & 49.6 & 36.2 & 25.8 \\ & small & 23.9 & 41.8 & 36.8 & 29.8 & 17.7 & 10.5 \\ & middle & 43.2 & 65.8 & 61.2 & 53.6 & 39.9 & 27.3 \\ & large & 52.0 & \cellcolor{red!25} 73.1 & 69.5 & 63.0 & 51.4 & \cellcolor{yellow!25} 40.0 \\ \end{tabular} \end{center} \caption{Comparison of the average precision~(AP) of FPN at different IoU thresholds and different bounding-box scales. AP$_{50}$ is an effective metric for evaluating classification capability. AP$_{85}$ requires an accurate location of the bounding-box prediction and therefore validates the regression capability of our approach.
We also report AP at different scales to capture the influence of high-resolution feature maps in the backbone.} \label{table:DetNet_detail_ap} \end{table} \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c} Models & scales & mAR & AR$_{50}$ & AR$_{60}$ & AR$_{70}$ & AR$_{80}$ & AR$_{85}$ \\ \hline \emph{ResNet-50} & over all scales & 52.8 & 80.5 & 74.7 & 64.3 & 46.8 & 34.2 \\ & small & 35.5 & \cellcolor{red!25} 60.0 & 53.8 & 43.3 & 28.7 & \cellcolor{orange!25} 18.7 \\ & middle & 56.0 & 84.9 & 79.2 & 68.7 & 50.5 & 36.2 \\ & large & 67.0 & \cellcolor{yellow!25} 95.0 & 90.9 & 80.3 & 63.1 & \cellcolor{pink!25} 50.2 \\ \hline \emph{DetNet-59} & over all scales & 56.1 & 83.1 & 77.8 & 67.6 & 51.0 & 38.9 \\ & small & 39.2 & \cellcolor{red!25} 66.4 & 59.4 & 47.3 & 29.5 & \cellcolor{orange!25} 19.6 \\ & middle & 59.5 & 87.4 & 82.5 & 72.6 & 55.6 & 41.2 \\ & large & 70.1 & \cellcolor{yellow!25} 95.4 & 91.8 & 82.9 & 69.1 & \cellcolor{pink!25} 56.3 \\ \end{tabular} \end{center} \caption{Comparison of the average recall~(AR) of FPN at different IoU thresholds and different bounding-box scales. AR$_{50}$ is an effective metric for showing how many reasonable bounding boxes are found~(class-agnostic). AR$_{85}$ reflects how accurate the box locations are.} \label{table:DetNet_detail_ar} \end{table} DetNet-59 achieves an impressive improvement in large-object localization, with a 5.5-point gain~(40.0 vs 34.5) in AP$_{85}$ for large objects. The reason is that the original ResNet-based FPN has a large stride in its deeper feature maps, so large objects may be too blurry for an accurate regression. We also investigate the influence of DetNet on finding missing objects. As shown in Table~\ref{table:DetNet_detail_ar}, we report detailed statistics of the average recall at different IoU thresholds and scales. We conclude from the table as follows: \begin{itemize} \item Compared with ResNet-50, DetNet-59 is more powerful at finding missing small objects, yielding a 6.4-point gain~(66.4 vs 60.0) in AR$_{50}$ for small objects. DetNet keeps a higher resolution in the deeper stages than ResNet, so smaller objects can be found there; since we use the up-sampling pathway in Fig.~\ref{fig:backbone}~A, the shallow layers also receive context cues for finding small objects. However, AR$_{85}$ for small objects is comparable~(18.7 vs 19.6) between ResNet-50 and DetNet-59. This is reasonable: DetNet does not further help small-object localization, because the ResNet-based FPN already uses a large feature map for small objects. \item DetNet is good for large-object localization, reaching 56.3~(vs 50.2) in AR$_{85}$ for large objects. However, AR$_{50}$ for large objects does not change much~(95.4 vs 95.0). In general, DetNet localizes large objects more accurately rather than finding more of them. \end{itemize}
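Since every AP and AR number above is parameterized by an IoU threshold, we include a small illustrative sketch (ours, not part of the paper) of the intersection-over-union computation between two axis-aligned boxes:
\begin{verbatim}
# Illustrative sketch: IoU between boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# A detection counts as correct at AP_85 only if IoU >= 0.85, which is
# why that column stresses localization quality.
print(iou((0, 0, 100, 100), (10, 10, 110, 110)))  # ~0.68
\end{verbatim}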
\subsection{Discussion} As mentioned in Section~\ref{sec:DetNet}, the key idea of DetNet is a backbone specifically designed for object detection. Based on a prominent object detector like the Feature Pyramid Network, DetNet-59 follows exactly the same number of stages as FPN while maintaining high spatial resolution. To discuss the importance of the backbone design for object detection, we first investigate the influence of the stages. Since stage 6 of DetNet-59 has the same spatial size as stage 5, a natural hypothesis is that DetNet-59 simply involves a deeper stage 5 rather than producing a new stage 6. To show that DetNet-59 indeed involves an additional stage, we carefully analyze the details of the DetNet-59 design. As shown in Fig.~\ref{fig:different_bottleneck}~B, DetNet-59 adopts a dilated bottleneck with a simple 1x1 convolution as the projection layer to split off stage 6. This is quite different from traditional ResNet, where the projection in the bottleneck structure is a simple identity~(Fig.~\ref{fig:different_bottleneck}~A) rather than a 1x1 convolution~(Fig.~\ref{fig:different_bottleneck}~B) when the spatial size of the feature map does not change. We break this convention and claim that the bottleneck with a 1x1 convolution projection is effective for creating a new stage even when the spatial size is unchanged. To support this claim, we introduce DetNet-59-NoProj, which is DetNet-59 modified by removing the 1x1 projection convolution; the detailed structure is shown in Fig.~\ref{fig:DetNet-59-noproj}. There are only minor differences~(red cells) between DetNet-59~(Fig.~\ref{fig:different_bottleneck}~D) and DetNet-59-NoProj~(Fig.~\ref{fig:DetNet-59-noproj}). We first train DetNet-59-NoProj on ImageNet classification; the results are shown in Table~\ref{table:DetNet-59-noproj}. DetNet-59-NoProj has a 0.5 higher top-1 error than DetNet-59. We then train FPN based on DetNet-59-NoProj~(Table~\ref{table:DetNet-59-noproj}); DetNet-59 outperforms DetNet-59-NoProj by over 1 point on object detection. These experimental results validate the importance of involving a new stage, as FPN does, for object detection. When we use the module in Fig.~\ref{fig:different_bottleneck}~A in our network, the output feature map does not differ much from the input feature map, because the output is simply the sum of the original input and its transformation; therefore, it is not easy to create a new semantic stage. If we instead adopt the module in Fig.~\ref{fig:different_bottleneck}~B, the input and output feature maps diverge more, which enables us to create a new semantic stage. \begin{figure}[ht] \begin{center} \includegraphics[clip=true,width=1\linewidth]{DetNet-59-noproj.pdf} \end{center} \caption{The detailed structure of DetNet-59-NoProj, which adopts the module in Fig.~\ref{fig:different_bottleneck}~A to split stage 6~(while the original DetNet-59 adopts the module in Fig.~\ref{fig:different_bottleneck}~B). We design DetNet-59-NoProj to validate the importance of involving a new semantic stage, as FPN does, for object detection.} \label{fig:DetNet-59-noproj} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{backbone} & \multicolumn{2}{c|}{Classification} & \multicolumn{6}{c}{FPN results} \\ \cline{2-9} & Top-1 err & FLOPs & mAP & AP$_{50}$ & AP$_{75}$ & $\text{AP}_s$ & $\text{AP}_m$ & $\text{AP}_l$ \\ \hline \textbf{DetNet-59} & 23.5 & 4.8G & \textbf{40.2} & 61.7 & 43.7 & 23.9 & 43.2 & 52.0 \\ DetNet-59-NoProj & 24.0 & 4.6G & 39.1 & 61.3 & 42.1 & 23.6 & 42.0 & 50.1 \\ \hline \end{tabular} \end{center} \caption{Comparison of DetNet-59 and DetNet-59-NoProj. We report results on both ImageNet classification and FPN COCO detection. DetNet-59 consistently outperforms DetNet-59-NoProj, which validates the importance of a backbone design with the same semantic stages as FPN.} \label{table:DetNet-59-noproj} \end{table} Another natural question is: what happens if we train FPN initialized with ResNet-50 parameters and dilate stage 5 of ResNet-50 during detector fine-tuning~(for simplicity, we denote this ResNet-50-dilated)? To show the importance of pre-training the backbone for detection, we compare DetNet-59-based FPN with ResNet-50-dilated-based FPN in Table~\ref{table:fpn_detnet_dilateres50}.
ResNet-50-dilated has more FLOPs than DetNet-59, yet achieves lower performance than DetNet-59. This demonstrates the importance of directly pre-training the base model for object detection. \begin{figure}[ht] \begin{center} \includegraphics[clip=true,width=1\linewidth]{det_results.pdf} \end{center} \caption{Illustrative results of DetNet-59-based FPN.} \label{fig:det_results} \end{figure} \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{backbone} & \multicolumn{2}{c|}{Classification} & \multicolumn{6}{c}{FPN results} \\ \cline{2-9} & Top-1 err & FLOPs & mAP & AP$_{50}$ & AP$_{75}$ & $\text{AP}_s$ & $\text{AP}_m$ & $\text{AP}_l$ \\ \hline \textbf{DetNet-59} & 23.5 & 4.8G & \textbf{40.2} & 61.7 & 43.7 & 23.9 & 43.2 & 52.0 \\ ResNet-50-dilated & -- & 6.1G & 39.0 & 61.4 & 42.4 & 23.3 & 42.1 & 50.0 \\ \hline \end{tabular} \end{center} \caption{Comparison of FPN results with DetNet-59 and ResNet-50-dilated to validate the importance of pre-training the backbone for detection. ResNet-50-dilated means that we fine-tune the detector from ResNet-50 weights while using dilated convolutions in stage 5 of ResNet-50. We do not report the top-1 error of ResNet-50-dilated because it cannot be directly used for image classification.} \label{table:fpn_detnet_dilateres50} \end{table} \subsection{Comparison to State of the Art} We evaluate DetNet-59-based FPN on the MSCOCO~\cite{mscoco,coco_api} detection test-dev dataset and compare it with recent state-of-the-art methods in Table~\ref{table:compare2SOA}. Note that the test-dev dataset is different from the mini-validation set used in the ablation experiments: it has no disclosed labels and is evaluated on the evaluation server. Without any bells and whistles, our simple but effective backbone achieves a new state of the art in COCO object detection, even outperforming strong competitors with a ResNet-101 backbone. It is worth noting that DetNet-59 has only 4.8G FLOPs of complexity, while ResNet-101 has 7.6G FLOPs. We quote the original FPN results provided in Mask R-CNN~\cite{mask_rcnn}; they would be higher with the Detectron~\cite{Detectron2018} repository, which obtains 39.8 mAP for FPN-ResNet-101. \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c} Models & Backbone & mAP & AP$_{50}$ & AP$_{75}$ & AP$_{s}$ & AP$_{m}$ & AP$_{l}$ \\ \hline SSD513~\cite{ssd} & ResNet-101 & 31.2 & 50.4 & 33.3 & 10.2 & 34.5 & 49.8 \\ DSSD513~\cite{ssd,dssd} & ResNet-101 & 33.2 & 53.3 & 35.2 & 13.0 & 35.4 & 51.1 \\ Faster R-CNN +++~\cite{he2016deep} & ResNet-101 & 34.9 & 55.7 & 37.4 & 15.6 & 38.7 & 50.9 \\ Faster R-CNN G-RMI~\footnotemark~\cite{GRMI} & Inception-ResNet-v2 & 34.7 & 55.5 & 36.7 & 13.5 & 38.1 & 52.0 \\ RetinaNet~\cite{focal_loss} & ResNet-101 & 39.1 & 59.1 & 42.3 & 21.8 & 42.7 & 50.2 \\ FPN~\cite{mask_rcnn} & ResNet-101 & 37.3 & 59.6 & 40.3 & 19.8 & 40.2 & 48.8 \\ FPN & \textbf{DetNet-59} & \textbf{40.3} & 62.1 & 43.8 & 23.6 & 42.6 & 50.0 \end{tabular} \end{center} \caption{Comparison of object detection results between our approach and the state of the art on the MSCOCO test-dev dataset. Based on our simple and effective DetNet-59 backbone, our model outperforms all previous state-of-the-art methods. It is worth noting that DetNet-59 yields better results with much lower FLOPs.} \label{table:compare2SOA} \end{table} To validate the generalization capability of our approach, we also evaluate DetNet-59 on MSCOCO instance segmentation based on Mask R-CNN. The test-dev results are shown in Table~\ref{table:compare2SOA_instance}.
Thanks to the impressive capability of DetNet-59, we obtain new state-of-the-art results on instance segmentation as well. \begin{table}[ht] \begin{center} \begin{tabular}{l|c|c|c|c|c|c|c} Models & Backbone & mAP & AP$_{50}$ & AP$_{75}$ & AP$_{s}$ & AP$_{m}$ & AP$_{l}$ \\ \hline MNC~\cite{MNC} & ResNet-101 & 24.6 & 44.3 & 24.8 & 4.7 & 25.9 & 43.6 \\ FCIS~\cite{FCIS} + OHEM~\cite{OHEM} & ResNet-101-C5-dilated & 29.2 & 49.5 & - & 7.1 & 31.3 & 50.0 \\ FCIS+++~\cite{FCIS} + OHEM & ResNet-101-C5-dilated & 33.6 & 54.5 & - & - & - & - \\ Mask R-CNN~\cite{mask_rcnn} & ResNet-101 & 35.7 & 58.0 & 37.8 & 15.5 & 38.1 & 52.4 \\ Mask R-CNN & \textbf{DetNet-59} & \textbf{37.1} & 60.0 & 39.6 & 18.6 & 39.0 & 51.3 \\ \end{tabular} \end{center} \caption{Comparison of instance segmentation results between our approach and other state-of-the-art methods on the MSCOCO test-dev dataset. Benefiting from DetNet-59, we achieve a new state of the art on the instance segmentation task.} \label{table:compare2SOA_instance} \end{table} \footnotetext{\url{http://image-net.org/challenges/talks/2016/GRMI-COCO-slidedeck.pdf}} Some of the results are visualized in Fig.~\ref{fig:det_results} and Fig.~\ref{fig:instance_results}: detection results of FPN with the DetNet-59 backbone are shown in Fig.~\ref{fig:det_results}, and instance segmentation results of Mask R-CNN with the DetNet-59 backbone are shown in Fig.~\ref{fig:instance_results}. We only show bounding boxes and instance segmentations with classification scores of at least 0.5. \begin{figure}[ht] \begin{center} \includegraphics[clip=true,width=1\linewidth]{instance_results.pdf} \end{center} \caption{Illustrative results of DetNet-59-based Mask R-CNN.} \label{fig:instance_results} \end{figure} \section{Conclusion} In this paper, we design a novel backbone network specifically for the object detection task. Traditionally, the backbone network is designed for the image classification task, and there is a gap when transferring it to object detection. To address this issue, we present a novel backbone structure called DetNet, which is not only optimized for the classification task but also localization friendly. Impressive results are reported on object detection and instance segmentation on the COCO benchmark. \bibliographystyle{splncs}
\section{Introduction}\label{intro} \noindent Algebras endowed with a bracket satisfying the Jacobi identity are currently used extensively in the formulations of classical and quantum theories. For example, the formulations of Classical Mechanics and Classical Field Theory are based on symplectic manifolds endowed with the Poisson bracket satisfying the Jacobi identity \cite{Ar,BSh}. One of the main ingredients in the consistent formulation of quantum mechanics is the commutator of operators, which satisfies the Jacobi identity. The main object of General Relativity is the Riemann tensor, which in particular satisfies the Bianchi identity following from the Jacobi identity for covariant derivatives. The quantization of dynamical systems with constraints in the Hamiltonian formalism \cite{BV1,BF,GT,HT} assumes the use of even symplectic supermanifolds \cite{Leites, DeWitt} endowed with a superextension of the commutator and a Poisson bracket satisfying the generalized Jacobi identity. The quantum theory of gauge fields in the Lagrangian formalism is constructed using odd symplectic supermanifolds endowed with an antibracket (a super Poisson bracket \cite{BV}) that satisfies the generalized Jacobi identity. The list of such descriptions can be continued. Here, we consider the Jacobi identities naturally existing for any associative algebra from a new (in our understanding) standpoint. For this, we use a remarkable identity for any three elements of a given associative algebra presented in terms of only single commutators. The Jacobi identity, which, as is known, is written in terms of double commutators, follows from this identity. Using the anticommutator, we introduce a second (fundamental) identity for an arbitrary associative algebra written for three elements of the algebra in terms of single commutators and anticommutators. We show that the first identity is a consequence of the second. This allows speaking of this identity as a fundamental (basic) identity for any associative algebra, for four identities (one of which is the Jacobi identity) in terms of double commutators and anticommutators can be derived from the fundamental identity. Among these identities, two are independent. For any algebra, we prove that if the fundamental identity is satisfied, then the multiplication operation is associative. We generalize the basic relations and statements to the case of superalgebras. This paper is organized as follows. In Section~2, we introduce the fundamental identity written in terms of single commutators and anticommutators for arbitrary associative algebras and derive a set of four (reducible) identities in terms of double commutators and anticommutators. We show that an algebra endowed with the fundamental identity is an associative algebra. In Section~3, we extend the results obtained in the preceding sections to the super case. In Section~4, we study a new interesting identity that exists for nondegenerate symplectic (super)manifolds. In Section~5, we generalize the basic identities discussed in Section~2 to the case of an arbitrary number of elements. Finally, we make some concluding remarks in Section~6.\\ \section{Remarkable identities in associative algebras} \noindent We consider an arbitrary associative algebra $\cal A$ with elements $X\in {\cal A}$. Let $T_i, i=1,2,...,n$, be a basis in $\cal A$. Then the decomposition $X=x^iT_i$ exists for any element $X$ in $\cal A$.
Because $T_iT_j\in {\cal A} $, we have \begin{eqnarray} \label{TT} T_iT_j=F^{\;\;k}_{ij}\;T_k, \end{eqnarray} where $F^{\;\;k}_{ij}$ are the structure constants of the algebra. In terms of $F^{\;\;k}_{ij}$, the associativity condition $(XY)Z=X(YZ)$ reads \begin{eqnarray} \label{FFas} F^{\;\;n}_{ij}F^{\;\;m}_{nk}=F^{\;\;n}_{j\;k}F^{\;\;m}_{in}. \end{eqnarray} These constants can be uniquely represented as a sum of symmetric and antisymmetric parts, \begin{eqnarray} \label{F} F^{\;\;k}_{ij}=\frac{1}{2}c^{\;\;k}_{ij}+\frac{1}{2}f^{\;\;k}_{ij}, \end{eqnarray} where $c^{\;\;k}_{ij}$ and $f^{\;\;k}_{ij}$ have the symmetries \begin{eqnarray} \label{fsym} c^{\;\;k}_{ij}= c^{\;\;k}_{j\;i},\quad f^{\;\;k}_{ij}=-f^{\;\;k}_{j\;i}. \end{eqnarray} The commutator $[\cdot,\cdot]$ and anticommutator $\{\cdot,\cdot\}$ in this algebra are defined for any two elements $X,Y\in {\cal A}$ by the relations \begin{eqnarray} \label{comm} [X,Y]=XY-YX,\qquad \{X,Y\}=XY+YX, \end{eqnarray} which are elements of ${\cal A}$. The Leibniz rules for the commutator and anticommutator follow from (\ref{comm}): \begin{eqnarray} \label{Lcomm} [X,YZ]=[X,Y]Z+Y[X,Z],\qquad \{X,YZ\}=\{X,Y\}Z-Y[X,Z]. \end{eqnarray} It is clear that \begin{eqnarray} \label{commTT} [T_i,T_j]=f^{\;\;k}_{ij}T_k ,\qquad \{T_i,T_j\}= c^{\;\;k}_{ij}T_k. \end{eqnarray} The following remarkable identities, written in terms of single commutators and anticommutators, exist for any associative algebra $\cal A$ and for any $X,Y,Z\in {\cal A}$ (without any reference to a basis): \begin{eqnarray} \label{FunI} &&[X,YZ]+ [Z,XY]+[Y,ZX]\equiv 0,\\ \label{FunII} &&[X,YZ]+\{Y,ZX\}-\{Z,XY\}\equiv 0. \end{eqnarray} From these identities, we can derive a set of identities in terms of double commutators and anticommutators. In particular, the Jacobi identity \begin{eqnarray} \label{JI} [X,[Y,Z]]+ [Z,[X,Y]]+ [Y,[Z,X]]\equiv 0 \end{eqnarray} follows from (\ref{FunI}). Moreover, we can deduce from (\ref{FunI}) the identity containing the anticommutator, \begin{eqnarray} \label{JIab} [X,\{Y,Z\}]+ [Z,\{X,Y\}]+ [Y,\{Z,X\}]\equiv 0. \end{eqnarray} Similarly, we can derive the following identities from (\ref{FunII}): \begin{eqnarray} \label{JIantiI} &&[X,\{Y,Z\}]- \{Z,[X,Y]\}+ \{Y,[Z,X]\}\equiv 0,\\ &&[X,[Y,Z]]+\{Y,\{Z,X\}\}-\{Z,\{X,Y\}\}\equiv 0. \label{JIantiII} \end{eqnarray} We note that identities (\ref{FunI}) and (\ref{FunII}) are not independent, because summing (\ref{FunII}) over cyclic permutations gives identity (\ref{FunI}). Therefore, we have reason to regard identity (\ref{FunII}) as the fundamental identity in associative algebras, because identities (\ref{FunI}), (\ref{JI}), (\ref{JIab}), (\ref{JIantiI}) and (\ref{JIantiII}) can be derived from it. In turn, the set of identities (\ref{JI})--(\ref{JIantiII}) is not independent. Indeed, summing (\ref{JIantiI}) over cyclic permutations gives identity (\ref{JIab}), and the same operation applied to identity (\ref{JIantiII}) leads to Jacobi identity (\ref{JI}) \cite{MR}. We note that identities (\ref{JIantiI}), (\ref{JIantiII}) were also discussed for associative algebras in \cite{LV}. It is clear that, given identities (\ref{JIantiI})--(\ref{JIantiII}) and the explicit realization (\ref{comm}) of the commutator and anticommutator, we can reproduce fundamental identity (\ref{FunII}).
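Both fundamental identities can be verified by direct expansion (a one-line check we add here for the reader's convenience):
\begin{eqnarray*}
&&[X,YZ]+[Z,XY]+[Y,ZX]=(XYZ-YZX)+(ZXY-XYZ)+(YZX-ZXY)= 0,\\
&&[X,YZ]+\{Y,ZX\}-\{Z,XY\}=(XYZ-YZX)+(YZX+ZXY)-(ZXY+XYZ)= 0,
\end{eqnarray*}
where associativity is used to omit the parentheses in the triple products.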
We note that from associativity condition (\ref{FFas}) we can derive analogues of identities (\ref{FunI})--(\ref{JIantiII}) in terms of the structure constants. In particular, the Jacobi identities for the antisymmetric parts $f^{\;\;k}_{ij}$ of the structure constants have the form \begin{eqnarray} \label{JIf} f^{\;\;m}_{ij}f^{\;\;n}_{mk}+ f^{\;\;m}_{ki}f^{\;\;n}_{mj}+ f^{\;\;m}_{jk}f^{\;\;n}_{mi} \equiv 0. \end{eqnarray} The identities containing the symmetric and antisymmetric parts of the structure constants can be written as \begin{eqnarray} \nonumber \label{JIfc} &&c^{\;\;m}_{ij}f^{\;\;n}_{mk}+ c^{\;\;m}_{ki}f^{\;\;n}_{mj}+ c^{\;\;m}_{jk}f^{\;\;n}_{mi} \equiv 0,\\ \label{JIfcII}&&f^{\;\;m}_{ij}c^{\;\;n}_{mk}-f^{\;\;m}_{ki}c^{\;\;n}_{mj}+ c^{\;\;m}_{jk}f^{\;\;n}_{mi}\equiv 0,\\ \nonumber \label{JIfcIII}&& c^{\;\;m}_{ij}c^{\;\;n}_{mk}- c^{\;\;m}_{ki}c^{\;\;n}_{mj} +f^{\;\;m}_{jk}f^{\;\;n}_{mi}\equiv 0. \end{eqnarray} The existence of fundamental identity (\ref{FunII}) allows discussing the associativity of multiplication in the algebra from a new standpoint. Indeed, let ${\cal A}$ be an algebra; we can then introduce the commutator and anticommutator by rule (\ref{comm}) as elements of ${\cal A}$. We suppose that fundamental identity (\ref{FunII}) is satisfied. We know that identities (\ref{JIab})--(\ref{JIantiII}) follow from (\ref{FunII}). We can then show that the multiplication in ${\cal A}$ is associative. For this, we introduce the multiplication of any two elements $X,Y\in {\cal A}$ as \begin{eqnarray} \label{defmula} XY=\frac{1}{2}\;\Big([X,Y]+\{X,Y\}\Big) \end{eqnarray} and verify that the equality \begin{eqnarray} \label{Imul} (XY)Z=X(YZ) \end{eqnarray} holds. Indeed, it follows from (\ref{defmula}) that \begin{eqnarray} &&(XY)Z=\frac{1}{4}\Big([[X,Y],Z]+[\{X,Y\},Z]+\{[X,Y],Z\}+\{\{X,Y\},Z\}\Big), \label{ass1}\\ &&X(YZ)=\frac{1}{4}\Big([X,[Y,Z]]+[X,\{Y,Z\}]+\{X,[Y,Z]\}+\{X,\{Y,Z\}\}\Big). \label{ass2} \end{eqnarray} For the difference between (\ref{ass1}) and (\ref{ass2}), we obtain \begin{eqnarray} &&(XY)Z-X(YZ)=\frac{1}{4}\Big(-[Z,[X,Y]]-[X,[Y,Z]]- \nonumber \\ &&-[Z,\{X,Y\}]-[X,\{Y,Z\}]+\{Z,[X,Y]\}-\{X,[Y,Z]\}+ \nonumber \\ &&+\{Z,\{X,Y\}\}-\{X,\{Y,Z\}\}\Big).\label{ass3} \end{eqnarray} Taking identity (\ref{JIantiII}) and Jacobi identity (\ref{JI}), which follows from it, into account, relation (\ref{ass3}) becomes \begin{equation}\label{ass4} (XY)Z-X(YZ)=\frac{1}{4}\Big(-[Z,\{X,Y\}]-[X,\{Y,Z\}]+\{Z,[X,Y]\}-\{X,[Y,Z]\}\Big). \end{equation} Expressing $[Z,\{X,Y\}]$ and $[X,\{Y,Z\}]$ using identity (\ref{JIantiI}), we now obtain associativity condition (\ref{Imul}). Therefore, for any algebra ${\cal A}$, fundamental identity (\ref{FunII}) is equivalent to the associativity condition. We formulate this result as a theorem: {\bf Theorem 1.} {\it For the associativity of multiplication in an algebra ${\cal A}$, it is necessary and sufficient that identity (\ref{FunII}), where the commutators and anticommutators are defined by rule (\ref{comm}), be satisfied}. Any associative algebra has the natural structure of a Lie algebra if the Lie bracket is defined in terms of the associative multiplication. The converse does not hold in general: the Lie bracket in the general case does not allow introducing an associative multiplication. But the following theorem holds. {\bf Theorem 2.} {\it For any algebra ${\cal A}$ equipped with two bilinear operations $[\cdot,\cdot]$ and $\{\cdot,\cdot\}$ with the symmetry properties} \begin{eqnarray} \label{syma} [X,Y]=-[Y,X], \qquad \{X,Y\}=\{Y,X\}, \end{eqnarray} {\it satisfying identities (\ref{JIantiI}) and (\ref{JIantiII}), an associative multiplication can be introduced by rule (\ref{defmula})}. The proof of Theorem 2 is in fact contained in relations (\ref{Imul})--(\ref{ass4}).
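As an independent sanity check (ours, not part of the original argument), the fundamental identity and the Jacobi identity can be verified numerically in the associative algebra of real matrices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = (rng.standard_normal((4, 4)) for _ in range(3))

def comm(a, b):      # [a, b]
    return a @ b - b @ a

def anti(a, b):      # {a, b}
    return a @ b + b @ a

# Fundamental identity: [X,YZ] + {Y,ZX} - {Z,XY} = 0
fund = comm(X, Y @ Z) + anti(Y, Z @ X) - anti(Z, X @ Y)

# Jacobi identity: [X,[Y,Z]] + [Z,[X,Y]] + [Y,[Z,X]] = 0
jac = comm(X, comm(Y, Z)) + comm(Z, comm(X, Y)) + comm(Y, comm(Z, X))

print(np.allclose(fund, 0), np.allclose(jac, 0))  # True True
\end{verbatim}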
We note that the above results use no assumptions about the finiteness or infiniteness of the system of basis vectors $\{T_i\}$ of the algebra $\cal A$, or even about the existence of a basis at all. \\ \section{Superextension} \noindent The results obtained in the preceding section can be extended to any associative superalgebra ${\cal A}_s$ with elements $X\in {\cal A}_s$ having the Grassmann parity $\varepsilon(X)$. Let $T_i, i=1,2,...,n,\; \varepsilon(T_i)=\varepsilon_i$, be a basis in ${\cal A}_s$ such that the decomposition $X=x^iT_i,\; \varepsilon(x^i)=\varepsilon(X)+\varepsilon_i$, exists for any element $X\in {\cal A}_s$. Because $T_iT_j\in {\cal A}_s $, we have \begin{eqnarray} \label{TTs} T_iT_j=F^{\;\;k}_{ij}\;T_k, \end{eqnarray} where $F^{\;\;k}_{ij}$, $\varepsilon(F^{\;\;k}_{ij})= \varepsilon_i+\varepsilon_j+\varepsilon_k$, are the structure constants. In terms of $F^{\;\;k}_{ij}$, the associativity condition $(XY)Z=X(YZ)$ reads \begin{eqnarray} \label{FFassup} F^{\;\;n}_{ij}F^{\;\;m}_{nk}=F^{\;\;n}_{j\;k}F^{\;\;m}_{in} (-1)^{\varepsilon_i(\varepsilon_j+\varepsilon_k+\varepsilon_n)}. \end{eqnarray} In the super case, the commutator $[\cdot,\cdot]$ and anticommutator $\{\cdot,\cdot\}$ are introduced for any two elements $X,Y\in {\cal A}_s$ by the relations \begin{eqnarray} \label{scomm} [X,Y]=XY-(-1)^{\varepsilon(X)\varepsilon(Y)}YX,\qquad \{X,Y\}=XY+(-1)^{\varepsilon(X)\varepsilon(Y)}YX \end{eqnarray} with the obvious symmetry properties \begin{eqnarray} \label{scommsym} [X,Y]=-[Y,X](-1)^{\varepsilon(X)\varepsilon(Y)},\qquad \{X,Y\}=\{Y,X\}(-1)^{\varepsilon(X)\varepsilon(Y)}. \end{eqnarray} The Leibniz rules follow from (\ref{scomm}): \begin{eqnarray} \nonumber \label{Lscomm} &&[X,YZ]=[X,Y]Z+Y[X,Z](-1)^{\varepsilon(X)\varepsilon(Y)},\\ \label{Lsanticomm} &&\{X,YZ\}=\{X,Y\}Z-Y[X,Z](-1)^{\varepsilon(X)\varepsilon(Y)}. \end{eqnarray} For the basis elements, we have \begin{eqnarray} \label{scommTT} [T_i,T_j]=f^{\;\;k}_{ij}T_k ,\qquad \{T_i,T_j\}= c^{\;\;k}_{ij}T_k, \end{eqnarray} where the structure constants $f^{\;\;k}_{ij}$ and $c^{\;\;k}_{ij}$ have the symmetry properties \begin{eqnarray} \label{sfsym} c^{\;\;k}_{ij}= c^{\;\;k}_{ji}(-1)^{\varepsilon_i\varepsilon_j},\quad f^{\;\;k}_{ij}=-f^{\;\;k}_{ji}(-1)^{\varepsilon_i\varepsilon_j}. \end{eqnarray} They can be identified with the symmetric and antisymmetric parts of the structure coefficients $F^{\;\;k}_{ij}$: \begin{eqnarray} \label{Fcfind} c^{\;\;k}_{ij}=F^{\;\;k}_{ij}+F^{\;\;k}_{j\;i} (-1)^{\varepsilon_i\varepsilon_j},\quad f^{\;\;k}_{ij}=F^{\;\;k}_{ij}-F^{\;\;k}_{j\;i} (-1)^{\varepsilon_i\varepsilon_j}. \end{eqnarray} For any associative superalgebra ${\cal A}_s$ and any $X,Y,Z\in {\cal A}_s$, we have the identities \begin{eqnarray} \label{FunIs} &&[X,YZ](-1)^{\varepsilon(X)\varepsilon(Z)}+ [Z,XY](-1)^{\varepsilon(Z)\varepsilon(Y)}+ [Y,ZX](-1)^{\varepsilon(Y)\varepsilon(X)}\equiv 0,\\ \label{FunIIs} &&[X,YZ](-1)^{\varepsilon(X)\varepsilon(Z)}+ \{Y,ZX\}(-1)^{\varepsilon(Y)\varepsilon(X)}- \{Z,XY\}(-1)^{\varepsilon(Z)\varepsilon(Y)}\equiv 0, \end{eqnarray} which generalize relations (\ref{FunI}) and (\ref{FunII}). Identity (\ref{FunIs}) can be derived from (\ref{FunIIs}), which can therefore be regarded as the fundamental identity for associative superalgebras.
A set of identities in terms of double commutators and anticommutators is \begin{eqnarray} \label{JIs} [X,[Y,Z]](-1)^{\varepsilon(X)\varepsilon(Z)}+ [Z,[X,Y]](-1)^{\varepsilon(Z)\varepsilon(Y)}+ [Y,[Z,X]](-1)^{\varepsilon(Y)\varepsilon(X)}\equiv 0, \end{eqnarray} \begin{eqnarray} \label{JIsI} [X,\{Y,Z\}](-1)^{\varepsilon(X)\varepsilon(Z)}+ [Z,\{X,Y\}](-1)^{\varepsilon(Z)\varepsilon(Y)}+ [Y,\{Z,X\}](-1)^{\varepsilon(Y)\varepsilon(X)}\equiv 0, \end{eqnarray} \begin{eqnarray} \nonumber \label{JIantis} &&[X,\{Y,Z\}](-1)^{\varepsilon(X)\varepsilon(Z)}- \{Z,[X,Y]\}(-1)^{\varepsilon(Z)\varepsilon(Y)}+\\ &&\qquad\qquad\qquad\qquad\qquad + \{Y,[Z,X]\}(-1)^{\varepsilon(Y)\varepsilon(X)}\equiv 0,\\ \nonumber \label{JIantiIs} &&[X,[Y,Z]](-1)^{\varepsilon(X)\varepsilon(Z)}+ \{Y,\{Z,X\}\}(-1)^{\varepsilon(X)\varepsilon(Y)}-\\ &&\qquad\qquad \qquad\qquad\qquad- \{Z,\{X,Y\}\}(-1)^{\varepsilon(Z)\varepsilon(Y)} \equiv 0. \end{eqnarray} Identities (\ref{JIs}) and (\ref{JIsI}) follow from (\ref{JIantiIs}) and (\ref{JIantis}), respectively, by summing over cyclic permutations. The identities in terms of the symmetric and antisymmetric parts of the structure constants follow from associativity condition (\ref{FFassup}): \begin{eqnarray} \nonumber \label{JIfs} &&f^{\;\;n}_{ij}f^{\;\;m}_{nk}(-1)^{\varepsilon_i\varepsilon_k}+ f^{\;\;n}_{ki}f^{\;\;m}_{nj}(-1)^{\varepsilon_k\varepsilon_j}+ f^{\;\;n}_{j\;k}f^{\;\;m}_{ni}(-1)^{\varepsilon_j\varepsilon_i} \equiv 0,\\ \nonumber \label{JIfcs} &&c^{\;\;n}_{ij}f^{\;\;m}_{nk}(-1)^{\varepsilon_i\varepsilon_k}+ c^{\;\;n}_{ki}f^{\;\;m}_{nj}(-1)^{\varepsilon_j\varepsilon_k}+ c^{\;\;n}_{j\;k}f^{\;\;m}_{ni}(-1)^{\varepsilon_i\varepsilon_j} \equiv 0,\\ \label{JIfcIIs}&&f^{\;\;n}_{ij}c^{\;\;m}_{nk}(-1)^{\varepsilon_i\varepsilon_k} -f^{\;\;n}_{ki}c^{\;\;m}_{nj}(-1)^{\varepsilon_k\varepsilon_j}+ c^{\;\;n}_{j\;k}f^{\;\;m}_{ni}(-1)^{\varepsilon_i\varepsilon_j}\equiv 0,\\ \nonumber \label{JIfcIIIs}&& c^{\;\;n}_{ij}c^{\;\;m}_{nk}(-1)^{\varepsilon_i\varepsilon_k}- c^{\;\;n}_{ki}c^{\;\;m}_{nj}(-1)^{\varepsilon_j\varepsilon_k} +f^{\;\;n}_{j\;k}f^{\;\;m}_{ni}(-1)^{\varepsilon_i\varepsilon_j}\equiv 0. \end{eqnarray} We can again consider the associativity of the multiplication operation in superalgebras from a new standpoint. For this, we consider a superalgebra ${\cal A}_s$ and introduce the commutator and anticommutator by rule (\ref{scomm}) as elements of ${\cal A}_s$. We suppose that fundamental identity (\ref{FunIIs}) is satisfied. Using the representation of the multiplication of any two elements $X,Y\in {\cal A}_s$ in the form \begin{eqnarray}\label{defmuls} XY=\frac{1}{2}\Big([X,Y]+\{X,Y\}\Big) \end{eqnarray} and repeating the proof given in Section 2, we obtain the associativity of the multiplication, \begin{eqnarray} (XY)Z=X(YZ). \end{eqnarray} Consequently, we have the following theorem. {\bf Theorem 3.} {\it For the associativity of multiplication in a superalgebra ${\cal A}_s$, it is necessary and sufficient that identity (\ref{FunIIs}), where the commutators and anticommutators are defined by rule (\ref{scomm}), be satisfied}. Any associative superalgebra has the natural structure of a Lie superalgebra if the Lie superbracket is defined in terms of the associative multiplication by the formula $[X,Y]=XY-YX(-1)^{\varepsilon(X)\varepsilon(Y)}$. The converse, generally speaking, does not hold: the Lie superbracket in the general case does not allow introducing an associative multiplication. But the following theorem holds.
{\bf Theorem 4.} {\it Let ${\cal A}_s$ be a superalgebra equipped with two bilinear operations $[\cdot,\cdot]$ and $\{\cdot,\cdot\}$ with the symmetry properties} \begin{eqnarray} [X,Y]=-(-1)^{\varepsilon(X)\varepsilon(Y)}[Y,X],\qquad \{X,Y\}=(-1)^{\varepsilon(X)\varepsilon(Y)}\{Y,X\}. \end{eqnarray} {\it If these operations satisfy identities (\ref{JIantis}) and (\ref{JIantiIs}), then an associative multiplication can be introduced in ${\cal A}_s$.} Indeed, we define the multiplication $X\circ Y$ by rule (\ref{defmuls}), \[ X\circ Y=\frac{1}{2}\Big([X,Y]+\{X,Y\}\Big), \] and apply the proof given in Section 2; we then obtain the associativity of this multiplication. In terms of this multiplication, the binary operations introduced above have the usual representation \begin{eqnarray} [X,Y]=X\circ Y -Y\circ X(-1)^{\varepsilon(X)\varepsilon(Y)},\quad \{X,Y\} =X\circ Y +Y\circ X(-1)^{\varepsilon(X)\varepsilon(Y)}. \end{eqnarray} We note that the above considerations contain no assumptions concerning a basis in the superalgebra. \\ \section{Nondegenerate symplectic (super)manifolds} We consider a nondegenerate symplectic supermanifold $({\cal M}, \omega)$, where ${\cal M}$ is a supermanifold and $\omega$ is a nondegenerate closed 2-form with the Grassmann parity $\varepsilon(\omega(X,Y))=\varepsilon(X)+\varepsilon(Y)+\varepsilon(\omega)$, where $X$ and $Y$ are elements of the tangent space of ${\cal M}$. We speak of an even symplectic supermanifold if $\varepsilon(\omega)=0$ and of an odd symplectic supermanifold if $\varepsilon(\omega)=1$. It is well known (see, e.g., \cite{Ar}) that even symplectic supermanifolds are the foundation for describing classical dynamical systems in the Hamiltonian formalism. In turn, the covariant quantization of gauge theories (the Batalin-Vilkovisky method \cite{BV}) is based on nondegenerate odd symplectic supermanifolds. Any nondegenerate closed symplectic structure defines the Poisson superbracket $\{F,G\}$, $\varepsilon(\{F,G\})=\varepsilon(F)+\varepsilon(G)+\varepsilon(\omega)$, which for any two scalar functions $F,\;G$ on ${\cal M}$ is a scalar under general changes of coordinates on ${\cal M}$. The Poisson superbracket has the antisymmetry property \begin{eqnarray} \label{PBant} \{F,G\}=-\{G,F\} (-1)^{(\varepsilon(F)+\varepsilon(\omega))\;(\varepsilon(G)+\varepsilon(\omega))} \end{eqnarray} and the linearity property \begin{eqnarray} \{F+G,H\}=\{F,H\}+\{G,H\}, \end{eqnarray} and it satisfies the Leibniz rule \begin{eqnarray} \label{PBL} \{F,GH\}=\{F,G\}H+\{F,H\}G(-1)^{\varepsilon(H)\varepsilon(G)} \end{eqnarray} and the Jacobi identity \begin{eqnarray} \nonumber &&\{F,\{G,H\}\}(-1)^{(\varepsilon(F)+\varepsilon(\omega))\;(\varepsilon(H)+\varepsilon(\omega))}+ \{H,\{F,G\}\}(-1)^{(\varepsilon(H)+\varepsilon(\omega))\;(\varepsilon(G)+\varepsilon(\omega))}+\\ \label{PBJI} &&+\{G,\{H,F\}\}(-1)^{(\varepsilon(G)+\varepsilon(\omega))\; (\varepsilon(F)+\varepsilon(\omega))}\equiv 0, \end{eqnarray} which is a consequence of the closedness of the symplectic structure. In the even case ($\varepsilon(\omega)=0$), the Poisson superbracket coincides with the superextension of the Poisson bracket. In the odd case ($\varepsilon(\omega)=1$), the Poisson superbracket is the antibracket, which is one of the fundamental operations in the BV quantization method \cite{BV,BV5} and is known in mathematics as the Buttin bracket \cite{Bu}.
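For orientation, we recall the standard Darboux-coordinate expressions of these brackets (added here purely for illustration). For a purely bosonic even symplectic manifold with coordinates $(q^i,p_i)$,
\begin{eqnarray*}
\{F,G\}=\frac{\partial F}{\partial q^i}\frac{\partial G}{\partial p_i}- \frac{\partial F}{\partial p_i}\frac{\partial G}{\partial q^i},
\end{eqnarray*}
while in the odd case, in the field--antifield coordinates $(\varphi^a,\varphi^{*}_a)$ of the BV formalism \cite{BV}, the antibracket reads
\begin{eqnarray*}
(F,G)=\frac{\partial_r F}{\partial \varphi^a}\frac{\partial_l G}{\partial \varphi^{*}_a}- \frac{\partial_r F}{\partial \varphi^{*}_a}\frac{\partial_l G}{\partial \varphi^a},
\end{eqnarray*}
where $\partial_r$ and $\partial_l$ denote the right and left derivatives.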
We note that in the even case ($\varepsilon(\omega)=0$), the identity \begin{eqnarray} \label{PBnI} \{F,GH\}(-1)^{\varepsilon(F)\varepsilon(H)}+ \{H,FG\}(-1)^{\varepsilon(H)\varepsilon(G)}+ \{G,HF\}(-1)^{\varepsilon(G)\varepsilon(F)} \equiv 0 \end{eqnarray} follows from (\ref{PBant}) and (\ref{PBL}). Unfortunately, in contrast to associative algebras, we cannot regard this identity as fundamental for nondegenerate closed even symplectic supermanifolds, because we cannot deduce Jacobi identity (\ref{PBJI}) from (\ref{PBnI}). Nevertheless, if canonical quantization is applied to a dynamical system whose phase space is described by a nondegenerate even closed symplectic supermanifold, then the Poisson bracket is transformed into the commutator, $\{F,G\}\rightarrow (i\hbar)^{-1}[{\hat F},{\hat G}]$, and identity (\ref{PBnI}) reduces to (\ref{FunIs}) for the operators ${\hat F}, {\hat G}, {\hat H}$. \section{Generalization of basic identities} We note that for any associative algebra, we have the identities \begin{eqnarray} \label{GenId} [X_1,X_2X_3\cdots X_n]+cycle(X_1,X_2,...,X_n) \equiv 0,\qquad n=3,4,..., \end{eqnarray} which generalize identity (\ref{FunI}). The Jacobi identity \begin{eqnarray} \nonumber [[X_1,X_2],X_3]+ [[X_3,X_1],X_2]+ [[X_2,X_3],X_1]\equiv 0 \end{eqnarray} follows from (\ref{GenId}) with $n=3$ via the combination \begin{eqnarray} \label{GenJI3our} &&[X_1,X_2X_3]- [X_2,X_1X_3]+cycle(X_1,X_2,X_3) \equiv 0. \end{eqnarray} The generalized Jacobi identity for $n=4$ was discussed in \cite{We} and has the form \begin{eqnarray} \label{GenJI4} [[[X_1,X_2],X_3],X_4]+ [[[X_2,X_1],X_4],X_3]+ [[[X_3,X_4],X_1],X_2]+[[[X_4,X_3],X_2],X_1] \equiv 0. \end{eqnarray} This identity can also be obtained from (\ref{GenId}). Indeed, a direct verification shows that it in fact coincides with \begin{eqnarray} \label{GenJI4our1} &&[X_1,X_2X_3X_4]- [X_2,X_1X_3X_4]+ [X_4,X_3X_2X_1]- [X_4,X_3X_1X_2]+\\ \nonumber &&\qquad+ cycle(X_1,X_2,X_3,X_4) \equiv 0. \end{eqnarray} The generalization of identity (\ref{GenJI4}) to $n=5,6,...$ was given in \cite{BL}, and these generalized Jacobi identities can again be derived from fundamental identities (\ref{GenId}). We can also suggest a generalization of identity (\ref{FunII}). For four elements, we have \begin{eqnarray} \label{Funm4} [X_1,X_2X_3X_4]-\{X_4,X_1X_2X_3\}+\{X_3,X_4X_1X_2\}+[X_2,X_3X_4X_1]\equiv 0. \end{eqnarray} In general, for any associative algebra, there exist the identities \begin{eqnarray} \label{FunmII} &&[X_1, X_2X_3\cdots X_n]-\{X_n,X_1X_2\cdots X_{n-1}\}+\{X_{n-1},X_nX_1\cdots X_{n-2}\}+\\ \nonumber &&+[X_{n-2},X_{n-1}X_nX_1\cdots X_{n-3}]+[X_{n-3}, X_{n-2}X_{n-1}X_nX_1\cdots X_{n-4}]+\\ \nonumber &&+\cdots +[X_2,X_3\cdots X_nX_1]\equiv 0,\quad n\geq 4 . \end{eqnarray} The proof of (\ref{FunmII}) is based on the obvious identity \begin{eqnarray} \label{FunmuII} &&[X_n,X_1 X_2\cdots X_{n-1}]+[X_{n-1},X_nX_1\cdots X_{n-2}]+\\ \nonumber &&+\{X_n,X_1X_2\cdots X_{n-1}\}-\{X_{n-1},X_nX_1\cdots X_{n-2}\}\equiv 0,\quad n\geq 2 . \end{eqnarray} Applying identity (\ref{GenId}) to (\ref{FunmuII}), we then obtain identity (\ref{FunmII}). As in the case of (\ref{FunI}), identity (\ref{GenId}) can be derived from (\ref{FunmII}). For this, we sum (\ref{FunmII}) over cyclic permutations. We have \begin{eqnarray} \label{FunmIII} \nonumber &&\big((n-2)[X_1, X_2X_3\cdots X_n]-\{X_n,X_1X_2\cdots X_{n-1}\}+\{X_{n-1},X_nX_1\cdots X_{n-2}\}\big)+\\ &&\qquad +cycle(X_1,X_2,...,X_n)\equiv 0.
But
\begin{eqnarray}
\label{FunmIV}
\nonumber
\big(\{X_n,X_1X_2\cdots X_{n-1}\}-\{X_{n-1},X_nX_1\cdots X_{n-2}\}\big)+cycle(X_1,X_2,...,X_n)\equiv 0
\end{eqnarray}
and identity (\ref{GenId}) follows from (\ref{FunmIII}). This allows us to speak of identities (\ref{FunmII}) as fundamental for any associative algebra.

\section{Conclusions}

We have discussed identity (\ref{FunII}) for an arbitrary associative algebra and its analog (\ref{FunIIs}) for an arbitrary associative superalgebra. We proposed regarding these identities as fundamental because they are formulated in terms of single commutators and anticommutators, in contrast to the usually discussed identities for algebras (see \cite{MR,LV}), which are in fact consequences of them. We proved (Theorems 2 and 4) that any algebra or superalgebra endowed with two bilinear operations (a commutator and an anticommutator) satisfying identities (\ref{JIantiI}) and (\ref{JIantiII}) (or (\ref{JIantis}) and (\ref{JIantiIs}), respectively) is an associative algebra or superalgebra.\footnote{Note that for algebras this fact was proved in \cite{MR}.} We stress that there were no assumptions in the proof concerning the finiteness or even the existence of a basis for a given algebra or superalgebra. We introduced identity (\ref{PBnI}) for any nondegenerate even symplectic supermanifold and discussed an application of this identity in the canonical quantization of dynamical systems. Finally, we proposed a generalization of the basic identities to the case of an arbitrary number of elements involved in these relations (see (\ref{GenId}) and (\ref{FunmII})).

\section*{Acknowledgments}
\noindent The authors thank V. V. Zharinov and A. K. Pogrebkov for carefully reading the manuscript and also D. A. Leites and P. A. Zusmanovich for useful remarks and references concerning identities in algebras. This work is supported in part by the Program for Supporting Leading Scientific Schools (grant 88.2014.2, P.M.L. and O.V.R.), the Ministry of Education and Science of the Russian Federation (grant TSPU-122, P.M.L. and O.V.R.), and the Russian Foundation for Basic Research (grant N 12-02-00121, P.M.L. and O.V.R., and grant N 14-01-00489, I.V.T.).
\bigskip
\begin{thebibliography}{99}
\addtolength{\itemsep}{-3pt}
\bibitem{Ar} V.I. Arnold, {\it Mathematical Methods of Classical Mechanics} (Springer-Verlag, Berlin, 1978).
\bibitem{BSh} N.N. Bogoliubov, D.V. Shirkov, {\it Introduction to the Theory of Quantized Fields} (John Wiley and Sons Inc., 1959).
\bibitem{BV1} I.A. Batalin, G.A. Vilkovisky, {\it Relativistic $S$-matrix of dynamical systems with boson and fermion constraints}, Phys. Lett. {\bf 69B} (1977) 309.
\bibitem{BF} I.A. Batalin, E.S. Fradkin, {\it A generalized canonical formalism and quantization of reducible gauge theories}, Phys. Lett. {\bf 122B} (1983) 157.
\bibitem{GT} D.M. Gitman, I.V. Tyutin, {\it Quantization of Fields with Constraints} (Springer, Berlin, 1990).
\bibitem{HT} M. Henneaux, C. Teitelboim, {\it Quantization of Gauge Systems} (Princeton U.P., Princeton, 1992).
\bibitem{Leites} D.A. Leites, {\it Introduction to the theory of supermanifolds}, Russian Mathematical Surveys {\bf 35}:1 (1980) 1.
\bibitem{DeWitt} B. DeWitt, {\it Supermanifolds} (Second Edition, Cambridge University Press, 1992).
\bibitem{BV} I.A. Batalin, G.A. Vilkovisky, {\it Gauge algebra and quantization}, Phys. Lett. {\bf B102} (1981) 27.
\bibitem{MR} M. Markl, E.
Remm, {\it Algebras with one operation including Poisson and other Lie-admissible algebras}, J. Algebra {\bf 299} (2006) 171.
\bibitem{LV} J.-L. Loday, B. Vallette, {\it Algebraic Operads} (Springer, 2012).
\bibitem{BV5} I.A. Batalin, G.A. Vilkovisky, {\it Quantization of gauge theories with linearly dependent generators}, Phys. Rev. {\bf D28} (1983) 2567.
\bibitem{Bu} C. Buttin, {\it Les d\'{e}rivations des champs de tenseurs et l'invariant diff\'{e}rentiel de Schouten}, C.R. Acad. Sci. Paris {\bf 269}:1 (1969) A87-A89.
\bibitem{We} F. Wever, {\it \"{U}ber Invarianten in Lie'schen Ringen}, Math. Ann. {\bf 120} (1949) 563.
\bibitem{BL} D. Blessenohl, H. Laue, {\it Generalized Jacobi identities}, Note di Matematica {\bf VIII} (1988) 111.
\end{thebibliography}
\end{document}
\section{Introduction}
The Free-Electron Laser (FEL) is a proven source of high-power tunable radiation over a wide spectral range into the hard X-ray~\cite{McNeil xray}, where its output is transforming our ability to investigate matter and how it functions, in particular in biology~\cite{xfelsci}. In addition to the atomic spatiotemporal resolution offered by the short wavelengths and pulses, the FEL can also generate radiation output from planar through to full circular polarisation using undulators of variable ellipticity, such as the APPLE-III undulator design, proposed for SwissFEL~\cite{swissund}, and the Delta undulator design~\cite{deltaund}, installed at LCLS~\cite{lclsund}. This variably polarised output offers another important degree of freedom with which to investigate the behaviour of matter and is of significant interest across a wide range of science~\cite{couprie,mazza,emma1,emma2}. FEL user facilities, such as the FERMI user facility in Italy, are now recognising and addressing this need for elliptically polarised output~\cite{fermi1,fermi2}.

In a planar undulator, the electrons have a fast axial `jitter' motion at twice the undulator period as they propagate along the undulator axis. In addition to the coupling of the electrons to the fundamental radiation wavelength, the jitter motion allows coupling to odd harmonics of the fundamental, which can also experience gain. A commonly used model for simulating the FEL interaction is the `averaged' model which, as the name suggests, averages the governing Maxwell and Lorentz equations describing the electron/radiation coupling over an undulator period~\cite{colson}. The averaging of the jitter motion introduces coupling terms described by a difference of Bessel functions which depend upon both the undulator strength and the harmonic~\cite{colson,bw}. For an helical undulator, there is no electron jitter and the difference of Bessel functions coupling terms become a constant for the fundamental and zero for all harmonics, i.e.\ in an helical undulator there is no gain coupling to harmonics.

It is perhaps surprising that the equivalent coupling terms for an elliptically polarised undulator have not been derived previously. In this paper, the coupling terms due to the electron jitter motion are calculated in a general way for all undulator ellipticities, from a planar through to an helical configuration, corresponding to those now available from variably polarised undulators, so enabling more accurate modelling of this important type of FEL output. The resulting coupling terms, which are a more general form of the difference of Bessel functions factors of the planar undulator case, are used to predict the scaling of the FEL interaction for a range of undulator ellipticities. An averaged FEL simulation code then uses the general Bessel function factors to give solutions for elliptically polarised FEL output into the nonlinear, high-gain regime, which are tested against the scaling. A further test is made by comparing the results of the averaged FEL simulations with those of an unaveraged simulation code, Puffin~\cite{Campbell Puffin}. New, perhaps unexpected, results are presented and discussed.

\section{The elliptical undulator model}
In this section the equations describing the electron beam and radiation evolution in an elliptically polarised undulator are derived in the 1D limit. The equations are averaged over an undulator period, removing any sub-wavelength information or effects such as Coherent Spontaneous Emission.
The undulator magnetic field with variable ellipticity is defined as:
\begin{align}
\textbf{B}_u = -B_0 \sin(k_u z)\hat{\textbf{x}} + u_e B_0 \cos(k_u z)\hat{\textbf{y}}
\label{Ufield}
\end{align}
where $u_e$ describes the undulator ellipticity, $B_0$ the peak undulator magnetic field, and $k_u = 2 \pi/\lambda_u$ where $\lambda_u$ is the undulator period. The undulator ellipticity parameter varies in the range $0\leq u_e\leq 1$, from a planar ($u_e=0$) through to an helical undulator ($u_e=1$), to give an RMS elliptical undulator parameter of:
\begin{align}
\bar{a}_u = \sqrt{\frac{ 1 + u_e^2}{2} }\;a_u,
\label{rms undulator parameter}
\end{align}
where the peak undulator parameter is defined as $a_u = e B_0/m c k_u$. The resonant fundamental FEL wavelength is then:
\begin{align}
\lambda_r = \frac{\lambda_u}{2 \gamma_r^2} ( 1 + \bar{a}_u^2),
\end{align}
where the resonant electron energy in units of the electron rest mass is $\gamma_r = \left<\gamma\right>$, the mean energy of the electron beam.

\subsection{The electron equations}
In the averaged FEL model the electron orbits are first calculated, in the absence of any radiation field, from the Lorentz force equation:
\begin{align}
\frac{d \boldsymbol{\beta}_j}{d t} = -\frac{e}{\gamma_j m} \boldsymbol{\beta}_j \times \boldsymbol{B}_u
\label{lorentz}
\end{align}
where $\boldsymbol{\beta}_j = \textbf{v}_j/c$ and $\gamma_j$ are the $j^{th}$ electron's velocity scaled with respect to the speed of light $c$, and the corresponding Lorentz factor. Substituting for the undulator field~(\ref{Ufield}) and integrating the Lorentz equation~(\ref{lorentz}), the scaled electron velocity components are obtained:
\begin{align}
\beta_{xj} &= \left({\frac{2 u_e^2 }{1 + u_e^2} }\right)^{1/2} \frac{\bar{a}_u}{ \gamma_j } \sin(k_u z) \label{betax} \\
\beta_{yj} &= -\left({\frac{ 2 }{1 + u_e^2} }\right)^{1/2} \frac{\bar{a}_u}{ \gamma_j } \cos(k_u z) \label{betay} \\
\beta_{zj} &= \left[ \bar{\beta}_z^2 - \left(\frac{1 -u_e^2}{1 +u_e^2}\right)\frac{\bar{a}_u^2}{ \gamma_j^2}\cos(2k_u z)\right]^{1/2} \label{betaz}
\end{align}
where $\bar{v}_z = c\bar{\beta}_z$ is the average longitudinal electron velocity. The constants $m$ and $e$ take their usual meanings of the rest mass and charge magnitude of the electron. Introducing the non-unit vector basis ${\bf f} = \frac{1}{\sqrt{2} }( u_e \hat{\textbf{x}} + i \hat{\textbf{y}})$, so that ${\bf f} \cdot {\bf f} = -(1-u_e^2)/2$ and ${\bf f} \cdot {\bf f^*} = (1+u_e^2)/2$, the perpendicular components may be written:
\begin{equation}
{\boldsymbol \beta}_{\perp j} = \frac{i}{\sqrt{1+u_e^2}} \frac{\bar{a}_u}{\gamma_j}\left({\bf f}\exp\left(-ik_uz\right)- c.c.\right). \label{betaperp}
\end{equation}
Integrating equation~(\ref{betaz}), the longitudinal electron trajectory in the presence of the undulator field only is:
\begin{align}
z_j\left(t\right) &= c\bar{\beta}_z t - \frac{ \bar{a}_u^2}{4 \gamma_j^2 k_u \bar{\beta}_z^2} \left(\frac{1 -u_e^2}{1 +u_e^2}\right)\sin(2 k_u c\bar{\beta}_z t).
\label{z oscil}
\end{align}
The oscillatory term in (\ref{z oscil}) describes the `figure-of-eight' longitudinal jitter motion of the electron in a non-helical undulator, associated with coupling to harmonics of the radiation field~\cite{colson}.
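For completeness, we note the step leading from (\ref{betaz}) to (\ref{z oscil}): since $\bar{a}_u/\gamma_j\ll 1$ for the relativistic beams considered here, the square root in (\ref{betaz}) may be expanded to first order to give
\begin{align}
\beta_{zj} \approx \bar{\beta}_z - \left(\frac{1 -u_e^2}{1 +u_e^2}\right)\frac{\bar{a}_u^2}{2 \gamma_j^2 \bar{\beta}_z}\cos(2k_u z), \nonumber
\end{align}
and integrating $dz_j/dt = c\beta_{zj}$, with $z\approx c\bar{\beta}_z t$ in the argument of the cosine, recovers the oscillatory sine term of (\ref{z oscil}).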
A co-propagating radiation field is similarly defined, using the same non-unit vector basis ${\bf f}$, as the sum over harmonics of the fundamental resonant field, i.e.\ ${\bf E}=\sum_{n}{\bf E}_n$, where:
\begin{align}
\textbf{E}_n\left(z,t\right) = \frac{i}{\sqrt{2}}\left({\bf f}\;\mathcal{E}_n\left(z,t\right) e^{ i n(k_r z - \omega_r t)}-c.c.\right)\label{E_n}.
\end{align}
The scaled energy evolution of the $j$th electron in the transverse plane-wave radiation field of~(\ref{E_n}) may then be written as:
\begin{align}
\frac{d \gamma_j}{d t} = - \frac{e}{mc} \sum_{n} \boldsymbol{\beta}_{\perp j} \cdot \textbf{E}_n. \label{gamma derivative wrt time1}
\end{align}
Using the equations for the electron motion~(\ref{betaperp}, \ref{z oscil}), the electric field (\ref{E_n}) and the identity:
\begin{equation}
e^{ i x \sin(\phi)} = \sum\limits_{n=-\infty}^\infty J_n(x)e^{ i n \phi},
\label{Jidentity}
\end{equation}
the equation for the electron energy~(\ref{gamma derivative wrt time1}) simplifies to:
\begin{align}
\frac{d \gamma_j}{d t} = - \frac{e}{4mc} \left(\frac{2}{1+u_e^2}\right)^{1/2} \frac{\bar{a}_u}{\gamma_j} \sum_{n} \Big( \mathcal{E}_n e^{i n\theta_j } \Big( (1+u_e^2) \sum\limits_{m=-\infty}^\infty J_m(\xi)e^{-i (n - 1 + 2m) k_u \bar{\beta}_z c t} \nonumber \\
+ (1-u_e^2) \sum\limits_{m=-\infty}^\infty J_m(\xi)e^{-i (n + 1 + 2m) k_u \bar{\beta}_z c t} \Big) + c.c. \Big) \label{E dot beta bessel sum},
\end{align}
where $\theta_j = ( k_r + k_u) \bar{\beta}_z c t - \omega_r t$ is the ponderomotive phase and the Bessel function argument $\xi$, which arises from applying (\ref{Jidentity}) to the jitter term of (\ref{z oscil}), is defined below. Resonant, non-oscillatory terms, which do not average to zero over an undulator period, occur only for $n \pm 1 + 2m = 0$, so that on averaging over an undulator period equation~(\ref{E dot beta bessel sum}) simplifies further to:
\begin{align}
\frac{d \gamma_j}{d t} = - \frac{e}{4mc} \left(\frac{2}{1+u_e^2}\right)^{1/2} \frac{\bar{a}_u}{\gamma_j} \sum_{n} JJ_n \big( \mathcal{E}_n e^{i n\theta_j } + c.c. \big), \label{E dot beta bessel}
\end{align}
where:
\begin{align}
JJ_n &= (-1)^{\frac{n-1}{2}} \big( (1+u_e^2) J_{\frac{n-1}{2}}(\xi) - (1-u_e^2) J_{\frac{n+1}{2}}(\xi) \big), \\
\xi &= \frac{n \bar{a}_u^2}{2(1 + \bar{a}_u^2) } \frac{ 1 -u_e^2}{ 1 + u_e^2}.
\end{align}

\subsection{The wave equation}
The 1D wave equation is used to model the plane wave radiation field evolution and is given by:
\begin{align}
\Big( \frac{\partial^2 }{\partial z^2} - \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} \Big) \textbf{E} = \frac{\mu_0}{\sigma} \frac{\partial \textbf{J}_\perp}{\partial t}
\label{Maxwell wave equation}
\end{align}
where $\sigma$ is the transverse area of the co-propagating plane-wave radiation field and electron beam, with transverse current density ${\bf J}_\perp=-ec\sum_{j=1}^{N}\boldsymbol{\beta}_{\perp j}\delta\left({\bf r}-{\bf r}_j\left(t\right)\right)$. The transverse components of the electric field and transverse current density are defined by $E_{\perp} = \sqrt{2}\; \textbf{E}\cdot {\bf f^*}$ and $J_{\perp} = \sqrt{2}\; \textbf{J} \cdot{\bf f^*}$ respectively. In the 1D limit, the wave equation~(\ref{Maxwell wave equation}) simplifies to:
\begin{align}
\Big( \frac{\partial^2 }{\partial z^2} - \frac{1}{c^2} \frac{\partial^2 }{\partial t^2} \Big) E_{\perp} = \frac{\mu_0}{\sigma} \frac{\partial J_{\perp}}{\partial t}.
\label{1D Maxwell Wave Equation}
\end{align}
Using the transverse velocity~(\ref{betaperp}) and the harmonic fields~(\ref{E_n}), neglecting the backward wave as detailed in~\cite{bw}, and applying the Bessel identity~(\ref{Jidentity}), the wave equation~(\ref{1D Maxwell Wave Equation}) reduces to a wave equation for each harmonic envelope $\mathcal{E}_n$:
\begin{align}
\Big( \frac{\partial }{\partial z } + \frac{1}{c} \frac{\partial}{\partial t} \Big) \mathcal{E}_n = \frac{e\mu_0 c^2\bar{a}_u}{\sqrt{2}\sigma (1+u_e^2)^{3/2}} \sum_{j=1}^N \frac{e^{-in \theta_j}}{\gamma_j} \Big( (1+u_e^2) \sum\limits_{m=-\infty}^\infty J_m(\xi) e^{i (n-1 + 2m) k_u \bar{\beta}_z c t} \nonumber \\
+ (1- u_e^2) \sum\limits_{m=-\infty}^\infty J_m(\xi)e^{i (n+1 + 2m)k_u \bar{\beta}_z c t } \Big) \delta(z - z_j(t)),
\label{Wave Equation sum of Bessel function}
\end{align}
where $\theta = (k_r + k_u)z - \omega_r t$ is the ponderomotive phase of the fundamental wavelength. Resonant terms are only seen to occur for $ n \pm 1 + 2m = 0$ and, as $m$ is an integer, the harmonic numbers $n$ are therefore odd. Applying this resonant condition yields:
\begin{align}
\Big( \frac{\partial }{\partial z } + \frac{1}{c} \frac{\partial}{\partial t} \Big) \mathcal{E}_n = \frac{e\mu_0 c^2\bar{a}_u}{\sqrt{2}\sigma (1+u_e^2)^{3/2}} \sum_{j=1}^N \frac{e^{-in \theta }}{ \gamma_j} JJ_n \ \delta(z - z_j(t)).
\label{uawave}
\end{align}

\subsection{The scaled FEL model}
The scaling of~\cite{bnp,sr} is now applied, using the FEL parameter $\rho = \gamma_r^{-1} (\bar{a}_u \omega_p / 4 c k_u )^{2/3}$, where $\omega_p$ is the peak non-relativistic plasma frequency of the electron beam. The wave equation for the field~(\ref{uawave}) is also averaged over a radiation wavelength by assuming the field envelope does not change over this interval. The independent variables are the scaled distance through the FEL $\bar{z} = z/l_g$, and the scaled position in the electron beam rest-frame $\bar{z}_1 = (z - c \bar{\beta}_z t)/ \bar{\beta}_z l_c = 2 \rho \theta_j$, where $l_g=\lambda_u/4\pi\rho$ and $l_c=\lambda_r/4\pi\rho$ are respectively the gain length and cooperation length of the FEL interaction at the fundamental ($n=1$) in an helical undulator ($u_e=1$)~\cite{sr}. Clearly, and as shown from the scaling below, these lengths are different for interactions at harmonics and in an elliptical undulator. Introducing the scaled harmonic radiation envelopes:
\begin{align}
A_n = \sqrt{\frac{1+u_e^2}{2}} \frac{\bar{a}_u e \mathcal{E}_n}{4 m c^2 k_u \left( \rho \gamma_r \right)^{2}},
\label{A_n}
\end{align}
the scaled electron energy $p_j = (\gamma_r - \gamma_j)/\rho \gamma_r$ and using the definition of the ponderomotive phase $\theta$, the scaled equations for the 1D FEL interaction in an elliptically polarised undulator, including harmonic radiation fields, are given by:
\begin{align}
\frac{d \theta_j}{d \bar{z}} &= p_j \label{dtheta} \\
\frac{d p_j}{d\bar{z}} &= -\sum_{n, odd} \alpha_n \Big( A_n e^{i \theta_j} + c.c. \Big) \label{dp}\\
\left( \frac{\partial}{\partial \bar{z}} + \frac{\partial}{\partial \bar{z}_1} \right) A_n &= \alpha_n\; \chi\left(\bar{z}_1\right)\big<e^{- i \theta_j} \big>, \label{dA}
\end{align}
where the $\alpha_n$ are ellipticity-dependent coupling parameters given by:
\begin{align}
\alpha_n = \frac{JJ_n}{1 + u_e^2},
\label{alpha}
\end{align}
and $\chi\left(\bar{z}_1\right)=I\left(\bar{z}_1\right)/I_{pk}$ is the beam current scaled with respect to its peak value~\cite{bw}. There is one wave equation of type~(\ref{dA}) for each harmonic considered.
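As an aside, the coupling parameters (\ref{alpha}) are simple to evaluate numerically. The following minimal Python sketch (an illustrative aid for the reader, not part of any of the simulation codes used here) computes $\alpha_n$ directly from the definitions of $JJ_n$ and $\xi$ using SciPy's Bessel functions:
\begin{verbatim}
from scipy.special import jv   # Bessel function of the first kind J_v(x)

def alpha_n(n, u_e, au_rms):
    # Elliptical coupling parameter alpha_n for odd harmonic n,
    # ellipticity 0 <= u_e <= 1 and RMS undulator parameter au_rms.
    xi = n * au_rms**2 * (1 - u_e**2) / (2 * (1 + au_rms**2) * (1 + u_e**2))
    JJ = (-1) ** ((n - 1) // 2) * ((1 + u_e**2) * jv((n - 1) // 2, xi)
                                   - (1 - u_e**2) * jv((n + 1) // 2, xi))
    return JJ / (1 + u_e**2)

# Limiting cases: in an helical undulator (u_e = 1) the fundamental
# coupling is unity and the harmonic couplings vanish; for u_e = 0 the
# planar difference of Bessel functions factors are recovered.
print(alpha_n(1, 1.0, 1.0))   # -> 1.0
print(alpha_n(3, 1.0, 1.0))   # -> 0.0
print(alpha_n(3, 0.0, 1.0))   # planar third-harmonic coupling
\end{verbatim}
Sweeping $u_e$ over $[0,1]$ for a range of $\bar{a}_u$ with this function should reproduce the behaviour shown in figure~\ref{fig1} below.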
Notice from~(\ref{A_n}) that the harmonic field envelopes $\mathcal{E}_n$ are scaled so that the $|A_n|^2$ are proportional to the power of the elliptically polarised harmonic radiation fields over the full range of $u_e$, from planar to helical polarisation.

\section{Modelling the elliptical undulator FEL}
The equations of the elliptical model~(\ref{dtheta} - \ref{dA}) are now solved for a range of ellipticity parameters $u_e$. The solutions are determined by the ellipticity- and harmonic-dependent coupling parameters $\alpha_n$, which are specified and used in the scaling to predict the gain lengths and saturation powers of the elliptical FEL interaction. Numerical solutions of the averaged elliptical FEL model above are also compared with the unaveraged model of `Puffin'~\cite{Campbell Puffin}. As the equations of this model are unaveraged, no factors such as~(\ref{alpha}) appear in the model, and Puffin can simulate the FEL interaction for an undulator of any ellipticity and over a broad radiation bandwidth that includes harmonic content.

\subsection{Scaling}
Figure~\ref{fig1} plots the elliptical coupling parameters $\alpha_n$ as a function of the ellipticity parameter $u_e$ for the resonant odd harmonics $n=1, 3, 5, 7$ and for a range of RMS undulator parameters $\bar{a}_u$.
\begin{figure}[!ht]
\centering
\includegraphics[width=140mm,height=120mm]{fig1.jpg}
\caption[]{The elliptical coupling parameters $\alpha_n$ plotted as a function of the ellipticity parameter $u_e$ for the first four odd harmonics $n=1, 3, 5, 7$. Four different RMS undulator parameters are shown in each graph: $\bar{a}_u=0.5, 1.0, 2.5, 5.0$. The $\alpha_n$ agree with previous analysis in the helical and planar limits, $u_e=1$ and $u_e=0$ respectively. Note that for larger undulator parameters $\bar{a}_u$, the coupling parameters $\alpha_n$ for harmonics maximise for an elliptical undulator configuration, $u_e>0$. For example, for the third harmonic with $\bar{a}_u=5$, $\alpha_3$ is maximised for an undulator ellipticity of $u_e \approx 0.34$. \label{fig1}}
\end{figure}
The coupling parameters agree with previous results in the helical and planar limits. It is worth noting that, for the harmonic fields $n>1$ and for larger undulator parameters $\bar{a}_u$, the coupling is stronger for elliptically polarised undulators than for the planar case of $u_e=0$. This result is perhaps somewhat unexpected.

If the equations of the elliptical model~(\ref{dtheta}-\ref{dA}) are written in the absence of any harmonic interactions, i.e. for $n=1$ only, then the elliptical coupling parameter $\alpha_1$ can be incorporated into the scaling to give a system of universally scaled equations with no free parameters~\cite{bnp}. In this case the FEL scaling parameter depends upon the elliptical coupling parameter for the fundamental as $\rho \propto \alpha_1^{2/3}$, so that the gain length of the interaction, and so also the saturation length $z_{sat}$, scale as $l_g, z_{sat} \propto \alpha_1^{-2/3}$. The scaled saturation power scales as $|A|^2_{sat} \propto \alpha_1^{2/3}$.

In the simulations which follow, an electron pulse of charge 70~pC is assumed, with a uniform current, $\chi(\bar{z}_1)=1$, over a scaled pulse length of $\bar{l}_e = l_e/l_c = 129$. A mean beam energy $\gamma_r = 1500$ with zero energy spread and an FEL parameter of $\rho = 2 \times 10^{-3}$ are used.
Unless otherwise stated, the undulator has a fixed RMS undulator parameter of $\bar{a}_u = 1.0$, independent of the undulator ellipticity, to give a fixed resonant radiation wavelength of $\lambda_r=16$~nm. A seed laser of scaled amplitude $A_0 = 10^{-4}$ was used to initiate the FEL interaction. This eliminates the shot-to-shot variation of the radiation pulse saturation energy and saturation length which occurs when the interaction starts from noise, simplifying comparison with analysis and with the results obtained from the solutions of the different numerical codes. The total scaled energy of an harmonic of the radiation pulse is defined by:
\begin{align}
E_n(\bar{z})=\int^{+\infty}_{-\infty} |A_n(\bar{z},\bar{z}_1)|^2 d \bar{z}_1,
\label{energy}
\end{align}
with the total given by the sum over the odd harmonics, $E=\sum_{n,odd}E_n$. As the electron pulse is many cooperation lengths long ($\bar{l}_e = 129$) and the interaction is seeded, the interaction will approximate a steady-state interaction in which pulse effects are small. In this case, the scaled pulse energy at saturation, either for a particular harmonic component $n$ or for the total, will be $E_{sat}\approx \bar{l}_e |A|_{sat}^2$. For an helical undulator in the steady state, the scaled saturation power of the fundamental ($n=1$) is $|A|_{sat}^2\approx 1.37$. For the case considered here this gives a scaled pulse energy at saturation of $E_{sat}\approx 177$.

In order to test the above scaling for the scaled saturation energy and saturation length, equations~(\ref{dtheta} - \ref{dA}) were solved numerically for the above parameters, in the absence of any harmonic interaction, for a range of undulator ellipticities. Figure~\ref{fig2} demonstrates that the numerical solutions are in very good agreement with the predicted scaling.
\begin{figure}
\includegraphics[width=140mm,height=110mm]{fig2.jpg}
\caption[]{Comparison between numerical solutions of the averaged model of equations~(\ref{dtheta} - \ref{dA}) in the absence of any harmonic interactions (red crosses) and the predicted scaling with respect to the elliptical coupling parameter of the fundamental $\alpha_1$ (blue line) for the full range of the ellipticity from planar ($u_e=0$) to helical ($u_e=1$). The top plot shows the saturated pulse energy $E_{sat}$ and the lower the scaled saturation length $\bar{z}_{sat}$. \label{fig2}}
\end{figure}

\subsection{Comparison between averaged and unaveraged models}
Numerical solutions of the averaged elliptical model of equations~(\ref{dtheta}-\ref{dA}) are now compared with those generated by the unaveraged code Puffin~\cite{Campbell Puffin}, which is able to model an FEL interaction in an elliptically polarised undulator across a broad-bandwidth radiation field that includes harmonic content. The unaveraged electron motion of the Puffin model includes any `jitter' motion of equation~(\ref{z oscil}) due to an elliptically polarised undulator. As Puffin is an unaveraged FEL simulator, the effects of Self Amplified Coherent Spontaneous Emission (SACSE) can be significant when modelling a `flat-top' electron bunch, which has discontinuities in the electron beam current. As these effects cannot be modelled in an averaged model, the electron bunch used in the Puffin simulations here is modified to have a smooth ramp-down in current over several radiation wavelengths at the electron bunch edges.
This smooth ramping of the current significantly reduces the generation of any Coherent Spontaneous Emission, enabling a better comparison between the two models. In what follows, only the fundamental and third harmonics ($n=1,3$) are modelled, using the above parameters. In the averaged model, the harmonic radiation content is obtained directly from the individual harmonic components $A_n$. In the unaveraged model, however, the content of each harmonic is obtained by Fourier filtering the broadband radiation field about a narrow bandwidth around the particular harmonic of interest (in this case $n=3$).

Figure~\ref{fig3} plots the scaled pulse energy of the fundamental $E_1$, from the averaged and Puffin simulations, as a function of scaled propagation distance through the interaction $\bar{z}$, for three different undulator ellipticities, $u_e = 0, 0.5, 1.0$.
\begin{figure}[!ht]
\centering
\includegraphics[width=140mm,height=110mm]{fig3.jpg}
\caption[]{Simulations using the averaged and unaveraged models show excellent agreement for the evolution of the scaled radiation pulse energy of the fundamental $E_1$, as a function of scaled distance through the undulator for planar (top, $u_e=0.0$), elliptical (middle, $u_e=0.5$) and helical (bottom, $u_e=1.0$) undulator polarisation. \label{fig3}}
\end{figure}
Excellent agreement between the simulations is seen for all $u_e$, well into the saturated, non-linear regime. The scaled radiation pulse energies $E_n$ of the fundamental and third harmonic for both averaged and unaveraged simulations for the planar undulator ($u_e=0$) are shown in figure~\ref{fig4}.
\begin{figure}[!ht]
\centering
\includegraphics[width=140mm,height=110mm]{fig4.jpg}
\caption[]{Comparison of the scaled radiation pulse energies $E_{1,3}$ for averaged and unaveraged simulations in a planar undulator ($u_e=0$), of RMS undulator parameter $\bar{a}_u=1.0$. Good agreement is seen except in the interval $11<\bar{z}<14$. \label{fig4}}
\end{figure}
As previously seen in figure~\ref{fig3}, the fundamental pulse energies $E_1$ of the averaged and unaveraged simulations are in excellent agreement. The third harmonic shows reasonable agreement in the decoupled linear regime until $\bar{z} \approx 11$. At this point in the averaged model, the electron bunching at the fundamental also begins to drive the third harmonic field, with a growth rate $\sim 3$ times that of the fundamental~\cite{harb}. While there is evidence of similar enhanced harmonic growth in the unaveraged simulation, the effect is seen to be significantly less pronounced. As the interaction proceeds into the non-linear, saturation regime for $\bar{z}>13$, both simulations are seen to resume a similar evolution.

It was noted from figure~\ref{fig1} that, for larger undulator parameters $\bar{a}_u$, the coupling parameters $\alpha_n$ for harmonics maximise for an elliptical undulator configuration, $u_e>0$. This increased coupling can be expected to decrease the gain length and increase the saturation pulse energies of harmonics for these elliptical polarisations. In particular, the gain length for the third harmonic in an undulator with parameter $\bar{a}_u = 5.0$ should be minimised for an elliptical undulator with $u_e \approx 0.34$. From the above scaling (and writing $l_g(u_e)$, etc.) the ratio of the two gain lengths is $l_g(0.34)/l_g(0)= 0.934$. Both the averaged and unaveraged numerical models were also used to simulate both undulator ellipticities $u_e=(0, 0.34)$ for the same value of $\bar{a}_u = 5.0$.
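These scaling estimates can be checked with the illustrative {\tt alpha\_n} helper sketched in the previous section: under $l_g \propto \alpha_3^{-2/3}$ and $E_3 \propto \alpha_3^{2/3}$, and taking magnitudes of the coupling parameters,
\begin{verbatim}
a_planar     = abs(alpha_n(3, 0.0,  5.0))
a_elliptical = abs(alpha_n(3, 0.34, 5.0))
print((a_planar / a_elliptical) ** (2.0 / 3.0))  # l_g(0.34)/l_g(0) ~ 0.93
print((a_elliptical / a_planar) ** (2.0 / 3.0))  # E_3(0.34)/E_3(0) ~ 1.07
\end{verbatim}
consistent, to the precision of the scaling argument, with the gain-length ratio quoted above and the saturation-energy ratio discussed below.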
The results are shown in figure~\ref{fig5}. The simulations are seen to agree well with each other in the linear regime, with the elliptical undulator measured as having the shorter gain length, $l_g(0.34)/l_g(0)\approx 0.931$, in good agreement with the value calculated from the scaling. A similar scaling argument for the radiation pulse energies at saturation gives $E_3(0.34)/E_3(0)=1.071$, which is more difficult to compare with the simulations of figure~\ref{fig5} due to the problem of defining the points of saturation.
\begin{figure}[!ht]
\centering
\includegraphics[width=140mm,height=110mm]{fig5.jpg}
\caption[]{Comparison of the scaled pulse energies for both averaged and unaveraged simulations of the third harmonic $E_3$ in an undulator with $\bar{a}_u = 5.0$ for two different undulator ellipticities, $u_e=0.0$ (planar undulator) and $u_e = 0.34$ (elliptical undulator). The third harmonic interaction is seen to be stronger for the elliptical undulator, in agreement with the results of figure~\ref{fig1}, which show that the coupling parameter is maximised for the elliptical undulator case. The gain lengths of both results agree well with the scaling predicted via the elliptical coupling parameter $\alpha_3$. \label{fig5}}
\end{figure}
Note again the difference in the simulation results between the averaged and unaveraged models as saturation is approached and the fundamental interaction drives that of the harmonic. The divergence between the two models is probably more pronounced in this case, where $\bar{a}_u = 5.0$, than in that of figure~\ref{fig4}, where $\bar{a}_u = 1.0$.

\newpage
\section{Conclusion}
An averaged FEL model in the 1D limit for elliptically polarised undulators, including resonant radiation harmonics, was presented. The undulator ellipticity changes the previous difference of Bessel functions factor, familiar from planar undulator FEL theory, into a more general elliptical Bessel function factor, valid from a planar undulator through to an helical undulator. This new elliptical factor was incorporated into a set of averaged, scaled, differential equations describing the FEL interaction. The scaling of these equations allows important quantities, such as the gain length and radiation pulse energy, to be estimated as a function of the undulator ellipticity. The averaged elliptical FEL model was also solved numerically and the scaling demonstrated. One notable result is that the harmonic gain and saturation energy for larger values of the undulator parameter $\bar{a}_u$ were greater for elliptically polarised undulators than for the planar equivalent.

The averaged elliptical FEL model was also compared with numerical simulations of an unaveraged FEL model using the Puffin code, which is also able to model elliptically polarised undulators (also in 3D). Overall, there was very good agreement between the two models. However, differences were noted in the radiation pulse energy evolution of the harmonics as the interactions approach saturation and the harmonics become strongly coupled to, and driven by, the interaction at the fundamental. This is not directly related to the ellipticity of the polarisation, but is thought to be a more general issue related to the validity of the averaging process in accurately describing the coupling between the fundamental and harmonic interactions. This topic will require further research.
\ack We gratefully acknowledge support of the Science and Technology Facilities Council Agreement Number 4163192 Release \#3; ARCHIE-WeSt HPC, EPSRC grant EP/K000586/1; EPSRC Grant EP/M011607/1; and the John von Neumann Institute for Computing (NIC) on JUROPA at J\"{u}lich Supercomputing Centre (JSC), under project HHH20.

\section*{References}
\section{Introduction}
\label{s_intro}
The recent combined stellar structure and atmosphere ({\em CoStar}) models for massive stars of Schaerer et al.~ (1996a, b, hereafter Paper I and II) consistently treat the stellar interior, photosphere and wind using up-to-date input physics. An immediate advantage of this approach is that we predict the emergent spectral energy distribution along the evolutionary paths, which provides a large number of observable quantities ranging from the extreme ultraviolet (EUV) to the infrared (IR). Of particular interest is the spectral range shortward of the Lyman limit, where the bulk of the bolometric luminosity of hot stars is emitted. This wavelength range has recently been observed in early-type stars by Hoare et al. (1993) and Cassinelli et al. (1995, 1996). From both theoretical and observational results, it has become clear that the flux shortward of the Lyman edge is not only affected by line blanketing\ and non--LTE\ effects but is also influenced by the presence of a stellar wind.

A stellar wind may considerably change the formation of the flux in the Lyman continuum up to high frequencies. This was first pointed out by Gabler et al.~ (1989), who showed that a wind can cause a significant depopulation of the He~{\sc ii}\ ground state. As a consequence, models accounting for stellar winds lead to \ifmmode {\rm He^+} \else $\rm He^{+}$\fi\ ionizing fluxes which are $\sim$ 2--3 orders of magnitude larger than the values predicted from non--LTE\ plane parallel models (see Gabler et al.~ 1992, Paper II). With respect to the widely used LTE models of Kurucz (1991) the increase is 3--6 orders of magnitude! More recently, evidence has been presented that the flux at longer wavelengths, i.e.~in the $\rm He^{\circ}$ continuum (at $\lambda < 504$ \ang) and even in the Lyman continuum (at $\lambda < 912$ \ang), can also be affected by the presence of a stellar wind (Paper II, Najarro et al.~ 1996). As in the case of the \ifmmode {\rm He^+} \else $\rm He^{+}$\fi\ ionizing flux, this occurs through a depopulation of the corresponding ground states.

The above illustrates the importance of treating the stellar wind when predicting ionizing fluxes. The first results of these theoretical studies have already had important consequences for the interpretation of observations:
\begin{itemize}
\item The strong increase of \ifmmode {\rm He^+} \else $\rm He^{+}$\fi\ ionizing photons in the models of Gabler et al.~ (1991) leads to an important reduction of the Zanstra discrepancy in central stars of planetary nebul\ae.
\item Nebular calculations using emergent fluxes from wind models yield an improved match to the observed ionization structure of H~{\sc ii}\ regions, thereby resolving the so-called [Ne~{\sc iii}] problem (Sellmaier et al.~ 1996, Rubin et al.~ 1995 and references therein).
\item {\sc EUVE} observations of the Lyman continuum flux of the B2II giant $\epsilon$ CMa by Cassinelli et al.~ (1995) have provided a first {\em direct comparison} with model atmospheres. Surprisingly, the observations show a flux significantly larger than predicted from both LTE and non--LTE\ plane parallel models. Among several alternative explanations for this failure of plane parallel atmosphere models (Hubeny \& Lanz 1996), Najarro et al.~ (1996) suggest that the discrepancy of the Lyman continuum flux may be reduced if one accounts for the weak stellar wind of $\epsilon$ CMa.
\end{itemize}
The above findings clearly point out the necessity to improve our understanding of the ionizing fluxes of OB-type stars. While the first studies have concentrated on individual objects and very few models are available yet, our study covers the entire main sequence evolution between 20 and 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi, i.e. spectral types O3 to early-B, and provides predictions from elaborate non--LTE\ calculations including line blanketing\ and stellar winds. This will allow us to work out the consequences of revised ionizing fluxes for a large number of systems, including the galactic ISM, H~{\sc ii}\ regions and starbursts. Such direct or indirect comparisons with observations will, in turn, be of great value for testing our predictions and improving the reliability of model atmospheres for hot stars.

The remainder of the paper is structured as follows: our method and the calculated model set are described in Sect.~2. Predicted UV line blanketed spectra are presented in Sect.~3. The main presentation and discussion of the ionizing fluxes takes up Sect.~4. Section 5 contains our revised calibration of ionizing fluxes and comparisons to previous work. In Sect.~6 we discuss uncertainties of the present models and point out future improvements. The main results are summarised in Sect.~7.

\section{Model calculations}
\label{s_calculations}
\subsection{Input physics and method}
\label{s_input}
A detailed description of our so-called {\em CoStar} models and the input physics adopted for the calculations is given in Paper I of this series. Here, we only briefly summarise the most important characteristics of our models. The entire star, comprising the stellar interior and a spherically expanding atmosphere including the stellar wind, is treated consistently. The interior is modelled with the Geneva stellar evolution code using the same input physics (reaction rates, opacities etc.) as in the latest grid calculations of Meynet et al.~ (1994). The atmosphere is modelled using the {\sc isa-wind} code of de Koter et al.~ (1993, 1996b). Outer boundary conditions for the stellar interior calculations are given by the atmospheric structure. The atmosphere is basically characterised by two parts: the subsonic regime with an extended photosphere, and the wind, where the flow is accelerated to the terminal flow velocity \ifmmode v_{\infty} \else $v_{\infty}$\fi. For the photospheric part we solve the stationary momentum equation taking into account gas and radiation pressure. The subsonic part is smoothly connected with a wind structure described by the usual ``$\beta$-law'' (see Paper I). The temperature structure is given by radiative equilibrium in an extended grey atmosphere following Lucy (1971) and Wessolowski et al.~ (1988). In the final step, a consistent solution is constructed, embracing both the stellar interior and the atmosphere. In addition to the usual predictions from evolutionary models, {\em CoStar} models also provide the detailed emergent fluxes along the evolutionary paths.

The adopted mass loss rate and the additional parameters required to describe the wind structure are taken as in Paper I:
\begin{itemize}
\item Mass loss rates are adopted as in Meynet et al.~ (1994). This means that for population I stars throughout the HR diagram we use the mass loss rates given by de Jager et al. (1988), enhanced by a factor of two. Justifications for this choice are given by Meynet et al.~ (1994) and Maeder \& Meynet (1994).
For non-solar metallicities \ifmmode \dot{M} \else $\dot{M}$\fi\ was scaled with $\left(Z/Z_\odot \right)^\zeta$, where $Z_\odot=0.020$. Consistent with our previous grid calculations, an exponent $\zeta=0.5$ was taken, as indicated by wind models (Kudritzki et al.~ 1987, 1991).
\item The terminal velocities \ifmmode v_{\infty} \else $v_{\infty}$\fi\ as a function of metallicity are from the wind models of Leitherer et al.~ (1992). Comparisons of our adopted terminal velocities with observations of population I stars have been discussed in Paper I.
\item For the rate of acceleration of the supersonic flow we take $\beta=0.8$, following the theoretical predictions of Friend \& Abbott (1986) and Pauldrach et al.~ (1986). These predictions are in good agreement with observations of O stars by Groenewegen \& Lamers (1991).
\end{itemize}
The {\sc isa-wind} non--LTE\ radiation transfer calculations, which yield the detailed spectral evolution, use the atmospheric structure from the {\em CoStar} model summarised above. In {\sc isa-wind}, the line transfer problem is treated using the Sobolev approximation, including the effects of the diffuse radiation field and the continuous opacity inside the line resonance zone (de Koter 1993, de Koter et al.~ 1993). Line blanketing is included following the opacity sampling technique introduced by Schmutz (1991). The method involves a Monte Carlo radiation transfer calculation including the most important spectral lines of all elements up to zinc. We want to make clear that our models are not fully ``line blanketed'' in the sense established for photospheric models, i.e.~a fully consistent treatment of the effects of the presence of lines on the atmospheric structure and emergent spectrum. However, we opted to use this term rather than the term ``line blocked'' because the latter would pass over the fact that we {\em do} treat the redistribution of flux and -- although in an approximate way -- the effect of blocking on the temperature structure. The ionization and excitation of the metals is treated as in Schaerer \& Schmutz (1994a,b), to which the reader is referred for a detailed description of the entire procedure.

The input physics for the atmospheric structure calculations consists of atomic data for the elements explicitly included in the non--LTE\ model. In the present work hydrogen and helium are treated as summarised in Paper I. The H, He, C, N, and O composition of the atmosphere is that corresponding to the outermost layer of the interior model. For the metals included in the line blanketed atmosphere, the abundances of Anders \& Grevesse (1989) have been adopted.

The domain where our models are applicable is limited to relatively strong winds because of several simplifying assumptions made in the calculations (see also Sect.~\ref{s_improve} for a critical discussion):
\begin{itemize}
\item[$\diamond$] The Sobolev approximation yields good agreement with comoving frame calculations for O and WR stars (de Koter et al.~ 1993). However, for weaker winds, differences in the level populations will progressively affect the predicted continuum fluxes, in particular shortward of the Lyman edge.
\item[$\diamond$] Presently our calculations neglect line broadening, yielding only a poor treatment of photospheric lines. This results in an underestimate of blanketing in the photosphere.
For the early spectral types and/or strong winds this approximation should not be crucial, since in these cases photospheric lines are both weaker and less numerous, and wind effects play a very important role in establishing the equilibrium population.
\item[$\diamond$] The temperature structure, which is derived from radiative equilibrium in an extended grey atmosphere, includes line blanketing only in an approximate way. Even with the improved treatment of Schaerer \& Schmutz (1994a), the determination of the temperature structure in the photosphere-wind transition zone remains somewhat uncertain. In the case that wind effects dominate, this uncertainty should not be of importance.
\item[$\diamond$] We neglect X-rays, which, for stars with relatively weak winds, can drastically alter the ionization structure (MacFarlane et al.~ 1994) and might also provide an additional heating mechanism.
\end{itemize}
Despite these uncertainties, we will argue below that the O stars covered in the present paper should be quite adequately treated with our techniques. However, we do note that because of the above-mentioned points and because of other indications (see Sects.~\ref{s_euve}, \ref{s_improve}), reliable predictions of the ionizing fluxes of B stars are not yet possible. Future improvements will be necessary to extend the range of validity of the models.
\begin{figure}[htb]
\centerline{\psfig{figure=hr_logg_models.eps,height=8.8cm}}
\caption{{\bf a} HR--diagram covering the MS phases for all initial masses. The WR stage during the H--burning phase of the 85 and 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi models is not included. Symbols (circles, crosses etc.) denote the selected models describing the spectral evolution (see Table \protect\ref{ta_params}). {\bf b} $\ifmmode \log g \else $\log g$\fi$--$\log\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ diagram corresponding to {\bf a}. The hatched area shows the domain for which Kurucz atmosphere models are available}
\label{fig_hr_logg_models}
\end{figure}
\begin{table*}
\caption{Summary of selected models at Z=0.020: stellar parameters and approximate spectral classification}
\centerline{
\begin{tabular}{rrlllllrrrrll} \\ \hline \\
model & age & $\frac{M}{\ifmmode M_{\odot} \else $M_{\odot}$\fi}$ & $\log \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ & $\log \frac{L}{\ifmmode L_{\odot} \else $L_{\odot}$\fi}$ & \ifmmode \log g \else $\log g$\fi & $\frac{\ifmmode R_{\star} \else $R_{\star}$\fi}{\ifmmode R_{\odot} \else $R_{\odot}$\fi}$ & $\log \ifmmode \dot{M} \else $\dot{M}$\fi$ & \ifmmode v_{\infty} \else $v_{\infty}$\fi\ & $n_{\rm H}$ & $\mu$ & SpType & SpType\\
\# & [yr] & & [K] & & [${\rm cm\; s^{-2}}$] & & [$\ifmmode M_{\odot} \else $M_{\odot}$\fi {\rm yr^{-1}}$] & [\ifmmode {\rm km \;s^{-1}} \else $\rm km \;s^{-1}$\fi] & & & (\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi--$M_{\rm bol}$) & (HRD)\\ \\ \hline \\
\multicolumn{6}{l}{20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
A1 & 4.00 $10^4$ & 20.00 & 4.551 & 4.658 & 4.24 & 5.633 & -7.058 & 2890. & 0.90 & 0.634 & O8V & B0V \\
A2 & 3.65 $10^6$ & 19.60 & 4.523 & 4.767 & 4.01 & 7.268 & -6.857 & 2544. & 0.90 & 0.634 & O9V & B0V \\
A3 & 6.15 $10^6$ & 19.15 & 4.483 & 4.872 & 3.73 & 9.860 & -6.634 & 2202. & 0.90 & 0.634 & B0V & B0IV \\
A4 & 7.79 $10^6$ & 18.65 & 4.401 & 4.966 & 3.30 & 16.034 & -6.384 & 1802. & 0.90 & 0.634 & B0.5II & B0.5II \\
\multicolumn{6}{l}{25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
B1 & 2.75 $10^4$ & 25.00 & 4.586 & 4.907 & 4.22 & 6.387 & -6.806 & 2897. & 0.90 & 0.634 & O7V & O9.5V \\
B2 & 2.60 $10^6$ & 24.48 & 4.560 & 4.997 & 4.02 & 7.994 & -6.596 & 2589. & 0.90 & 0.634 & O8V & O9V \\
B3 & 4.79 $10^6$ & 23.71 & 4.514 & 5.103 & 3.71 & 11.187 & -6.307 & 2208. & 0.90 & 0.634 & O9IV & O8.5V \\
B4 & 5.89 $10^6$ & 22.99 & 4.443 & 5.171 & 3.35 & 16.710 & -6.066 & 1868. & 0.90 & 0.634 & B0II & O9.5III \\
\multicolumn{6}{l}{40 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
C1 & 4.04 $10^4$ & 39.98 & 4.646 & 5.376 & 4.20 & 8.326 & -6.463 & 2962. & 0.90 & 0.634 & O5V & O7V \\
C2 & 1.50 $10^6$ & 39.25 & 4.621 & 5.447 & 4.02 & 10.152 & -6.170 & 2690. & 0.90 & 0.634 & O6V & O7V \\
C3 & 2.85 $10^6$ & 37.86 & 4.577 & 5.519 & 3.76 & 13.457 & -5.809 & 2354. & 0.90 & 0.634 & O7IV & O7IV \\
C4 & 3.81 $10^6$ & 35.48 & 4.481 & 5.578 & 3.29 & 22.422 & -5.414 & 1893. & 0.90 & 0.634 & O9.5II & O9.5I \\
C5 & 4.08 $10^6$ & 34.35 & 4.414 & 5.597 & 2.98 & 31.215 & -5.290 & 1660. & 0.90 & 0.634 & B0I & O9.5I \\
C6 & 4.38 $10^6$ & 32.38 & 4.268 & 5.628 & 2.34 & 63.521 & -5.126 & 1269. & 0.90 & 0.634 & B2I & O9.5I \\
\multicolumn{6}{l}{60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
D1 & 8.72 $10^4$ & 59.96 & 4.681 & 5.731 & 4.16 & 10.663 & -6.090 & 3050. & 0.90 & 0.634 & O4V & O5.5V \\
D2 & 7.72 $10^5$ & 59.20 & 4.664 & 5.766 & 4.05 & 12.010 & -5.856 & 2883. & 0.90 & 0.634 & O4.5V & O5.5V \\
D3 & 2.23 $10^6$ & 55.03 & 4.597 & 5.842 & 3.68 & 17.838 & -5.252 & 2381. & 0.90 & 0.634 & O6III & O5.5IV \\
D4 & 2.76 $10^6$ & 50.60 & 4.508 & 5.867 & 3.26 & 27.672 & -4.931 & 1960. & 0.90 & 0.634 & O9 I & O7I \\
D5 & 3.00 $10^6$ & 47.43 & 4.420 & 5.879 & 2.86 & 42.147 & -4.826 & 1646. & 0.90 & 0.634 & B0I & O7I \\
D6 & 3.23 $10^6$ & 43.68 & 4.344 & 5.897 & 2.50 & 60.890 & -4.779 & 1391. & 0.87 & 0.666 & B0.5I & O7.5I \\
D7 & 3.44 $10^6$ & 40.01 & 4.352 & 5.920 & 2.48 & 60.392 & -4.747 & 1319. & 0.80 & 0.737 & B0.5I & O7I \\
\multicolumn{6}{l}{85 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
E1 & 5.00 $10^4$ & 84.88 & 4.708 & 6.004 & 4.14 & 12.900 & -5.683 & 3182. & 0.90 & 0.634 & O3IV & O4V \\
E2 & 7.10 $10^5$ & 82.84 & 4.686 & 6.034 & 4.01 & 14.780 & -5.379 & 2977. & 0.90 & 0.634 & O3.5IV & O4.5IV \\
E3 & 1.66 $10^6$ & 75.58 & 4.629 & 6.071 & 3.71 & 20.017 & -4.879 & 2538. & 0.90 & 0.634 & O5III & O4.5III \\
\multicolumn{6}{l}{120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
F1 & 4.18 $10^4$ & 119.54 & 4.728 & 6.248 & 4.13 & 15.603 & -5.147 & 3340. & 0.90 & 0.634 & O3V & O3IV \\
F2 & 6.26 $10^5$ & 116.48 & 4.702 & 6.275 & 4.00 & 17.789 & -4.853 & 3103. & 0.90 & 0.634 & O3III & O3.5III \\
F3 & 2.12 $10^6$ & 80.43 & 4.675 & 6.282 & 3.71 & 20.681 & -4.572 & 2428. & 0.75 & 0.795 & O3I & O3.5I \\ \\ \hline
\end{tabular}
}
\label{ta_params}
\end{table*}

\subsection{Selected models}
\label{s_models}
To provide complete coverage of the entire main sequence (MS) evolution with our {\em CoStar} models, we have calculated evolutionary tracks for initial masses of 20, 25, 40, 60, 85 and 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ at solar metallicity. This paper includes the results from Papers I and II, which covered the range from 40 to 85 \ifmmode M_{\odot} \else $M_{\odot}$\fi. Additionally, we provide calculations for the entire data set at low metallicities (Z=0.004). A high-metallicity grid is in preparation. The continuum spectral energy distributions from both the Z=0.020 and 0.004 model sets are available on request from the authors. They will also be included in a CD-ROM recently distributed by the AAS (Leitherer et al.~ 1996).
\subsubsection{Solar metallicity: Z=0.020}
Figure \ref{fig_hr_logg_models} shows the evolutionary tracks at Z=0.020 in the HR-diagram and the $\ifmmode \log g \else $\log g$\fi$--$\log\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ diagram. For each track, we have selected several models, for which the stellar parameters are summarised in Table \ref{ta_params}. Along each MS track the models have been selected according to the following criteria, where possible: {\em 1)} ZAMS model, {\em 2)-5)} $\ifmmode \log g \else $\log g$\fi$ approximately 4., 3.7, 3.3, 3., and {\em 6)} maximum radius. One additional TAMS model is available for the 60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track. The total of 27 models should provide a good coverage of the entire MS spectral evolution.

The following entries are given in Table \ref{ta_params}: model number (column 1), age (2), present mass (3), effective temperature (4), luminosity (5), gravity (6), stellar radius (7), mass loss rate (8), terminal velocity (9), number fraction of hydrogen $n_{\rm H}$ normalised to $n_{\rm H}+n_{\rm He}=1.$ (10) and the mean molecular weight per free particle $\mu$ (11) used to determine the photospheric structure. The last two columns give an approximate spectral classification, which has been obtained from a nearest neighbour search in the tables of Schmidt-Kaler (1982).\footnote{At this point, we adopt the Schmidt-Kaler classification instead of the more recent one from Vacca et al.~ (1996) since the latter does not cover the entire domain of our models.} The assignment in column 12 uses the variables ($\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,M_{\rm bol}$),\footnote{We specify \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ in kK, which gives a reasonably large weight to the temperature.} while a nearest neighbour search in the HR-diagram yields the spectral types in column 13.

\subsubsection{Low metallicity calculations: Z=0.004}
\label{s_z004}
For the low metallicity models, we have chosen the following approach: instead of calculating full {\em CoStar} models, we have calculated atmosphere models at the same positions in the HR-diagram, i.e.~adopting identical stellar parameters as for Z=0.020. Only the wind parameters (\ifmmode \dot{M} \else $\dot{M}$\fi, \ifmmode v_{\infty} \else $v_{\infty}$\fi) and the composition have been adapted to Z=0.004. This procedure allows detailed comparisons between the emergent spectra at different metallicities. The following changes apply for Z=0.004 with respect to the parameters given for Z=0.020 in Table \ref{ta_params}: \ifmmode \dot{M} \else $\dot{M}$\fi\ is reduced by a factor of 2.236, and \ifmmode v_{\infty} \else $v_{\infty}$\fi\ by a factor of 1.233, to reflect the metallicity dependence expected from wind models (cf.~Sect.~\ref{s_input}).

In addition to accounting for the abundance changes of the metals, one also needs to modify the relative hydrogen to helium abundance. Consistently with recent evolutionary grid calculations (Charbonnel et al.~ 1993, Meynet et al.~ 1994), we adopt an initial helium content $Y=Y_p +\left(\Delta Y/\Delta Z\right)Z$ with a $\Delta Y/\Delta Z$ ratio of 3 (see Schaller et al.~ 1992). The resulting abundances for Z=0.004 are $(X,Y)$=(0.744,0.252) in mass fraction, which corresponds to H and He number abundances of $(n_{\rm H},n_{\rm He})$=(0.922,0.078). These abundances apply to all Z=0.004 models since, in contrast to the solar metallicity tracks, no surface He enrichment is expected in this case (see Meynet et al.~ 1994).
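As an aside, the conversion between the mass fractions and number fractions quoted above is elementary; a short Python sketch (illustrative only, with the primordial helium content taken as $Y_p=0.240$, consistent with the numbers quoted) is:
\begin{verbatim}
Z = 0.004
Y = 0.240 + 3.0 * Z                     # Y = Y_p + (Delta Y/Delta Z) Z
X = 1.0 - Y - Z                         # hydrogen mass fraction
n_H = (X / 1.0) / (X / 1.0 + Y / 4.0)   # number fractions (mass number
n_He = 1.0 - n_H                        #  1 for H, 4 for He), normalised
print(X, Y, n_H, n_He)                  # 0.744 0.252 0.922... 0.078...
\end{verbatim}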
Consequently, the mean molecular weight $\mu$, which enters the determination of the atmospheric structure in the low velocity domain, is given by $\mu=0.600$.

For solar metallicity we will discuss several results in detail. For the reduced metallicity, we will present the integrated ionizing fluxes at Z=0.004 and briefly summarise the influence of metallicity on the emergent spectra.
\begin{figure*}[htb]
\centerline{\psfig{figure=costar20_uv.eps,height=18cm}}
\caption{Synthetic UV spectra showing the spectral evolution on the 20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track. Plotted is the logarithm of the emergent luminosity. Approximate spectral types from Table \protect\ref{ta_params} are given. Starting with the second model, each spectrum has been shifted downwards by 0.7 dex with respect to the previous one, in order to allow a good comparison. The marks on the top and bottom indicate the location of the CNO and Si lines taken from the lists of Bruhweiler et al.~ (1981) and Dean \& Bruhweiler (1985). The strongest CNO and Si features are labelled}
\label{fig_costar20_uv}
\end{figure*}
\begin{figure*}[htb]
\centerline{\psfig{figure=costar25_uv.eps,height=18cm}}
\caption{Same as Fig.~\protect\ref{fig_costar20_uv} for the 25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track (models B1 to B4)}
\label{fig_costar25_uv}
\end{figure*}
\begin{figure*}[htb]
\centerline{\psfig{figure=costar120_uv.eps,height=18cm}}
\caption{Same as Fig.~\protect\ref{fig_costar20_uv} for the 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track (models F1 to F3)}
\label{fig_costar120_uv}
\end{figure*}
\clearpage
\section{Evolution of the UV line blanketed spectrum at Z=0.020}
\label{s_ms_blank}
Our line blanketed models allow us to predict a large number of detailed observable line features, which are well suited for comparison with UV spectra. In Paper II we have presented the spectral evolution in the 850 to 2200 \ang\ range along MS tracks with initial masses between 40 and 85 \ifmmode M_{\odot} \else $M_{\odot}$\fi. To provide the complete set of UV spectra at solar metallicity, we here present the corresponding results for the additional tracks ($M_{\rm i}$ = 20, 25 and 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi). The synthetic UV spectra of the models given in Table~\ref{ta_params} are plotted in Figs.~\ref{fig_costar20_uv} to \ref{fig_costar120_uv}. Together with the plots from Paper II, these figures illustrate the behaviour of the strongest UV features as a function of luminosity and effective temperature. The progressive behaviour of the strongest wind lines of CNO and Si agrees with the conclusions drawn in Paper II, to which we refer for more details. {\em (a)} The predictions for the N~{\sc v} resonance line follow the observed decrease in line strength towards later spectral types. The spectra of the B-type stars show some tendency to produce less N~{\sc v} than observed, which reflects the problem of super-ionization. {\em (b)} The strong luminosity dependence of the Si~{\sc iv} line is reproduced. {\em (c)} While the C~{\sc iv} $\lambda$ 1550 doublet is predicted too weak for ZAMS models with $M_{\rm i} \ga$ 40 \ifmmode M_{\odot} \else $M_{\odot}$\fi, we obtain a strong C~{\sc iv} P-Cygni line for less massive ZAMS stars. Compared to observations (see Snow et al.~ 1994), the C~{\sc iv} line of the 20 and 25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ ZAMS models is too strong.
Both results indicate that the predicted carbon ionization balance is shifted towards too high ionization stages for a given temperature (see Paper~II). {\em (d)} The predicted O~{\sc v} $\lambda$ 1371 line shows a strong P-Cygni profile for the hottest models, while it is mostly observed in absorption. The reasons have been discussed in Paper II.

In Paper II we have performed several {\em quantitative} comparisons of metal line features. In particular, we have been able to reproduce the strong observed Fe~{\sc iii} 1920 \ang\ feature in late O and early-B giants and supergiants. This feature was found to be a good temperature indicator for these stars. The new tracks confirm this result.

\section{Ionizing spectrum}
\label{s_ionis}
In this section, we will compare our model predictions, which take into account non-LTE effects, line blanketing and a stellar wind, with both LTE and non-LTE plane parallel atmospheres. In this way, we can investigate the effect of a stellar wind on the emerging EUV continuum flux. Before doing so, we first briefly discuss the physics of the processes connected with the presence of a stellar wind that are found to affect the flux distribution shortward of the Lyman continuum edge. There are three categories of effects: wind velocity effects, geometrical wind effects, and line blanketing effects.

{\bf Wind velocity effects:} Because of the velocity gradient in a stellar wind, the He~{\sc ii} ({\sc i}) groundstate may become depopulated. If this occurs in the region where the corresponding continuum is formed, this will lead to a decrease of the bound-free opacity and consequently to an increase of the flux at $\lambda < 228~(504)$ \AA. The depopulation effect works as follows. We consider a point $r$ in the wind, hereafter the local point. The velocity gradient in the wind flow allows helium resonance line photons to escape from $r$ to infinity, forcing this transition out of detailed balance. The $n=1$ level population is no longer controlled by the local contribution to the mean intensity in the line, $\overline{J}$, i.e. by the local line source function. Instead, $\overline{J}$ is dominated by the non-local contribution, which is proportional to the specific intensity at the photosphere. Because the radiation temperature at the photosphere is higher than the electron temperature $T(r)$ and because the helium resonance transition is in the Wien part of the spectrum, this leads to a large increase of radiation at the line frequency, depopulating the groundlevel (see Gabler et al.~1989). At a certain point in the wind, the groundstate population reaches a minimum. Farther out, the $n=1$ population will again increase because of the proportionality of $\overline{J}$ to the dilution factor and because of an increased importance of recombinations from higher levels (see Najarro et al.~1996).

We need to distinguish four cases: {\em (a-i)} The continuum is formed in the region below (i.e. at higher Rosseland optical depth than) that of the regime of strong depopulation of the groundstate. This would correspond to stars with low density winds. If the mass loss is so small that the continuum is formed in the photosphere, it is likely that no significant changes from plane parallel models occur; consequently, the continuum flux will not differ much from that of a model without a stellar wind. {\em (a-ii)} As in the first case, the mass loss is sufficiently low that the groundlevel does not suffer from the depopulation effect described above.
What defines this case is that transitions between high lying levels become optically thin. Subsequent electron cascading causes the ground state population to increase. Consequently, the continuum flux will decrease somewhat relative to a model without a stellar wind. This case occurs in the He~{\sc i}\ continuum of most of the models considered in this work. In particular, the described effect causes a {\em flattening} of the spectrum in the He~{\sc i}\ continuum, since the continuum at relatively short wavelengths is formed at relatively large Rosseland optical depth, where it is less sensitive to the described effect. {\em (a-iii)} The continuum is formed in the geometrical region corresponding to that of strong depopulation. The continuum flux will be increased significantly relative to a model without a stellar wind. {\em (a-iv)} The continuum is formed in the region outside of that of the depopulation regime. This would correspond to models with a very dense wind. An increased flux relative to a model without a stellar wind may still be present, but the flux will not be as high as in the preceding case. Case {\em (a-ii)} typically occurs in the He~{\sc i}\ continuum of most O-type models (cf.~also Sellmaier et al.~ 1996). Case {\em (a-iii)} dominates the He~{\sc ii} continuum of O-type stars. It may, however, also occur in the He~{\sc i} continuum of B-type stars (Najarro et al.~1996). {\bf Geometrical wind effects:} {\em (b)} Related to the above effects is the possibility that continuum forming layers may be located outside of the photosphere. In this case the flux is determined by the competing effects of {\em (b-i)} the increase of the emitting surface yielding a larger flux, and {\em (b-ii)} a drop in the local source function, at the emitting surface, due to the temperature decrease, which reduces the emergent flux. The geometrical effects mainly occur in the He~{\sc ii}\ continuum, where the opacity is the largest. {\bf Line blanketing effects:} Line blanketing may either cause an increase or a decrease of the flux, because two competing effects play a role: {\em (c-i)} The presence of many lines in the photosphere causes a redistribution of photons from the radiation field at $\lambda < 228~(504)$ \AA\ towards photons of longer wavelength. This blocking of photospheric flux causes a decrease in the helium ionization, increasing the He~{\sc ii} ({\sc i}) groundlevel population. Both the photospheric blocking and the increased He~{\sc ii} ({\sc i}) groundstate population cause a decrease of the flux at $\lambda < 228~(504)$ \AA. {\em (c-ii)} As first shown by Schaerer \& Schmutz (1994a), the presence of many resonance lines in the stellar wind causes an increase of the isotropy of the radiation field. This diffuse radiation is essentially the result of (multiple) photon scattering and occurs most effectively in wavelength regions where the line density is so large that the lines overlap. In these regions the star effectively shows a larger geometrical dilution factor compared to the case of no line blanketing. The associated increase in mean intensity yields a higher ionization, consequently a decrease in the groundstate population of He~{\sc ii} ({\sc i}) which causes an increase of the flux at $\lambda < 228~(504)$ \AA. The first effect is usually more important in H~{\sc i} and He~{\sc i} continua and also dominates in the He~{\sc ii} continuum of stars with low density winds. The last effect dominates in the He~{\sc ii} continuum of stars with high density winds.
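For orientation, the geometrical dilution factor invoked above is the standard one; we quote it here for convenience (a textbook expression, not specific to our models):
\begin{displaymath}
W(r)=\frac{1}{2}\left[1-\sqrt{1-\left(\frac{\ifmmode R_{\star} \else $R_{\star}$\fi}{r}\right)^{2}}\right],
\end{displaymath}
which takes the value $1/2$ at the stellar surface and falls off as $\frac{1}{4}\,(\ifmmode R_{\star} \else $R_{\star}$\fi/r)^{2}$ for $r \gg \ifmmode R_{\star} \else $R_{\star}$\fi$; this is the quantity to which $\overline{J}$ becomes proportional far out in the wind.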
\begin{figure}[htb] \centerline{\psfig{figure=compare_18ryd.eps,height=8.8cm}} \caption{Comparison of emergent EUV fluxes from a {\em CoStar} model (solid line), a plane parallel non--LTE\ model of Kunze (1994, dotted line) and a plane parallel LTE model of Kurucz (1991, dashed line). Plotted is the astrophysical flux as a function of the energy. The models are for a 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track dwarf model with parameters: {\em CoStar}: model F2, Kunze: $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (50 kK, 4.0) and Kurucz: $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (50 kK, 5.0)} \label{fig_compare_18ryd} \end{figure}
\begin{figure}[htb] \centerline{\psfig{figure=compare_8ryd.eps,height=8.8cm}} \caption{Same as Fig.~\protect\ref{fig_compare_18ryd} for a 60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ dwarf. {\em CoStar}: model D2 (solid), Kunze (dotted): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (45 kK, 4.0) and Kurucz (dashed): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (45 kK, 5.0)} \label{fig_compare_8ryd} \end{figure}
\begin{figure}[htb] \centerline{\psfig{figure=compare_10ryd.eps,height=8.8cm}} \caption{Same as Fig.~\protect\ref{fig_compare_18ryd} for a 60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ supergiant. {\em CoStar}: model D4 (solid line). Parameters are as in Table~\protect\ref{ta_params}, except for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 3.2), Kunze (dotted): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 3.2), and Kurucz (dashed): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0)} \label{fig_compare_10ryd} \end{figure}
\subsection{Comparisons of CoStar and plane parallel models} \label{s_costar}
We first compare the EUV spectral range of {\em CoStar} models with plane parallel models. We limit our comparisons to models which include line blanketing. For the plane parallel models we adopt the widely used LTE models of Kurucz (1991, ATLAS9) and the recent non--LTE\ models from Kunze (1994, cf.~Kunze et al.~ 1992). The latter provide an extensive grid, covering essentially the same $\ifmmode \log g \else $\log g$\fi$--$\log \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ parameter space as the present {\em CoStar} calculations. The Kunze models include 11 of the most abundant elements in non--LTE\ (H, He, C, N, O, Ne, Mg, Al, Si, S and Ar). Grids for plane parallel non--LTE\ models which also include iron are not yet available (e.g.~Dreizler \& Werner 1993, Hubeny \& Lanz 1995).
\begin{figure}[htb] \centerline{\psfig{figure=compare_20ryd.eps,height=8.8cm}} \caption{Same as Fig.~\protect\ref{fig_compare_18ryd} for a 20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ dwarf. {\em CoStar} model A1 (solid line;
parameters as in Tab.~\protect\ref{ta_params} except $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0)), Kunze (dotted): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0), and Kurucz (dashed): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0)} \label{fig_compare_20ryd} \end{figure}
\begin{figure}[htb] \centerline{\psfig{figure=compare_23ryd.eps,height=8.8cm}} \caption{Same as Fig.~\protect\ref{fig_compare_18ryd} for a 20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ supergiant. {\em CoStar} model A4 (solid line; parameters as in Tab.~\protect\ref{ta_params}), Kunze (dotted): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (25 kK, 3.2), and Kurucz (dashed): $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (25 kK, 3.3)} \label{fig_compare_23ryd} \end{figure}
\begin{figure}[htb] \centerline{\psfig{figure=compare_23_temp.eps,height=8.8cm}} \caption{{\bf a} Comparison of temperature structures from model A4 (solid line) with a plane parallel non--LTE\ model of Kunze (1994) for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (25 kK, 3.2) (dotted). Also indicated are the depths of the \ifmmode {\rm He^{\circ}} \else $\rm He^{\circ}$\fi\ and Lyman continuum forming layers ($\tau_\nu=2/3$) in the {\em CoStar} model A4. The temperature differences are discussed in the text. {\bf b} Measure of the spherical extension of model A4. Plotted is $\log (r/\ifmmode R_{\star} \else $R_{\star}$\fi-1)$ as a function of the Rosseland optical depth. Note the location of the continuum forming layers in the photosphere--wind transition zone} \label{fig_compare_23_temp} \end{figure}
\subsubsection*{120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track}
A comparison of models at the highest \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ available (approximately O3 V stars) is shown in Fig.~\ref{fig_compare_18ryd}. Plotted is model F2 from the 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track (solid line), the Kunze model for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (50 kK, 4.0) and Kurucz's model at $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (50 kK, 5.0)\footnote{At high temperatures the results from Kurucz models do not depend sensitively on gravity and should thus allow for an appropriate comparison.}. While the Lyman continuum is essentially identical in all models, non--LTE\ effects increase the flux in the He~{\sc i}\ continuum in both non--LTE\ models (e.g. Mihalas \& Auer 1970). Given the very high temperature, H and He are fully ionized and electron scattering becomes an important opacity source in the He~{\sc i}\ continuum. Consequently, wind effects have little influence on the EUV flux up to the He~{\sc ii}\ continuum edge. Close to the He~{\sc ii}\ edge, our model shows a small flux ``excess'' with respect to the model of Kunze (1994). This is probably due to lower blanketing above the C~{\sc iii} and O~{\sc iii} edges in our {\em CoStar} models (see Sect.~\ref{s_improve}). In the He~{\sc ii}\ continuum, the wind model shows the strong characteristic flux increase due to wind effects (Cases {\em a-iii} and {\em c-ii}).
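Since Figs.~\ref{fig_compare_18ryd} to \ref{fig_compare_23ryd} are plotted against energy while the continuum edges are quoted in wavelength, we recall for convenience the standard conversion $E\,[{\rm Ryd}] \simeq 911.8/\lambda\,[{\rm \AA}]$: the Lyman, He~{\sc i}\ and He~{\sc ii}\ edges at 912, 504 and 228 \AA\ thus correspond to 1.0, 1.8 and 4.0 Ryd respectively.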
\subsubsection*{60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track}
In Figures \ref{fig_compare_8ryd} and \ref{fig_compare_10ryd} we plot comparisons of dwarf and supergiant models from the 60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track. {\bf Dwarf model:} Figure \ref{fig_compare_8ryd} shows the emergent flux for model D2 (solid line). It is compared to a model with $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (45 kK, 4.0) from Kunze (dotted line) and a Kurucz model (dashed line) for (45 kK, 5.0). In the Lyman continuum all models yield essentially the same results. In the He~{\sc i}\ continuum, non-LTE effects increase the emergent flux. The slope of the continuum of our {\em CoStar} model is slightly flatter than that for the Kunze model (Case {\em a-ii}). This is also found in recent calculations of Sellmaier et al.~ (1996). In the He~{\sc ii}\ continuum the situation is a combination of Cases {\em a-iii} and {\em c-ii}. {\bf Supergiant model:} The EUV flux distribution from the supergiant model (model D4 -- but recalculated for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 3.2)) is plotted in Fig.~\ref{fig_compare_10ryd} (solid line). The comparison shows the corresponding plane parallel model from Kunze for identical \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ and \ifmmode \log g \else $\log g$\fi\ (dotted line) and a Kurucz model (dashed) with $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0). Given the strong stellar wind the ionizing flux is influenced by wind effects at essentially all wavelengths. As for the dwarf models, this leads to depopulated ground states (Case {\em a-iii}) but also causes the depth of continuum formation to be located in spherically extended layers. In this case the flattening of the spectrum in the He~{\sc i}\ continuum is mostly due to Case {\em b}. The spectrum up to the He~{\sc ii}\ edge is flatter than in plane parallel models. Due to stronger blanketing, the absolute flux in the Lyman continuum is lower than in the Kunze model. Above the He~{\sc ii}\ edge the wind is optically thick up to large distances (Case {\em b}), which explains the low flux with respect to the plane parallel non--LTE\ model.
\subsubsection*{20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track} \label{s_20track}
The EUV spectra along the 40 and 25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ tracks are qualitatively identical to the ones discussed above although the differences with respect to plane parallel non--LTE\ models increase. For the least massive objects modeled with {\em CoStar}, however, the situation may be slightly different due to their relatively low mass loss rates. {\bf Dwarf model:} Figure \ref{fig_compare_20ryd} shows the dwarf model A1 (recalculated for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (35 kK, 4.0)) compared to Kunze and Kurucz models with identical parameters. Due to stronger blanketing the Lyman continuum flux is slightly lower than in the Kunze model. Qualitatively the slope of the He~{\sc i}\ continuum resembles the cases discussed before. Figure \ref{fig_compare_20ryd}, however, shows that up to $\sim$ 2.6--3 Rydberg the {\em CoStar} model predicts a flux lower than that in both the non--LTE\ and LTE plane parallel model. The difference relative to the Kunze model is a clear example of a Case {\em a-ii} effect.
A similar result was also obtained for $\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ = 35 kK by Sellmaier et al.~ (1996). Due to the weak wind, the flux increase in the He~{\sc ii}\ continuum with respect to the plane parallel non--LTE\ model is not strongly pronounced (Case {\em a-iii}). {\bf Giant model:} Let us turn to the early B-type giant model (Figure~\ref{fig_compare_23ryd}). This star has a relatively weak wind; nevertheless, the {\em CoStar} model shows surprisingly large differences when compared to plane parallel models. The cause for these large differences is not directly related to the outflow (the H~{\sc i}\ and He~{\sc i}\ continua are formed in the photosphere), but is a consequence of differences in temperature structure. The temperature structures of model A4 and of the Kunze model for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (25 kK, 3.2) are given in the upper panel of Fig.~\ref{fig_compare_23_temp}. The lower panel shows the spherical extension of model A4. Note that the structures agree well at optical depths \ifmmode \tau_{\rm Ross} \else $\tau_{\rm Ross}$\fi $\ga$ 1. In the {\em CoStar} model the quasi-hydrostatic region reaches out to $\log \ifmmode \tau_{\rm Ross} \else $\tau_{\rm Ross}$\fi \sim -1.7$. At this point the wind flow starts to accelerate, causing the temperature to decrease rapidly. Given the weakness of the wind in the {\em CoStar} model, the intermediate region $-1.7 \la \log \ifmmode \tau_{\rm Ross} \else $\tau_{\rm Ross}$\fi \la 1$ where geometric extension is negligible is sufficiently large for the temperature to approach the value corresponding to that of the boundary temperature in a plane parallel grey atmosphere in radiative equilibrium, i.e. $T \rightarrow 2^{-1/4}\, \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi \simeq 0.841\, \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$. At optical depths $\log \ifmmode \tau_{\rm Ross} \else $\tau_{\rm Ross}$\fi \la -1.7$, the spherical extension becomes important, leading to the rapid decline of the temperature. As Fig.~\ref{fig_compare_23_temp} shows, the temperature in the plane parallel non--LTE\ model is below the grey value, as expected, due to cooling by the metals. The difference with {\em CoStar} reaches up to $\sim$ 3500 K close to the photosphere--wind transition zone, which is also the region where the He~{\sc i}\ and the Lyman continuum are formed (see Fig.~\ref{fig_compare_23_temp}). The higher temperature leads to a higher flux both in the Lyman and in the He~{\sc i}\ continuum of the {\em CoStar} model with respect to the Kunze model (see Fig.~\ref{fig_compare_23ryd})\footnote{In both models the H and He groundstates are overpopulated.}. The Kurucz model yields results intermediate between both non--LTE\ models. As discussed by Kunze (1994) this is due to an overestimation of the temperature gradient in the continuum forming layers in B star models of Kurucz (cf.~Philips \& Wright 1980). In summary, we have seen that differences in the ionizing flux down to 228 \AA\ between {\em CoStar} models and plane parallel models become particularly large for the lowest mass tracks presented here. This is not what one would expect: given the relatively low mass outflow of these objects, wind effects are expected to become less and less important and the predictions should smoothly join those from non--LTE\ plane parallel models\footnote{Non-LTE effects are still of importance and one therefore does not expect the results to agree with the Kurucz models in this domain.}. But this does not occur.
In fact the physical situation becomes more complex, as the temperature structure starts to play an important role. Our predictions for B0 dwarf and giant stars (roughly corresponding to models A) should thus be taken with caution. These uncertainties will be discussed in more detail in Sect.~\ref{s_improve}.
\section{Revised ionizing fluxes of O and early B stars} \label{s_revision}
In this section we present the integrated photon fluxes obtained from our models at solar and $1/5$ solar metallicity and derive new calibrations for O3 to B0 stars of population I. We also compare our results with previous studies that use different atmosphere calculations.
\subsection{Integrated photon fluxes}
In Table \ref{ta_qi_z020} we list the predicted number of photons emerging at wavelengths shorter than 912, 504, and 228 \ang\ respectively, referred to as $q_0$, $q_1$, and $q_2$. The values are given for both line blanketed model sets at Z=0.020 and 0.004. The total ionizing luminosity $Q_i$ (in photons $\rm s^{-1}$) is obtained from $Q_i=4 \pi (\ifmmode R_{\star} \else $R_{\star}$\fi \ifmmode R_{\odot} \else $R_{\odot}$\fi)^2 q_i$, where \ifmmode R_{\star} \else $R_{\star}$\fi\ is given in Table \ref{ta_params}. The $q_i$ values may be used to interpolate results for other stellar parameters and comparisons to predictions from plane parallel atmosphere models (cf.~below). It must, however, be kept in mind that the fundamental value predicted by spherically extended models, such as the present ones, is the total ionizing luminosity ($Q_i$), since the radius is one of the basic model parameters. Using interpolations of $q_i$ to obtain the number of ionizing photons for stars with significantly differing radii may therefore yield unreliable results. The same comment also applies to the wind parameters. Numerical care must be taken for interpolations since the model grid is not uniformly spaced in its input parameters. While reliable $q_0$ and $q_1$ interpolations can be done with 2-D surface interpolation algorithms, the same is not true for $q_2$, which shows a strong discontinuity in the $\ifmmode \log g \else $\log g$\fi - \log\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ plane. For population synthesis applications the safest approach may be to assign the flux from some ``nearest neighbour'' model to the desired point and scale it to the correct bolometric luminosity, as is usually done.
{\footnotesize \begin{table}[htb] \caption{Ionizing photon fluxes in ${\rm cm^{-2} s^{-1}}$ from {\em CoStar} models at Z=0.020 (columns 2-4) and Z=0.004 (columns 5-7). For Z=0.020 the stellar parameters are given in Table \protect\ref{ta_params}. The low Z models include abundance changes as well as the expected variations of the wind parameters.
Empty columns denote fluxes $< 10^3\, {\rm cm^{-2} s^{-1}}$} \centerline{ \begin{tabular}{rrrrrrrrrrr} \\ \hline \\ & \multicolumn{3}{c}{Z=0.020} & \multicolumn{3}{c}{Z=0.004}\\ model & $\log q_0$ & $\log q_1$ & $\log q_2$ & $\log q_0$ & $\log q_1$ & $\log q_2$\\ \\ \hline
\multicolumn{4}{l}{20 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
A1 & 23.63 & 22.12 & 18.27 & 23.73 & 22.33 & 18.75 \\
A2 & 23.35 & 21.59 & & 23.39 & 21.69 \\
A3 & 22.92 & 20.84 & & 22.96 & 20.95 \\
A4 & 21.89 & 19.10 & & 21.89 & 19.17 \\
\multicolumn{4}{l}{25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
B1 & 24.04 & 23.25 & 20.14 & 24.10 & 23.33 & 20.03 \\
B2 & 23.85 & 22.85 & 19.67 & 23.85 & 22.82 & 19.53 \\
B3 & 23.33 & 21.50 & & 23.36 & 21.57 \\
B4 & 22.52 & 20.09 & & 22.52 & 20.08 \\
\multicolumn{4}{l}{40 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
C1 & 24.47 & 23.97 & 21.56 & 24.50 & 24.00 & 20.49 \\
C2 & 24.33 & 23.75 & 21.27 & 24.35 & 23.80 & 20.69 \\
C3 & 24.05 & 23.31 & 20.13 & 24.08 & 23.40 & 20.13 \\
C4 & 23.00 & 20.84 & 8.69 & 23.23 & 21.25 \\
C5 & 22.38 & 19.95 & 6.27 & 22.43 & 19.76 \\
C6 & 20.90 & 14.02 & & 20.75 & \\
\multicolumn{4}{l}{60 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
D1 & 24.67 & 24.20 & 21.95 & 24.68 & 24.23 & 20.95 \\
D2 & 24.58 & 24.09 & 21.54 & 24.60 & 24.13 & 20.79 \\
D3 & 24.23 & 23.59 & 20.44 & 24.25 & 23.64 & 20.64 \\
D4 & 23.64 & 22.48 & 12.42 & 23.64 & 22.50 & 11.49 \\
D5 & 22.68 & 20.18 & 7.09 & 22.71 & 20.24 & 6.34 \\
D6 & 21.65 & 15.59 & & 21.65 & 18.16 \\
D7 & 21.74 & 15.62 & & 21.82 & 18.34 \\
\multicolumn{4}{l}{85 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
E1 & 24.81 & 24.39 & 21.55 & 24.82 & 24.40 & 21.38 \\
E2 & 24.71 & 24.25 & 21.49 & 24.72 & 24.27 & 21.27 \\
E3 & 24.45 & 23.88 & 20.89 & 24.44 & 23.90 & 20.98 \\
\multicolumn{4}{l}{120 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ track:} \\
F1 & 24.92 & 24.50 & 21.79 & 24.91 & 24.51 & 21.57 \\
F2 & 24.80 & 24.35 & 21.74 & 24.80 & 24.37 & 21.55 \\
F3 & 24.69 & 24.20 & 21.19 & 24.69 & 24.22 & 21.53 \\
\hline \end{tabular} } \label{ta_qi_z020} \end{table} }
\subsection{Metallicity effects} \label{s_metals}
As mentioned in Sect.~\ref{s_z004}, a change in metallicity basically affects our results in three different ways: {\em 1)} Change of the total metal abundance, {\em 2)} change of the relative hydrogen to helium abundances expected from the chemical evolution, and {\em 3)} modified wind properties due to {\em 1)}. In Table \ref{ta_qi_z020} we give the ionizing photon fluxes derived from models at Z=0.004. These values are to be compared to the ones for solar metallicity. In most cases the ionizing flux in the H and He~{\sc i}\ continua increases with lower metallicity, as expected. The difference between both sets is, however, relatively small. Typically $q_0$ changes by less than $\sim$ 30 \%, although larger variations are found for $q_1$. The main influence in our models results from effect {\em 3)}, mentioned above. Effects {\em 1)} and {\em 2)} are of secondary importance. At Z=0.004 the decrease of both the wind velocities and the mass loss rates implies a diminishing importance of wind effects. Consequently, cascading from upper levels (case {\em a-ii}) becomes less important. Thus the H groundstate is less over-populated compared to the solar metallicity case, implying a slightly stronger flux in the Lyman continuum. Similar reasoning applies to the He~{\sc i}\ continuum for the temperature range covered in our models.
As discussed at length, the He~{\sc ii}\ ionizing flux is most sensitive to wind effects. In general, a lowering of the metallicity implies a decrease of the wind density leading to less emission in the He~{\sc ii}\ continuum. However, in some cases of stars with large wind densities (e.g.~models E3 and F3) the opposite is seen. This is similar to what is seen in hot Wolf-Rayet stars: The He~{\sc ii}\ continuum is formed at a relatively large radius, where the ground state population drives the ionization, making it proportional to the total density (e.g.~Schmutz \& Hamann 1986). In this situation lowering the wind density (i.e.~lowering Z) reduces the recombination rates, implying a stronger ionization. This explains the increase of the He~{\sc ii}\ ionizing flux for the models with the highest wind densities when lowering the metallicity\footnote{In model F3 the He enrichment further augments this behaviour.}. In summary, we see that the metallicity dependence of the ionizing fluxes predicted by our models is relatively small and essentially due to the expected changes of the wind properties with Z (effect {\em 3)}). However, given the likely underestimate of blanketing in our models (see Sect.~\ref{s_improve}) we expect the dependence on metal abundances to be somewhat more important than found here.
{\footnotesize \begin{table*} \caption{Parameters for OB-type stars: Ionizing photon fluxes per unit surface area ($q_0$ and $q_1$ in columns 4 and 5) derived from solar metallicity {\em CoStar} models, using the \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi--\ifmmode \log g \else $\log g$\fi\ calibration of Vacca et al.~ (1996) given in columns 2 and 3. Adopting the radii from Vacca et al.~\ (column 6) one obtains the absolute photon fluxes $Q_0$ and $Q_1$ given in columns 7 and 8} \centerline{ \begin{tabular}{lrrrrrrrrrr} \\ \hline \\ Sp.~Type & \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi & $\log g_{\rm evol}$ & $\log q_0$ & $\log q_1$ & $R$ & $\log Q_0$ & $\log Q_1$ \\ & [K] & [cgs] & [${\rm cm^{-2} s^{-1}}$] & [${\rm cm^{-2} s^{-1}}$] & [\ifmmode R_{\odot} \else $R_{\odot}$\fi] & [$\rm s^{-1}$] & [$\rm s^{-1}$] \\ \hline \\
\multicolumn{4}{l}{Luminosity class V:} \\
O3 V & 51230. & 4.149 & 24.82 & 24.39 & 13.2 & 49.85 & 49.42 \\
O4 V & 48670. & 4.106 & 24.71 & 24.27 & 12.3 & 49.68 & 49.23 \\
O4.5 V & 47400. & 4.093 & 24.65 & 24.20 & 11.8 & 49.58 & 49.12 \\
O5 V & 46120. & 4.081 & 24.58 & 24.11 & 11.4 & 49.48 & 49.01 \\
O5.5 V & 44840. & 4.060 & 24.51 & 24.00 & 11.0 & 49.38 & 48.86 \\
O6 V & 43560. & 4.042 & 24.44 & 23.91 & 10.7 & 49.28 & 48.75 \\
O6.5 V & 42280. & 4.030 & 24.36 & 23.81 & 10.3 & 49.17 & 48.62 \\
O7 V & 41010. & 4.021 & 24.27 & 23.65 & 10.0 & 49.05 & 48.44 \\
O7.5 V & 39730. & 4.006 & 24.18 & 23.50 & 9.6 & 48.93 & 48.25 \\
O8 V & 38450. & 3.989 & 24.08 & 23.33 & 9.3 & 48.80 & 48.05 \\
O8.5 V & 37170. & 3.974 & 23.94 & 23.05 & 9.0 & 48.64 & 47.74 \\
O9 V & 35900. & 3.959 & 23.79 & 22.70 & 8.8 & 48.46 & 47.37 \\
O9.5 V & 34620. & 3.947 & 23.60 & 22.27 & 8.5 & 48.25 & 46.92 \\
B0 V & 33340. & 3.932 & 23.40 & 21.79 & 8.3 & 48.02 & 46.41 \\
B0.5 V & 32060. & 3.914 & 23.18 & 21.27 & 8.0 & 47.77 & 45.86 \\ \\
\multicolumn{4}{l}{Luminosity class III:} \\
O3 III & 50960. & 4.084 & 24.81 & 24.37 & 15.3 & 49.97 & 49.52 \\
O4 III & 48180. & 4.005 & 24.70 & 24.24 & 15.1 & 49.84 & 49.38 \\
O4.5 III & 46800. & 3.971 & 24.64 & 24.18 & 15.0 & 49.78 & 49.32 \\
O5 III & 45410. & 3.931 & 24.58 & 24.11 & 15.0 & 49.71 & 49.25 \\
O5.5 III & 44020.
& 3.891 & 24.51 & 24.03 & 14.9 & 49.64 & 49.16 \\
O6 III & 42640. & 3.855 & 24.43 & 23.92 & 14.8 & 49.56 & 49.05 \\
O6.5 III & 41250. & 3.820 & 24.34 & 23.78 & 14.8 & 49.47 & 48.91 \\
O7 III & 39860. & 3.782 & 24.24 & 23.63 & 14.7 & 49.36 & 48.75 \\
O7.5 III & 38480. & 3.742 & 24.12 & 23.41 & 14.7 & 49.24 & 48.53 \\
O8 III & 37090. & 3.700 & 23.97 & 23.03 & 14.7 & 49.09 & 48.14 \\
O8.5 III & 35700. & 3.660 & 23.82 & 22.68 & 14.7 & 48.94 & 47.80 \\
O9 III & 34320. & 3.621 & 23.64 & 22.28 & 14.7 & 48.76 & 47.40 \\
O9.5 III & 32930. & 3.582 & 23.44 & 21.83 & 14.7 & 48.56 & 46.95 \\
B0 III & 31540. & 3.542 & 23.21 & 21.35 & 14.7 & 48.33 & 46.47 \\
B0.5 III & 30160. & 3.500 & 22.98 & 20.90 & 14.8 & 48.11 & 46.03 \\ \\
\multicolumn{4}{l}{Luminosity class I:} \\
O3 I & 50680. & 4.013 & 24.81 & 24.34 & 17.8 & 50.09 & 49.63 \\
O4 I & 47690. & 3.928 & 24.69 & 24.24 & 18.6 & 50.02 & 49.56 \\
O4.5 I & 46200. & 3.866 & 24.63 & 24.19 & 19.1 & 49.98 & 49.53 \\
O5 I & 44700. & 3.800 & 24.57 & 24.10 & 19.6 & 49.94 & 49.47 \\
O5.5 I & 43210. & 3.740 & 24.49 & 23.96 & 20.1 & 49.88 & 49.35 \\
O6 I & 41710. & 3.690 & 24.40 & 23.83 & 20.6 & 49.81 & 49.24 \\
O6.5 I & 40210. & 3.636 & 24.30 & 23.69 & 21.2 & 49.73 & 49.12 \\
O7 I & 38720. & 3.577 & 24.18 & 23.45 & 21.8 & 49.64 & 48.91 \\
O7.5 I & 37220. & 3.516 & 24.05 & 23.17 & 22.4 & 49.53 & 48.65 \\
O8 I & 35730. & 3.456 & 23.91 & 22.86 & 23.1 & 49.42 & 48.37 \\
O8.5 I & 34230. & 3.395 & 23.75 & 22.52 & 23.8 & 49.29 & 48.05 \\
O9 I & 32740. & 3.333 & 23.55 & 22.11 & 24.6 & 49.12 & 47.67 \\
O9.5 I & 31240. & 3.269 & 23.31 & 21.61 & 25.4 & 48.90 & 47.21 \\
\hline \end{tabular} } \label{ta_vacca} \end{table*} }
\subsection{A new ionizing flux calibration for O3--B0 stars}
Recently Vacca et al.~ (1996) derived new calibrations of stellar parameters for O3 to B0.5 stars. These are based on results from detailed modeling of the observed absorption line spectra of stars with well-defined spectral classifications. Using these calibrations Vacca et al.~ (1996) calculate ionizing photon fluxes in the H and \ifmmode {\rm He^{\circ}} \else $\rm He^{\circ}$\fi\ continua based on Kurucz (1991) LTE model atmospheres\footnote{For clarification a brief comment on the procedure of Vacca et al.~ (1996) for deriving Kurucz ionizing fluxes appears useful: As shown in Fig.~\ref{fig_hr_logg_models}, the Kurucz (1991) models only cover the domain of the 20 and 25 \ifmmode M_{\odot} \else $M_{\odot}$\fi\ tracks. It is important to note that for the rest of the domain of interest, which corresponds to all O3 to O9 stars, extrapolations combined with blackbody spectra are used to extend the Kurucz grid to the desired lower gravities. }. In view of the important improvements included in our {\em CoStar} models, it is of particular interest to provide a recalibration of the photon fluxes taking into account non--LTE\ effects, wind effects and line blanketing. Table \ref{ta_vacca} shows our new calculations of the H and He~{\sc i}\ ionizing fluxes for O3 to B0.5 stars of luminosity classes V, III, and I. Given are the photon fluxes per unit surface $q_0$ and $q_1$. We also provide absolute photon fluxes $Q_0$ and $Q_1$, adopting the radii from the recent calibration of Vacca et al.~ (1996) in Table \ref{ta_vacca}. The photon fluxes have been derived from our solar metallicity models (Table \ref{ta_qi_z020}) using a two-dimensional surface interpolation routine based on a modified Shepard method (NAG {\tt e01sef} routine).
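To give a concrete picture of this interpolation step, the following minimal sketch uses SciPy's {\tt griddata} as a simple stand-in for the modified Shepard scheme (which is not available in SciPy); the grid points and flux values are purely illustrative placeholders, not the actual model grid:
\begin{verbatim}
import numpy as np
from scipy.interpolate import griddata

# Illustrative scattered grid in the (log g, log Teff) plane with
# placeholder values of log10(q0); NOT the actual CoStar model grid.
points = np.array([[3.5, 4.50], [3.5, 4.70],
                   [4.2, 4.50], [4.2, 4.70]])
log_q0 = np.array([23.2, 24.8, 23.3, 24.8])

# Interpolate to the calibration point of a given spectral type,
# here roughly an O9 V star (log g = 3.959, Teff = 35900 K).
target = np.array([[3.959, np.log10(35900.)]])
print(griddata(points, log_q0, target, method='linear')[0])
\end{verbatim}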
For the reasons mentioned above we have refrained from interpolating values for the He~{\sc ii}\ ionizing fluxes. The $\log q_i$ values have been interpolated in the $\ifmmode \log g \else $\log g$\fi-\log\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi$ plane. For the gravity calibration we prefer to use the ``evolutionary gravity'' $g_{\rm evol}$ given by Vacca et al.~, since these values are consistent with the gravity definition in our {\em CoStar} models. Furthermore the distinction with their ``spectroscopic gravity'' is not of great concern for the comparison with the ionizing fluxes since the fluxes are only weakly dependent on the adopted gravity (see below).
\begin{figure*}[htb] \centerline{\psfig{figure=compare_q0q1.eps,height=8.8cm} \psfig{figure=costar_panagia2.eps,height=8.8cm}} \caption{{\bf Left panel:} Logarithm of the hardness ratio $\log\left(q_1/q_0\right)$ as a function of effective temperature for dwarfs, giants and supergiants. Shown are the values from our {\em CoStar} models (see Table \protect\ref{ta_vacca}) and those derived by Vacca et al.~ (1996) using Kurucz models. {\bf Right panel:} Logarithm of the number of Lyman continuum ionizing photons versus effective temperature. The predictions from our models are plotted using the same symbols as in Fig.~\protect\ref{fig_hr_logg_models}. The solid lines show the relations from Panagia (1973) for his ZAMS and luminosity class I} \label{fig_compare_q0q1} \end{figure*}
\subsection{Comparison with previous calibrations}
We compare our results from Table \ref{ta_vacca} with those derived by Vacca et al.~ (1996) using the Kurucz (1991) models. A brief comparison with the earlier work of Panagia (1973) will also be given. For all O3 to B0.5-type stars, the total number of {\em Lyman continuum photons} $q_0$ is somewhat {\em lower} than predicted by the Kurucz models. The difference increases from 0.03 to 0.07 dex ($\sim$ 7 \% to 15 \%) between O3 and O7 respectively. For later types this behaviour increases strongly and reaches 0.14 (0.26) dex for B0 dwarfs (O9.5 supergiants). More important is the comparison with the He~{\sc i}\ photon flux, which is strongly affected by non--LTE\ and wind effects (see above). For the dwarf sequence in Table \ref{ta_vacca}, {\em $q_1$} is {\em larger} by $\sim$ 25 to 75 \% with respect to the values presented by Vacca et al. For supergiants $q_1$ is typically increased by a factor of 2. It has already been noted by Vacca (1994) and by Vacca et al.~ (1996) that the $q_1$ values based on Kurucz models should be systematically underestimated because of the assumption of LTE. As mentioned in Sect.~\ref{s_20track}, our results for the latest spectral types neither tend smoothly towards the Kurucz values, nor towards the results of the plane parallel non--LTE\ models of Kunze (1994). Model calculations for later types will be necessary to locate more precisely where non--LTE\ and wind effects become negligible (but see Sect.~\ref{s_improve}). Since for most applications involving H~{\sc ii}\ regions and similar systems the ionizing spectrum will be determined by the most massive stars present, the presence of the ``discontinuity'' will not be of importance. In Figure \ref{fig_compare_q0q1} (left panel) we plot the hardness ratio $q_1/q_0$ from the {\em CoStar} values of Table \ref{ta_vacca} as a function of \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi. Also shown is the hardness ratio derived by Vacca et al.~.
As pointed out by these authors (cf.~also Vacca 1994), it must be kept in mind that these values should be higher by typically a factor of 2 if one accounted for non--LTE\ effects in plane parallel models. Figure \ref{fig_compare_q0q1} clearly shows the overall increase of the hardness of the ionizing spectrum at a given temperature. The strongest hardening is found for supergiants due to the increasing importance of wind effects. Interestingly, for stars with \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi $\ga$ 36000 K we find that the hardness ratio is essentially independent of luminosity class while the values of Vacca et al.~ show a spread of up to $\sim$ 0.2 dex. To produce a hardness ratio of $\log\left(q_1/q_0\right) < -0.7$ with our new models the temperature of the exciting star can be $\sim$ 1000 to 11000 K smaller than if Kurucz models were used. Therefore the general tendency is that lower effective temperatures would be derived from observed nebular properties if one uses atmosphere models which account for non--LTE, line blanketing\ and stellar winds.
\begin{figure}[htb] \centerline{\psfig{figure=eps_cma2.eps,height=8.8cm}} \caption{Far UV and EUV flux of $\epsilon$ CMa. EUVE observations from Cassinelli et al.~ (1995) corrected for an attenuation by $N_{\rm H \, I}=1.\, 10^{18} \; {\rm cm^{-2}}$ (dotted line up to $\sim$ 700 \ang) and $N_{\rm H \, I}=5.\, 10^{17} \; {\rm cm^{-2}}$ (Gry et al.~ 1995; solid line). Model comparisons: {\em CoStar} model (solid line, parameters from Table \protect\ref{ta_cma}), Kurucz model (dashed) with $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (21 kK, 3.0). All fluxes are scaled (consistently with \ifmmode R_{\star} \else $R_{\star}$\fi) to the distance of 188 pc. Discussion in text} \label{fig_eps_cma1} \end{figure}
\begin{figure}[htb] \centerline{\psfig{figure=beta_cma1.eps,height=8.8cm}} \caption{EUV flux of $\beta$ CMa. EUVE observations from Cassinelli et al.~ (1996) corrected for an attenuation by $N_{\rm H \, I}=2.\, 10^{18} \; {\rm cm^{-2}}$ (solid line up to $\sim$ 700 \ang). The following models are shown for comparison: {\em CoStar} models (parameters from Table \protect\ref{ta_cma}; solid line: hot model; long-dashed: cooler model), Kurucz models with $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (25 kK, 3.5) (dashed) and (23 kK, 3.5) (dotted). All fluxes are scaled (consistently with \ifmmode R_{\star} \else $R_{\star}$\fi) to the distance of 206 pc. Discussion in text} \label{fig_beta_cma1} \end{figure}
Finally it is interesting to make a brief comparison with the widely used results of Panagia (1973). For further comparisons see Vacca et al. Figure \ref{fig_compare_q0q1} (right panel) shows the comparison of the number of Lyman continuum photons between Panagia and our {\em CoStar} models. The behaviour of the models from the 40, 60, and 85 \ifmmode M_{\odot} \else $M_{\odot}$\fi track has already been discussed in Paper II. The figure clearly illustrates the smaller {\em CoStar} Lyman continuum flux for the dwarfs at $\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi \la 40$ kK, which was also apparent in the above comparison with Vacca et al. The same also holds for the evolved stars from the 20 and 25 \ifmmode M_{\odot} \else $M_{\odot}$\fi tracks. The models with values of $q_0$ larger than the supergiants of Panagia are the extreme supergiant models from the 40 and 60 \ifmmode M_{\odot} \else $M_{\odot}$\fi tracks.
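As a simple numerical cross-check of the calibration, the absolute fluxes in Table \ref{ta_vacca} follow from the surface fluxes via $Q_i=4 \pi (\ifmmode R_{\star} \else $R_{\star}$\fi \ifmmode R_{\odot} \else $R_{\odot}$\fi)^2 q_i$; the short sketch below (with the O3~V row as input and $\ifmmode R_{\odot} \else $R_{\odot}$\fi = 6.96\, 10^{10}$ cm assumed) reproduces the tabulated $\log Q_0$, $\log Q_1$ and the hardness ratio plotted in Fig.~\ref{fig_compare_q0q1}:
\begin{verbatim}
import numpy as np

# O3 V row of the calibration table: log q0, log q1 [cm^-2 s^-1], R* [Rsun].
log_q0, log_q1, radius = 24.82, 24.39, 13.2
R_SUN = 6.96e10                      # solar radius in cm (assumed value)

log_area = np.log10(4. * np.pi * (radius * R_SUN) ** 2)
print(log_area + log_q0)             # -> 49.85, the tabulated log Q0
print(log_area + log_q1)             # -> 49.42, the tabulated log Q1
print(log_q1 - log_q0)               # -> -0.43, hardness ratio log(q1/q0)
\end{verbatim}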
\section{Discussion} \label{s_improve}
We have shown that our models provide an important improvement in describing the ionizing fluxes of hot stars. However, it is also important to discuss possible shortcomings in our method, which may affect our results. As discussed in Sects.~\ref{s_input} and \ref{s_costar} the uncertainties in our models are expected to become increasingly important for B stars, which have weak winds. This can be easily understood and is best illustrated by considering the following brief exploratory study of two unique cases of B2 giants, which have recently been observed shortward of the Lyman limit with EUVE.
\subsection{Comparisons with EUVE observations} \label{s_euve}
The first EUV spectra from early type stars have been observed by Hoare et al.~ (1993). Subsequent EUVE observations of $\epsilon$ CMa (B2 II) and $\beta$ CMa (B1 II-III) by Cassinelli et al.~ (1995, 1996) have provided spectra allowing detailed comparisons with model atmospheres. As mentioned before, these observations have revealed important discrepancies with predictions from plane parallel model atmospheres. Najarro et al.~ (1996) have pointed out that these shortcomings could, at least partly, be explained by models which account for stellar winds. The mechanism they invoke is that of Case {\em a-iii} working in the He~{\sc i}\ continuum. This is the same mechanism that is relevant in the He~{\sc ii}\ continua of the O-type star models discussed here. It is therefore interesting to see how well our predictions agree, notwithstanding the fact that the stellar parameters of $\epsilon$ and $\beta$ CMa lie somewhat outside of the temperature domain covered in this work.
\subsubsection{$\epsilon$ CMa}
The adopted stellar parameters for $\epsilon$ and $\beta$ CMa are given in Table \ref{ta_cma}. Temperature, gravity and radius are from Cassinelli et al.~ (1995). We assume solar abundances. The wind parameters of $\epsilon$ CMa are somewhat uncertain. The adopted mass loss rate for $\epsilon$ CMa is compatible with the studies of Drew et al.~ (1994), Cassinelli et al.~ (1995) and Najarro et al.~ (1996). The same \ifmmode v_{\infty} \else $v_{\infty}$\fi\ and $\beta$ as assumed by Najarro et al.~ are chosen to allow for a comparison with their work.
\begin{table}[htb] \caption{Stellar parameters for $\epsilon$ and $\beta$ Canis Majoris} \centerline{ \begin{tabular}{lrrrrrrrrr} \hline \\ & \multicolumn{1}{c}{$\epsilon$ CMa} & \multicolumn{1}{c}{$\beta$ CMa} \\ \hline \\ \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ [kK] & 21.0 & (24.8, 23.25) \\ \ifmmode \log g \else $\log g$\fi\ [cgs] & 3.2 & (3.7, 3.4) \\ \ifmmode R_{\star} \else $R_{\star}$\fi\ [\ifmmode R_{\odot} \else $R_{\odot}$\fi ] & 16.2 & 11.54 \\ $n_{\rm H}$ & 0.9 & 0.9 \\ \ifmmode \dot{M} \else $\dot{M}$\fi\ [\ifmmode {\rm M_{\sun}yr^{-1}} \else ${\rm M_{\sun}yr^{-1}}$\fi ] & $1. \, 10^{-8}$ & $3.5 \, 10^{-8}$ \\ \ifmmode v_{\infty} \else $v_{\infty}$\fi\ [\ifmmode {\rm km \;s^{-1}} \else $\rm km \;s^{-1}$\fi ] & 800. & 1220. \\ $\beta$ & 1. & 1. \\ \hline \end{tabular} } \label{ta_cma} \end{table}
The spectral energy distribution predicted by {\em CoStar} and the one observed are given in Fig.~\ref{fig_eps_cma1}. The result from a Kurucz model (dashed line) for $(\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi,\ifmmode \log g \else $\log g$\fi)$ = (21 kK, 3.0) is plotted for comparison. The dotted line shows the observed EUVE flux corrected for a hydrogen column density $N_{\rm H \, I}=1.\, 10^{18} \; {\rm cm^{-2}}$ (cf.~Cassinelli et al.~ 1995).
The solid line shows the observation corrected using the recently derived value of $N_{\rm H \, I}=5.\, 10^{17} \; {\rm cm^{-2}}$ (Gry et al.~ 1995). The comparison with the plane parallel LTE model illustrates the striking underestimate of the continuum flux in both the Lyman and the He~{\sc i}\ continuum. As pointed out by Cassinelli et al., the finding of an observed ``EUV excess'' holds for any plane parallel model, i.e. it is independent of the assumption of LTE or non--LTE.\footnote{Non-LTE effects even decrease the EUV flux for the parameters of interest.} As the figure shows, our model atmosphere yields a stronger EUV emission which significantly improves the comparison with the observations. {\em Does this result solve the ``EUV excess'' problem in $\epsilon$~CMa?} The answer is: {\em no}. To explain this conclusion we need to identify the reason(s) for the EUV increase in our model relative to that of the plane parallel model. There are two basic reasons: The first and major reason is simply a larger temperature in the continuum forming layers of our {\em CoStar} model. Indeed, in our model the temperature at the depth of continuum formation at $\lambda \sim$ 600 \ang\ is $T \simeq$ 17 kK, which is larger than the value in the Kurucz model ($T \simeq$ 14 kK, Cassinelli et al.~ 1995). Although the ground states of H and He are slightly overpopulated in this region, the net result is nevertheless a stronger EUV emission. In fact, the structure of our model qualitatively resembles the structure of model A4 plotted in Fig.~\ref{fig_compare_23_temp}. The considered continuum forming layers are located at small optical depths ($\log\ifmmode \tau_{\rm Ross} \else $\tau_{\rm Ross}$\fi \la -2$), where the temperature drop due to the spherical extension has not yet set in and the temperature is thus close to the boundary value of the plane parallel grey atmosphere. This explains the relatively high temperature. As for model A4, discussed in Sect.~\ref{s_20track}, we therefore conclude that the EUV emission is strongly dependent on the temperature structure in the transition zone between photosphere and wind. The second reason concerns the treatment of line broadening and its effect on the amount of line blanketing. In the present calculations for $\epsilon$ and $\beta$ CMa, line blanketing turns out to be essentially ineffective. This unrealistic behaviour is due to the neglect of line broadening (cf.~below). Introducing a small ``turbulent'' line broadening using the simple method of Schaerer \& Schmutz (1992) results in a significant decrease of the EUV flux. This further points out the need for an improved treatment of line blanketing\ for later type stars with weak stellar winds. Najarro et al.~ (1996) have pointed out the importance of the stellar wind on the ground state populations of H and He and hence on the emergent EUV flux of early B giants. To compare our models with their results we calculated a series of models where we varied the mass loss rate from $10^{-9}$ to $10^{-6}$ \ifmmode {\rm M_{\sun}yr^{-1}} \else ${\rm M_{\sun}yr^{-1}}$\fi, keeping all other parameters as in Table \ref{ta_cma}. We do not confirm their strong dependence of the model fluxes on mass loss. In particular we obtain a much weaker dependence of the Lyman jump, the He~{\sc i}\ jump and the number of ionizing photons on mass loss. This finding is confirmed by comoving frame calculations (Schmutz 1995, private communication).
Our results thus indicate that the strong mass loss dependence of the EUV flux found by Najarro et al.~\ must be partly due to variations of their temperature structure with \ifmmode \dot{M} \else $\dot{M}$\fi. The amplitudes of these variations seem significantly larger than those in our calculations. This may originate from their simplified energy equation (they assume radiative equilibrium accounting only for H and He), which is used to derive their temperature structure. This finding stresses the importance of deriving both reliable temperature structures and accurate non--LTE\ populations including a detailed treatment of line blanketing\ and stellar winds to obtain reliable ionizing fluxes for B stars.
\subsubsection{$\beta$ CMa}
The adopted stellar parameters for $\beta$~CMa are given in Table \ref{ta_cma}. Given the ambiguity of these parameters (Cassinelli et al.~ 1996), we present two {\em CoStar} models corresponding to the limiting cases discussed by Cassinelli et al. For the wind parameters we also follow these authors and adopt their theoretical estimates based on the modified CAK theory. As for $\epsilon$ CMa we assume solar abundances. The predicted spectra and the {\em EUVE} observations are compared in Fig.~\ref{fig_beta_cma1}. The solid and long-dashed lines show the results from the hotter and cooler {\em CoStar} model respectively. Kurucz model predictions for comparable parameters (see figure caption) are shown as short-dashed and dotted lines. Compared to the Kurucz models, our calculations again show a stronger EUV flux for a given \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi. Similar to the $\epsilon$~CMa model discussed above, this is mostly due to a temperature difference in the continuum forming layers, even though the ground state of H is overpopulated. The {\em CoStar} model with the lower \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ (23250 K) fits the {\em EUVE} observations best. Interestingly, this effective temperature was favoured by Cassinelli et al.~ (1996) based on a comparison of the UV to near-IR flux distribution. In this case, the EUV emission predicted by the Kurucz model is considerably too weak. Our results thus show that even for the low \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ value and despite non--LTE\ effects, which overpopulate the ground state, the observed EUV flux can be reproduced with the temperature structure from our models. As for $\epsilon$~CMa, the key questions are again: how realistic is this structure, and more fundamentally, what are the physical processes that establish such a temperature structure?
\subsection{Current approximations and future improvements}
The exploratory results in the previous section clearly show that for stars of spectral types later than approximately B0 (not covered by our grid) reliable predictions of ionizing fluxes are not yet possible. Therefore we have limited our model set to O3-B0 stars. We shall now briefly discuss the most important model assumptions and their importance for the set of calculations presented in Table \ref{ta_params}. The assumptions listed in Sect.~\ref{s_input} will be addressed in the following. The Sobolev approximation, which is made for the line transfer, should yield sufficiently precise results for the parameters of our model set (see also de Koter et al.~ 1993). The weakest point in our Monte-Carlo treatment of line blanketing is most likely the neglect of line broadening, yielding only a poor treatment of photospheric lines (see Paper II).
Line blanketing in the low velocity part of the atmosphere is therefore underestimated in the present models. Although we have presently no possibility to quantify the importance of photospheric blanketing, we expect that our results should be fairly reliable for the O stars for the following reasons: {\em 1)} photospheric lines are both weaker and less numerous than in later types, and {\em 2)} given their strong outflow, wind effects play a dominant role in establishing the equilibrium population. A characteristic feature of the line blanketed non--LTE\ models of Kunze (1992, 1994) is the appearance of relatively strong absorption edges in the He~{\sc i}\ continuum, which are mostly due to CNO, but also due to Ne and Ar (see e.g.~Figs.~\ref{fig_compare_10ryd}, \ref{fig_compare_20ryd}). These edges are not treated in our calculations. The Kunze models, on the other hand, do not include lines of iron peak elements, which are treated in our calculations, and which cause an important fraction of the metal line blanketing. Recently Sellmaier et al.~ (1996) presented calculations which include line blocking and non--LTE\ effects in a stellar wind model. We note that their exploratory results also lack pronounced metal ionization edges in the EUV, which seems to confirm our calculations. A more detailed analysis of this question will be possible with the inclusion of additional metals in the full non--LTE\ calculations (see e.g.~de Koter et al.~ 1994, 1996a). A potential source of uncertainty for high \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ models could come from the coherent treatment of electron scattering, which might modify the He ionization, as pointed out by Rybicki \& Hummer (1994). This effect remains to be included in future models. As discussed in Paper II, the high energy part of the spectrum may be affected by the emission of X-rays which are usually attributed to shocks in the stellar winds. We did not include such shocks in our calculations. This may cause us to underestimate the flux in the \ifmmode {\rm He^+} \else $\rm He^{+}$\fi\ continuum. At longer wavelengths, our results should hardly be affected by X-rays as shown by the work of MacFarlane et al.~ (1994). They find that X-rays cause only a small perturbation of the wind structure in the O-type stars discussed in this paper. For stars of later spectral types the situation is different. For these stars, which have relatively weak winds, they do find that X-rays may drastically alter the wind ionization and possibly also contribute to heating in the photosphere. For the understanding of later types than those included in our sets, and particularly for an explanation of the ``EUV excess'' of $\epsilon$ and $\beta$ CMa (cf.~Sect.~\ref{s_euve}), the inclusion of X-rays will probably be of great concern.
\section{Summary and conclusions}
The present work provides an extensive set of predictions regarding the spectral energy distribution of massive stars ($M_{\rm i}=$ 20 to 120 \ifmmode M_{\odot} \else $M_{\odot}$\fi) derived from our combined stellar structure and atmosphere models. Our set covers the entire main sequence evolution and approximately corresponds to O3--B0 stars of all luminosity classes. This represents the first set of predictions for O and early-B stars which are based on the most recent atmosphere models accounting for non--LTE\ effects, line blanketing, and stellar winds.
Especially the treatment of the stellar wind is found to be of great importance for predicting reliable ionizing fluxes of hot stars. Our calculations should provide, for the first time, a reliable description of the spectral energy distribution in the He~{\sc i}\ and He~{\sc ii}\ continuum, where non--LTE, line blanketing, and wind effects are crucial. We have discussed the importance of wind effects and line blanketing\ in O3 to B0 stars in relation to the predicted ionizing fluxes. As already partly found in previous investigations (Gabler et al.~ 1989, 1992; Schaerer 1995, Najarro et al.~ 1996, Schaerer et al.~ 1996a, b) these effects have a profound impact on shaping the EUV flux of hot MS stars. The main conclusions can be summarised as follows: \begin{itemize} \item For stars with $\ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi \ga$ 35 kK the flux in the He~{\sc ii}\ continuum is increased by 2 to 3 orders of magnitude compared to predictions from non--LTE\ plane parallel model atmospheres (cf.~Gabler et al.~ 1989). With respect to Kurucz LTE models there is a 3-6 orders of magnitude increase at \ifmmode T_{\rm eff} \else $T_{\rm eff}$\fi\ $\ga$ 38000 K. \item The flux in the He~{\sc i}\ continuum is not only increased due to non--LTE\ effects (e.g.~Kudritzki et al.~ 1991) but is also modified by wind effects (cf.~Najarro et al.~ 1996, Paper II). The combined effect of the mass outflow and line blanketing\ leads to a {\em flatter energy distribution} in the He~{\sc i}\ continuum (see also Sellmaier et al.~ 1996). Typically our models lead to an increase by a factor of 1.25 to 2 for the He~{\sc i}\ ionizing photon flux with respect to Kurucz models. \item The Lyman continuum fluxes are modified due to line blanketing\ and stellar winds, although to a lesser degree than the spectrum at higher energies. For most cases the differences with Kurucz models are less than $\sim$ 20 \% in the ionizing photon flux. \end{itemize} Using our calculations we provide revised ionizing fluxes for O3 to B0 stars (see Sect.~\ref{s_revision}) based on the recent temperature and gravity calibrations of Vacca et al.~ (1996). The total number of Lyman continuum photons is found to be slightly lower than those of Vacca et al., which are derived from the plane parallel LTE models of Kurucz (1991). Due to the increased flux in the He~{\sc i}\ continuum, the hardness ratio $q_1/q_0$ of the He~{\sc i}\ to H~{\sc i} continuum is increased by factors of $\sim$ 1.6 to $\sim$ 2.5 depending on spectral type and luminosity class. These high hardness ratios only follow from Kurucz models at temperatures of about 1000 to 11000 K higher than used in our models. We have discussed the assumptions inherent in our models and point out future improvements. These improvements will be especially important for understanding the EUV spectra of B-type stars, which show relatively weak stellar winds. We have analysed the EUV spectra of the recently observed B giants $\epsilon$ and $\beta$ CMa (Cassinelli et al.~ 1995, 1996) and have shown that reliable calculations of the temperature structure in a model accounting for the stellar wind and which includes a detailed treatment of photospheric blanketing will be crucial to reproduce the EUVE observations. We argue that these effects are likely more important than the pure wind effect invoked by Najarro et al.~ (1996) to explain the EUV excess of $\epsilon$ CMa (Sect.~\ref{s_euve}).
Although potential shortcomings have been identified (Sect.~\ref{s_improve}), we consider our predictions to be fairly reliable for O stars (see also Schaerer 1996). The recent study of Sellmaier et al.~ (1996) shows that H~{\sc ii}\ regions can provide very sensitive tests and that their exploratory models, which are similar to the present ones, are quite successful in explaining the observations. To explore the broader impact of our new ionizing fluxes a grid of H~{\sc ii}\ region models has been calculated using the nebular photoionization code {\em PHOTO} of Stasi\'{n}ska (see Stasi\'{n}ska \& Schaerer 1996, Leitherer et al.~ 1996). The spectral energy distributions are available on request from the authors and will be included in a forthcoming AAS CD-ROM (Leitherer et al.~ 1996).
\acknowledgements{DS particularly thanks Andr\'e Maeder for his encouragement and support for this project. Dietmar Kunze kindly provided results from his atmosphere calculations. EUVE observations were made available by David Cohen. DS also thanks Bill Vacca, Werner Schmutz and Jacques Babel for numerous discussions and comments. This work was supported in part by the Swiss National Foundation of Scientific Research and by NASA through contract NAS5-31842.}
\section{Introduction}
In the near future the LHC will begin its third run of data collection with the aim of doubling the luminosity collected so far during the two previous runs. While Run-1 allowed the discovery of the Higgs boson, no new physics has yet emerged from Run-2 and the chances of big surprises in the existing dataset are slim, despite the many analyses that are still ongoing. It is becoming obvious that the expected increase in luminosity will not yield a strong enough gain in sensitivity. Tiny signals hidden behind large backgrounds would not benefit much from additional statistics and would call for an improvement in the analysis techniques to benefit fully from the information contained in each event. Neural networks and boosted decision trees, along with other less popular machine learning techniques, are used to learn this information directly from simulations or more rarely from data. The main difficulty of the machine learning methods is to make sure that they actually learned physical information and that they are able to generalize that knowledge. Ensuring this requires using control samples or regularization techniques which have become a standard in the field, and many other techniques such as pivoting \cite{pivot}, weakly supervised \cite{WeakSupervised,SemiSupervised} and unsupervised \cite{Unsupervised} learning have been applied in the field of high energy physics over the past years. The physical information however is provided to the algorithm in an indirect way, by means of the training dataset where it is encoded. Although powerful in terms of prediction, its prospects for reverse engineering are limited, which restricts its interpretability. On the contrary, the Matrix Element Method (MEM) uses our knowledge of the Standard Model (SM) by means of the Lagrangian to compute the compatibility of experimental events with a hypothetical process. The underlying knowledge of the physics process and detector response makes its internal dynamic more straightforward to interpret than for machine learning techniques. Since there is no training step, the MEM can also be used when the number of events in the dataset under consideration is very small, a situation in which other methods struggle. The MEM originates from the Tevatron experiments DØ and CDF for top quark measurements in $t\bar{t}$ production~\cite{MEM1,MEM2,MEM3,MEM4,MEM5,MEM6,MEM7} and has become common in particle physics analyses. Recent examples at the LHC are the searches and measurements of the $t\bar{t}H$~\cite{MEM8,MEM9,MEM10,MEM12,MEM13,MEM14,MEM15} and single top~\cite{MEM16} production processes, as well as studies of spin correlation in $t\bar{t}$ production~\cite{MEM17}. However, this technique suffers from complexity and expensive computation times. In order to obtain the probability for a given process, the matrix element must be convolved with the parton distribution functions and the transfer function of the detector, and be integrated over the whole phase space. This integral is high dimensional and its integrand has a nontrivial shape with many sharp peaks; the details will be presented in the next section. Even modern implementations of the method (e.g. MoMEMta~\cite{MoMEMta}) that use efficient integration strategies need more CPU time to evaluate this integral than many applications of the method can afford. The aim of this paper is to show how a Deep Neural Network (DNN) can be used to approximate the result of the matrix element integration.
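To make the proposed approach concrete from the start, the following minimal sketch (in Python with Keras; the random stand-in sample, layer sizes and training settings are illustrative placeholders only, not the configuration studied in this paper) regresses the event information $-\log_{10}(W)$ from the final-state 4-momenta:
\begin{verbatim}
import numpy as np
from tensorflow import keras

# Stand-in for a training sample: final-state 4-momenta (here 4 objects
# x 4 components) with the MEM information -log10(W) pre-computed with
# MoMEMta. Random numbers are used as placeholders for real events.
n_events, n_features = 100000, 16
x_train = np.random.randn(n_events, n_features).astype("float32")
y_train = np.random.randn(n_events, 1).astype("float32")

# A small fully connected regression network; width and depth arbitrary.
model = keras.Sequential([
    keras.layers.Dense(200, activation="relu", input_shape=(n_features,)),
    keras.layers.Dense(200, activation="relu"),
    keras.layers.Dense(1),          # regress -log10(W) directly
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, batch_size=1024, epochs=10,
          validation_split=0.1)

# After training, evaluation replaces the costly per-event integration.
approx_info = model.predict(x_train[:5])
\end{verbatim}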
This makes it possible to use the MEM for searches for new physics, parameter scans, etc. The probability provided by MoMEMta can be seen as an intractable function --- meaning there is no closed or computationally affordable form --- of the final-state 4-momenta. Any function satisfying reasonable assumptions can be approximated by a neural network of large enough width --- which can be broken down into several layers --- and although nothing guarantees that the result of the MEM fulfills all these assumptions, this was enough to motivate this study. This method is potentially much faster than the direct evaluation of the matrix element by integration. A DNN needs a simulated sample and several hours of computing time for training, but evaluating it afterwards is much faster: typically at least a few orders of magnitude less computing time than for the classical MEM integration, which takes a few seconds per event even for the easiest processes. This is illustrated in Figure~\ref{fig:time_extrapolation} for a few of the benchmark processes from Ref.~\cite{MoMEMta}.

Many developments have been made recently in the context of the MEM by using either parallel computing \cite{MEMParallel_1,MEMParallel_2} or GPU acceleration \cite{MEMGPU_1,MEMGPU_2}. In addition, methods that bypass the classic numerical integration libraries by using boosted decision trees \cite{BDTIntegration} or neural networks \cite{DNNIntegration}, as well as the recent applications of normalizing flows to phase-space sampling \cite{NormFlow1,NormFlow2}, are promising ways to improve the computation time and potentially to avoid the integration variable optimizations currently required and implemented in MoMEMta. These techniques can moreover still be coupled with the method we describe in this paper. In contrast, likelihood-free inference methods \cite{LikelihoodFree_1,LikelihoodFree_2,LikelihoodFree_3} propose to circumvent the integration shortcomings by using machine learning to produce the likelihood ratio without any loss of information. Even though they are apparently more powerful than the MEM, they are less stable and more complicated to interpret than what we propose here.

\begin{figure}[htp] \centering \includegraphics[width=.6\textwidth]{time_extrapolation.png}\hfill \caption{Computation time as a function of the number of events for a few processes with MoMEMta and using the proposed DNN approach. For the DNN it was assumed that the training and evaluation times were \SI{10}{\hour} and \SI{150}{\micro\second} per event respectively. The time spent on producing the training sample is not taken into account.} \label{fig:time_extrapolation} \end{figure}

\section{The Matrix Element Method in a nutshell} \label{sec:MEM}

The purpose of the MEM is to compute $P(x|\alpha)$, the probability to observe an experimental event given the theoretical hypothesis $\alpha$. In this context $x$ refers to the 4-momenta of an arbitrary number of particles observed in the final state. We will distinguish these experimentally observed particles $x$ from the parton-level particles $y$ produced at the interaction point, before hadronization and detection. $\alpha$ can refer to any set of parameters (e.g. the mass of a resonance) or to different models.
For hadron colliders, the likelihood of a hard scattering producing a partonic final state $y$ is proportional to the differential cross section defined as \begin{equation} d\sigma(q_1,q_2,y) = \frac{(2\pi)^4 |\mathcal{M}(q_1,q_2,y)|^2}{q_1 q_2 s} d\Phi(y), \end{equation} where $q_1$ and $q_2$ are the initial-state parton momentum fractions and $s$ is the squared center-of-mass energy. $d\Phi(y)$ is the $n$-body phase space of the final state $y$, while $|\mathcal{M}(q_1,q_2,y)|^2$ denotes the matrix element for the given process $\alpha$ (including the summation over spins and colors), usually computed numerically at leading order by packages such as MG5\_aMC@NLO~\cite{MG5}. The propagation of the parton-level 4-momenta $y$ to the experimentally observed ones $x$ involves the parton distribution functions (PDF) $f_{a}(q)$ (for each parton $q$ of flavor $a$), the efficiency $\epsilon(y)$ to reconstruct and select the hadronic state $y$, and the transfer function $T(x|y)$, normalized with respect to $x$. The latter parameterizes the parton shower, the hadronization and the detector response (whose resolution is limited and produces a smearing of the observed particle momenta). The probability $P(x|\alpha)$ results from a convolution of the differential cross section with the transfer function and a sum over initial states: \begin{small} \begin{eqnarray} P(x|\alpha) = \frac{1}{\sigma_{\alpha}^{vis}} \int_{q_1,q_2} \sum_{a_1,a_2} \int_{y} d\Phi(y) dq_1 dq_2 f_{a_1}(q_1) f_{a_2}(q_2) \frac{(2\pi)^4 |\mathcal{M}(q_1,q_2,y)|^2}{q_1 q_2 s} T(x|y) \epsilon(y), \label{eqn:MEM} \end{eqnarray} \end{small} where $\sigma_{\alpha}^{vis}$ stands for the visible cross section and ensures that the probability is normalized. It is often computed \textit{a posteriori} as $\sigma_{\alpha}^{vis} = \sigma_{\alpha} \langle\epsilon\rangle$, where $\sigma_{\alpha} = \int d\sigma_{\alpha}(y)$ is the total cross section and $\langle\epsilon\rangle$ is the average selection efficiency. In practice the integration and the computation of this factor are separated, which is why in the following we will omit it from Equation~\ref{eqn:MEM} and define the MEM weights as $W(x|\alpha) = \sigma_{\alpha}^{vis} \times P(x|\alpha)$. In addition, the weights can span several orders of magnitude, which is why most of the time we will use the event information defined as $I_{\alpha} = -\log_{10}(P(x|\alpha))$ or $I'_{\alpha} = -\log_{10}(W(x|\alpha))$, which only differ by a constant term.

The transfer function includes various complicated processes, and several assumptions are usually made to simplify its integration. To first order, the detection and measurement of each particle in the final state are independent, which allows the transfer function to be factorized over the different particles. This argument can be pushed even further by factorizing the different components of the measured 4-momentum. We therefore write the transfer function as \begin{eqnarray} T(x|y) &=& \prod_{i=1}^{n} T_i(x^i|y^i), \nonumber\\ T_i(x^i|y^i) &=& T_i^E(x^i|y^i)T_i^{\eta}(x^i|y^i)T_i^{\phi}(x^i|y^i), \end{eqnarray} where the index $i$ refers to the final-state particles. In most cases, the resolutions in $\eta$ and $\phi$ are very narrow and are parameterized as delta functions. On the contrary, the energy resolution depends on the nature of the particles and should reproduce the behavior of the simulated detector effects, typically Gaussian for fast simulations.
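To make this factorization concrete, the following minimal Python sketch evaluates such a transfer function for one event, with delta functions in $\eta$ and $\phi$ (so that only the energies enter) and a Gaussian energy response. The resolution parameterization \texttt{sigma\_E} is a placeholder chosen for illustration, not the binned, simulation-based functions used in this work.

\begin{verbatim}
import numpy as np

def transfer_energy(E_reco, E_parton, sigma_E=lambda E: 0.1 * np.sqrt(E)):
    """Gaussian energy response T_i^E(x|y); sigma_E is illustrative."""
    s = sigma_E(E_parton)
    norm = s * np.sqrt(2.0 * np.pi)
    return np.exp(-0.5 * ((E_reco - E_parton) / s) ** 2) / norm

def transfer(E_reco, E_parton):
    """Product over final-state particles; the eta and phi factors are
    delta functions and drop out once the angles are fixed."""
    return np.prod([transfer_energy(xr, yp)
                    for xr, yp in zip(E_reco, E_parton)])

# Example: two bjets with measured energies 98 and 45 GeV, evaluated
# against a parton-level configuration of 100 and 50 GeV.
print(transfer([98.0, 45.0], [100.0, 50.0]))
\end{verbatim}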
Note that these assumptions can break down when two objects have a small angular separation, which would require specific care so as not to impair the convergence and accuracy of the integration. In this paper we derived custom simulation-based binned transfer functions to account for potential asymmetries.

The high-dimension integration in Equation~\ref{eqn:MEM} requires the use of numerical integrators such as Vegas~\cite{VEGAS}. These tools implement adaptive Monte Carlo techniques, the basic principle of which is to randomly generate points at which the function one wishes to integrate is evaluated. With enough points, a relatively close approximation of the integral can be obtained, along with its uncertainty. However this method becomes extremely expensive in high-dimensional spaces: while flat regions of the phase-space only need a few points in order to get a good approximation of their integrals, regions where the function fluctuates a lot need to be well covered. Adaptive Monte Carlo techniques are iterative processes designed to populate the phase-space heterogeneously in order to decrease the variance of the integral. Even though they perform better than uniform sampling, they do not scale easily with the dimensionality, and the factorization assumption on which some are based --- e.g. importance sampling in Vegas --- makes them especially suboptimal if the integrand presents peaks that depend on several integration variables. In our case sharp peaks can arise from the propagators in the matrix element or from narrow transfer functions. The latter can already be mapped to integration variables in the classic parameterization of the phase-space \begin{equation} d\Phi = \left( \prod_{i=3}^{n} \frac{|\mathbf{p}_i|^2 d|\mathbf{p}_i| \sin\theta_i d\theta_i d\phi_i}{2 E_i (2\pi)^3} \right) (2\pi)^4 \delta^4\left(p_1+p_2 -\sum_{j=3}^{n} p_j \right), \label{eqn:phase_space} \end{equation} where the Dirac delta function ensures momentum conservation. Propagator enhancements then need to be addressed by inverting the Breit-Wigner resonances, and the delta functions need to be integrated out. In addition this parameterization includes invisible particles, i.e. particles that did not leave a trace in the detector. Neutrinos, particles outside the acceptance and initial-state partons must be taken into account, and the large volume that they represent heavily impacts the integration, unless kinematic constraints are used to remove these degrees of freedom.

\section{Fitting the MEM with a DNN}

Computing the weight of an event with MoMEMta can take from a few seconds to several minutes depending on the complexity of the process in question. This has to be repeated for each hypothesis $\alpha$ and for each event, and quickly becomes prohibitive. In a real-life analysis with several hypotheses and sometimes additional model parameters, this often makes the implementation of the method challenging. In a nutshell, the approach proposed in this study uses MoMEMta to process simulated samples and produce the event information $I'$ under different hypotheses. The result is then used together with the MoMEMta inputs --- the 4-momenta of each visible particle --- to train a DNN. As shown in what follows, the resulting network can be used instead of MoMEMta on a larger set of events and for different values of the model parameters.
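As a minimal illustration of this pipeline, the sketch below converts hypothetical MoMEMta weights into the regression target $I'$ and fits a small fully connected network with Keras. The file names, the feature array and the architecture are placeholders; the models actually selected are described in the next section and summarized in Table~\ref{tab:DNN_comp}.

\begin{verbatim}
import numpy as np
from tensorflow import keras

# Hypothetical inputs: MEM weights W(x|alpha) computed with MoMEMta and
# the corresponding pre-processed kinematic features (one row per event).
weights = np.load("momemta_weights.npy")     # placeholder file names
features = np.load("event_features.npy")

# Regression target: the event information I' = -log10(W).
# Invalid (non-converged, infinitesimal) weights are excluded.
valid = weights > 1e-300
target = -np.log10(weights[valid])

# Illustrative architecture, not the one selected by the scans.
model = keras.Sequential(
    [keras.layers.Dense(200, activation="relu") for _ in range(6)]
    + [keras.layers.Dense(1, activation="selu")])
model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
              loss="mse")
model.fit(features[valid], target, batch_size=512, epochs=100,
          validation_split=0.1,
          callbacks=[keras.callbacks.EarlyStopping(patience=10)])
\end{verbatim}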
The inputs of MoMEMta are the 4-momenta of all the observed particles, as well as the missing transverse energy (only its $P_T$ and $\phi$ angle); consequently these are the inputs that we want to provide to the DNN. Some rather standard pre-processing has been used to facilitate its task, in the spirit of Ref.~\cite{DataPreProcess}. Depending on the longitudinal momentum difference between the initial partons that collide in the detector, the particles produced might be boosted in one or the other direction. The network would still be able to learn the function of Equation~\ref{eqn:MEM}, but would also have to learn about the longitudinal boost itself, which hinders its ability to describe the interesting part of the matrix element. Using as inputs the $P_T$, $\eta$ and $\phi$ angles of each of the particles considerably improves the situation, since the $\Delta \eta$ between two particles is, to a good approximation, independent of the longitudinal boost. Furthermore, the detector has a cylindrical symmetry and we do not want the network to learn about an arbitrary reference in $\phi$. Any relative quantity defined on the angles could in principle be used, for example the $\Delta \phi$ angle relative to an arbitrarily selected particle. This parameterization has been shown to yield better results than the raw 4-momenta, without loss of generality. While in the integration of the MEM the ordering of the particles is important, because all permutations of indistinguishable particles --- e.g. two jets originating from a quark and an antiquark --- need to be considered, there is no notion of ordering in the inputs of a fully connected network, so this does not have to be taken into account.

As the targets span several orders of magnitude, we follow the example set by Ref.~\cite{MoMEMta} and regress on the event information $I' = -\log_{10}(\textrm{weight})$ instead. Similar approaches have been studied in detail in the literature \cite{outputPreprocess}. In some cases, the computation does not converge before reaching the maximum number of iterations. In those cases the weights have non-physical infinitesimal values with even smaller uncertainties, and are referred to as \textit{invalid}. These weights are naturally not included in the learning process, but the DNN can be evaluated on these events as on any other. In order to probe the behavior of both the MEM and the DNN for these events, some of the invalid weights can be recomputed in MoMEMta with more sampling points and iterations, and evaluated with the DNN trained on valid weights.

An inherent quality of the DNN approach is the ability to interpolate on inputs that were not seen during the training. In the classical method, parameter scans require generating weights at different parameter values and eventually performing an interpolation. But the cost grows with the granularity and the dimensionality of the parameter space, which can be prohibitive. The advantage of the DNN is that no integration with MoMEMta is necessary anymore once the model is built, even for new events and new parameter values --- as long as one stays clear of the extrapolation regime where DNNs are known to be untrustworthy. Note that in practice the MEM probabilities --- and by extension the MEM information $I'$ --- are rarely used directly in an analysis, but are combined in an application-dependent procedure. A simple comparison of their values is thus not a sufficient criterion to state that the method we propose can be used without losses.
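The pre-processing described above can be sketched as follows, assuming a hypothetical array of $(E, p_x, p_y, p_z)$ 4-momenta with one row per event and one block per particle:

\begin{verbatim}
import numpy as np

def preprocess(p4):
    """p4: array of shape (n_events, n_particles, 4) with (E, px, py, pz).
    Returns per-particle pT and eta, plus the azimuthal angle of each
    particle relative to an (arbitrary) reference particle."""
    E, px, py, pz = np.moveaxis(p4, -1, 0)
    pt = np.hypot(px, py)
    eta = np.arcsinh(pz / pt)               # assumes pt > 0
    phi = np.arctan2(py, px)
    # Delta-phi w.r.t. the first particle, wrapped into [-pi, pi)
    dphi = (phi - phi[:, :1] + np.pi) % (2.0 * np.pi) - np.pi
    return np.concatenate([pt, eta, dphi[:, 1:]], axis=1)

# Toy 4-momenta, for illustration only.
rng = np.random.default_rng(0)
X = preprocess(rng.normal(size=(1000, 4, 4)) + 5.0)
\end{verbatim}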
We have used the Keras~\cite{keras} library interfacing Tensorflow~\cite{tensorflow} to train the DNNs. The datasets were separated into three sets: one for the training ($\sim 70\%$), one for the hyperparameter scans used in model selection ($\sim 10\%$) and one for the performance evaluation of the selected model ($\sim 20\%$). All the plots in the paper use events from the last set. All this is done to detect any overfitting of the network, i.e. the loss of generalization to unknown data that occurs when statistical fluctuations of the training data are learnt in addition to, or instead of, the general features of its underlying distribution.

\section{Proof of concept: the \texorpdfstring{$llb\bar{b}$}{llbb} topology} \label{sec:2HDM}

As a proof of concept, we apply the method proposed in this paper to several processes producing two opposite-sign leptons and two jets initiated by b quarks (bjets) as detected particles. This topology is interesting because its main contributions --- Drell-Yan $Z\rightarrow l^+l^-$ production with additional jets, and top quark pair production $t\bar{t}$ with leptonic decays of the W bosons from both top quarks --- are very dissimilar in the way they are treated in the integration. The computation for the former is rather straightforward, as no missing particles are produced, while the latter contains undetected neutrinos whose degrees of freedom need to be accounted for. We then consider the resonant $H \rightarrow Z (\rightarrow l^+ l^-) A (\rightarrow b \bar{b})$ process that arises in the context of Two Higgs Doublet Models (2HDM) and has been studied by the ATLAS~\cite{AtlasHZA} and CMS~\cite{HZA} collaborations. The multiple resonances in this process present an interesting challenge from the integration point of view, and the unknown masses of the $H$ and $A$ bosons will illustrate the power of our method when the parameter space is multi-dimensional, which is precisely where the classical integration is impractical.

A summary of the number of events for which the weights have been computed is given in Table~\ref{tab:size}. Only 500K events of the $t\bar{t}$ process have been used in the training, so as not to unbalance the network. The $H \rightarrow ZA$ samples and weights are split into 23 mass configurations, up to \SI{1}{\TeV} for both $m_A$ and $m_H$. The event information $I'_\alpha$ will be evaluated for every event --- $t\bar{t}$, Drell-Yan and $H \rightarrow ZA$ --- and, for the $H \rightarrow ZA$ case alone, 23 times with different mass parameters. The number of invalid weights and how many were recomputed with more iterations are also given in the table. The non-convergence indicates that it is very difficult for MoMEMta to associate an event with the process for which the weight is computed, likely because the event is too incompatible with the hypothesis. The changes of variables introduced in the computation are an attempt to adapt the axes of the integration to the kinematics of the process in question. If the event is pathological or comes from another process, the integrator might not generate points in the large-value regions. Therefore more iterations or more points are needed, and the integration can reach the threshold on the number of steps before convergence. Sometimes the threshold might simply be too strict while the integral was very close to converging; this is probably the case for the invalid Drell-Yan weights, given their relatively small number.
In addition, the fact that all of them converged at the recomputation step, with more time and points, supports this explanation. On the other hand, it is possible that the phase-space regions to be populated for the event are very far in the tails of the sampling distribution of the process under investigation. This is probably the case for the portion of the $t\bar{t}$ weights that did not converge even with more points, especially for $H \rightarrow ZA$ events with invalid weights, which mostly occur at high $M_H$ and $M_A$ (two thirds of them correspond to cases where both masses are at the \SI{}{\TeV} scale). This is because the kinematics of the leptons and bjets are incompatible with a \SI{173}{\GeV} precursor, which makes the task of MoMEMta more complicated. This does not happen for the Drell-Yan weights because the kinematic range of the products is more flexible. A study of these invalid weights is presented in a dedicated section.

\begin{table} \caption{Sample sizes for the three types of weights and the three samples. \textit{Valid} represents weights that converged, \textit{Invalid} the ones that did not and \textit{Invalid recomputed} the ones in the invalid category that converged when recomputed.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|l|ccc|ccc|cc|} \hline Process & \multicolumn{3}{|c|}{Drell-Yan weights} & \multicolumn{3}{|c|}{$t\bar{t}$ weights} & \multicolumn{2}{|c|}{$H \rightarrow ZA$ weights} \\ \hline Weights & Valid & Invalid & Invalid & Valid & Invalid & Invalid & Valid & Invalid \\ & & & recomputed & & & recomputed & & \\ \hline Drell-Yan sample & 305407 & 12 & 12 & 305407 & 913 & 480 & 38642 & 3842 \\ $t\bar{t}$ sample & 2903472 & 109 & 109 & 2903472 & 751 & 545 & 39441 & 14441 \\ $H \rightarrow ZA$ sample & 209130 & 10 & 10 & 209130 & 37084 & 10040 & 232719 & 32274 \\ \hline \end{tabular}} \label{tab:size} \end{table}

We made sure the events were weighted according to the sample sizes of the three categories, so that they all have the same importance in the training of the DNN. The goal for the $H \rightarrow ZA$ weights is to include $M_H$ and $M_A$ in the inputs of the DNN and to provide the information $I'$ as target. This parametric DNN~\cite{paramDNN} is very interesting for parameter scans (for example in a maximum likelihood context), which become prohibitive with the classical approach. Such a DNN is capable of providing the information for parameter values it has not seen during the training, and the interpolation that must be performed in addition to the integration in the classical approach is thus embedded in our method. In the next sections we detail the computation process and the regression results for each type of weight. A summary of the computation times and DNN topologies is given in Table~\ref{tab:DNN_comp}.

\subsection{Drell-Yan weights} \label{sec:DY_weights}

The Drell-Yan weights are the easiest to compute, since the topology of the process is quite flexible in the range of allowed kinematics of the products. This makes the task of MoMEMta relatively easy when the correct change of variables is applied. In practice the evaluation of the Drell-Yan weights takes on average a few seconds per event; some rare events in the tail of the distribution can take a few tens of seconds. The $I'_{DY}$ distributions are given in Figure~\ref{fig:DY_weight} for the three types of samples. The agreement between the weights from the MEM computed with MoMEMta and the ones from the DNN is very good.
The best model selected during the hyperparameter scan has six layers of 200 neurons, with \textit{relu}~\cite{relu} and \textit{selu}~\cite{selu} activation functions for the hidden and output layers respectively; the optimizer for the gradient descent was Adam~\cite{adam}. The Dropout~\cite{dropout} and L2~\cite{l2} regularization techniques did not improve the efficiency of the training. We emphasize that, in order to detect overfitting, these events were never seen by the DNN, neither during the training nor in the model selection. As expected, the Drell-Yan events have higher weights than the $t\bar{t}$ and $H \rightarrow ZA$ ones, because they are more compatible with the Drell-Yan hypothesis.

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000011.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000043.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000027.png} \caption{Distributions of the event information $I'_{DY}$ from both MoMEMta and the DNN for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:DY_weight} \end{figure}

\subsection{\texorpdfstring{$t\bar{t}$}{ttbar} weights} \label{sec:TT_weights}

Due to the more complicated topology of the $t\bar{t}$ process --- with fully leptonic decays, which will be implicit from now on --- and the narrow top resonance, the $t\bar{t}$ weights are more intricate to compute. However, by taking advantage of the Breit-Wigner resonances in a change of variables, the computation time can be kept at a reasonable level (about 3.2 times slower than for the Drell-Yan weights). The corresponding $I'_{t\bar{t}}$ distributions are shown in Figure~\ref{fig:TT_weight}. The contrast in the weight distributions between the Drell-Yan and $t\bar{t}$ samples is less obvious, but a longer tail can be observed for the Drell-Yan events. The double peak of the $H \rightarrow ZA$ case comes from the different mass configurations $M_H$ and $M_A$ that constitute this sample: high (pseudo)scalar masses lead to low weights, while low masses are more consistent with the $t\bar{t}$ hypothesis. Overall the agreement between the classically computed weights and the ones from the DNN is good. The best model is close to the one for the Drell-Yan weights: it contains eight layers of 500 neurons and a small L2 regularization factor, probably to counter the overfitting of such a deep network.

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000012.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000044.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_DY_TT-000028.png} \caption{Distributions of the event information $I'_{t\bar{t}}$ from both MoMEMta and the DNN for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:TT_weight} \end{figure}

\subsection{\texorpdfstring{$H\rightarrow ZA \rightarrow llbb$}{H->llbb} weights} \label{sec:signal_weights}

The case of the $H \rightarrow ZA$ hypothesis is more complicated due to its hypothetical nature, which makes this process dependent on unknown parameters. In our case we varied only two parameters, the masses $M_H$ and $M_A$, while the others were kept fixed. We have focused on 23 configurations, both for the event generation and for the MEM computation.
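Anticipating the parametric training described below, in which each event enters once per mass configuration, the assembly of such a training set can be sketched as follows (all arrays are hypothetical stand-ins for the actual samples):

\begin{verbatim}
import numpy as np

# Hypothetical inputs: kinematic features (n_events, n_features) and the
# event information I' for each of the 23 (M_H, M_A) configurations.
n_events, n_features, n_cfg = 1000, 11, 23
rng = np.random.default_rng(0)
features = rng.normal(size=(n_events, n_features))      # toy stand-ins
info = rng.uniform(15.0, 30.0, size=(n_events, n_cfg))  # toy I' values
masses = np.stack([rng.uniform(125, 1000, n_cfg),       # toy (M_H, M_A)
                   rng.uniform(30, 1000, n_cfg)], axis=1)

# Repeat each event once per configuration and append (M_H, M_A).
X = np.concatenate([np.repeat(features, n_cfg, axis=0),
                    np.tile(masses, (n_events, 1))], axis=1)
y = info.reshape(-1)  # row-major: matches the event/config ordering of X
\end{verbatim}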
While some integration tricks were beneficial for the other hypotheses --- mostly using the delta function for momentum conservation in Equation~\ref{eqn:MEM} to remove some degrees of freedom, such as the bjet momentum magnitudes for the Drell-Yan process, or replacing the integration over the invisible particles by an integration over the resonances in the $t\bar{t}$ process --- the absence of invisible particles and the multiple resonances of the $H \rightarrow ZA$ process prevent these tricks from being profitable. This has a heavy impact on the computation time: about 50000 CPU days for all the samples (on average about ten minutes per event per weight).

Figure~\ref{fig:signal_stack} shows the different contributions to the weight distributions from the events generated with different resonance masses. As expected, the event information is lower (hence the probability higher) when the masses used for the generation match the hypothesis used for the weight calculation, although the dependence on the visible cross section is not taken into account here and might have a small effect. This can be seen in the left part of Figure~\ref{fig:signal_stack}, where the low-mass events have higher weights. On the contrary, the high-mass events have the highest weights in the right part of the figure, as they match more closely the topology of the high-mass hypothesis. Interestingly enough, events with the same $M_A$ but a different $M_H$ compared to the values included in the weight tend to have slightly higher probabilities.

From a purely technical point of view, the best model for the $H \rightarrow ZA$ case is intrinsically similar to those for the Drell-Yan and $t\bar{t}$ cases. It consists of eight layers of 300 neurons with \textit{relu} and \textit{selu} activation functions for the hidden and output layers respectively, trained with a small L2 factor. It is however conceptually radically different. In addition to the particle inputs, it was also given the mass parameters corresponding to each $H \rightarrow ZA$ weight provided as a target. It has thus been trained on each event 23 times, with different $M_H$, $M_A$ and target weights. Not only has the DNN learned the relation between the kinematics and the weight, but also the dependence on the process parameters.

The comparison between the weights from MoMEMta and the DNN is shown in Figure~\ref{fig:signal_weight} for a specific mass configuration ($M_H = \SI{800}{\GeV}$, $M_A = \SI{400}{\GeV}$). As expected, the $t\bar{t}$ events have small $H \rightarrow ZA$ weights when the mass parameters are high; this is the reciprocal of the fact that high-mass signals have low $t\bar{t}$ weights. Drell-Yan events exhibit the same behavior, since they rarely produce particles with high momenta, which is typically the case for $H \rightarrow ZA$ with high-mass resonances. Notice also that the higher weights for Drell-Yan events occur when the difference between the mass hypotheses $M_H-M_A$ is large: in that case, the $Z$ boson acquires a large boost not commonly observed in Drell-Yan events.

\begin{figure}[htp] \centering \includegraphics[width=.5\textwidth]{valid_weights_HToZA-000188.png}\hfill \includegraphics[width=.5\textwidth]{valid_weights_HToZA-000205.png} \caption{Distributions of the event information $I'_{H\to ZA}$ for two mass hypotheses --- ($M_A = \SI{100}{\GeV}$, $M_H = \SI{250}{\GeV}$) (left) and ($M_A = \SI{50}{\GeV}$, $M_H = \SI{1000}{\GeV}$) (right).
On each plot, the contributions from the samples with different masses used for the event generation have been stacked on top of each other for clarity.} \label{fig:signal_stack} \end{figure}

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{valid_weights_HToZA-000063.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_HToZA-000293.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_HToZA-000179.png} \caption{Distributions of the event information $I'_{H\to ZA}$ for the specific mass point ($M_A = \SI{400}{\GeV}$, $M_H = \SI{800}{\GeV}$) from MoMEMta and the DNN for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:signal_weight} \end{figure}

In our specific case the parameter space is two-dimensional. To test the interpolation capabilities of the network, a new set of weights was computed with parameters $M_H = \SI{600}{\GeV}$ and $M_A = \SI{250}{\GeV}$ (never seen during the training) on a small subset of the initial samples (1K events per $H \rightarrow ZA$ sample, 5K events for the Drell-Yan and $t\bar{t}$ samples). For reference, the Delaunay technique --- a piecewise-linear interpolation --- was employed to obtain the weights at these parameter values from nearby computed points. This is compared in Figure~\ref{fig:interpolation} to the DNN applied to these events without retraining. Both methods perform equally well, demonstrating that the DNN properly interpolates the parameter space from the observed samples. Note that while the Delaunay technique is relatively fast, its main bottleneck is that it requires some points to start from, which means that each event still needs to be computed for several mass points with MoMEMta; in addition, the required granularity will still scale exponentially with the dimension of the parameter space. On the contrary, there is no need to use MoMEMta anymore once the DNN is trained, and while the two methods give the same result, the DNN can be orders of magnitude faster, especially in a multi-dimensional parameter space.

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{delaunay-000078.png}\hfill \includegraphics[width=.33\textwidth]{delaunay-000280.png}\hfill \includegraphics[width=.33\textwidth]{delaunay-000179.png} \caption{Distributions of the event information $I'_{H\to ZA}$ for the mass point ($M_A = \SI{250}{\GeV}$, $M_H = \SI{600}{\GeV}$) not seen during the training. The true distribution from MoMEMta is in green, the Delaunay interpolation using the other weights is in red, and the output of the DNN (not trained at this mass point) is in blue. The three samples are the Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) cases.} \label{fig:interpolation} \end{figure}

\begin{table}[htp] \caption{Summary of evaluation and training times, and DNN topology. The topology includes the number of layers and neurons per layer as well as the L2 regularization value; the activation functions are always relu and selu for the hidden and output layers respectively. Additionally, all DNNs have been trained with a batch size of 512 and an initial learning rate of 0.001. A learning rate scheduler and early stopping were implemented to stop the training at the plateau of the validation loss curve before the 100-epoch limit (even much sooner than that for the $H \rightarrow ZA$ case).
The training size for the $H \rightarrow ZA$ DNN takes into account that each event is seen once per mass configuration.} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{MEM Hypothesis} & \multirow{2}{*}{MoMEMta evaluation time} & \multicolumn{3}{c}{DNN topology} \vline & \multicolumn{2}{c}{DNN training time} \vline & \multirow{2}{*}{DNN evaluation time} \\ \cline{3-7} & & $N_{layers}$ & $N_{neurons}$ & L2 & Training size & Time / epoch & \\ \hline Drell-Yan & \SI{3.6}{\second} / event & 6 & 200 & 0 & 800K events & \SI{5}{\minute} & \SI{110}{\micro\second} / weight \\ \hline $t\bar{t}$ & \SI{12}{\second} / event & 8 & 500 & 0.1 & 800K events & \SI{10}{\minute} & \SI{150}{\micro\second} / weight \\ \hline $H \rightarrow ZA$ & \SI{600}{\second} / event & 8 & 300 & 0.1 & 5.7M events & \SI{40}{\minute} & \SI{120}{\micro\second} / weight / parameter \\ \hline \end{tabular}} \label{tab:DNN_comp} \end{table}

\subsection{Applications and studies} \label{applications}

While the weight distributions presented so far are a good indicator to evaluate the regression, they do not bring information about the event-by-event agreement. It is also difficult to evaluate whether the residual difference is physically relevant. In the following, we look at typical applications of the MEM in an analysis, with the information used as an input of an MVA discriminant or interpreted as a likelihood. This will also allow a better understanding of the status of the invalid weights and of the sensitivity to systematic uncertainties.

\subsubsection{Discriminant} \label{sec:discriminant}

A very simple discriminant between two hypotheses $\alpha$ and $\beta$ can be defined as \begin{equation} \mathcal{D}(x) = \frac{P(x|\alpha)}{P(x|\alpha) + P(x|\beta)} = \frac{W(x|\alpha)}{W(x|\alpha) + \gamma W(x|\beta)} \textrm{ where } \gamma = \frac{\sigma^{vis}_{\beta}}{\sigma^{vis}_{\alpha}}. \label{eqn:discriminant} \end{equation} The discriminant is close to one or zero depending on whether hypothesis $\alpha$ or $\beta$ prevails, respectively. For illustration, $\alpha$ and $\beta$ can be taken to be the $t\bar{t}$ and Drell-Yan processes respectively. As an evaluation criterion for this discriminant we have used the Receiver Operating Characteristic (ROC) curve. While the factor $\gamma$ in the denominator does not impact the ROC curve, it does affect the shape of the discriminant; we have arbitrarily taken $\gamma = 1$ here. The ROC curves obtained with the weights coming from both the MEM integration and the DNN are shown in Figure~\ref{fig:ROC}. The weights produced by the DNN actually provide a slightly better discriminant than the ones from MoMEMta. The difference can be traced to outliers present in the MoMEMta calculation; the DNN behavior is smoother by nature and has fewer of them.

\begin{figure}[htp] \centering \includegraphics[width=.6\textwidth]{ROC_valid_weights_DY_TT.png} \caption{ROC curve of the discriminant. The x-axis represents the probability for an event to be classified in the correct process ($t\bar{t}$ or Drell-Yan) while the y-axis represents the misidentification probability. The AUC score is the area under the curve.} \label{fig:ROC} \end{figure}

The discriminant in Equation~\ref{eqn:discriminant} is limited to a classification between two hypotheses, which is a bit restrictive. In addition, its simple definition might not make use of the full information encapsulated in the MEM weights.
A discriminant for a higher-dimensional parameter space could be generalized from it, but is not guaranteed to be optimal. In this paper we decided to follow a different path by using a classifier based on the MEM weights, leaving to it the determination of an optimal decision function. A natural choice here is a classifying DNN with three output nodes whose inputs are the weights for the three processes under study. The DNN is trained to maximize the probability of correct identification using a binary cross-entropy loss function.

There are two ways to define the classifier inputs based on the parametric definition of the $H \rightarrow ZA$ weights. A \textit{global} classifier is used to find an excess in the whole mass plane, regardless of its location. In the spirit of the analysis in Reference~\cite{HZA}, and anticipating the search for a specific resonance, a \textit{parametric} classifier is given the knowledge of the mass-plane point of interest and can therefore be used to find an excess at a given place. On the one hand the global classifier is less sensitive, because the excess needs to be large to be noticeable in the whole plane; on the other hand there is no need to correct for the look-elsewhere effect, which would be the case for the parametric classifier that is evaluated at several locations.

The inputs of the global classifier are the Drell-Yan, $t\bar{t}$ and the 23 $H \rightarrow ZA$ weights. As we have no knowledge of the actual values of the masses during the training on simulated events, we need a good enough coverage of the parameter space: at least one input should provide sensitivity to a given hypothesis. The classification probabilities are given in Figure~\ref{fig:prob} and the corresponding ROC curves are compared in Figure~\ref{fig:MultiROC_global}, using both the weights from MoMEMta and from the regressive DNNs.

The parametric classifier also takes as inputs the Drell-Yan and $t\bar{t}$ weights, but only one $H \rightarrow ZA$ weight together with the corresponding $M_A$ and $M_H$ parameters. For $H \rightarrow ZA$ events, the actual parameter values are used, while Drell-Yan and $t\bar{t}$ events can either be attributed a random parameter point --- in the same proportions as in $H \rightarrow ZA$ events --- or be repeated for every parameter point found in the $H \rightarrow ZA$ events. The latter was used to artificially increase the statistics. The associated ROC curves are shown in Figure~\ref{fig:MultiROC_param}, averaging over all the mass points. The dependence of the performance on these mass points is illustrated by the AUC score in Figure~\ref{fig:AUC_map}. Naturally, the best performance is achieved away from the regions heavily populated by other processes.

As a comparison, the ROC curve of a simple classifier with only the Drell-Yan and $t\bar{t}$ weights as inputs is shown in Figure~\ref{fig:MultiROC_back}. Even though this simple classifier is suboptimal for $H \rightarrow ZA$, it reaches reasonable performance. Additionally, the Drell-Yan and $t\bar{t}$ classifications improve when provided with the $H \rightarrow ZA$ weight information: the MEM weights can provide discriminating power even for processes other than the one used in their computation. All the classifiers are trained with weights from MoMEMta, and the ROC curves shown in Figure~\ref{fig:MultiROC} are evaluated with weights from both methods. The regression errors introduced when using the regressive DNNs are propagated through the classifiers, but the loss in performance is negligible.
Only in the global classifier can the MEM and DNN curves be distinguished, due to the residual differences already highlighted in Figure~\ref{fig:signal_weight}, which add up over all the $H \rightarrow ZA$ weight inputs.

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{valid_weights_class_global-000012.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_class_global-000042.png}\hfill \includegraphics[width=.33\textwidth]{valid_weights_class_global-000027.png} \caption{Distributions of the class probabilities of the global classifier applied to weights coming from both MoMEMta and the DNN for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:prob} \end{figure}

\begin{figure}[htp] \centering \includegraphics[width=\textwidth]{AUC_map.png} \caption{Distribution of the AUC score of the $H \rightarrow ZA$ classification for the weights coming from MoMEMta (left) and the DNN (right).} \label{fig:AUC_map} \end{figure}

\begin{figure}[htp] \centering \begin{subfigure}[b]{0.32\linewidth} \centering\includegraphics[width=\linewidth]{ROC_globalclass_backAndsig.png} \caption{\label{fig:MultiROC_global}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering\includegraphics[width=\linewidth]{ROC_paramclass.png} \caption{\label{fig:MultiROC_param}} \end{subfigure} \begin{subfigure}[b]{0.32\linewidth} \centering\includegraphics[width=\linewidth]{ROC_globalclass_onlybackweights.png} \caption{\label{fig:MultiROC_back}} \end{subfigure}% \caption{ROC curves of the global (a), parametric (b) and simple (c) classifiers. The fraction of events incorrectly classified as coming from a given process is represented as a function of the probability to be correctly identified as such, for each of the three processes. The ROC curves are given both for the weights from MoMEMta (solid lines) and for the ones coming from the DNN (dashed lines). The AUC score is the area under each curve.} \label{fig:MultiROC} \end{figure}

\subsubsection{Invalid weights}

As discussed earlier, the MoMEMta integration may fail and result in ``invalid weights''. It is nevertheless trivial to evaluate the DNN on the corresponding events. The result can be compared to the output of MoMEMta when the integration is made to converge by increasing the number of sampling points, as done in Figures~\ref{fig:invalid_DY} and~\ref{fig:invalid_TT}, respectively for the Drell-Yan and $t\bar{t}$ weights. In both cases it is obvious that the DNN does not agree with MoMEMta for these events with invalid weights. In the case of Drell-Yan, the DNN seems to deliver values similar to what is obtained for normal events, while MoMEMta returns small probabilities even after allowing more iterations. The picture is quite different for the $t\bar{t}$ case, where the network returns consistently smaller weights (even though these events were never seen during training). The question of what is happening with these events, and whether we can trust MoMEMta with these very small weights, remains open at this stage. Applying the discriminant from Equation~\ref{eqn:discriminant} to compare the invalid weights computed with MoMEMta and the DNN, a much better performance is obtained with the DNN inputs than with the MoMEMta inputs (Figure~\ref{fig:cases_ROC}). This may suggest that the DNN provides more reliable information than MoMEMta for these events, but more studies are needed to reach a conclusive answer.
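For illustration, the discriminant of Equation~\ref{eqn:discriminant} and the corresponding ROC curve can be computed from arrays of weights as in the sketch below, where toy stand-ins replace the actual MEM weights and labels:

\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def discriminant(w_alpha, w_beta, gamma=1.0):
    """Two-hypothesis MEM discriminant, with the arbitrary gamma = 1."""
    return w_alpha / (w_alpha + gamma * w_beta)

# Toy stand-ins for MEM weights under the two hypotheses and for the
# true labels (1 = alpha-like event); real inputs would come from
# MoMEMta or from the regression DNNs.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=2000)
w_alpha = 10.0 ** -(20.0 - 2.0 * labels + rng.normal(size=2000))
w_beta = 10.0 ** -(18.0 + 2.0 * labels + rng.normal(size=2000))

d = discriminant(w_alpha, w_beta)
fpr, tpr, _ = roc_curve(labels, d)
print("AUC =", roc_auc_score(labels, d))
\end{verbatim}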
\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{invalid_DY_weights-0000010.png}\hfill \includegraphics[width=.33\textwidth]{invalid_DY_weights-0000032.png}\hfill \includegraphics[width=.33\textwidth]{invalid_DY_weights-0000021.png} \caption{Distributions of the event information $I'_{DY}$ for events where the integration initially failed, either from MoMEMta (blue) or from the DNN trained only on valid weights (red), for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:invalid_DY} \end{figure}

\begin{figure}[htp] \centering \includegraphics[width=.33\textwidth]{invalid_TT_weights-000011.png}\hfill \includegraphics[width=.33\textwidth]{invalid_TT_weights-000033.png}\hfill \includegraphics[width=.33\textwidth]{invalid_TT_weights-000022.png} \caption{Distributions of the event information $I'_{t\bar{t}}$ for events where the integration initially failed, either from MoMEMta (blue) or from the DNN trained only on valid weights (red), for the three samples: Drell-Yan (left), $t\bar{t}$ (middle) and $H \rightarrow ZA$ (right) events.} \label{fig:invalid_TT} \end{figure}

\begin{figure}[htp] \centering \includegraphics[width=.49\textwidth]{ROC_invalid_DY_weights.png} \hfill \includegraphics[width=.49\textwidth]{ROC_invalid_TT_weights.png} \caption{ROC curves of the discriminant for two cases: the invalid Drell-Yan weights (left) and the invalid $t\bar{t}$ weights (right). In each case two ROC curves are displayed, one for the discriminant based on the MoMEMta weights and one for the discriminant based on the DNN weights.} \label{fig:cases_ROC} \end{figure}

\subsubsection{Effect of the systematics}

Already heavy in terms of computing in its simplest form, the MEM becomes even more demanding when the effect of systematic uncertainties has to be evaluated. Indeed, the effect of uncertainties affecting the event kinematics cannot be propagated without recomputing the matrix element integral, with relatively few opportunities to optimize this calculation. While the DNN ansatz proposed here alleviates the impact of this additional integration, it is important to verify that the regression performed during the training phase is robust against these systematic effects too. As an example, we look at the jet energy scale (JES), which is potentially among the most dangerous effects due to the rather poor resolution of jets compared to leptons in hadron collider experiments. We do not consider the impact of the jet energy resolution, which is mostly covered by the transfer functions. To emulate a potential JES shift, we have applied an upward scaling of the jet energy by 10\% --- which is an extreme case --- to each of the bjets in a subset of the events, and computed the corresponding new weights both with MoMEMta and with the existing DNN parameterization (thus without retraining). The comparison of the weights with and without the JES shift is shown in Figure~\ref{fig:JES_weights}. The regression error itself is not significantly impacted, as can be seen from the DNN bias and resolution in Table~\ref{tab:JESimpact}; this shows that the DNN is able to properly handle these modified events. In addition, the discriminant from Section~\ref{sec:discriminant} has been evaluated using both MoMEMta and DNN inputs, for nominal and modified events. The associated ROC curves are shown in Figure~\ref{fig:JES_ROC}.
This specific discriminant appears to be robust against variations of the jet energy scale, both in its traditional implementation and when using the DNN ansatz as input.

\begin{figure}[htp] \centering \begin{subfigure}[b]{0.42\linewidth} \includegraphics[width=\textwidth]{JEC_weights-000017.png} \caption{\label{fig:JES_weights}} \end{subfigure} \begin{subfigure}[b]{0.53\linewidth} \includegraphics[width=\textwidth]{ROC_JEC.png} \caption{\label{fig:JES_ROC}} \end{subfigure} \caption{Comparison of the effect of the JES corrections on the background weights produced by MoMEMta and the DNN. The left figure illustrates the change in event information when the JES corrections are applied, for both weights and both methods. The right figure shows the ROC curve of the discriminant based on these weights.} \label{fig:JES_comparison} \end{figure}

\begin{table}[htp] \caption{Regression bias and resolution in the event information when replacing the integration with MoMEMta by the DNN ansatz, for the two SM weights, with nominal and JES-shifted events.} \centering \begin{tabular}{|c|c|c|} \hline & Regression bias & Regression resolution \\ \hline Nominal Drell-Yan & -0.1243 & 0.1383 \\ \hline Shifted JES Drell-Yan & 0.0049 & 0.1351 \\ \hline Nominal $t\bar{t}$ & -0.2758 & 0.4439 \\ \hline Shifted JES $t\bar{t} $ & -0.1659 & 0.4137 \\ \hline \end{tabular} \label{tab:JESimpact} \end{table}

\subsubsection{Likelihood scan}

A likelihood can be built from the MEM probabilities as \begin{eqnarray} L(x|\alpha) &=& \prod_{i=1}^{n} P(x_i|\alpha), \label{eqn:likelihood} \end{eqnarray} where the product runs over the $n$ measured events. This likelihood peaks around the parameter $\alpha$, which can be any measurable physics quantity such as a mass or a coupling, and which in the context of the $H \rightarrow ZA$ hypothesis represents ($M_A$,$M_H$). It is expected that events generated from this process will produce a likelihood peaked in the two-dimensional parameter space, with a width roughly equal, for $n=1$, to the experimental resolution of the invariant masses $m_{jj}$ and $m_{lljj}$ used as estimators of the parameters, and improving when more events are taken into account. The log-likelihood on simulated events is defined as \begin{eqnarray} -\log(L(x|\alpha)) &=& \frac{1}{n}\sum_{i=1}^{n} \left[ -\log(W(x_i|\alpha)) \right] + \log(\sigma_{\alpha}^{vis}), \label{eqn:loglikelihood} \end{eqnarray} where the geometric mean of the likelihood is used to evaluate the resolution that would be obtained for one measured $H \rightarrow ZA$ event with no background. In this expression, $-\log(W(x_i|\alpha))$ is the output of the DNN. Had it been computed with MoMEMta, each event would have had to be integrated for several values of $\alpha = (M_A,M_H)$, with a granularity fine enough to allow the fit of a potential peak. In this case the computation time grows exponentially with the dimensionality of the parameter space; in our two-dimensional case it is already a major pitfall. The DNN, on the other hand, can evaluate any event unseen during training on a grid of parameter points with the same event inputs. The grid can be made arbitrarily fine thanks to the non-linear interpolation property of the DNN, for a very low cost in computation time. The number of evaluations still scales exponentially with the dimensionality of the parameter space, but evaluating the DNN on batches of events allows one to take advantage of parallelization to mitigate this dependence.
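A sketch of such a grid evaluation with the parametric DNN is given below; \texttt{model} stands for the trained parametric regressor and \texttt{features} for the kinematic inputs of the measured events, both hypothetical here. Only the first term of Equation~\ref{eqn:loglikelihood} is computed, up to the constant factor $\ln 10$, since the DNN returns $I' = -\log_{10} W$:

\begin{verbatim}
import numpy as np

def likelihood_scan(model, features, mH_grid, mA_grid):
    """Average event information on a (M_H, M_A) grid; the minimum
    locates the likelihood peak (visible cross-section term omitted)."""
    scan = np.full((len(mH_grid), len(mA_grid)), np.nan)
    n = len(features)
    for i, mH in enumerate(mH_grid):
        for j, mA in enumerate(mA_grid):
            if mA >= mH:          # keep only the physical region
                continue
            params = np.broadcast_to([mH, mA], (n, 2))
            inputs = np.concatenate([features, params], axis=1)
            scan[i, j] = model.predict(inputs, verbose=0).mean()
    return scan

# e.g. scan = likelihood_scan(model, features,
#                             np.linspace(125, 1000, 60),
#                             np.linspace(30, 900, 60))
\end{verbatim}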
The second term of Equation~\ref{eqn:loglikelihood} is important and must be evaluated separately. In our specific case the definition given in Section~\ref{sec:MEM} translates to \begin{eqnarray} \sigma_{\alpha}^{vis} = \textrm{Acceptance} \times \sigma(H \ \textrm{production}) &\times& BR(H \rightarrow ZA) \times BR(A \rightarrow bb) \times BR(Z \rightarrow ll). \label{eqn:visible_xsec} \end{eqnarray} The acceptance is measured on simulation, and the theoretical cross section must be multiplied by the branching ratios of the particular channel being looked at. The production cross section and the branching ratio of $A \rightarrow bb$ are very model dependent and are not taken into account. The branching ratio of $H \rightarrow ZA$ is mostly kinematics dependent and was kept; however, a model-dependent effect can be seen when $M_H = 2 \times M_A$, where the $H \rightarrow AA$ process becomes relevant. The resulting likelihood is presented in Figure~\ref{fig:likelihood}.

\begin{figure}[htp] \centering \includegraphics[width=.5\textwidth]{likelihood_mH_300_mA_100-000001.png}\hfill \includegraphics[width=.5\textwidth]{likelihood_mH_500_mA_300-000001.png} \caption{Log-likelihood scans for two different $H \rightarrow ZA$ samples.} \label{fig:likelihood} \end{figure}

The profile likelihoods in both $M_H$ and $M_A$ are shown in Figure~\ref{fig:resolution}. For each profile, several values of the other parameter are displayed to detect a possible shift of the expected peak. The central part of each curve is used to fit a second-order polynomial, to obtain the resolution of the likelihood peak and compare it to the mass resolution. This likelihood has been built by averaging the contributions of the simulated $H \rightarrow ZA$ events, emulating what would be observed for a single measured $H \rightarrow ZA$ event. The computed resolutions are compatible with the experimental resolution of the invariant masses computed from the reconstructed leptons and jets ($m_{jj}$ and $m_{lljj}$). As opposed to these estimators of $M_H$ and $M_A$, the likelihood is unbiased and provides a more proper way of studying an observed resonance.

\begin{figure}[htp] \centering \includegraphics[width=\textwidth]{profile_likelihood_mH_500_mA_300.png} \caption{Profile likelihoods averaged over the number of events (minimum scaled to zero for easier interpretation), produced with a mass configuration of $m_H =\SI{500}{\GeV}$ and $m_A = \SI{300}{\GeV}$. Each profile has been computed for several values of the other parameter; the green curve is the one obtained with the parameter used in the event production, and the blue and red curves are for smaller and larger values respectively. Each solid line is accompanied by a shorter dotted line illustrating the polynomial fit applied to this portion of the data, which is used to compute the 1-sigma resolution quoted in the legend.} \label{fig:resolution} \end{figure}

\subsection{Direct application to a real-life analysis}

As an example of a real-life study that tackles the $llb\bar{b}$ final state, we take a closer look at the CMS $H\rightarrow ZA \rightarrow llb\bar{b}$ analysis~\cite{HZA}. The strategy of this study is to exploit the kinematics of the process $H \rightarrow Z (\rightarrow l^+ l^-) A (\rightarrow b \bar{b})$ (where $l^- = e^- \textrm{ or } \mu^-$) to reconstruct the masses of both the H and A bosons using the two- and four-body invariant masses, $m_{jj}$ and $m_{lljj}$, and to define a signal region using these quantities.
These distributions are positively correlated, and an elliptic signal region has been chosen. The sizes and tilt angles depend on the kinematics and on the masses themselves. Hence the parameterization used in this analysis is based on one-dimensional Gaussian fits of the $m_{jj}$ and $m_{lljj}$ distributions to obtain the reconstructed center, while the diagonalization of the covariance matrix of both distributions yields the axes and tilt angle (Figure~\ref{fig:ellipse_resonance}). The fraction of signal and background events in ellipses of different sizes is binned to be used in a maximum likelihood procedure. The use of ellipses makes this kind of search very well optimized and hard to improve without loss of generality. The MEM is not expected to surpass the standard approach for such an analysis, but should be able to approach the published performance. Throughout this paper so far, only 23 mass points were used in the DNN training. As in the CMS paper, our models are applied to a larger ($>200$) set of $H \rightarrow ZA$ samples to cover the mass plane finely. Since no retraining takes place, we do not have to worry about overfitting issues.

\begin{figure}[htp] \centering \includegraphics[width=0.8\textwidth]{ellipse_resonance.png} \caption{Signal distribution in the ($m_{jj}$,$m_{lljj}$) plane for several mass configurations (left) and several elliptic fit sizes (parameterized by $\rho$) at a specific mass point of $m_H = \SI{500}{\GeV}$ and $m_A = \SI{300}{\GeV}$ (right), to be compared with the 2D distribution in the plane. From~\cite{HZA}.} \label{fig:ellipse_resonance} \end{figure}

Every signal and background event has been processed through the regressive networks of Sections~\ref{sec:DY_weights}, \ref{sec:TT_weights} and~\ref{sec:signal_weights} in order to produce the weights needed by the two classifiers (global and parametric), which were then applied for each mass configuration. The ROC curve has then been computed for each classifier, either using all preselected events or using only the events contained in the elliptic region defined in the mass plane for a given mass hypothesis and a given size parameter $\rho$. To do so, the script provided by the collaboration~\cite{HZA} has been used. Results are presented in Figure~\ref{fig:ROC_ellipse} for two representative mass points, and compared to the performance of the elliptic selection alone.

\begin{figure}[htp] \centering \includegraphics[width=.5\textwidth]{ROC_mH_261p40_mA_150p50.png}\hfill \includegraphics[width=.5\textwidth]{ROC_mH_442p63_mA_193p26.png} \caption{Comparison between the ROC curves of the ellipse method for two mass points with different sizes parameterized by the scale factor $\rho$ (dotted lines), of both the global (solid lines) and parametric (dashed lines) classifiers, and of the combination of these methods. Each ellipse is denoted by a marker, and the events that pass its selection are then used to compute the ROC curve with both classifiers.} \label{fig:ROC_ellipse} \end{figure}

\begin{table}[htp] \caption{Average computation time in MoMEMta for each type of weight and event. For the Drell-Yan and $t\bar{t}$ weights the time distribution is peaked around the values given below, while for the $H \rightarrow ZA$ weights it is much wider: it is very common to find a computation time twice or three times the value given here.
The sizes of the samples involved in this section are given as an indication of the overall computation time; the $H \rightarrow ZA$ sample includes all 207 mass points.} \centering \renewcommand{\arraystretch}{1.1} \begin{tabular}{|c|c|c|c|} \hline & Drell-Yan events & $t\bar{t}$ events & $H \rightarrow ZA$ events \\ \hline Drell-Yan weights & \SI{3.6}{\second} / event & \SI{3.8}{\second} / event & \SI{4}{\second} / event \\ \hline $t\bar{t}$ weights & \SI{12}{\second} / event & \SI{10}{\second} / event & \SI{20}{\second} / event \\ \hline $H \rightarrow ZA$ weights & $\sim$ \SI{600}{\second} / event & $\sim$ \SI{600}{\second} / event & $\sim$ \SI{600}{\second} / event \\ \hline Sample size & 279203 events & 461244 events & 2570596 events \\ \hline \end{tabular} \label{tab:time} \end{table}

As expected, the global classifier brings no improvement over the ellipse method, especially at low masses. In some cases the combined ROC curve shows a potential gain from taking a larger ellipse combined with the classifier, as can be seen in the right plot. However this improvement is mostly located in the high-purity region, which is not the one aimed for in the CMS analysis. The ellipse method is very well optimized for the search for a resonance, while the global classifier searches for an excess in the whole mass plane. As already mentioned, the latter requires a larger excess to detect something, but is not as heavily affected by the look-elsewhere effect. While better, the parametric network alone only outperforms the standard approach at very high masses, where the ellipses are ill-defined. However, the combination of both becomes interesting at lower masses. In some cases the background contamination can be reduced by almost one order of magnitude with a very small loss in signal efficiency. This effect is visible for $M_H$ as low as \SI{200}{\GeV} and increases towards the boosted and high-mass regions.

\begin{table}[htp] \caption{Total hypothetical computation time if all the weights had to be computed with MoMEMta rather than the DNN in order to be used by the classifiers. For each sample and each weight, the number of times MoMEMta would have been called is given, as well as the computation time it would have required. The number of calls depends on the type of classifier: the global classifier requires only one weight per SM process but 23 different $H \rightarrow ZA$ weights, while for the parametric classifier the SM weights need to be evaluated 207 times --- because they are repeated at every mass point --- but the $H \rightarrow ZA$ weight only once.
The total time per classifier and process is also given in years.} \centering \renewcommand{\arraystretch}{1.1} \resizebox{\textwidth}{!}{ \begin{tabular}{|c|c|c|c|c|c|c|} \hline Process & \multicolumn{2}{c|}{Drell-Yan events} & \multicolumn{2}{|c|}{ $t\bar{t}$ events} & \multicolumn{2}{c|}{$H \rightarrow ZA$ events} \\ \hline \hline \textbf{Global classifier} & Calls (x1) & Time [days] & Calls (x1) & Time [days] & Calls (x1) & Time [days] \\ \hline Drell-Yan weights (x1) & 279 203 & 11.6 & 461 244 & 20.3 & 2 570 596 & 119.0 \\ \hline $t\bar{t}$ weights (x1) & 279 203 & 32.3 & 461 244 & 85.4 & 2 570 596 & 595.0 \\ \hline $H \rightarrow ZA$ weights (x23) & 6 421 669 & 44 594.9 & 10 608 612 & 73 670.9 & 59 123 087 & 410 581.3 \\ \hline \hline Total time [years] & \multicolumn{2}{c|}{122.3} & \multicolumn{2}{|c|}{202.1} & \multicolumn{2}{c|}{1126.8} \\ \hline \hline \textbf{Parametric classifier} & Calls (x207) & Time [days] & Calls (x207) & Time [days] & Calls (x1) & Time [days] \\ \hline Drell-Yan weights (x1) & 57 795 021 & 2 408.1 & 95 477 508 & 4 199.2 & 2 570 596 & 119.0 \\ \hline $t\bar{t}$ weights (x1) & 57 795 021 & 8 027.1 & 95 477 508 & 17 681.0 & 2 570 596 & 595.0 \\ \hline $H \rightarrow ZA$ weights (x1) & 57 795 021 & 401 354.3 & 95 477 508 & 663 038.2 & 2 570 596 & 17 851.4 \\ \hline \hline Total time [years] & \multicolumn{2}{c|}{1128.2} & \multicolumn{2}{|c|}{1876.5} & \multicolumn{2}{c|}{50.9} \\ \hline \end{tabular}} \label{tab:calls} \end{table} While the simple approach followed in this work only has the potential to marginally improve the CMS $H \rightarrow ZA$ analysis, the DNN ansatz still opens a wide range of possibilities previously out of reach of the MEM. Apart from their training time, which ranged from a few hours to a single day on CPU, evaluating a weight or a probability is a very fast process: about $150~\mu s$ with large enough batches, with small variations depending on the depth of the network. This must be compared to the computation time of MoMEMta (Table~\ref{tab:time}) and the number of times it would have been called to produce Figure~\ref{fig:ROC_ellipse} (Table~\ref{tab:calls}). The weight computations for the global (parametric) classifier would have taken about 1450 (3050) CPU years, which is more than prohibitive even with a large farm of CPUs; the simple arithmetic behind these totals is sketched below. Using the DNN to produce the weights requires in total less than 10 hours, to which must be added the time for I/O data streams, RAM allocation, data repetition and processing --- all of which even tend to become dominant compared to the pure weight production. These weights must be fed to the classifiers, looped through for each ellipse, and the ROC curves must be computed. In the end, with the DNN, the production of Figure~\ref{fig:ROC_ellipse} on a cluster of CPUs with a few hundred nodes took less than a day. The pure weight computation has been reduced by six orders of magnitude.
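The totals quoted above follow from simple bookkeeping. The sketch below redoes this arithmetic with the rounded per-event times of Table~\ref{tab:time}; the Drell-Yan-sample values are used for every sample as a common approximation, so the results only reproduce the order of magnitude of Table~\ref{tab:calls}.
\begin{verbatim}
# Rounded per-event MoMEMta times (seconds) and sample sizes.
T_DY, T_TT, T_HZA = 3.6, 12.0, 600.0
N_EV = {'DY': 279203, 'TT': 461244, 'HZA': 2570596}
YEAR = 3600 * 24 * 365

def cpu_years(n_events, sm_passes, hza_passes):
    # Hypothetical MoMEMta time for one sample, given how many times
    # the SM weights and the H->ZA weight are recomputed per event.
    per_event = (T_DY + T_TT) * sm_passes + T_HZA * hza_passes
    return n_events * per_event / YEAR

# Global classifier: SM weights once, 23 signal mass points per event.
print({s: round(cpu_years(n, 1, 23), 1) for s, n in N_EV.items()})
# Parametric classifier: every weight repeated at each of the 207 mass
# points for the background samples, a single pass for the signal one.
print(round(cpu_years(N_EV['DY'], 207, 207), 1))
print(round(cpu_years(N_EV['HZA'], 1, 1), 1))
\end{verbatim}
With the DNN replacing MoMEMta at roughly $150~\mu s$ per weight, the same bookkeeping yields the quoted reduction of about six orders of magnitude.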
\section{Conclusion} In this paper we presented a method where the integral of the Matrix Element Method is regressed by a Deep Neural Network in order to speed up the computations involved in the MEM. From the few representative processes studied in this paper, we conclude that a DNN can be trained that closely reproduces the results of the direct numerical integration of the matrix element using dedicated tools like MoMEMta. This regression with the DNN introduces inevitable inaccuracies in the weights that nonetheless do not have a significant impact on the performance in the studied applications. Faster weight calculations open a wide range of possibilities: studies of systematics, likelihood scans, parameter scans and, in general, the use of the MEM for a new, wider class of physics analyses, including searches for new physics. \section{Acknowledgments} We warmly thank Olivier Mattelaer for his feedback about the method itself, Sebastien Wertz for his help with MoMEMta and pertinent remarks, Alessia Saggio for her many contributions regarding the $H\rightarrow ZA$ analysis and Pieter David for his helpful comments. Florian Bury is a Research Fellow of the Fonds de la Recherche Scientifique – FNRS. Christophe Delaere is Senior Research Associate of the FNRS. Part of this work has been funded by the IISN under convention 4.4503.16. Computational resources have been provided by the supercomputing facilities of the Université catholique de Louvain (CISM/UCL) and the Consortium des Équipements de Calcul Intensif en Fédération Wallonie Bruxelles (CÉCI) funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under convention 2.5020.11. \clearpage \bibliographystyle{utphys}
\pdfminorversion=6 \newcommand{\theHalgorithm}{\arabic{algorithm}} \usepackage[accepted]{icml2021} \renewcommand{\algorithmiccomment}[1]{#1} \icmltitlerunning{HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture} \begin{document} \twocolumn[ \icmltitle{HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture} \begin{icmlauthorlist} \icmlauthor{Qian Lou}{in} \icmlauthor{Lei Jiang}{in} \icmlkeywords{Homomorphic Encryption, Mobile Network Architecture} \end{icmlauthorlist} \icmlaffiliation{in}{Indiana University Bloomington} \icmlcorrespondingauthor{Lei Jiang}{jiang60@iu.edu} \vskip 0.3in ] \printAffiliationsAndNotice{} \begin{abstract} Recently, Homomorphic Encryption (HE) has been used to implement Privacy-Preserving Neural Networks (PPNNs) that perform inferences directly on encrypted data without decryption. Prior PPNNs adopt mobile network architectures such as SqueezeNet for smaller computing overhead, but we find that na\"ively using mobile network architectures for a PPNN does not necessarily achieve shorter inference latency. Despite having fewer parameters, a mobile network architecture typically introduces more layers and increases the HE multiplicative depth of a PPNN, thereby prolonging its inference latency. In this paper, we propose a \textbf{HE}-friendly privacy-preserving \textbf{M}obile neural n\textbf{ET}work architecture, \textbf{HEMET}. Experimental results show that, compared to state-of-the-art (SOTA) PPNNs, HEMET reduces the inference latency by $59.3\%\sim 61.2\%$, and improves the inference accuracy by $0.4\% \sim 0.5\%$. \end{abstract} \section{Introduction} \label{s:intro} Clients are reluctant to upload their sensitive data, e.g., health or financial records, to untrusted servers in the cloud. To protect clients' privacy, privacy-preserving neural networks (PPNNs)~\cite{GAZELLE:USENIX18,mishra2020delphi,Brutzkus:ICML2019,Gilad-Bachrach:ICML2016,dathathri:2019PLDI} are built to perform inferences directly on encrypted data. An interactive PPNN~\cite{GAZELLE:USENIX18,mishra2020delphi} uses homomorphic encryption (HE) for linear layers, and adopts secure multi-party computation (MPC) to process activation layers. However, huge volumes of data have to be exchanged between the client and the server during an inference of an interactive PPNN. For instance, DELPHI~\cite{mishra2020delphi} has to transmit 2GB of data for only a ResNet-32 inference on a single encrypted CIFAR-10 image. On the contrary, non-interactive PPNNs~\cite{Brutzkus:ICML2019,dathathri:2019PLDI,Dathathri:PLDI20:EVA} approximate their activations by a degree-2 polynomial, and compute an entire inference via HE. They do not require high network bandwidth, but can still obtain competitive inference accuracy~\cite{Dathathri:PLDI20:EVA}. Hereafter (except in Section~\ref{s:ppnn_o}), when we mention a PPNN, we mean a non-interactive PPNN. Unfortunately, PPNN inferences are time-consuming. An inference of a typical PPNN~\cite{dathathri:2019PLDI} requires $>2$ seconds on an encrypted MNIST image, and consumes $>70$ seconds on an encrypted CIFAR-10 image. There is a $\times 10^6$ latency gap between a PPNN inference and an unencrypted inference. To mitigate the gap, recent work~\cite{Dathathri:PLDI20:EVA,dathathri:2019PLDI} adopts mobile neural network architectures such as SqueezeNet~\cite{iandola2016squeezenet} and InceptionNet~\cite{szegedy2016inception} to implement PPNNs.
However, we find na\"ively adopting mobile neural network architectures for PPNNs does not necessarily achieves shorter inference latency. Mobile neural network models~\cite{szegedy2016inception,iandola2016squeezenet} reduce the total number of parameters but still maintain competitive inference accuracy by adding more linear layers. In spite of less parameters, if a PPNN adopts a mobile neural network architecture, its deeper architecture with more layers greatly increases the HE \textit{multiplicative depth}, thereby decelerating each HE operations of the PPNN, where the multiplicative depth means the number of HE multiplications on the critical path. In this paper, we propose a Homomorphic-Encryption-friendly privacy-preserving Mobile neural network architecture, HEMET, to achieve shorter inference latency and higher inference accuracy. Our contributions can be summarized as the following. \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*] \item We first identify that na\"ively applying a mobile neural network architecture on a PPNN may even prolong its inference latency. Although the mobile network architecture reduces HE operations of the PPNN, but it greatly increases the multiplicative depth and decelerates each HE operation of the PPNN. \item We propose a simple, greedy HE-friendly mobile network architecture search algorithm to evaluate whether a block should adopt a regular convolutional layer or a mobile module to minimize the inference latency of the entire network. The search algorithm is performed layer by layer, and can reduce HE operations without increasing the multiplicative depth of a PPNN. \item We also present Coefficient Merging to further reduce the multiplicative depth of a PPNN by merging the mask, approximated activation coefficients, and batch normalization coefficients of each layer. \item We evaluated and compared HEMET against SOTA PPNN architectures. Our experimental results show HEMET reduces the inference latency by $59.3\%\sim 61.2\%$, but improves the inference accuracy by $0.4\% \sim 0.5\%$ over various prior PPNNs. \end{itemize} \begin{table}[t!] \small \centering \setlength{\tabcolsep}{4pt} \begin{tabular}{|c||c|} \hline HE Operations & CKKS-RNS $Q=\prod_{i=1}^{r}{Q_i}$ \\\hline\hline Addition, Subtraction & $\mathcal{O}(N\cdot r)$ \\\hline Scalar Multiplication & $\mathcal{O}(N\cdot r)$ \\\hline Plaintext Multiplication & $\mathcal{O}(N\cdot r)$ \\\hline Ciphertext Multiplication & $\mathcal{O}(N\cdot log(N)\cdot r^2)$ \\\hline Ciphertext Rotation & $\mathcal{O}(N\cdot log(N)\cdot r^2)$ \\\hline \end{tabular} \caption{The complexity of HE operations on a $r$-level ciphertext ($N$ is the polynomial degree of the ciphertext).} \label{t:moti2_merge} \vspace{-0.2in} \end{table} \section{Background} \label{s:back} \subsection{Homomorphic Encryption} HE allows arbitrary computations to occur on encrypted data (ciphertexts) without decryption. Given a public key $pk$, a secret key $sk$, an encryption function $\epsilon ()$, and a decryption function $\sigma ()$, a HE operation $\otimes$ can be defined if there is another operation $\times$ such that $\sigma(\epsilon(x_1, pk)\otimes\epsilon(x_2, pk), sk) = \sigma(\epsilon(x_1\times x_2, pk), sk)$, where $x_1$ and $x_2$ are plaintexts, each of which encodes a vector consisting of multiple integer or fixed-point numbers~\cite{dathathri:2019PLDI, Dathathri:PLDI20:EVA}. Each HE operation introduces a certain amount of noise into the ciphertext. 
When the accumulated noise grows beyond a noise budget, a HE decryption failure happens. Though a \textit{bootstrapping} operation~\cite{Gentry:HE2009} reduces the accumulated noise in a ciphertext, it is computationally expensive. Prior PPNNs use \textit{leveled} HE having a fixed noise budget to compute only a limited number of HE operations without bootstrapping. \subsection{Privacy-Preserving Neural Networks} \label{s:ppnn_o} Recent PPNNs~\cite{GAZELLE:USENIX18,mishra2020delphi,Brutzkus:ICML2019,Gilad-Bachrach:ICML2016,dathathri:2019PLDI,Dathathri:PLDI20:EVA} adopt HE to implement their linear layers. Unfortunately, HE cannot support non-linear activation layers. Interactive PPNNs~\cite{GAZELLE:USENIX18,mishra2020delphi} take advantage of multi-party computation to compute activations by interactions between the client and the server. In contrast, non-interactive PPNNs~\cite{Brutzkus:ICML2019,Gilad-Bachrach:ICML2016,dathathri:2019PLDI,Dathathri:PLDI20:EVA} approximate activations by a degree-2 polynomial, e.g., a square function, so that the entire PPNN inference happens on only the server. Non-interactive PPNNs are friendlier to clients who lack powerful machines and high-bandwidth network connections. The latest interactive PPNN, Delphi~\cite{mishra2020delphi}, has to exchange 2GB of data between the server and the client for only a ResNet-32 inference on an encrypted CIFAR-10 image. In this paper, we focus on only non-interactive PPNNs. \subsection{Threat Model} The threat model of HEMET is similar to that of prior PPNNs~\cite{Brutzkus:ICML2019,Gilad-Bachrach:ICML2016,dathathri:2019PLDI,Dathathri:PLDI20:EVA}. Though an encryption scheme can be used to encrypt the data sent to the cloud, data leakage can still happen on untrusted servers. HE enables a server to perform private inferences over encrypted data. A client sends encrypted data to a server performing encrypted inferences without decrypting the encrypted data or accessing the client's secret key. Only the client can decrypt the inference results using the secret key. \begin{figure*}[t!] \centering \includegraphics[width=6.3in]{ckksnets.pdf} \vspace{-0.1in} \caption{A CKKS-based PPNN.} \label{f:ckks_ppnns} \vspace{-0.15in} \end{figure*} \begin{figure*}[t!] \centering \includegraphics[width=6.6in]{mobilenets.pdf} \vspace{-0.1in} \caption{Various mobile network architectures.} \label{f:prior_networks} \vspace{-0.2in} \end{figure*} \subsection{A RNS-CKKS-based PPNN} \textbf{RNS-CKKS Scheme}. Among all HE schemes, RNS-CKKS is the \textbf{only} scheme that supports fixed-point arithmetic operations. Recent PPNNs~\cite{dathathri:2019PLDI,Dathathri:PLDI20:EVA} use RNS-CKKS to achieve shorter inference latency and higher inference accuracy than the PPNN~\cite{Gilad-Bachrach:ICML2016} implemented by other HE schemes. Via SIMD batching, a RNS-CKKS-based PPNN encrypts $\frac{N}{2}$ fixed-point numbers in the $\frac{N}{2}$ slots of a single ciphertext, where $N$ is the polynomial degree of the ciphertext. One HE operation on the ciphertext simultaneously performs the same operation on each slot of the ciphertext. A ciphertext is a polynomial of degree $N$ with each coefficient represented modulo $Q$, where $Q$ is a product of $r$ primes $Q=\prod_{j=1}^{r}{Q_j}$. To represent a fixed-point number, RNS-CKKS uses an integer $I$ and a scaling factor $S$. For instance, 3.14 can be denoted by $I=314$ and $S=100$. $S$ grows exponentially with HE multiplication; the toy sketch below makes this scale bookkeeping concrete.
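The following sketch is a toy model in plain Python, not the actual RNS-CKKS implementation: it only tracks the integer $I$ and scale $S$, and the rescaling operation introduced next is modeled as a simple division of both.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class FixedPoint:
    I: int  # integer part: the represented value is I / S
    S: int  # scaling factor

def mul(x, y):
    # A HE multiplication multiplies the scales, so S grows
    # exponentially with the multiplicative depth.
    return FixedPoint(x.I * y.I, x.S * y.S)

def rescale(x, q):
    # Rescaling divides both I and S by a factor q, keeping the
    # scale (and, in real CKKS, the noise) under control.
    return FixedPoint(x.I // q, x.S // q)

pi = FixedPoint(314, 100)   # 3.14 encoded as I=314, S=100
sq = mul(pi, pi)            # I=98596, S=10000: the scale has squared
sq = rescale(sq, 100)       # I=985,  S=100:   roughly 9.85 = 3.14^2
\end{verbatim}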
To keep $S$ in check, a \textit{rescaling operation} is required, which divides both the integer and the scale by a common factor (e.g., converting $I\times 100$ at scale $S\times 100$ back to $I$ at scale $S$). In total, RNS-CKKS permits $r$ rescaling operations. A rescaling operation converts an $r$-level ciphertext $C_r$ (which has noise $e$, modulus $Q$, and plaintext $m$) to an $(r-1)$-level ciphertext $C_{r-1}$ (which has noise $\frac{e}{Q_r}$, modulus $\frac{Q}{Q_r}$, and plaintext $\frac{m}{Q_r}$). $(C_{r-1}, \frac{m}{Q_r}, \frac{e}{Q_r}, \frac{Q}{Q_r})=Rescale(C_r, m, e, Q)$. The computational complexity of various RNS-CKKS operations on $r$-level ciphertexts is shown in Table~\ref{t:moti2_merge}. A rescaling operation makes the following operations faster, since the latency of a HE operation on an $(r-1)$-level ciphertext $C_{r-1}$ is shorter than that of the same operation on the $r$-level ciphertext $C_{r}$. Moreover, a rescaling operation also reduces noise in the ciphertext. \textbf{RNS-CKKS-based Convolutions}. RNS-CKKS-based convolutions between an encrypted input tensor ($A$) and multiple plaintext weight filters ($F$s) are shown in Figure~\ref{f:ckks_ppnns}(a). The convolution result can be denoted as $b_{x,y} = \sum_{i,j}{a_{x+i,y+j}}\cdot f_{i,j}$. We assume the input $A$ has $C$ channels, a height of $H$, and a width of $W$. As Figure~\ref{f:ckks_ppnns}(b) shows, the input tensor can be encrypted into $C$ ciphertexts, each of which packs $H\times W$ input elements~\cite{dathathri:2019PLDI}. This is the $HW$ batching scheme. Alternatively, to fully utilize the slots in a ciphertext, $C\times H\times W$ input elements can be packed into $\lceil\frac{2\times C\times H\times W}{N}\rceil$ ciphertexts~\cite{dathathri:2019PLDI}, where $\lceil \cdot \rceil$ denotes rounding up. This is the $CHW$ batching scheme. To compute $a_{x+i,y+j}\cdot f_{i,j}$, element-wise HE multiplications happen between the input tensor and each weight filter. A HE rotation operation $rot(A, s)$ is required to align the corresponding data to support the HE accumulation, where $s$ represents the stride distance. Finally, a plaintext mask is used to remove the irrelevant slots $\#\#$; a plaintext mock-up of this rotate-multiply-mask pattern is sketched below.
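To make the pattern concrete, the following \texttt{numpy} mock-up performs a $2\times 2$ convolution on an $H\times W$ image packed row-major into a single slot vector. Here \texttt{np.roll} stands in for the HE rotation and no encryption is involved, so this is only an illustration of the slot arithmetic; the function names are ours.
\begin{verbatim}
import numpy as np

def rot(a, s):
    # Stand-in for the HE rotation rot(A, s) on the packed slots.
    return np.roll(a, -s)

def conv2x2_packed(A, F, H, W):
    # One rotation per filter tap, slot-wise multiplications and
    # additions, then a plaintext mask zeroing the wasted slots ##.
    acc = np.zeros(H * W)
    for (i, j), f in np.ndenumerate(F):   # taps (0,0),(0,1),(1,0),(1,1)
        acc += rot(A, i * W + j) * f      # align, multiply, accumulate
    mask = np.zeros(H * W)
    for r in range(H - 1):                # valid output positions only
        mask[r * W:r * W + W - 1] = 1.0
    return acc * mask
\end{verbatim}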
\textbf{The Critical Path of a DDG}. To precisely control noise, a recent RNS-CKKS compiler~\cite{Dathathri:PLDI20:EVA} converts an entire convolution into a data dependency graph (DDG) shown in Figure~\ref{f:ckks_ppnns}(c), and then inserts rescaling operations. We assume both inputs and weight filters use 30-bit fixed-point numbers. The result of a HE addition is still 30-bit, but each HE multiplication yields a 60-bit result. A rescaling operation rescales a multiplication result back to a 30-bit fixed-point number. The critical path (gray nodes) of the DDG has a multiplicative depth of 2, and the convolution example requires two rescaling operations. The multiplicative depth of a PPNN is the total number of HE multiplications along the critical path of its DDG. \subsection{Mobile Neural Network Architecture} In the plaintext domain, mobile neural network architectures such as SqueezeNet~\cite{iandola2016squeezenet} and InceptionNet~\cite{szegedy2016inception} are built to reduce network parameters and inference overhead. The topologies of SqueezeNet and InceptionNet are highlighted in Figure~\ref{f:prior_networks}. \begin{itemize}[noitemsep,topsep=0pt,parsep=0pt,partopsep=0pt,leftmargin=*] \item \textbf{SqueezeNet}. RNS-CKKS-based PPNNs~\cite{dathathri:2019PLDI,Dathathri:PLDI20:EVA} adopt the architecture of SqueezeNet~\cite{iandola2016squeezenet} for fast private inferences on CIFAR-10 images. SqueezeNet achieves similar accuracy to AlexNet using 50$\times$ fewer parameters. SqueezeNet replaces conventional convolution layers by fire modules shown in Figure~\ref{f:prior_networks}(b). Assume a conventional convolution layer has the dimension of $I_i\times 3 \times 3 \times O_i $, where $I_i$ is the input channel number, $3$ is the weight filter size, and $O_i$ is the output channel number in the $i$-th layer. On the contrary, a fire module consists of stacked convolution layers. $C_i$ ($<O_i$) is the output channel number of the first convolutional layer. $O_{i}^{0}+O_{i}^{1}=O_i$ is enforced to make the $i_{th}$ layer output have $O_i$ channels. If the $HW$ batching technique is used, to compute a regular convolution layer, we need to do $\mathcal{O}(I_i\times 3 \times 3 )$ rotations, and $\mathcal{O}(I_i\times 3 \times 3\times O_i)$ HE multiplications. In contrast, to compute a fire module of SqueezeNet, we need to perform $\mathcal{O}(I_i\times 1 +C_i\times (3 \times 3+1))$ rotations, and $\mathcal{O}(I_i \times 1 \times 1 \times C_i+ C_i \times 1 \times O_i^0+ C_i \times 3 \times 3\times O_i^1)$ HE multiplications. However, \textit{fire modules of SqueezeNet greatly increase the layer number of the mobile neural network architecture}. \item \textbf{InceptionNet}. InceptionNet~\cite{szegedy2016inception} achieves VGG-like accuracy using $\sim$7$\times$ fewer parameters. As Figure~\ref{f:prior_networks}(c) shows, InceptionNet is built through substituting conventional convolution layers with inception modules. An inception module is composed of four types of filters, each of which has output channel numbers $O_{i}^{0\sim 3}$. Although inception modules greatly reduce model parameters, \textit{InceptionNet has to almost double the layer number, compared to the VGG CNN}. \end{itemize} \subsection{Motivation} \textbf{A HE-friendly Mobile Network Architecture}. Though mobile neural networks such as SqueezeNet and InceptionNet reduce the number of HE operations of a PPNN, they significantly enlarge the multiplicative depth of the PPNN by adding more network layers. The enlarged multiplicative depth increases the computing overhead of each HE operation. As Figure~\ref{f:moti_replacemnt} shows, na\"ively adopting a mobile neural network architecture does not necessarily reduce the inference latency of a PPNN. A PPNN is built by sequentially connecting multiple \textit{block}s, each of which is either a fire module of SqueezeNet or a regular convolution layer. If the first fire module (F1) is replaced by a convolution layer having the same number of input and output channels, the inference latency (F1-to-C) of the PPNN increases by $\sim 32\%$. On the contrary, the inference latency (F4-to-C) of the PPNN is reduced by $\sim 2 \times$ when the last fire module (F4) is replaced by a regular convolution layer. Therefore, we need a HE-friendly mobile network architecture to reduce the inference latency, and to maintain SOTA inference accuracy of a PPNN; the operation counts behind this trade-off are compared concretely in the sketch below. \begin{figure}[t!] \centering \includegraphics[width=2.7in]{replace_moti.pdf} \vspace{-0.1in} \caption{Building a PPNN by fire modules of SqueezeNet.} \label{f:moti_replacemnt} \vspace{-0.2in} \end{figure}
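The following small sketch instantiates the asymptotic operation counts given above for the $HW$ batching scheme (constants dropped); the block dimensions are example values, not measurements.
\begin{verbatim}
def conv_cost(I, O, k=3):
    # Regular convolution under HW batching: O(I*k*k) rotations and
    # O(I*k*k*O) HE multiplications.
    return {'rot': I * k * k, 'mul': I * k * k * O}

def fire_cost(I, C, O0, O1):
    # Fire module: 1x1 squeeze to C channels, then 1x1 (O0 channels)
    # and 3x3 (O1 channels) expand branches.
    return {'rot': I + C * (3 * 3 + 1),
            'mul': I * C + C * O0 + C * 3 * 3 * O1}

print(conv_cost(128, 256))           # {'rot': 1152, 'mul': 294912}
print(fire_cost(128, 32, 128, 128))  # {'rot': 448,  'mul': 45056}
\end{verbatim}
The fire module needs far fewer HE operations, but it stacks two convolutional layers and hence deepens the DDG, which is exactly the effect shown in Figure~\ref{f:moti_replacemnt}.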
\textbf{Further Reducing the Multiplicative Depth}. As Figure~\ref{f:method2} shows, to build a PPNN, a given neural network is compiled into a DDG composed of additions, multiplications, and rotations. The computing overhead of each HE operation of the PPNN is decided by the PPNN multiplicative depth, which is roughly equal to the number of HE multiplications along the critical (longest) path of the DDG. The cost implied by the multiplicative depth is determined by both the polynomial degree $N$ and the number of rescaling operations $r$, as shown in Table~\ref{t:moti2_merge}. The polynomial degree $N$ is determined for a fixed security level. A smaller $N$ may hurt the security level, so we aim to reduce the multiplicative depth and to accelerate HE operations by reducing the number of rescaling operations $r$. The larger $r$ is, the slower each HE operation is. To reduce $r$ and speed up each HE operation, the critical path of the DDG has to be shortened. \subsection{Related Work} \textbf{HE-friendly PPNNs}. Recent work~\cite{Bian:ECAI2020,Lou:NIPS2020} creates reinforcement learning agents to search for a competitive neural network architecture implemented by BFV. However, these agents consider only regular convolutional layers, but not low-cost mobile network modules. Moreover, BFV supports only integer arithmetic operations, and thus has difficulties in controlling the scale of HE multiplication results. Recent PPNNs adopt RNS-CKKS with rescaling to solve this issue. \textbf{RNS-CKKS Rescaling}. A recent RNS-CKKS-based PPNN~\cite{Dathathri:PLDI20:EVA} proposes a waterline-based rescaling technique to perform rescaling operations as late as possible. In this way, the number of rescaling operations required along the critical path of the DDG can be minimized. The waterline-based rescaling technique statically inserts rescaling operations at positions along the critical path. However, no prior work tries to shorten the critical path to minimize the number of rescaling operations. \section{HEMET} \subsection{Search Algorithm for a HE-Friendly Mobile Network Architecture} \label{s:architecture_search} We propose a simple, greedy search algorithm to find a HE-friendly mobile network architecture by evaluating, block by block, whether each block should be a regular convolutional layer or a mobile module consisting of multiple convolutional layers. We measure the latency difference between using a regular convolutional layer and adopting a mobile module. Only when the inference latency is reduced by the mobile module do we actually integrate the mobile module into the PPNN. \textbf{Model Parameters and Multiplicative Depth}. Using a mobile network module, i.e., a fire or inception module, in a block of the PPNN reduces the HE operations in that block. Fewer HE operations might reduce the inference latency. However, the multi-layer mobile module also increases the number of layers in the PPNN, thereby enlarging the multiplicative depth of the PPNN. The enlarged multiplicative depth decelerates each HE operation in all layers before the next layer of the current mobile module by increasing the number of rescaling operations $r$ of the PPNN. It is critical to make sure that the benefit brought by a mobile network module is not offset by the deceleration caused by the increased multiplicative depth. \setlength{\textfloatsep}{10pt} \begin{algorithm}[tb!]
\caption{HE-Friendly Network Architecture Search} \label{alg:replace} \begin{algorithmic}[1] \STATE {\bfseries Input:} Network $O$, Mobile module (block) number $n$ \STATE {\bfseries Output:} HE-friendly network $O'$ \STATE Initialize $O' = O$ \FOR{$i=n$ {\bfseries to} $1$} \STATE $Cost\_O' = compile\_run(O')$\\ \COMMENT {//Replace the $i$-th module with a single Conv.} \STATE $Tmp\_O' = replace\_back(O', i)$ \STATE $Cost\_Tmp\_O' = compile\_run(Tmp\_O')$ \IF{$Cost\_Tmp\_O' < Cost\_O'$} \STATE $O' = Tmp\_O'$ \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \textbf{Search Algorithm}. The search algorithm for a HE-friendly mobile network architecture is shown in Algorithm~\ref{alg:replace}; a compact code sketch of this loop is given below. We assume $O$ is a mobile network consisting of only mobile modules and fully-connected layers. We first initialize the current architecture $O'$ to $O$, and compute its inference latency as $Cost\_O'$. We start to replace a mobile module by a regular convolutional layer from the last mobile module of $O'$, because the last mobile module is more likely to be replaced. If the last mobile module is replaced, the rescaling level of all previous mobile modules is reduced, thereby greatly accelerating each HE operation in each mobile module. A candidate network architecture $Tmp\_O'$ is generated by replacing the current mobile module of $O'$ with a regular convolutional layer. We then compute the inference latency of $Tmp\_O'$ as $Cost\_Tmp\_O'$, and update $O'$ to $Tmp\_O'$ if $Cost\_Tmp\_O' < Cost\_O'$. The process repeats until $n$ mobile modules (blocks) are evaluated. \textbf{A Case Study}. In Figure~\ref{f:method1}, we show a case study on how to find a HE-friendly mobile network architecture based on SqueezeNet by Algorithm~\ref{alg:replace}. SqueezeNet has four fire modules, and requires fifteen rescaling operations in the critical path. The HE operations in the first convolutional layer $Conv1$ happen on $15$-level RNS-CKKS ciphertexts. Each HE operation manipulating the $15$-level ciphertexts is more computationally expensive than HE operations occurring on lower-level ciphertexts. SqueezeNet spends $72.7$ seconds in performing an inference on an encrypted CIFAR-10 image. By following Algorithm~\ref{alg:replace}, the third fire module with size of $I=64, C=32, O^0=128, O^1=128$ is replaced by a convolutional layer with input channel $I=64$, output channel $O=256$, and kernel size $k=3$. Similarly, the fourth fire module with size of $I=128, C=32, O^0=128, O^1=128$ is replaced by a convolutional layer with input channel $I=128$, output channel $O=256$, and kernel size $k=3$. The output mobile network requires only twelve rescaling operations, while SqueezeNet needs fifteen rescaling operations. Although the first convolutional layer $Conv1$ performs the same HE operations as in SqueezeNet, its latency is reduced by $78\%$, since these operations happen on lower-level ciphertexts. \begin{figure}[t!] \centering \includegraphics[width=3.3in]{replacement.pdf} \vspace{-0.3in} \caption{HE-friendly network by layer replacement.} \label{f:method1} \end{figure}
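The greedy loop of Algorithm~\ref{alg:replace} can be summarized in a few lines; here \texttt{compile\_run} and \texttt{replace\_back} are placeholders for the compile-and-measure step and the module replacement, so this is a sketch of the control flow rather than a complete implementation.
\begin{verbatim}
def search_he_friendly(network, n_modules, compile_run, replace_back):
    # Greedy back-to-front search: try to swap each mobile module for
    # a single convolution and keep the swap whenever the compiled
    # inference latency drops.
    best = network
    for i in range(n_modules, 0, -1):       # last module first
        candidate = replace_back(best, i)
        if compile_run(candidate) < compile_run(best):
            best = candidate                # keep the cheaper variant
    return best
\end{verbatim}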
\textbf{Algorithm Complexity and Security}. The asymptotic complexity of our network architecture search algorithm is $\mathcal{O}(n)$, which means that $n$ mobile modules need to be evaluated. As Figure~\ref{f:method1} shows, to build a HE-friendly network architecture based on SqueezeNet, in total four inferences have to be done on an encrypted CIFAR-10 image. The search overhead of Algorithm~\ref{alg:replace} is $\sum_{i=1}^{n}Cost\_O'_i$, where $O'_i$ is the $i_{th}$ network candidate. Our search algorithm is done offline before the PPNN is deployed on the server, so our algorithm does not expose any private information of input data. We maintain the same security level as~\cite{dathathri:2019PLDI,Dathathri:PLDI20:EVA}. \subsection{Coefficient Merging} To further reduce the inference latency of PPNNs, we propose \textit{Coefficient Merging} to reduce the length of the critical path of the DDG and the multiplicative depth of PPNNs. A PPNN having a larger multiplicative depth requires larger RNS-CKKS parameters to guarantee the 128-bit security level, thereby significantly increasing the computing overhead of each HE operation. Our search algorithm can find a HE-friendly mobile neural network architecture with fewer network layers. Each layer of the resulting network can then be compiled into a DDG shown in Figure~\ref{f:method2}, where each node is an operand or a HE operation. Coefficient Merging focuses on minimizing the multiplicative depth and reducing the number of rescaling operations $r$ by merging multiple nodes into one in the DDG. \textbf{The DDG of a Convolutional Layer}. A convolution layer shown in Figure~\ref{f:ckks_ppnns}(a) can be described as Equation~\ref{e:convolution0}, where $A$ is the input, $f_{11}, f_{12}, f_{21}, f_{22}$ are the weight filter coefficients, and $X_{conv}$ is the convolution result. $Y_{conv}$ is the result after removing the wasted slots $\#\#$ from $X_{conv}$ with a mask $m$. \begin{equation} \begin{split} X_{conv} & = A f_{11} + rot(A,1) f_{12} \\ & + rot(A,3) f_{21} + rot(A,4) f_{22}\\ Y_{conv} & = m X_{conv} \label{e:convolution0} \end{split} \end{equation} A convolution is followed by an approximate degree-2 polynomial activation described as Equation~\ref{e:activation}, where the coefficients $a$, $b$, and $c$ are learned in training. \begin{equation} \label{e:activation} Y_{act} = a Y_{conv}^2 + b Y_{conv} +c \end{equation} Recent PPNNs adopt a batch normalization layer~\cite{Ioffe:ICML'15batchnorm} after each activation layer to improve inference accuracy. Batch normalization first calculates the mean $\mu$ and the variance $\sigma$ of the input of each layer, and then normalizes the input as $\overline{Y_{act}}$ using $\mu$, $\sigma$, and learned parameters $\gamma$ and $\beta$. \begin{equation} \begin{split} \label{e:batchnorm} Y_{batch} & = \gamma \overline{Y_{act}} +\beta = \gamma \frac{Y_{act}-\mu}{\sqrt{\sigma^2+\epsilon}}+\beta \\ Y_{batch} & = d Y_{act} +e \end{split} \end{equation} In Equation~\ref{e:batchnorm}, we simplify the batch normalization layer using $d=\frac{\gamma}{\sqrt{\sigma^2+\epsilon}}$ and $e=\beta - \frac{\gamma \cdot \mu}{\sqrt{\sigma^2+\epsilon}}$. Figure~\ref{f:method2}(a) shows the DDG of a convolutional layer described by Equation~\ref{e:convolution0}$\sim$~\ref{e:batchnorm}. The convolution layer is followed by an approximate activation layer and a batch normalization layer. \begin{figure}[t!] \centering \includegraphics[width=3.3in]{merge.pdf} \vspace{-0.3in} \caption{Shortening the critical path by merging nodes.} \label{f:method2} \end{figure} \textbf{Coefficient Merging}.
To shorten the critical path of the convolutional layer DDG (and the multiplicative depth of the PPNN), we can merge the mask $m$ in Equation~\ref{e:convolution0}, the coefficients $a$ and $b$ of the activation in Equation~\ref{e:activation}, and the coefficients $d$ and $e$ of the batch normalization in Equation~\ref{e:batchnorm}. \begin{equation} \label{e:batch_merge} \begin{split} Y_{batch} &= d(a m^2 X^2_{conv} + b m X_{conv} +c)+e\\ &= (d a m^2) X^2_{conv} + (d b m) X_{conv} +(d c +e) \\ &= a' X^2_{conv} + b' X_{conv} +c' \end{split} \end{equation} Equation~\ref{e:batch_merge} explains this merging process. Equation~\ref{e:batch_merge} merges $d\cdot a\cdot m^2$ into a single coefficient $a'$, so that two multiplications can be eliminated along the critical path. Equation~\ref{e:batch_merge} also merges $d \cdot b\cdot m$ and $d \cdot c +e$ into $b'$ and $c'$ respectively to further reduce HE multiplications in the DDG of a PPNN. \textbf{Implementation}. Coefficient Merging can be implemented on a compiled DDG in three steps. First, we generate a DDG for a specific network architecture. One example of a DDG for a convolution layer followed by an activation layer and a batch normalization layer is shown in Figure~\ref{f:method2}(a). Second, we parse the DDG to merge the mask, approximated activation coefficients, and batch normalization coefficients according to Equation~\ref{e:batch_merge}. If the convolutional layer is not followed by a batch normalization, we simply drop the terms $d$ and $e$ in Equation~\ref{e:batch_merge}. Batch normalization coefficients are learned from the training data of the server, but not from the input data of clients. Therefore, Coefficient Merging does not leak any information of clients. The last step of Coefficient Merging is to output an optimized DDG that has a smaller multiplicative depth and fewer HE operations than the original DDG. One example of the output of Coefficient Merging is shown in Figure~\ref{f:method2}(b). The multiplicative depth of the original DDG in Figure~\ref{f:method2}(a) is 5. Coefficient Merging significantly reduces the multiplicative depth of the DDG by $\sim40\%$. Furthermore, Coefficient Merging reduces the number $r$ of rescaling operations in each layer of a PPNN. For instance, Coefficient Merging can remove $2\sim 3$ rescaling operations for SqueezeNet on the CIFAR-10 dataset, leading to a $20\%$ latency reduction. \textbf{Security}. Coefficient Merging does not leak any private information of clients. In our threat model, where the server owns the neural network model and the clients are the data owners, the goal is to prevent the untrusted server from accessing the raw data of clients. In order to obtain higher inference accuracy and reduce inference latency, it is natural for the server, which owns the original DDG, the mask, the approximated activation coefficients, and the batch normalization coefficients, to perform Coefficient Merging before any PPNN service occurs. Coefficient Merging can be done offline, since the server does not require the inputs from clients. A small numerical check of the merging identity in Equation~\ref{e:batch_merge} is given below.
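Since all the merged quantities are plaintext constants, the identity in Equation~\ref{e:batch_merge} can be verified numerically; the sketch below does so in plain Python with arbitrary (hypothetical) coefficient values, no encryption involved.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=16)                        # stand-in for X_conv
m = rng.integers(0, 2, size=16).astype(float)  # 0/1 plaintext mask
a, b, c, d, e = 0.3, 1.1, 0.2, 0.9, -0.4       # activation / batch norm

# Unmerged pipeline: mask, degree-2 activation, batch normalization.
Y = d * (a * (m * X) ** 2 + b * (m * X) + c) + e

# Merged coefficients: a' = d*a*m^2, b' = d*b*m, c' = d*c + e.
a_p, b_p, c_p = d * a * m ** 2, d * b * m, d * c + e
Y_merged = a_p * X ** 2 + b_p * X + c_p

assert np.allclose(Y, Y_merged)  # same result, fewer multiplications
\end{verbatim}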
\begin{table*}[t!] \centering \footnotesize \setlength{\tabcolsep}{3pt} \begin{tabular}{|c||c|c|c|} \hline Network & Architecture & Accuracy (CIFAR-10) & Accuracy (CIFAR-100)\\\hline\hline AlexNet & C1-A1-P1-C2-A2-P2-C3-A3-C4-A4-C5-A5-P3-D1-A1-D2-A2-D3 & 81.1\% & 54.2\% \\\hline SqueezeNet & C1-P1-F1-F2-P2-F3-F4-C2-P3 & 81.5\% & 65.3\%\\\hline \textbf{Our\_F34} & C1-P1-F1-F2-P2-C2-C3-C4-P3 & \textbf{81.9\%} & 65.5\% \\\hline \hline InceptionNet & C1-B1-I1-I2-P1-I3-I4-I5-I6-I7-I8-I9-P2-D1 & 83.2\% & 69.1\%\\\hline \textbf{Our\_I789} & C1-B1-I1-I2-P1-I3-I4-I5-I6-C3-C4-C5-P2-D1 & \textbf{83.7\%} & 69.6\% \\\hline \end{tabular} \caption{The network architectures of PPNNs. C$x$, A$x$, P$x$, D$x$, B$x$, F$x$, and I$x$ denote the $x_{th}$ convolution, activation, pooling, dense, batch normalized activation, fire module, and inception module layers, respectively. More detailed parameters of each layer can be found in the Appendix.} \label{t:Networks} \end{table*} \begin{table*}[t!] \centering \small \begin{tabular}{|l||c|c|c|c|c|c|c|} \hline Scheme & Layer\# & rescale\# & N & Q bits & Latency & Accuracy (CIFAR-10) & Accuracy (CIFAR-100) \\\hline\hline EVA-AlexNet & 11 & 14 & 32768 & 820 & 293.5 seconds & 81.1\% & 54.2\% \\\hline EVA-SqueezeNet & 13 & 17 & 65536 & 1020 & 72.7 seconds & 81.5\% & 65.3\% \\\hline Ours-F4 & 12 & 15 & 32768 & 880 & 43.8 seconds & 81.7\% & 65.3\% \\\hline \textbf{Ours-F34} & 11 & 14 & 32768 & 820 & 37.5 seconds & 81.9\% & 65.5\% \\\hline Ours-F234 & 10 & 13 & 32768 & 760 & 50.2 seconds & 81.9\% & 65.6\% \\\hline \textbf{Ours-F34-Merge} & 11 & 12 & 32768 & 720 & \textbf{29.6 seconds} & 81.9\% & 65.5\% \\\hline \end{tabular} \caption{The inference latency and accuracy of SqueezeNet on CIFAR-10 and CIFAR-100. Ours-F$xyz$ indicates that we replace the $x_{th}$, $y_{th}$, and $z_{th}$ fire modules by regular convolution layers. Ours-F$34$-Merge represents coefficient merging applied on Ours-F$34$.} \label{t:SqueezeNet} \end{table*} \section{Experimental Methodology} \subsection{Datasets and Networks} \textbf{Datasets}. We adopt the CIFAR-10 and CIFAR-100 datasets to evaluate our proposed techniques, because they are the most complex datasets prior PPNNs have been evaluated on~\cite{dathathri:2019PLDI,Dathathri:PLDI20:EVA}. The CIFAR-10 dataset includes 60K color images of size $32\times 32$ in 10 classes, where each class consists of 6K images. 50K images are used for training, while 10K images are used for testing in CIFAR-10. The CIFAR-100 dataset is similar to CIFAR-10, except that it has 100 classes, each of which has 600 images. \textbf{Networks}. Table~\ref{t:Networks} shows the comparison between our baseline networks and HE-friendly networks. We first adopt AlexNet as our regular CNN baseline, and SqueezeNet as our mobile neural network baseline. We perform our search algorithm on SqueezeNet to find a faster network architecture, Ours\_F34, with almost the same inference accuracy. Moreover, we also explore the design space of deeper mobile PPNNs based on InceptionNet to achieve higher inference accuracy. We find a new HE-friendly network architecture, Ours\_I789, by searching on InceptionNet via Algorithm~\ref{alg:replace}. \subsection{Experimental Setup} We ran all PPNN inferences on a server-level hardware platform, which is equipped with an Intel Xeon Gold 5120 2.2GHz CPU with 56 cores and 256GB of DRAM memory. Neural networks are trained with TensorFlow. For each neural network in Table~\ref{t:Networks}, we adopt the EVA compiler~\cite{Dathathri:PLDI20:EVA} to convert the neural network to a RNS-CKKS-based PPNN DDG.
The EVA compiler is built upon the Microsoft SEAL library~\cite{sealcrypto}. Following~\cite{Dathathri:PLDI20:EVA}, we set the initial scale of the encrypted input message to 25 bits, and the scale of weight filters and masks to 15 bits. The coefficients of approximated activation layers and batch normalization layers are set to 10 bits. All PPNNs in our experiments can achieve the 128-bit security level. \section{Results and Analysis} We report the inference latency and accuracy of various PPNN architectures, their layer numbers, rescaling operation numbers, and RNS-CKKS parameters (i.e., polynomial degree $N$ and modulus size $Q$) in Tables~\ref{t:SqueezeNet} and~\ref{t:InceptionNet}. The CIFAR-10 and CIFAR-100 datasets use the same PPNN architecture but with different weight filter numbers. EVA~\cite{Dathathri:PLDI20:EVA} is the SOTA FHE compiler that can convert a plaintext network model to a RNS-CKKS-based PPNN model. We compare HEMET against EVA~\cite{Dathathri:PLDI20:EVA} on the CIFAR-10 and CIFAR-100 datasets with various network architectures. Tables~\ref{t:SqueezeNet} and~\ref{t:InceptionNet} show the comparison between EVA and HEMET on SqueezeNet and InceptionNet, respectively. \begin{table*}[hbt!] \centering \small \begin{tabular}{|l||c|c|c|c|c|c|c|c|} \hline Scheme & Layer\# & rescale\# & N & Q bits & Latency & Accuracy (CIFAR-10) & Accuracy (CIFAR-100)\\\hline\hline EVA-InceptionNet & 34 & 39 & 131072 & 2340 & 213.2 seconds & 83.2\% & 69.1\% \\\hline Ours-I9 & 31 & 36 & 131072 & 2160 & 173.5 seconds & 83.7\% & 69.1\%\\\hline \textbf{Ours-I789} & 28 & 33 & 131072 & 1980 & \textbf{132.8 seconds} & 83.7\% & 69.6\% \\\hline Ours-I6789 & 26 & 30 & 131072 & 1980 & 193.6 seconds & 83.7\% & 69.8\%\\\hline \textbf{Ours-I789-Merge} & 19 & 29 & 65536 & 1740 & \textbf{83.2 seconds} & 83.7\% & 69.6\%\\\hline \end{tabular} \caption{The inference latency and accuracy of InceptionNet on CIFAR-10 and CIFAR-100. Ours-I$xyz$ indicates that we replace the $x_{th}$, $y_{th}$, and $z_{th}$ inception modules by regular convolution layers. Ours-I$789$-Merge represents coefficient merging applied on Ours-I$789$.} \label{t:InceptionNet} \end{table*} \subsection{SqueezeNet} As Table~\ref{t:SqueezeNet} shows, the PPNN of AlexNet generated by EVA uses 293.5 seconds to perform one inference on a CIFAR-10 or CIFAR-100 image. The mobile neural network generated by EVA, SqueezeNet, has more layers than AlexNet and increases the rescaling operation number from 14 to 17. Each HE operation in SqueezeNet is thus much slower than in AlexNet. However, SqueezeNet still reduces the inference latency from 293.5 seconds to 72.7 seconds. This is because SqueezeNet has far fewer HE operations than AlexNet, leading to a significant performance improvement. However, SqueezeNet generated by EVA is not an optimized mobile neural network architecture. We use Algorithm~\ref{alg:replace} to find a HE-friendly network based on SqueezeNet. Ours-F$4$ is a network architecture where we use a convolutional layer with 256 input channels, $3\times3$ weight kernels, and 256 output channels to replace the $4_{th}$ fire module with 256 input channels, $32$ squeezed channels, 128 expanded $1\times 1$ channels, and 128 expanded $3\times 3$ channels. Ours-F$4$ reduces the 72.7-second inference latency of SqueezeNet to 43.8 seconds. Compared to SqueezeNet, Ours-F$4$ is a more RNS-CKKS-friendly network architecture. We further build Ours-F$34$ by replacing both the third and fourth fire modules with two convolutional layers.
Ours-F$34$ achieves shorter inference latency and even higher inference accuracy. Finally, we generate Ours-F$234$ by replacing more fire modules with convolutional layers. But Ours-F$234$ has longer inference latency than Ours-F$34$, which means that the second fire module should not be replaced with a convolution layer. Ours-F$34$ is the best HE-friendly network architecture generated by Algorithm~\ref{alg:replace}. Coefficient Merging further reduces the number of rescaling operations of Ours-F$34$, thereby decreasing its inference latency. As Table~\ref{t:SqueezeNet} shows, Ours-F$34$-Merge removes two rescaling operations from Ours-F$34$ and brings an extra $20\%$ latency reduction. Moreover, Coefficient Merging has no impact on inference accuracy. Overall, compared to the SOTA SqueezeNet built by EVA, Ours-F$34$-Merge reduces the inference latency by $59.3\%$ and improves the inference accuracy by $0.4\%$. \subsection{InceptionNet} InceptionNet is a mobile neural network architecture that has more layers than SqueezeNet, so it can achieve higher inference accuracy on both the CIFAR-10 and CIFAR-100 datasets. As Table~\ref{t:InceptionNet} shows, the 34-layer InceptionNet generated by EVA obtains $83.2\%$ inference accuracy. To guarantee the 128-bit security level, it requires a polynomial degree of $N=131072$, a coefficient modulus $Q$ of 2340 bits, and 39 rescaling operations. A single PPNN inference of EVA-InceptionNet takes 213.2 seconds. InceptionNet is not an optimized mobile neural network architecture for achieving such high inference accuracy. Algorithm~\ref{alg:replace} can find more HE-friendly networks based on InceptionNet. As Table~\ref{t:InceptionNet} shows, Ours-I$9$ is one of the architectures generated by Algorithm~\ref{alg:replace}. It replaces the $9_{th}$ inception module by a convolution layer followed by a batch normalization layer. Ours-I$9$ reduces the 213.2-second inference latency of EVA-InceptionNet to only 173.5 seconds. Ours-I$789$ further reduces the inference latency by replacing more inception modules. However, replacing more inception modules does not necessarily result in latency reduction. For instance, Ours-I$6789$ suffers from longer inference latency than Ours-I$789$. Therefore, Ours-I$789$ is the best HE-friendly mobile neural network architecture found by Algorithm~\ref{alg:replace}. We apply Coefficient Merging to further decrease the number of rescaling operations of Ours-I$789$. As Table~\ref{t:InceptionNet} shows, Ours-I$789$-Merge removes four rescaling operations compared to Ours-I$789$, and thus brings a $37\%$ latency reduction. Meanwhile, Coefficient Merging does not hurt the inference accuracy. By our search algorithm and Coefficient Merging, Ours-I$789$-Merge reduces the inference latency by $61.2\%$, and improves the inference accuracy by $0.5\%$, over the SOTA InceptionNet compiled by EVA. \section{Conclusion} SOTA PPNNs adopt mobile neural network architectures to shorten inference latency and to maintain competitive inference accuracy. We identify that na\"ively applying a mobile neural network architecture to a PPNN increases the multiplicative depth of the entire PPNN, and thus does not necessarily reduce the inference latency. In this paper, we propose HEMET, including a simple, greedy HE-friendly network architecture search algorithm and Coefficient Merging, to reduce the number of HE operations in a PPNN without increasing the multiplicative depth of the PPNN.
Our experimental results show that, compared to SOTA PPNNs, the PPNNs generated by HEMET reduce the inference latency by $59.3\%\sim 61.2\%$, and improve the inference accuracy by $0.4\%\sim 0.5\%$. \section*{Acknowledgements} The authors would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work was partially supported by the National Science Foundation (NSF) through awards CCF-1908992 and CCF-1909509.
\section{Introduction}\label{sctn:introduction} The space $\mathcal{M}_k(N,\chi)$ of modular forms of level $\Gamma_0(N)$, weight $k$ and nebentypus $\chi$ splits into the direct sum of the Eisenstein subspace $\mathcal{E}_k(N,\chi)$ and the space of cusp forms $\mathcal{S}_k(N,\chi)$. It is straightforward to compute Fourier expansions and Hecke eigenforms in the Eisenstein subspace, but the space of cusp forms is far more mysterious, and any method of generating cusp forms is therefore of great interest. In this article we examine one of the simplest methods of generating cusp forms: What is the subspace of $\mathcal{S}_k(N,\chi)$ generated by (the cuspidal projection of) products of Eisenstein series of lower weight?\\ For $N=1$ the answer to this question is very well-known: the graded ring\begin{footnote}{When $\chi = \mathbf{1}_N$ is the principal character modulo $N$ we write $\mathcal{M}_k(N)$ for $\mathcal{M}_k(N, \mathbf{1}_N)$.}\end{footnote} $\oplus_{k \geq 0} \mathcal{M}_k(1)$ is a polynomial ring with two generators, one in degree four and one in degree six, corresponding to the Eisenstein series $E_4$ and $E_6$. This means that every cusp form of level $N=1$ is a linear combination of products of Eisenstein series. However the number of products required to form the monomials in $E_4$ and $E_6$ for these linear combinations grows linearly with the weight $k$, which means these monomials are rather complicated. It is therefore natural to ask whether one can have simpler products at the expense of taking more Eisenstein series. Pushing this to the extreme we are led to ask: What is the subspace of $\mathcal{M}_k(N, \chi)$ generated by Eisenstein series and products of \textit{two} Eisenstein series?\\ Using the Rankin--Selberg method (as observed in \cite{Zagier1977} \S5) one can show that, for $k \geq 8$, $\mathcal{M}_k(1)$ can be generated by the products $E_l E_{k-l}$ for $4 \leq l \leq k-4$. Similar statements are known to hold for $\mathcal{M}_k(p)$ for $p$ prime and $k \geq 4$ (see Imamo\u{g}lu--Kohnen \cite{ImamogluKohnen2005} for $p=2$, Kohnen--Martin \cite{KohnenMartin2008} for $p>2$). The most complete result in this direction was found by Borisov--Gunnells in \cite{BorisovGunnells2001toricvarieties}, \cite{BorisovGunnells2001}, and \cite{BorisovGunnells2003}. They show that for weights greater than two and any level $N\geq 1$ the whole space $\mathcal{M}_k(\Gamma_1(N))$ is generated by $\mathcal{E}_k(\Gamma_1(N))$ and products of two \textit{toric} Eisenstein series $\tilde{s}_{a/N}^{(k)}$ for $a\in\{0,\ldots,N-1\}$, while for $k=2$ one only obtains a subspace, $\mathcal{S}_{2,\mathrm{rk}=0}(N)+\mathcal{E}_2(N,\chi)$, where $\mathcal{S}_{2,\mathrm{rk}=0}(N)$ is defined below. The main application we want to present is the calculation of Fourier expansions at arbitrary cusps. While the toric Eisenstein series of Borisov--Gunnells have remarkably simple rational Fourier expansions at $\infty$, the Fourier expansions at other cusps are harder to obtain.
This led us to consider instead the well-studied Eisenstein series \begin{equation}\label{eqn:eis-series-def} E_{l}^{\phi,\psi}(z)=e_l^{\phi,\psi} + 2\sum_{n\geq 1} \sigma_{l-1,\phi,\psi}(n)q^n\in\mathcal{M}_l(M,\phi\psi), \end{equation} where $q = e^{2\pi i z}$, $\phi$ and $\psi$ are primitive characters of level $M_1$ and $M_2$, $M_1M_2=M\mid N$, $\sigma_{l-1, \phi, \psi}(n) = \sum_{d \mid n}\phi(n/d)\psi(d)d^{l-1}$, and the constant term $e_{l}^{\phi, \psi}$ (either zero or a value of a Dirichlet $L$-function) is given in Section \ref{sctn:generating-cusp-forms}. The advantage of working directly with the Eisenstein series in \eqref{eqn:eis-series-def} is that their Fourier expansions at cusps other than $\infty$ are comparatively easy to obtain and were explicitly calculated by Weisinger \cite{Weisinger1977} (we use a corrected version by Cohen \cite{Cohen2017unpublished}). Before we describe our results we mention a different, rather general recent result by Raum \cite{Raum2016}: Let $k \geq 8$ be an integer, let $\rho$ be a representation of $\SL_2(\Z)$ on a complex vector space $V$ such that $\ker(\rho)$ contains a congruence subgroup, and define $\mathcal{M}_k(\rho)$ to be the space of $V$-valued functions transforming as modular forms for the automorphy factor $\gamma \mapsto (cz + d)^{-k} \rho(\gamma^{-1})$. Then \begin{equation}\label{eqn:raums-generators} \mathcal{M}_k(\rho) = \mathcal{E}_k(\rho)+\text{span}_{\phi:\rho_M\otimes\rho_{M'}\rightarrow \rho} \left(T_M E_l\otimes T_{M'}E_{k-l}\right), \end{equation} where $4 \leq l \leq k-4$, $\rho_M$ is the permutation representation on $\Gamma_0(M) \backslash \SL_2(\Z)$, the $E_k$ are corresponding vector-valued Eisenstein series, and the $T_M$ are certain natural vector-valued Hecke operators. In order to state our main theorem in the case when the character $\chi$ is trivial, let us first define the space of products more precisely. Let $B_d$ be the lifting operator that associates to a modular form $f$ of weight $k$ the form $f|B_d(\tau) = d^{\frac{k}{2}}f(d\tau)$. Let $k \geq 2$ be even and fix a positive integer $N$. We then define $\mathcal{Q}_k(N) \subset \mathcal{M}_k(N)$ to be the subspace generated by all products of the form \[ E_{l}^{\phi, \psi}|B_{d_1d}\cdot E_{k-l}^{\overline{\phi}, \overline{\psi}} | B_{d_2d} \] that lie in $\mathcal{M}_k(N)$, with the additional condition that $d_1 M_1$ divides the squarefree part of $N$. In other words, $l\in\{1,\ldots,k-1\}$, $\phi$ and $\psi$ are primitive characters modulo $M_1$ and $M_2$ with $\phi \psi(-1) = (-1)^l$, and $d_1,d_2,d$ are integers such that $d_1 M_1 d_2 M_2d \mid N$. We only need to require $(\phi, \psi, l) \neq (\mathbf{1}, \mathbf{1}, 2), (\mathbf{1}, \mathbf{1}, k-2)$, where $\textbf{1}$ is the trivial character, since these choices do not produce modular forms. The additional condition on $d_1 M_1$ implies that $(d_1 M_1, d_2 M_2 d) = 1$. Our main result is: \begin{thm}\label{intro:prods-full-space} Let $k \geq 4$ be even. Let $N=p^a q^b N'$ where $p$ and $q$ are primes, $a, b \in \Z_{\geq 0}$, and $N'$ is squarefree. Then the restriction of the cuspidal projection to $\mathcal{Q}_k(N)$ is surjective, i.e. \[ \mathcal{M}_k(N) = \mathcal{Q}_k(N)+\mathcal{E}_k(N). \] \end{thm} In particular, if $N=p^a q^b$ we only need $\mathcal{E}_k(N)$ and products of the form $E_{l}^{\textbf{1}, \psi}|B_{d}\cdot E_{k-l}^{\textbf{1}, \overline{\psi}} | B_{d_2d}$ to generate $\mathcal{M}_k(N)$.
This is a minor improvement to \cite{BorisovGunnells2001toricvarieties}, \cite{BorisovGunnells2001}, and \cite{BorisovGunnells2003}, from which one can deduce that $\mathcal{E}_k(N)$ and products of the form $E_{l}^{\textbf{1}, \psi}|B_{d_1}\cdot E_{k-l}^{\textbf{1},\overline{\psi}} | B_{d_2}$ generate $\mathcal{M}_k(N)$. An advantage of our proof is that it does not use the theory of toric modular forms, from which Borisov--Gunnells draw a powerful Hecke stability result. Instead we use a vanishing criterion for cusp forms with many vanishing $L$-values, Theorem \ref{thm:eichler-shimura-paqb}, which could be of independent interest. The case of weight $2$ is different: indeed, one sees immediately from the Rankin--Selberg method that the products of Eisenstein series are orthogonal to every newform $f$ with vanishing central $L$-value, i.e. $L(f,1)=0$. Accordingly we define the space $\mathcal{S}_{2,\mathrm{rk}=0}(N)$ to be generated by newforms and lifts of newforms with non-zero central $L$-value. We obtain the analogue of Theorem \ref{intro:prods-full-space} subject to this constraint. \begin{thm}\label{intro:prods-full-space-k=2} Let $N$ and $\mathcal{Q}_2(N)$ be as in Theorem \ref{intro:prods-full-space}. Then \[ \mathcal{S}_{2,\mathrm{rk}=0}(N) \oplus \mathcal{E}_2(N)= \mathcal{Q}_2(N)+\mathcal{E}_2(N). \] \end{thm} This phenomenon of isolating $\mathcal{S}_{2,\mathrm{rk}=0}(N)$ is also observed by Borisov--Gunnells \cite{BorisovGunnells2001}. \\ We develop much of the theory to allow for more general level than $N=p^a q^b N'$ and will discuss this restriction on the level $N$ below. We also point out how similar results can be obtained when the character $\chi$ is non-trivial by proving the analogue of Theorem \ref{intro:prods-full-space-k=2} for $\mathcal{S}_2(p, \chi)$, see Theorem \ref{thm:prods-gen-weight-2-prime}. Before we give an idea of the proof of the theorems we give a few explicit examples, and highlight some of the applications of such an expression for a newform: \begin{enumerate} \item $N=1,k=12$: The most well-known example is of course the discriminant modular form, which in our normalisation becomes \[ \Delta = \frac{50}{3}E_4^{\textbf{1},\textbf{1}}E_8^{\textbf{1},\textbf{1}}-\frac{147}{4}(E_6^{\textbf{1},\textbf{1}})^2. \] \item\label{intro:example-2} $N=11, k=2$: Let $\phi$ be the character modulo $11$ that maps $2$ to $\zeta_{10}$ and let $f_{11} \in \mathcal{S}_2(11)$ be the unique newform in this space. Then \[ f_{11} = \left(\frac{1}{\sqrt{5}} - \frac{1}{4}\right)E_1^{\textbf{1},\phi}E_1^{\textbf{1},\overline{\phi}}-\left(\frac{1}{\sqrt{5}} + \frac{1}{4}\right)E_1^{\textbf{1},\phi^3}E_1^{\textbf{1},\overline{\phi}^3}. \] \item\label{intro:example-3} $N=32, k=2$: Let $\chi_4$ be the primitive character modulo $4$ and let $f_{32}= q - 2q^5 -3q^9 + 6q^{13} +2q^{17}+ O(q^{20}) \in \mathcal{S}_2(32)$ be the unique newform in this space. Then \begin{equation*} f_{32} = -\frac12 E_1^{\textbf{1},\chi_4}\cdot E_1^{\textbf{1},\chi_4}|B_4 + \frac{1}{\sqrt{2}}E_1^{\textbf{1},\chi_4}\cdot E_1^{\textbf{1},\chi_4}|B_8 +\frac{1}{2\sqrt{2}}E_1^{\textbf{1},\chi_4}|B_2\cdot E_1^{\textbf{1},\chi_4}|B_4 -\frac{1}{2}E_1^{\textbf{1},\chi_4} |B_2\cdot E_1^{\textbf{1},\chi_4}|B_8. \end{equation*} \end{enumerate} An expression of a modular form $f$ as a sum of products of Eisenstein series provides a way of calculating the Fourier expansion of $f$ at $\infty$; the elementary computation underlying this is illustrated by the sketch below.
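For illustration, the coefficients in \eqref{eqn:eis-series-def} are elementary to compute directly from the divisor sums $\sigma_{l-1,\phi,\psi}$. The following plain-Python sketch (not our Sage implementation \cite{Github_mneururer}) does this for $E_1^{\mathbf{1},\chi_4}$ from example \eqref{intro:example-3}; the constant term $1/2$ used here is an assumption on our part, the general formula being given only in Section~\ref{sctn:generating-cusp-forms}.
\begin{verbatim}
from fractions import Fraction

def chi4(n):
    # The primitive Dirichlet character modulo 4.
    return {0: 0, 1: 1, 2: 0, 3: -1}[n % 4]

one = lambda n: 1  # the trivial character

def sigma(n, l, phi, psi):
    # sigma_{l-1,phi,psi}(n) = sum over d | n of phi(n/d) psi(d) d^(l-1)
    return sum(phi(n // d) * psi(d) * d ** (l - 1)
               for d in range(1, n + 1) if n % d == 0)

def eis_coeffs(l, phi, psi, const, nmax):
    # Coefficients of E_l^{phi,psi} = const + 2 sum_n sigma(n) q^n.
    return [const] + [2 * sigma(n, l, phi, psi) for n in range(1, nmax)]

# E_1^{1,chi4}; constant term L(0, chi4) = 1/2 (assumed here).
print(eis_coeffs(1, one, chi4, Fraction(1, 2), 10))
# -> [Fraction(1, 2), 2, 2, 0, 2, 4, 0, 0, 2, 2]
\end{verbatim}
Multiplying two such truncated $q$-expansions and applying $B_d$, which for weight one sends $\sum a_n q^n$ to $d^{1/2}\sum a_n q^{dn}$, produces the products appearing in example \eqref{intro:example-3}.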
Once such an expression for $f$ is obtained, $O(n \log(n))$ operations are required to compute $n$ Fourier coefficients of $f$, which is theoretically best possible,\begin{footnote}{We thank A. Booker for pointing this out.}\end{footnote} and also appears to work well in practice.\begin{footnote}{See \url{http://mathoverflow.net/q/221781/} for an example computed by D. Loeffler.}\end{footnote} \\ Moreover, as mentioned in \cite{Raum2016}, one can use such expressions to compute Fourier expansions at \textit{any} cusp of $\Gamma_0(N)$. For squarefree $N$ one can deduce the Fourier expansion of a modular form $f\in\mathcal{M}_k(N)$ at any cusp from the one at $\infty$ by using Atkin--Lehner operators. However for general $N$ the Fourier expansions at certain cusps are difficult to access yet carry important information. We have implemented an algorithm in Sage \cite{sage} that calculates Fourier expansions at any cusp of $\Gamma_1(N)$ based on a representation of the modular form as a linear combination of products of Eisenstein series. For example we can use the expansion in example \eqref{intro:example-3} to obtain the Fourier expansion of $f_{32}$ at the cusp $\frac{1}{8}$. For that purpose we first choose $\gamma = \sabcd{1}{0}{8}{1}\in\SL_2(\Z)$ that sends $\infty$ to $1/8$. The expansion is then given by \[ f_{32}|\gamma = -iq + 2iq^5 + 3iq^9 - 6iq^{13} - 2iq^{17} + O(q^{20})=-if_{32}. \] Using the Fourier expansions of a newform $f$ at cusps we can also compute the root number of $f$ and the eigenvalues of Atkin--Lehner operators. This furnishes another example of an important datum which cannot immediately be read from the Fourier expansion of $f$ at $\infty$ when the level is not squarefree. In Section \ref{scn:Fourier expansions} we give several more examples of Fourier expansions of modular forms of levels that involve high prime powers, where we also calculate Atkin--Lehner eigenvalues. The algorithms we use are available at \cite{Github_mneururer}. \\ In future work \cite{DicksonNeururer17} we plan to use this method of computing Fourier expansions at cusps other than $\infty$ to study the local representation-theoretic aspects of newforms. The connection between Fourier expansions at cusps and local components of newforms is explained in \cite{Brunault2016} and \cite{CorbettSaha2017}. In brief, the Fourier expansions at cusps of a newform $f$ can be used to obtain values of the new vector in the local Whittaker model. From this one can extract root numbers of twists of the local representation, and also determine the local component of $f$.\\ Let us now give a sketch of the proof of Theorem \ref{intro:prods-full-space} (the proof of Theorem \ref{intro:prods-full-space-k=2} requires minor modifications). Note that in Section \ref{sctn:generating-cusp-forms} we argue for the most part with a space $P_k(N)$ instead of $\mathcal{Q}_k(N)$, and in \S \ref{sctn:new-part} we show that $P_k(N)$ has the same projection to $\mathcal{S}_k^{\text{new}}(N)$ as $\mathcal{Q}_k(N)$. Theorem \ref{intro:prods-full-space} then follows by induction from the fact that the projection of $P_k(N)$ to $\mathcal{S}_k^{\text{new}}(N)$ equals $\mathcal{S}_k^{\text{new}}(N)$, which is the statement of Theorem \ref{thm:prods}. For the proof of Theorem \ref{thm:prods} suppose, to the contrary, that the projection of $P_k(N)$ cuts out a proper subspace of $\mathcal{S}_k^{\text{new}}(N)$. We may pick a non-zero $G \in \mathcal{S}_k^{\text{new}}(N)$, orthogonal to $P_k(N)$.
A standard application of the Rankin--Selberg method (\S \ref{sctn:generating-cusp-forms}) allows one to see that, if $G$ were a \textit{newform}, then all the critical $L$-values $L(G_\psi | W_S^{NM}, j)$ would have to vanish for all primitive characters $\psi$ of conductor dividing $N$ and all sets $S$ of prime divisors of $N'$ (except for some cases when $j=2, k-2$, where the technical difficulties coming from weight two Eisenstein series enter). Here $G_\psi$ is the twist of $G$ by $\psi$. However, since our $P_k(N)$ might not be closed under the action of the Hecke operators, we cannot assume that $G$ is a newform. On the other hand, our space $P_k(N)$ will, by construction, be closed under the action of the Atkin--Lehner operators $W_\ell^N$ for $\ell|N'$, and so we can at least assume that $G$ is an eigenfunction of all these operators. Moreover we can modify $G$ so that the orthogonality of $G$ to $P_k(N)$ is equivalent to the vanishing of many twisted $L$-values of $G$. We prove a general statement, possibly of independent interest, that if $G$ is a cusp form which is an eigenfunction of certain Atkin--Lehner operators and for which sufficiently many twisted $L$-values vanish, then $G=0$. The argument proceeds via modular symbols, and extends a result of Merel who proves a similar vanishing criterion in the case when $G$ is a newform.\\ The reason the assumption $N = p^a q^b N'$ enters is that we want to be in a situation where, if $f$ is a newform (or a sum of newforms with the same $W_\ell^N$-eigenvalue for all $\ell \in T$) and $\alpha$ is a primitive character modulo $M \mid N$, then the $W_S^{NM}$ (pseudo-)eigenvalues of the twists $f_\alpha$ for each set $S$ of prime divisors of $N/M$ are determined by those of $f$. With our methods, this condition arises naturally in the proof of Theorem \ref{intro:prods-full-space}, and our argument would extend immediately to any situation where it holds. When $N$ is squarefree this condition is automatic, since the twisting and Atkin--Lehner operators must commute (cf. Proposition \ref{prop:twists-commute-with-al-and-hecke}). When $N$ is not squarefree this is a much more difficult question, and it seems unlikely that a purely local argument will work. Indeed our extension to level $N = p^a q^b N'$ stems from a rather different argument using the modular symbols relations, which allows us to avoid this condition altogether when the number of prime factors of $N$ is restricted as above.\\ Let us finish by remarking that one can easily compute (see \S\ref{sctn:generating-cusp-forms}) a trivial upper bound $p_k(N)$ for the number of generators of $Q_k(N)$. We compare this to the dimension $\dim \mathcal{S}^{\text{new}}_k(N)$ in the ``squarefree'' and ``prime-power'' level aspects. In both cases the result is that $p_k(N)$ grows more quickly than $\dim \mathcal{S}^{\text{new}}_k(N)$, although not by much, particularly in the prime-power case. When $k=2$ the problem of improving the upper bound $p_k(N)$ is interesting because of potential applications to the conjecture of Brumer on the number of newforms of level $N$ for which $L(f, 1) \neq 0$.\\ \section{Preliminaries}\label{sctn:preliminaries} Let $N, k \in \Z$ be positive integers, and let $\chi$ be a Dirichlet character modulo $N$. We keep the notations from the introduction for spaces of modular forms; we tacitly assume that $\chi(-1) = (-1)^k$ since otherwise these spaces are zero (indeed, with the normalisation of the slash operator below one has $f|_k(-I) = (-1)^k f$, while modularity with character $\chi$ gives $f|_k(-I) = \chi(-1)f$).
Our normalisation of the slash operator is \[\left(f |_k \begin{pmatrix} a & b \\ c & d \end{pmatrix}\right)(z) = \frac{(ad-bc)^{k/2}}{(cz+d)^k} f\left(\frac{az + b}{cz+d}\right),\] so that scalar matrices act trivially. We write $|$ for $|_k$ since the weight $k$ will be clear from the context.\\ We denote by $\mathbf{1}_N$ the principal character modulo $N$, which satisfies $\mathbf{1}_N(n) = 1$ for $(n, N) = 1$, and $\mathbf{1}_N(n) = 0$ otherwise. We write $\mathbf{1}$ for the trivial character, which satisfies $\mathbf{1}(n) = 1$ for all $n$. Any character $\chi$ modulo $N$ can be factorised as a product $\chi = \prod_{p|N} \chi_p$ over the prime divisors $p$ of $N$, where $\chi_p$ is a uniquely determined character modulo $p^{v_p(N)}$. If $S$ is a set of prime divisors of $N$, then we write $\chi_S=\prod_{p\in S}\chi_p$ for the $S$-part of $\chi$. For a set $S$ of prime divisors of $N$ and a divisor $M$ of $N$, we write $M_S$ for the $S$-part of $M$, i.e. $\prod_{p\in S}p^{v_p(M)}$. We will also use the notation $S_M=\{p\in S; p\mid M\}$ and $\overline{S} = \{p \mid N; p\text{ prime}\} \setminus S$ (we will clarify the dependence on $N$ when confusion may arise). With this notation we have $M_SM_{\overline{S}}=M$ for any divisor $M|N$, in particular $N_SN_{\overline{S}}=N$.\\ For primes $p$ with $(p, N)=1$, we write $T_p$ for the Hecke operators on $\mathcal{M}_k(N, \chi)$; these are extended multiplicatively to $T_n$ for $(n, N)=1$. When $q \mid N$ we write $U_q$ for the Hecke operators extended from the operators $U_p$ (where $p$ is a prime divisor of $N$); the normalisation is \[f | U_p = p^{k/2-1} \sum_{j=0}^{p-1} f | \begin{pmatrix} 1 & j \\ 0 & p\end{pmatrix}.\] For a set of prime divisors $S$ of $N$ we define the Atkin--Lehner involution \[ W_S^N=\begin{pmatrix} N_Sx & y \\ Nz & N_Sw \end{pmatrix}\in\text{M}_2(\Z), \] where $y\equiv 1\pmod {N_S}$, $x\equiv 1\pmod {N_{\overline{S}}}$ and $\det W_S^N=N_S$. If $M$ is a divisor of $N$, then we sometimes use the notation $W_M^N$ for $W_S^N$ with $S = \{p \mid M\}$. We simply write $W_N$ for $W_N^N=\left(\begin{smallmatrix} 0 & 1 \\ -N & 0\end{smallmatrix}\right)$. The following properties of $W_S^N$ are well-known (see for example \cite[\S 1]{AtkinLi78}): \begin{prop}\label{prop:al} \begin{enumerate}[(i)] \item Let $S$ be a set of prime divisors of $N$. If \[\begin{aligned} M= \begin{pmatrix} N_Sx' & y' \\ Nz' & N_Sw' \end{pmatrix} \end{aligned}\] is any matrix with $x',y',z',w'\in\Z$ of determinant $N_S$ then \begin{align}\label{eq:other-atkin-op} f| M = \overline{\chi}_S(y')\overline{\chi}_{\overline{S}}(x')f| W^N_S. \end{align} In particular, $W_S^N$ does not depend on the choice of $x, y, z, w$. \item Let $f\in \mathcal{M}_k(N,\chi)$. Then \[ f| W_S^N \in \mathcal{M}_k(N,\overline{\chi}_S\chi_{\overline{S}}), \] and $W_S^N$ preserves the subspace of cusp forms. \item If $S$ and $S'$ are disjoint sets of prime divisors of $N$, then \[ f|W_{S\cup S'}^N = \chi_{S'}(N_S)(f|W_S^N)|W_{S'}^{N}. \] We also have \begin{align}\label{eq:al-twice} f | W_S^N | W_S^N = \chi_S(-1)\overline{\chi}_{\overline{S}}(N_S)f. \end{align} \item The adjoint of $W_S^N$ on $\mathcal{M}_k(N,\chi)$ with respect to the Petersson inner product is given by \[ W_S^{N,*}=\chi_S(-1)\chi_{\overline{S}}(N_S)W_S^N. \] \item Let $p$ be a prime divisor of $N$ with $p \notin S$. Then \[ f|U_p|W_S^N = \chi_S(p)f|W_S^N|U_p.
\] \end{enumerate}\end{prop} By a newform, we mean an element $f \in \mathcal{S}_k(N, \chi)$ which is an eigenfunction of all Hecke operators, normalised to have first Fourier coefficient equal to one. We write $\mathcal{S}_k^{\text{new}}(N, \chi)$ for the subspace of $\mathcal{S}_k(N, \chi)$ generated by the newforms, so $f \in \mathcal{S}_k^{\text{new}}(N, \chi)$ is a linear combination of newforms; we refer to these as elements of the new subspace. If $\chi$ can be defined modulo $N/q$, then any newform in $\mathcal{S}_k(N,\chi)$ is an eigenfunction of the operator $W_q^N$; see Proposition 13.3.11 in \cite{CohenStromberg2017}.\\ Let $q$ be a prime divisor of $N$. On the new subspace there is a close connection between the Hecke operator $U_q$ and the Atkin--Lehner operator $W_q^N$: \begin{prop}[{\cite[Proposition 13.3.14]{CohenStromberg2017}}]\label{prop:al-hecke-connection} Let $\chi$ be a Dirichlet character modulo $N$ and suppose it is induced by a character modulo $N/q$ (or equivalently that $\chi_q$ is principal). Let $f$ be a newform in $\mathcal{S}_k(N,\chi)$ with $q$-th Fourier coefficient $a_q$ and Atkin--Lehner eigenvalue $\lambda_q(f)$. \begin{itemize} \item If $q^2\mid N$ then $a_q=0$. \item If $q \mid N$ but $q^2\nmid N$ then $\lambda_q(f) = -q^{1-\frac{k}{2}}a_q$ and hence we have the equality of operators \[ W_q^N = -q^{-\frac{k}{2}+1}U_q \] on $\mathcal{S}_k^{\text{new}}(N, \chi)$. \end{itemize} \end{prop} The third class of operators that play a major role for us are various twisting operators. Let $f \in \mathcal{S}_k(N, \chi)$ with Fourier expansion $f(z) = \sum_{n \geq 1} a_n e(n z)$, let $\alpha$ be a Dirichlet character modulo $M$, and define \[f_\alpha(z) = \sum_{n \geq 1} a_n \alpha(n) e(nz).\] With $\alpha, f$ as above, define also \[S_\alpha (f) = \sum_{a \bmod M} \overline{\alpha(a)} f|_k \begin{pmatrix} 1 & a/M \\ 0 & 1\end{pmatrix}.\] Note that if $\alpha$ is primitive modulo $M$ we have \begin{equation}\label{eqn:twists-for-prim-char}S_\alpha (f) = G(\overline{\alpha}) f_\alpha,\end{equation} where $G(\overline{\alpha})$ is the Gauss sum of $\overline{\alpha}$. For any $z \in \mathfrak{H}$ we can view the function $n' \mapsto \left(f|_k \left(\begin{smallmatrix} 1 & n'/N' \\ 0 & 1 \end{smallmatrix}\right)\right)(z)$ as a function $F : (\Z/N'\Z)^\times \to \C$, and we see by Fourier inversion that \begin{equation}\label{eqn:fourier-inversion} f|_k \begin{pmatrix} 1 & n'/N' \\ 0 & 1 \end{pmatrix} = \sum_{\alpha \bmod N'}\frac{\alpha(n')}{\varphi(N')} S_\alpha(f),\end{equation} the sum being over all Dirichlet characters modulo $N'$.\\ Finally we state some standard facts about the commutation relations for the above operators in the cases we will need them. These can be proved by direct computation (see also \cite{AtkinLi78} \S 3). \begin{prop}\label{prop:twists-commute-with-al-and-hecke} Let $N \in \N$, let $f \in \mathcal{M}_k(N, \chi)$, let $\alpha$ be a Dirichlet character modulo $N' \mid N$. Then \[S_{\alpha}(f) \in \mathcal{M}_k(NN', \chi \alpha^2).\] Let $q$ be any divisor of $N$ that is coprime to $N'$; then \[S_\alpha(f) | U_q = \alpha(q) S_\alpha(f | U_q).\] Similarly, if $S$ is a set of prime divisors of $N$ such that $N_S$ and $N'$ are coprime, then \[S_\alpha(f) | W_S^{NN'} = \overline{\alpha}(N_S) S_\alpha(f | W_S^N).\] \end{prop} \section{A vanishing criterion for cusp forms}\label{sctn:eichler--shimura} In this section we prove several criteria that relate the vanishing of twisted $L$-values of a modular form $f$ to the vanishing of $f$.
We recall some facts from the theory of modular symbols; for details see \cite{Merel1994} or \cite{Stein2007} \S 8. Let $k$ be an integer $\geq 2$. The space $\mathbb{M}_k(\Gamma_1(N))$ of modular symbols is generated by the Manin symbols $[P, g]$ where $P$ is a homogeneous polynomial in $\C[X,Y]$ of degree $k-2$, and $g \in \SL_2(\Z)$. In fact the Manin symbol $[P, g]$ depends only on $P$ and the coset $\Gamma_1(N)g$. By mapping a matrix $g$ to its bottom row modulo $N$, the cosets of $\Gamma_1(N)\backslash\SL_2(\Z)$ are in bijection with the set \[E_N = \{(u,v) \in (\Z/N\Z)^2;\: \gcd(u, v, N) = 1\};\] see for example \cite{Stein2007} Proposition 8.6. Note that $\gcd(u, v, N)$ is well-defined, i.e. does not depend on the choice of representative of the residue classes $u$ and $v$. We write $[P, (u, v)] = [P, g]$ for any $g \in \SL_2(\Z)$ with bottom row congruent to $(u, v)$ modulo $N$. For $f\in\mathcal{S}_k(\Gamma_1(N))$ let $\xi_f$ be the map \[ \xi_f([P,g]) = \int_{g0}^{g\infty} f(z) (gP)(z,1)dz, \] with $g \in \SL_2(\Z)$ acting on $P\in\C[X, Y]$ by $(gP)(X,Y)= P(g^{-1}(X,Y)^T)$.\\ Using the generators $[X^j Y^{k-2-j}, (u, v)]$, where $0 \leq j \leq k-2$ and $(u, v) \in E_N$, we define $\xi_f(j; u, v) = \xi_f([X^j Y^{k-2-j}, (u, v)])$. The Manin symbol relations from \cite[Theorem 8.4]{Stein2007} translate to \begin{align} \label{eq:manin-symb-rel1} &\xi_f(j; u, v) + (-1)^j \xi_f(k-2-j; v, -u) = 0,\\ \label{eq:manin-symb-rel2} &\xi_f(j; u, v) + \sum_{i=0}^{k-2-j}(-1)^{k-2-i} \binom{k-2-j}{i} \xi_f(i; v, -u-v) \nonumber \\ &\qquad + \sum_{i=k-2-j}^{k-2} (-1)^i \binom{j}{i-k+2+j}\xi_f(i; -u-v, u) = 0,\\ \label{eq:manin-symb-rel3} &\xi_f(j; u, v) - (-1)^{k-2} \xi_f(j; -u, -v)=0. \end{align} There is an involution $\iota$ of $\mathbb{M}_k(\Gamma_1(N))$, namely $\iota [X^j Y^{k-2-j}, (u, v)] = (-1)^{j+1}[X^j Y^{k-2-j}, (-u, v)]$. Accordingly, we define \[\xi_f^{\pm}(j; u, v) := \frac{\xi_f(j; u, v) \pm (-1)^{j+1} \xi_f(j; -u, v)}{2}.\] The relations \eqref{eq:manin-symb-rel1}, \eqref{eq:manin-symb-rel2}, and \eqref{eq:manin-symb-rel3} hold because of a relation on the underlying Manin symbols, see \cite{Stein2007} \S 8.2.1. One can apply $\iota$ to these relations for the Manin symbols to obtain another set of relations. Applying $\xi_f$ and adding or subtracting as appropriate, we see that \eqref{eq:manin-symb-rel1}, \eqref{eq:manin-symb-rel2}, and \eqref{eq:manin-symb-rel3} hold with $\xi_f$ replaced by $\xi_f^{\pm}$.\\ By \cite{Merel1994} Proposition 8 the maps $f\mapsto \xi_f^+$ and $f\mapsto \xi_f^-$ are injective, so $f$ vanishes if all $\xi_f^{\pm}(j; u, v)$ do. Note also that the $\xi_f(j; u, v)$ are related to critical values of $L$-functions: indeed, taking $g = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\in \SL_2(\Z)$ with $(c, d) \equiv (u, v) \bmod N$ we have \begin{equation}\label{eqn:xif-as-l-value}\xi_f(j; u, v) = \frac{j!}{(-2\pi i)^{j+1}} L(f|g, j+1),\end{equation} where $L(f|g,j+1)$ is defined as follows. The $L$-series of a cusp form $F=\sum_{n\in\Q_{>0}} b_n e^{2\pi i n \tau}$ for a congruence subgroup is \[ L(F,s) = \sum_{n\in\Q_{>0}} \frac{b_n}{n^s}. \] It converges for $\Re s\gg 0$ and can be extended analytically to the whole complex plane. We also denote this extended function, the $L$-function of $F$, by $L(F,s)$. The main goal of this section is to prove Theorem \ref{thm:eichler-shimura-triv-char}, which is a vanishing criterion for $f$ in terms of vanishing of certain twisted $L$-functions.
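As a basic instance of \eqref{eqn:xif-as-l-value} (a sanity check, nothing more): for $(u, v) = (0, 1)$ one may take $g$ to be the identity matrix, and then \[\xi_f(j; 0, 1) = \frac{j!}{(-2\pi i)^{j+1}} L(f, j+1), \qquad 0 \leq j \leq k-2,\] so the Manin symbols supported on $(0, 1)$ encode exactly the critical $L$-values of $f$ itself; this special case reappears (applied to $f|W_N$) in the proof of Proposition \ref{prop:eichler-shimura-trivial-char} below.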
The result is in the spirit of Corollaire 2 of \cite{Merel2009}, although we require some modifications since we do not assume that $f$ is a newform, or even an eigenfunction of almost all Hecke operators. First we recall an identity from the proof of Proposition 6 in \cite{Merel2009}: \begin{lem}\label{lem:merel-coset-reps} Let $N \in \N$, let $(u, v) \in E_N$, let $S$ denote the set of prime divisors of $N$ which divide $u$, let $\overline{S}$ denote the remaining prime divisors of $N$, and let $N'$ be the order of $uv$ in $\Z/N\Z$. Let $g = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \SL_2(\Z)$ be such that $(c, d) \equiv (u, v) \bmod N$. Then \[\Gamma_1(N)g = \Gamma_1(N) \begin{pmatrix} 0 & -1 \\ N & 0 \end{pmatrix} \begin{pmatrix} 1 & \frac{n}{N} \\ 0 & 1 \end{pmatrix} \begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} N N'_S & 0 \\ 0 & N_S\end{pmatrix}^{-1},\] where $n$ is chosen so that $n \equiv uv \bmod N_{\overline{S}}$ and $n \equiv -uv \bmod N_S$, and $\left(\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right) \in \Z^{2 \times 2}$ satisfies $AD - BC = N_S N'_S$, $A \equiv uN'_S \bmod N_{\overline{S}}$, $B \equiv v/N_{\overline{S}} \bmod N_S$, and $N_S N'_S \mid A$, $N_S N'_S \mid D$, $N N' \mid C$, $N_{\overline{S}} N'_{\overline{S}} \mid B$. \end{lem} The existence of $A, B, C, D$ follows from the Chinese Remainder Theorem. We omit the proof of this identity, which is simply a matter of checking that the matrix on the right hand side is integral with determinant one and with bottom row congruent to $(u, v)$ modulo $N$ (whence the conditions on $A, B, C, D$).\\ Let $N$ be a positive integer and let $T$ be the set of prime divisors $p$ of $N$ with $v_p(N)=1$, so \[N=\prod_{p\in T} p \prod_{p\in\overline{T}} p^{v_p(N)},\] where the exponents $v_p(N)$ for $p\in\overline{T}$ are all greater than $1$. \begin{thm}\label{thm:eichler-shimura-triv-char} Let $\epsilon\in\{0,1\}$, $N \in \N$, $k \geq 2$, and let $f \in \mathcal{S}_k^{\text{new}}(N)$ be an eigenfunction of all Atkin--Lehner operators $W_{p}^N$ for $p\in T$, with $T$ as above. Assume that $L(f_{\alpha} | W_S^{NM}, j+1) = 0$ for all primitive characters $\alpha$ modulo $M \mid N$, all $j=0, 1, ..., k-2$ such that $\alpha(-1)=(-1)^{j+\epsilon}$, and all sets $S\subseteq \overline{T}$ of prime divisors $p$ that divide $\frac{N}{M}$. Then $f=0$. \end{thm} \begin{proof} We will present the argument for the case $\epsilon=1$, which uses the function $\xi^+$. The other case, using $\xi^{-}$, is almost identical, the only difference being which characters cancel in \eqref{eq:period-calculation-pm}. We will show that the conditions in the theorem imply $\xi^+_{f|W_N}(j; u, v) = 0$ for all $j = 0, 1, ..., k-2$ and $(u, v) \in E_N$, which in turn implies that $f=0$. Let us therefore fix $(u, v) \in E_N$ and consider $\xi^+_{f|W_N}(j; u, v)$. As in the statement of Lemma \ref{lem:merel-coset-reps}, let $S$ be the set of those prime divisors of $N$ that divide $u$ and write $N'$ for the order of $uv$ in $\Z/N\Z$. Note that every prime in $S$ divides $\frac{N}{N'}$. Choose $g = \left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right) \in \SL_2(\Z)$ such that $(c, d) \equiv (u, v) \bmod N$.
By Lemma \ref{lem:merel-coset-reps} we have \begin{equation}\label{eqn:Gamma_1(N)g-reps} \Gamma_1(N)g = \Gamma_1(N)\begin{pmatrix}0&-1\\ N& 0\end{pmatrix} \begin{pmatrix}1&\frac{n}{N}\\0&1\end{pmatrix} \begin{pmatrix}A&B\\C&D\end{pmatrix}\begin{pmatrix}N N_S' & 0 \\ 0 & N_S\end{pmatrix}^{-1}, \end{equation} with $A, B, C, D$ and $n$ satisfying the conditions of the lemma. Since $f | W_N | W_N = f$, we have \[ f|W_N|g = f | \begin{pmatrix} 1 & \frac{n}{N} \\ 0 & 1 \end{pmatrix} \begin{pmatrix}A&B\\C&D\end{pmatrix}\begin{pmatrix}NN_S' & 0 \\ 0 & N_S\end{pmatrix}^{-1}.\] Now $n \equiv uv \bmod N_{\overline{S}}$ and $n \equiv -uv \bmod N_S$, so $n$ also has order $N'$ modulo $N$. Hence $nN' = n' N$ for some $n'$ which is coprime to $N'$. Writing this as $n/N = n'/N'$ and using \eqref{eqn:fourier-inversion} we get \[f|W_N|g = \sum_{\alpha \bmod N'}\frac{\alpha(n')}{\phi(N')}S_\alpha(f)| \begin{pmatrix}A&B\\C&D\end{pmatrix}\begin{pmatrix}N N'_S & 0 \\ 0 & N_S\end{pmatrix}^{-1}, \] where $\alpha$ varies over all Dirichlet characters modulo $N'$. By Proposition \ref{prop:twists-commute-with-al-and-hecke} we have $S_\alpha(f)\in \mathcal{S}_k(NN',\alpha^2)$, and the conditions of Lemma \ref{lem:merel-coset-reps} together with Proposition \ref{prop:al} give \[S_\alpha(f) | \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \overline{\alpha^2_S}(B) \overline{\alpha^2_{\overline{S}}}\left(\frac{A}{N_S N'_S}\right) S_\alpha(f) | W_S^{NN'}.\] Hence, using \eqref{eqn:xif-as-l-value}, \[\begin{aligned} &\xi_{f|W_N}(j;u,v) \\ &\qquad =\frac{j!}{(-2\pi i)^{j+1}\phi(N')} \sum_{\alpha} \alpha(n')\overline{\alpha^2_S}(B) \overline{\alpha^2_{\overline{S}}}\left(\frac{A}{N_SN'_S}\right) L\left(S_\alpha(f)|W_S^{NN'}|\begin{pmatrix}\frac{1}{N N'_S }&0\\ 0&\frac{1}{N_S}\end{pmatrix},j+1\right)\\ \label{eq:period-calculation} &\qquad = \frac{j! (N_{\overline{S}}N'_S)^{j+1-\frac{k}{2}}}{(-2\pi i)^{j+1}\phi(N')} \sum_{\alpha} \alpha(n')\overline{\alpha^2_S}(B) \overline{\alpha^2_{\overline{S}}}\left(\frac{A}{N_S N'_S}\right) L\left(S_\alpha (f)|W_S^{NN'},j+1\right), \end{aligned}\] where the sum is over all characters modulo $N'$. Here we used that for any cusp form $G$ of weight $k$ for a congruence subgroup, we have $L(G|\left(\begin{smallmatrix}t_1&0\\0&t_2\end{smallmatrix}\right),s) = (t_1/t_2)^{k/2-s}L(G,s)$.\\ To compute $\xi_{f|W_N}(j;-u, v)$ we proceed analogously with $\tilde{g} = \left(\begin{smallmatrix} a & -b \\ -c & d \end{smallmatrix}\right)$, since this has bottom row $(-c, d) \equiv (-u, v) \bmod N$. With $A, B, C, D, n$ as in \eqref{eqn:Gamma_1(N)g-reps} we see that \begin{equation}\label{eqn:Gamma_1(N)tildeg-reps} \Gamma_1(N)\tilde{g} = \Gamma_1(N)\begin{pmatrix}0&-1\\ N& 0\end{pmatrix} \begin{pmatrix}1&-\frac{n}{N}\\0&1\end{pmatrix} \begin{pmatrix}-A&B\\C&-D\end{pmatrix}\begin{pmatrix} N N'_S & 0 \\ 0 & N_S \end{pmatrix}^{-1}.\end{equation} The argument is as above, with $n'$ replaced by $-n'$, and each individual summand in the final expression for $\xi_{f|W_N}(j; u, v)$ changes by a factor of $\alpha(-1)\overline{\alpha_{\overline{S}}^2}(-1) = \alpha(-1)$. From the definition of $\xi^{+}_{f|W_N}$ we then see \begin{align}\label{eq:period-calculation-pm} \xi^{+}_{f|W_N}(j;u,v)=\frac{j! (N_{\overline{S}}N'_S)^{j+1-\frac{k}{2}}}{(-2\pi i)^{j+1} \phi(N')} \sum_{\alpha} \alpha(n')\overline{\alpha^2_S}(B) \overline{\alpha^2_{\overline{S}}}\left(\frac{A}{N_S N'_S}\right) L\left(S_\alpha(f)|W_S^{NN'},j+1\right). 
\end{align} Here the sum is over all characters $\alpha$ modulo $N'$ with $\alpha(-1)= (-1)^{j+1}$.\\ The next step is to relate $S_\alpha(f)$ to the twist by the primitive character underlying $\alpha$. The key to this is the following lemma, which can be proved by a direct computation: \begin{lem}\label{lem:twist-by-non-prim-char} Let $N$ and $k$ be positive integers, let $\chi$ be a Dirichlet character modulo $N$, and let $f \in \mathcal{S}_k(N, \chi)$. Let $N' \in \N$, let $\alpha$ be a character modulo $N'$ with conductor $M$. Assume that $M<N'$, let $p$ be any prime dividing $N'/M$, and let $\beta$ be the character modulo $N'/p$ inducing $\alpha$. Then \[ S_\alpha(f) = p^{1-k/2} S_\beta(f|U_p) | \begin{pmatrix} p& 0 \\ 0 & 1 \end{pmatrix} - \overline{\beta}(p) S_\beta(f).\] \end{lem} In our case $f \in \mathcal{S}_k^{\text{new}}(N)$ is an eigenfunction of each $W_p^N$ for $p\in T$, so by Proposition \ref{prop:al-hecke-connection} it is also an eigenfunction of $U_p$ for each $p \in T$. By the same proposition if $p\in\overline{T}$, i.e., $p^2|N$, then $U_p$ is the zero operator on $\mathcal{S}_k^{\text{new}}(N)$, so $f$ is trivially an eigenfunction of $U_p$ in that case. In both cases we denote the $U_p$-eigenvalue of $f$ by $a_p$. Then Lemma \ref{lem:twist-by-non-prim-char} gives \[S_{\alpha}(f) = p^{1-k/2} a_p S_\beta(f) | \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix} - \overline{\beta}(p) S_\beta(f),\] and so \[L(S_{\alpha}(f) | W_S^{NN'}, j+1) = (p^{-j} a_p - \overline{\beta}(p)) L(S_\beta(f) | W_S^{NN'}, j+1).\] Applying this repeatedly we see that $L(S_\alpha(f) | W_S^{NN'}, j+1)$ is a multiple of $L(S_{\alpha_0}(f) | W_S^{NN'}, j+1)$, where $\alpha_0$ is the primitive character modulo $M \mid N'$ inducing $\alpha$ modulo $N'$. Next note that $S_{\alpha_0}(f)=G(\overline{\alpha_0})f_{\alpha_0} \in \mathcal{S}_k(NM, \alpha_0^2)$. We then use $S_{\alpha_0}(f) | W_S^{NN'} = S_{\alpha_0}(f) | W_S^{NM} | B_d$, where $d = (\frac{N'}{M})_S$. Thus $L(S_\alpha(f) | W_S^{NN'}, j+1)$ is a multiple of $L(f_{\alpha_0} | W_S^{NM}, j+1)$. If $S\subseteq \overline{T}$, then $L(f_{\alpha_0} | W_S^{NM}, j+1)=0$, as this is one of the $L$-values that is assumed to vanish in the statement of the theorem.\\ Now suppose that there is a prime $p\in S\cap T$. Since $v_p(N)=1$ and $p$ divides $u$, we have $\gcd(p,N')=\gcd(p,M)=1$. Write $S = S'\cup\{p\}$ and set $\alpha'=(\alpha_0)_{S'}$, the $S'$-part of $\alpha_0$. By Propositions \ref{prop:al} and \ref{prop:twists-commute-with-al-and-hecke} we have \[ f_{\alpha_0} | W_S^{NM} = \alpha'^2(p) (f_{\alpha_0}|W_{p}^{NM})|W_{S'}^{NM} = \alpha'^2(p)\overline{\alpha_0}(p)((f|W_p^{NM})_{\alpha_0})|W_{S'}^{NM}. \] Since we assume that $f$ is an eigenfunction of $W_p^N$ we get that $f_{\alpha_0} | W_S^{NM}$ is a multiple of $f_{\alpha_0} | W_{S'}^{NM}$. Applying this procedure for every prime $p\in S\cap T$ we deduce that $f_{\alpha_0}|W_S^{NM}$ is a multiple of $f_{\alpha_0}|W_{S''}^{NM}$, where $S''$ is a set of prime divisors of $N/M$ that is disjoint from $T$. Thus $L(f_{\alpha_0}|W_S^{NM},j+1)=0$, since we assume that $L(f_{\alpha_0}|W_{S''}^{NM},j+1)$ vanishes.\\ Since $\xi^{+}_{f|W_N}(j;u,v)$ is a sum of such $L$-values this shows that $\xi^{+}_{f|W_N}(j;u,v)=0$ for all $(u,v)\in E_N$, and hence $f=0$.
\end{proof} Note that in Theorem \ref{thm:eichler-shimura-triv-char} Atkin--Lehner operators are used in two different ways: first at $p \in T$ where we insist $f$ is an eigenfunction, second at $S \subset \overline{T}$ where we insist the $L$-values of Atkin--Lehner images of twists vanish. For our applications we will restrict to levels of the form $N=p^a q^b N'$ with $N'$ squarefree; a simple trick then allows us to do away with the latter use: \begin{thm}\label{thm:eichler-shimura-paqb} Let $\epsilon\in\{0,1\}$ and $N = p^a q^b N'$ where $p$ and $q$ are distinct primes, $a, b \in \Z_{\geq 0}\setminus \{1\}$, and $N'$ is squarefree and coprime to $pq$. Let $k \geq 2$, and let $f \in \mathcal{S}_k^{\text{new}}(N)$ be an eigenfunction of all Atkin--Lehner operators $W_\ell^N$ for primes $\ell|N'$. Assume that $L(f_\alpha, j+1) = 0$ for all primitive characters $\alpha$ modulo $M \mid N$ and all $j=0,...,k-2$ with $\alpha(-1)=(-1)^{j+\epsilon}$. Then $f=0$.\end{thm} \begin{proof} We may assume that $a>1$ or $b>1$, in particular that the set $\overline{T} \subseteq\{p,q\} $ is non-empty, since otherwise this is just the statement of Theorem \ref{thm:eichler-shimura-triv-char}. Let $(u,v)\in E_N$. If no prime of $\overline{T}$ divides $u$ then the proof of Theorem \ref{thm:eichler-shimura-triv-char} shows that $\xi^+_{f|W_N}(j;u,v)$ is a linear combination of the $L$-values $L(f_{\alpha},j+1)$ so our assumptions give $\xi^+_{f|W_N}(j;u,v)=0$. The same argument for $(v,-u)\in E_N$ shows that if no prime of $\overline{T}$ divides $v$ then $\xi^+_{f|W_N}(k-2-j;v,-u)=0$; we can then use the modular symbols relation \eqref{eq:manin-symb-rel1} to see that $\xi^+_{f|W_N}(j;u,v)=0$.\\ Now suppose that both $u$ and $v$ are divisible by a prime in $\overline{T}$. By the definition of $E_N$, it cannot be the case that the same prime divides both, so we are in the case when $a>1$ and $b>1$, and we may assume that $p$ divides $u$ and $q$ divides $v$. Then the residue class $-u-v$ is divisible by neither $p$ nor $q$, so $\xi_{f|W_N}^{+}(i; v, -u-v) = \xi_{f|W_N}^+(i; -u-v, u)=0$ for all $0 \leq i \leq k-2$ by the above. Hence using \eqref{eq:manin-symb-rel2} we obtain $\xi_{f|W_N}^+(j; u, v)=0$.\end{proof} \begin{rmk} More generally, an argument similar to the proof of Theorem \ref{thm:eichler-shimura-paqb} shows that one can restrict the sets $S \subset \overline{T}$ in Theorem \ref{thm:eichler-shimura-triv-char} to avoid two prescribed elements of $\overline{T}$.\end{rmk} In the next section we will show that a cusp form $f$ in $\mathcal{S}^{\mathrm{new}}_k(N)$ is orthogonal to products of Eisenstein series if many twisted $L$-values of $f$ vanish. A technical difficulty arises in our application of Theorem \ref{thm:eichler-shimura-triv-char} when $k \geq 4$, where we cannot deduce that $L(f,2)$ and $L(f,k-2)$ vanish due to the fact that the weight two Eisenstein series with trivial character is not holomorphic. To this end we prove a result which states that the problematic cases are in fact already a consequence of the other assumptions: \begin{prop}\label{prop:eichler-shimura-trivial-char} Let $N \in \N$, $k \geq 4$ be even and $f\in \mathcal{S}^{\mathrm{new}}_k(N)$. Assume that $L(f_\alpha, j+1)=0$ for all primitive characters $\alpha$ modulo $M \mid N$ where $M>1$, and all $j=0,\ldots,k-2$ with $\alpha(-1)=(-1)^{j+1}$. Assume moreover that $L(f, j+1) = 0$ for all odd $j$ with $3\leq j\leq k-5$.
Then $L(f, 2)=0$ and $L(f, k-2) = 0$ must hold as well.\end{prop} \begin{proof} As in the proof of Theorem \ref{thm:eichler-shimura-paqb} we see $\xi_{f|W_N}^+(j; u, v) = 0$ as long as $j \neq 1, k-3$ and at least one of $\gcd(u, N) = 1$ or $\gcd(v, N)=1$ holds. First assume that $k \geq 6$. Applying \eqref{eq:manin-symb-rel2} with $j=2$, $(v, -u-v) = (0, 1)$ and using the vanishing we just observed, we get \[-(k-4) \xi_{f|W_N}^+(1; 0, 1) - 2 \xi_{f|W_N}^+(k-3; 1, 0) = 0.\] Relation \eqref{eq:manin-symb-rel1} with $j=1$, $(u, v) = (0, 1)$ gives \[\xi_{f|W_N}^+(1; 0, 1) - \xi_{f|W_N}^+(k-3; 1, 0) = 0,\] hence \[\xi_{f|W_N}^+(1; 0, 1) = 0,\] since $k \geq 6$. Since $\xi_{f|W_N}^+(1; 0, 1) = \xi_{f|W_N}(1; 0, 1)$, \eqref{eqn:xif-as-l-value} gives $L(f|W_N, 2) = 0$, hence $L(f, k-2)=0$ by the functional equation of $L(f,s)$. The other case follows by a similar argument: applying \eqref{eq:manin-symb-rel2} with $j=2$ and $(v, -u-v) = (1, 0)$, and \eqref{eq:manin-symb-rel1} with $j=1$ and $(u, v) = (1, 0)$, we get $\xi_{f|W_N}^+(k-3; 0, 1)=0$, hence $L(f, 2)=0$.\\ For $k=4$, apply \eqref{eq:manin-symb-rel2} with $j=0$, $(v, -u-v)=(0, 1)$ to get \[\xi_{f|W_N}^{+}(1; 0, 1) = \frac{1}{(-2\pi i)^2}L(f,2)= 0.\] \end{proof} \section{Generating spaces of cusp forms by products of Eisenstein series}\label{sctn:generating-cusp-forms} We begin by recalling the theory of Eisenstein series as developed in \cite{Miyake2006} \S 7. Let $N \in \N$, $l \in \N$, and let $\phi$ and $\psi$ be Dirichlet characters modulo $N_1$ and $N_2$ respectively such that $N_1N_2=N$. We assume that $\phi(-1)\psi(-1) = (-1)^l$, and we extend $\phi$ to a character of $\Gamma_0(N)$ by $\phi\left(\left(\begin{smallmatrix} a & b \\ c & d \end{smallmatrix}\right)\right) = \phi(d)$, and similarly for $\psi$. Define the Eisenstein series \[E_l^{\phi,\psi}(z, s) = \frac{(l-1)!N_2^l}{(-2\pi i)^l G(\overline{\psi_0})}\sum_{\substack{(c,d)\in\Z^2\setminus\{(0,0)\}}}\frac{\phi(c)\overline{\psi}(d)}{(Ncz+d)^l|Ncz+d|^{2s}}, \] where $\psi_0$ is the primitive character that induces $\psi$. It converges uniformly and absolutely for $l + 2\Re(s) \geq 2+\epsilon$, for any $\epsilon>0$. In the region of absolute convergence this satisfies the transformation law \begin{equation}\label{eqn:analytic-eis-trans}E_l^{\phi,\psi}(\delta z, s) = \phi(\delta)\psi(\delta) j(\delta, z)^l \abs{j(\delta, z)}^{2s} E_l^{\phi,\psi}(z, s)\end{equation} for $\delta \in \Gamma_0(N)$. The function $E_l^{\phi, \psi}(z, s)$ can be analytically continued in the $s$-variable to $s=0$ (see \cite[\S 7]{Miyake2006}), so we can define $E_{l}^{\phi,\psi}(z) = E_l^{\phi,\psi}(z, 0)$.
Moreover, unless $l=2$ and $\psi$ is principal, the value at $s=0$ is a holomorphic function of $z$, so \eqref{eqn:analytic-eis-trans} along with appropriate growth estimates shows that in fact $E_l^{\phi,\psi} \in \mathcal{M}_l(N, \phi\psi)$.\\ If $\phi$ and $\psi$ are primitive, the Fourier expansion of $E_l^{\phi,\psi}$ can be deduced from Theorems 7.1.3, 7.2.12, and 7.2.13 of \cite{Miyake2006}: \begin{align}\label{eqn:eis-fourier-coeff} E_{l}^{\phi,\psi}(z)=e_l^{\phi,\psi} + 2\sum_{n\geq 1} \sigma_{l-1,\phi,\psi}(n)q^n\in\mathcal{M}_l(N,\phi\psi) \end{align} where $\sigma_{l-1,\phi,\psi}(n) = \sum_{d|n}\phi(n/d)\psi(d)d^{l-1}$ and \[ e_l^{\phi,\psi} = \begin{cases} L(\psi,1-l)&N_1=1,\\ L(\phi,0) &N_2=1\text{ and }l=1,\\ 0 &\text{else.} \end{cases} \] In the special case $\phi=\textbf{1}$ the Eisenstein series $E_l^{\textbf{1},\psi}(z,s)$, appropriately normalised, is given by a Poincar\'{e} series: \begin{align} \frac{2(-2\pi i)^lL(\overline{\psi},l+2s)G(\overline{\psi_0})}{(l-1)!N^l}E_l^{\textbf{1},\psi}(z, s)=\sum_{\gamma \in \Gamma_{\infty} \backslash \Gamma_0(N)} \frac{\overline{\psi(\gamma)}}{j(\gamma, z)^l \abs{j(\gamma, z)}^{2s}}. \end{align} \\ Let $k \in \N$, $\chi$ be a Dirichlet character modulo $N$ with $\chi(-1)=(-1)^{k}$, and let $f \in \mathcal{S}_{k}(N, \chi)$. Given any $g \in \mathcal{M}_l(N, \overline{\psi} \chi)$, we consider the inner product \[\langle gE_{k-l}^{\textbf{1},\psi}(\cdot,s), f \rangle = \int_{\Gamma_0(N) \backslash \mathcal{H}} g(z) E_{k-l}^{\textbf{1},\psi}(z,s) \overline{f(z)} y^{s + k} d\mu(z).\] Here $d\mu(z) = (dx dy)/y^2$ is the $\SL_2(\R)$-invariant measure on the upper half plane. Note that the integrand is indeed $\Gamma_0(N)$-invariant so the integral over this quotient is well-defined, at least when it converges. This is the case if $s$ has sufficiently large real part, which we assume for the following proposition: \begin{prop}\label{prop:inner-prod-as-l-values} Let $N, k, l \in \N$, $\chi$ be a Dirichlet character modulo $N$, and $f$ be a newform in $\mathcal{S}_{k}(N, \chi)$. Let $\phi,\psi$ be Dirichlet characters modulo $N$ such that $\phi\psi=\chi$, $\phi(-1)=(-1)^l$ and denote by $\phi_0$ the primitive character that induces $\phi$. Exclude the two cases ($\phi_0 = \mathbf{1}$ and $l=2$) and ($\phi=\chi$ and $l=k-2$). Then \begin{multline}\label{eq:inner-product-as-l-values} \langle E_{l}^{\textbf{1},\phi_0} E_{k-l}^{\textbf{1},\psi}(\cdot,s), f \rangle =\\ \frac{i^{k-l}\Gamma(s + k - 1)(k-l-1)!N^{k-l}}{2^{2s+3k-l-2} \pi^{s + 2k-l- 1}L(\overline{\psi},k-l+2s)^2G(\overline{\psi_0}) } L(f^c, s+k-1) L((f^c)_{\phi_0}, s+k-l), \end{multline} where $f^c(z) = \overline{f(-\overline{z})}\in \mathcal{S}_k(N, \overline{\chi})$. \end{prop} \begin{proof} Using the Rankin-Selberg method (see \cite{Shimura1976}) we obtain \begin{equation}\label{eqn:rankin--selberg-unfolded}\langle gE_{k-l}^{\textbf{1},\psi}(\cdot, s), f \rangle = \frac{\Gamma(s + k - 1)(k-l-1)!N^{k-l}}{2(4 \pi)^{s + k- 1}(-2\pi i)^{k-l}L(\overline{\psi},k-l+2s)G(\overline{\psi_0}) } \sum_{n \geq 1} \frac{\overline{a_n} b_n}{n^{s + k -1}},\end{equation} where $a_n$ and $b_n$ are the Fourier coefficients of $f$ and $g$. Note that $\overline{a_n}$ are the Fourier coefficients of $f^c(z)$. A standard computation (see e.g.
\cite{Raum2016} Proposition 4.1\begin{footnote}{Our divisor function is $\sigma_{l-1, \phi, \mathbf{1}}$ in Raum's notation.}\end{footnote}) gives \[\sum_{n \geq 1} \frac{\overline{a_n} \sigma_{l-1, \mathbf{1}, \phi_0}(n)}{n^{s+k-1}} = \frac{L(f^c, s+k-1) L((f^c)_{\phi_0}, s+k-l)}{L(\overline{\chi}\phi_0, 2s + k-l)} .\] So taking $g = E_l^{\mathbf{1}, \phi_0}$ in \eqref{eqn:rankin--selberg-unfolded} and using \eqref{eqn:eis-fourier-coeff} we obtain the result. \end{proof} Note that both sides of \eqref{eq:inner-product-as-l-values} have analytic continuation to $\C$, so by the uniqueness of analytic continuation the equality also holds at $s=0$: \begin{cor}\label{cor:inner-prod-as-l-values} Under the hypotheses of Proposition \ref{prop:inner-prod-as-l-values}, \[\begin{aligned} &\langle E_{l}^{\textbf{1},\phi_0} E_{k-l}^{\textbf{1},\psi}, f \rangle = \frac{i^{k-l}(k-l-1)!(k-2)!N^{k-l}}{2^{3k-l-2}\pi^{2k-l-1}L(\overline{\psi},k-l)^2G(\overline{\psi_0})} L(f^c, k-1) L((f^c)_{\phi_0}, k-l). \end{aligned}\] \end{cor} We can now proceed to the first result on generating cusp forms by products of Eisenstein series. \begin{dfn}\label{dfn:PkN} Let $N=p^aq^bN'$ be as in Theorem \ref{thm:eichler-shimura-paqb}. For each $M \mid N$, write $D(M)$ for the set of primitive Dirichlet characters modulo $M$. Let $B(N) \subset \bigsqcup_{M \mid N} D(M) \times \{1,...,k-1\}$ consist of the pairs $(\alpha, l)$ such that \[\begin{aligned} \alpha(-1) &= (-1)^l, \\ (\alpha, l) &\neq (\mathbf{1}, 2), (\mathbf{1}, k-2). \end{aligned}\] Define $P_k(N) \subset \mathcal{M}_k(N)$ to be the space generated by the products \[ ( E_l^{\textbf{1},\alpha} E_{k-l}^{\textbf{1},\overline{\alpha_N}}) | W_{S}^N \] for all $(\alpha,l) \in B(N)$ and all sets $S$ of prime divisors of $N'$. Here $\alpha_N$ denotes the extension of $\alpha$ to a character modulo $N$. \end{dfn} \begin{thm}\label{thm:prods} Let $N = p^aq^bN'$ be as in Theorem \ref{thm:eichler-shimura-paqb}. For $M \subset \mathcal{M}_k(N)$, write $\overline{M}$ for the projection of $M$ to $\mathcal{S}_k^{\mathrm{new}}(N)$. Then for $k \geq 4$ even \[\overline{P_k(N)}=\mathcal{S}^{\mathrm{new}}_k(N).\] In the case $k=2$ we define $\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(N) \subset \mathcal{S}^{\mathrm{new}}_2(N)$ to be the subspace generated by newforms $f$ with $L(f,1)\neq 0$. Then \[\overline{P_2(N)}=\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(N).\]\end{thm} \begin{proof} First assume $k>2$. As in Section \ref{sctn:eichler--shimura} denote the set of prime divisors of $N'$ by $T$. Assume that $\overline{P_k(N)}$ is a proper subspace of $\mathcal{S}_k^{\mathrm{new}}(N)$. Since $P_k(N)$ is closed under the action of the Atkin--Lehner operators $W_S^N$ for $S\subseteq T$, so is the orthogonal complement of $\overline{P_k(N)}$ in $\mathcal{S}_k^{\mathrm{new}}(N)$. Therefore there exists a non-zero form $g\in \mathcal{S}^{\mathrm{new}}_k(N)$ that is orthogonal to $P_k(N)$ and an eigenform of the $W_S^N$. We can write \begin{equation}\label{eqn:g-as-sum-of-newforms} g=\sum_{i=1}^r \beta_i f_i, \end{equation} where $f_1,\ldots,f_r$ are the newforms in $\mathcal{S}^{\mathrm{new}}_k(N)$ with the same $W_S^N$-eigenvalues as $g$ for all $S\subseteq T$.
Using Corollary \ref{cor:inner-prod-as-l-values} (note $f_i = f_i^c$ since the $f_i$ have real Fourier coefficients) we get \begin{align*} \langle E_l^{\mathbf{1}, \alpha} E_{k-l}^{\mathbf{1}, \overline{\alpha_N}}, f_i\rangle = \frac{(k-l-1)!(k-2)!N^{k-l}}{(-2\pi i)^{k-l}(4\pi)^{k-1}L(\alpha_N,k-l)^2 G(\alpha)} L(f_i, k-1) L((f_i)_{\alpha}, k-l).\end{align*} Since $g$ is a $W_S^N$-eigenform for $S\subseteq T$ and the operators $W_S^N$ are self-adjoint, for all $S \subset T$ we have $\langle E_l^{\mathbf{1}, \alpha} E_{k-l}^{\mathbf{1}, \overline{\alpha_N}}|W_S^N, g\rangle=0$ if and only if $\langle E_l^{\mathbf{1}, \alpha} E_{k-l}^{\mathbf{1}, \overline{\alpha_N}}, g\rangle = 0$. We see that orthogonality of $g$ to $P_k(N)$ is equivalent to \begin{align}\label{eqn:orthogonality-of-g1} \sum_{i=1}^r \beta_i L(f_i, k-1)L((f_i)_{\alpha}, k-l)=0 \end{align} for all $(\alpha,l) \in B(N)$. Following an idea from the proof of \cite[Theorem 1]{KohnenMartin2008}, we define another form $G \in \mathcal{S}_k^{\mathrm{new}}(N)$ by \[G = \sum_{i=1}^r \beta_i L(f_i,k-1)f_i.\] Since the $f_i$ all have the same $W_S^N$-eigenvalues as $g$ for $S\subseteq T$, so does $G$. Then \eqref{eqn:orthogonality-of-g1} translates to \begin{equation}\label{eqn:orthogonality-of-G} L(G_\alpha, k-l)=0\end{equation} for $(\alpha, l) \in B(N)$. Now applying Proposition \ref{prop:eichler-shimura-trivial-char} we see that $L(G, 2) = 0$ and $L(G, k-2) = 0$. Thus $G$ satisfies the conditions of Theorem \ref{thm:eichler-shimura-paqb}, so $G = 0$. Since $k \geq 4$, $L(f_i, k-1) \neq 0$, so all $\beta_i$ must be zero, and we arrive at the contradiction $g=0$.\\ In the case where $k=2$ the proof is similar. The inclusion $\overline{P_2(N)} \subset \mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(N)$ follows from Corollary \ref{cor:inner-prod-as-l-values}, which shows that $P_2(N)$ is orthogonal to every newform $f$ with $L(f,1)=0$. The rest of the argument works as above. \end{proof} When $a = b = 0$, so $N = N'$ is squarefree, one easily sees that the set $B(N)$ in Definition \ref{dfn:PkN} satisfies \[\#B(N) \sim_k \frac{k-1}{2} \prod_{p \mid N} (p-1)\] for $k\to\infty$. Therefore, writing $p_k(N)$ for the number of generators for $P_k(N)$ as given in Definition \ref{dfn:PkN}, we have \[p_k(N) \sim_k \frac{k-1}{2} \prod_{p \mid N} 2(p-1).\] This should be compared to \[\dim \mathcal{S}_k^{\mathrm{new}}(N) \sim_k \frac{k-1}{12} \prod_{p \mid N} (p-1)\] (cf. \cite{Martin2005}). It is therefore an interesting question whether we can remove the Atkin--Lehner operators appearing in the definition of $P_k(N)$ so as to obtain a space $P_k(N)$ for which the number of generators is of similar size to the dimension of the target space. When $N=p^a q^b$ for $a,b>1$ we have \[p_k(N) \sim_k \frac{k-1}{2} p^{a-1}(p-1) q^{b-1}(q-1),\] while (with $a, b \geq 3$ for simplicity), \[\dim \mathcal{S}_k^{\mathrm{new}}(N) \sim_k \frac{k-1}{12} p^{a-1}(p-1)\left(1 - \frac{1}{p^2}\right) q^{b-1}(q-1) \left(1-\frac{1}{q^2}\right).\] In this case (as well as the case of $N=N'$) it is also an interesting question whether one can quantify the linear dependency among the products of Eisenstein series generating $P_k(N)$ (or $Q_k(N)$, defined below).
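To make these counts concrete in the smallest nontrivial case (the arithmetic here is ours): for $N = 11$ and $k = 2$ the set $B(11)$ consists of the five pairs $(\alpha, 1)$ with $\alpha$ one of the five odd (automatically primitive) characters modulo $11$, and the two choices $S \in \{\emptyset, \{11\}\}$ then give $p_2(11) = 10$ generators of $P_2(11)$, compared with $\dim \mathcal{S}_2^{\mathrm{new}}(11) = 1$; the expression for $f_{11}$ in the introduction uses just two of these products.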
These questions may be asked for any $k$, but they are particularly pertinent when $k=2$, as any additional symmetry in this case which would allow lowering the constant further could have application to Brumer's conjecture (see \cite{Iwaniec2000}) regarding the number of newforms $f$ with $L(f,1) = 0$.\\ Most of the methods we have developed also work for the spaces $\mathcal{M}_k(N,\chi)$ where $\chi$ is a non-principal character modulo $N$. However, there are some complications, in particular because the Atkin--Lehner operators $W_S^N$ are no longer endomorphisms of $\mathcal{M}_k(N,\chi)$ when $\chi$ is not quadratic. This means that it is necessary to take Eisenstein series coming from different eigenspaces of the diamond operators to generate $\mathcal{S}_k(N, \chi)$. To minimise the technicalities we give an example of how the same methods can be used to treat the case of prime level, where the Atkin--Lehner operators are not needed: \begin{thm}\label{thm:prods-gen-weight-2-prime} Let $p$ be prime and let $\chi$ be a character modulo $p$. Let $P_2(p,\chi)$ be the space generated by \[E_{1}^{\textbf{1},\overline{\alpha}} E_{1}^{\textbf{1},\overline{\chi}\alpha},\] for $\alpha$ varying over all (primitive) odd characters modulo $p$. Write $\overline{P_2(p,\chi)}$ for the projection of $P_2(p, \chi)$ to $\mathcal{S}_2(p,\chi)$. Let $\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(p, \chi) \subset \mathcal{S}_2(p, \chi)$ be the subspace generated by newforms $f$ with $L(f, 1) \neq 0$. Then \[ \overline{P_2(p,\chi)}=\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(p, \chi). \] \end{thm} \begin{proof} The proof proceeds along the same lines as the proof of Theorem \ref{thm:prods}. Instead of Theorem \ref{thm:eichler-shimura-paqb} we use the following vanishing criterion, which can be proved with the same methods as Theorem \ref{thm:eichler-shimura-triv-char}: if $G\in\mathcal{S}_2(p,\overline{\chi})$ is such that the $L$-values $L(G_\alpha,1)$ vanish for all odd characters $\alpha$ modulo $p$, then $G$ vanishes. \end{proof} \section{The new part of $P_k(N)$}\label{sctn:new-part} In this section we will analyse the new parts of the generators of $P_k(N)$ for any $N$. We use this to construct another space $Q_k(N)$ with the same projection to the new space as $P_k(N)$, whose generators do not involve Atkin--Lehner operators. While $P_k(N)$ was more useful for the proof of Theorem \ref{thm:prods}, $Q_k(N)$ is more explicit and easy to implement on a computer. The first step is to write $E_{k-l}^{\textbf{1},\overline{\alpha_N}}$ in terms of the Eisenstein series attached to the underlying primitive character: \begin{lem}\label{lem:eisenstein-decomposition} Let $\alpha$ be a primitive character modulo $M$ with $\alpha(-1)=(-1)^{k-l}$. Writing $N = \prod p^{v_p(N)}$, let $N_M = \prod_{p \mid M} p^{v_p(N)}$ be the $M$-part of $N$, so that $M\mid N_M$ and $\gcd(M,N/N_M)=1$. Then \[ E_{k-l}^{\textbf{1},\overline{\alpha_N}} = \left(\frac{N}{M}\right)^{\frac{k-l}{2}}\sum_{e\mid N/N_M}\mu(e)\alpha(e)e^{-\frac{k-l}{2}}E_{k-l}^{\textbf{1},\overline{\alpha}}|B_{N/Me}. \] \end{lem} \begin{proof} The proof is analogous to the proof of \cite[Lemma 8.4.2]{CohenStromberg2017}. For $\Re(s)\gg 0$ we have \begin{align*} E_{k-l,N}^{\mathbf{1}, \overline{\alpha_N}}(z,s) &= \frac{(k-l-1)!N^{k-l}}{(-2\pi i)^{k-l} G(\overline{\alpha})}\sum_{(c,d)\neq(0,0)} \frac{\alpha_N(d)}{(cNz+d)^{k-l}|cNz+d|^{2s}}.
\end{align*} Using the fact that $\sum_{d \mid n} \mu(d)$ is the indicator function for $n=1$, we get \begin{align*} \sum_{(c,d)\neq(0,0)} \frac{\alpha_N(d)}{(cNz+d)^{k-l}|cNz+d|^{2s}}&=\sum_{(c,d)\neq(0,0)}\sum_{e\mid\gcd(d,N/N_M)}\mu(e)\frac{\alpha(d)}{(cNz+d)^{k-l}|cNz+d|^{2s}}\\ &=\sum_{e\mid N/N_M}\frac{\mu(e)\alpha(e)}{e^{k-l+2s}}\sum_{(c,d)\neq(0,0)}\frac{\alpha(d)}{(cM(\frac{N}{Me})z+d)^{k-l}|cM(\frac{N}{Me})z+d|^{2s}}\\ &= \frac{(-2\pi i)^{k-l} G(\overline{\alpha})}{(k-l-1)!M^{k-l}}\sum_{e\mid N/N_M}\mu(e)\alpha(e)e^{-k+l-2s}E_{k-l}^{\textbf{1},\overline{\alpha}}((N/Me)z,s). \end{align*} We obtain an equality of functions of the variable $s$, which remains true for $s=0$ by uniqueness of analytic continuation.\end{proof} Thus the product $E_l^{\textbf{1},\alpha}E_{k-l}^{\textbf{1},\overline{\alpha_N}}$ is a linear combination of products of the form \[E_l^{\textbf{1},\alpha}\cdot \left(E_{k-l}^{\textbf{1},\overline{\alpha}}|B_{N/Me}\right)\] for $e\mid N/N_M$. If $e\neq 1$ these products clearly have level smaller than $N$, so are old forms. Hence the projection of $P_k(N)$ to the new space $\overline{P_k(N)}$ is generated by the projections of the products \begin{align}\label{eqn:prod-generators} \left(E_l^{\textbf{1},\alpha}|W_S^N\right) \cdot\left(E_{k-l}^{\textbf{1},\overline{\alpha}}|B_{N/M}|W_S^N\right), \end{align} where $S\subseteq T$ is a set of prime divisors of the squarefree part of $N$. Let us focus on the first factor for now. It is easy to see that, as operators on $\mathcal{M}_k(M, \alpha)$, we have the equality $W_S^N = W_{S_M}^M | B_{(N/M)_S}$, where $S_M$ is the set of primes in $S$ that divide $M$. Using Proposition 14 of \cite{Weisinger1977} we see that the first factor in \eqref{eqn:prod-generators} is a multiple of \[E_l^{\overline{\alpha}_{S_M},\alpha_{\overline{S}_M}} | B_{(N/M)_S},\] where $\overline{S}_M = \{p \mid M\} \setminus S_M$. To study the second factor in \eqref{eqn:prod-generators} we use an extension of Proposition 1.5 of \cite{AtkinLi78} that allows us to swap the order of the lifting operator and the Atkin--Lehner operator in equation \eqref{eqn:prod-generators}. \begin{lem}\label{lem:al-lift-swap} Let $F\in\mathcal{M}_k(M,\chi)$, $d\in\mathbb{Z}_{\geq 1}$, and $S$ be a set of primes dividing $dM$. Let $\overline{S} = \{p \mid dM\} \setminus S$, $S_M = S \cap \{p \mid M\}$, and define $d_S = \prod_{p \in S} p^{v_p(d)}$ and $d_{\overline{S}}$ as usual. Then \begin{align*} F | B_d | W_S^{Md}= \overline{\chi}_{S}(d_{\overline{S}}) \overline{\chi}_{\overline{S}}(d_S) F | W_{S_M}^M | B_{d_{\overline{S}}} \end{align*} \end{lem} \begin{proof} Choose $x,y,z,w\in \Z$ as in the definition of $W_S^{Md}$, i.e. satisfying $y\equiv 1\pmod{d_SM_S}$, $x\equiv 1\pmod{d_{\overline{S}}M_{\overline{S}}}$ and $(M_Sd_S)^2xw-Mdzy=M_Sd_S$. As operators on $\mathcal{M}_k(M,\chi)$, we have \[ B_dW_S^{Md}=\begin{pmatrix} d & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} d_SM_Sx & y \\ Md z & d_SM_Sw\end{pmatrix} = \begin{pmatrix} M_Sd_Sx & d_{\overline{S}}y \\ Mz & M_Sw \end{pmatrix}\begin{pmatrix} d & 0 \\ 0 & d_S \end{pmatrix}.\] The determinant of $\left(\begin{smallmatrix} M_Sd_Sx & d_{\overline{S}}y \\ Mz & M_Sw \end{smallmatrix}\right)$ is $M_S$. The result now follows by Proposition \ref{prop:al} and the fact that $y\equiv 1 \pmod{M_S}$ and $x\equiv 1\pmod{M_{\overline{S}}}$.
\end{proof} Applying Lemma \ref{lem:al-lift-swap} with $d = N/M$ to $E_{k-l}^{\textbf{1},\overline{\alpha}}|B_{N/M} | W_S^N$ and using Proposition 14 of \cite{Weisinger1977}, we see that the second factor in \eqref{eqn:prod-generators} is a multiple of \[ E_{k-l}^{\alpha_{S_M},\overline{\alpha}_{\overline{S}_M}}|B_{\left(\frac{N}{M}\right)_{\overline{S}}}, \] so the product in \eqref{eqn:prod-generators} is a multiple of \[ \left(E_l^{\overline{\alpha}_{S_M},\alpha_{\overline{S}_M}} | B_{\left(\frac{N}{M}\right)_{S}}\right) \cdot \left(E_{k-l}^{\alpha_{S_M},\overline{\alpha}_{\overline{S}_M}}|B_{\left(\frac{N}{M}\right)_{\overline{S}}}\right). \] In order to unwind this, set \[\begin{aligned} &M_1 = M_S= \prod_{p \in S_M} p^{v_p(M)},\: &&M_2 = M_{\overline{S}}= \prod_{p \in \overline{S}_M} p^{v_p(M)}, \\ &d_1 = (N/M)_S = \prod_{p \in S} p^{v_p(N) - v_p(M)},\: &&d_2 = (N/M)_{\overline{S}} = \prod_{p \in \overline{S}} p^{v_p(N) - v_p(M)}.\end{aligned}\] Note that $\overline{S}_M = \{p \mid M\} \setminus S_M \subset \overline{S}$. With these definitions $\overline{\alpha}_{S_M}$ and $\alpha_{\overline{S}_{M}}$ are primitive characters modulo $M_1$ and $M_2$ respectively, which we now relabel as $\phi$ and $\psi$. Note that as $\alpha$ varies over all primitive characters of parity $\epsilon$ modulo $M$, $\phi$ and $\psi$ vary over all primitive characters modulo $M_1$ and $M_2$ such that $\phi\psi$ has parity $\epsilon$. Now fix $M|N$ and let $S\subseteq T$ vary: we obtain all $M_1, M_2$ such that $M_1\mid N_T$ and $M_1M_2 = M$, and for given $M_1, M_2$ we obtain all $d_1, d_2$ such that $d_1M_1\mid N_T$ and $d_1 M_1 d_2 M_2 = N$. \begin{dfn}\label{dfn:QkN} Let $N \in \N$. Let $T$ be the set of primes $p$ such that $v_p(N)=1$. Let $B'(N)$ consist of all quintuples $(\phi, \psi, l, d_1, d_2)$, where $\phi$ varies over all primitive characters of conductor $M_1$, with $M_1$ varying over all divisors of $N_T$, $\psi$ varies over all primitive characters of conductor $M_2$, with $M_2$ varying over all divisors of $N$, all $l \in \{1,\ldots,k-1\}$, and all $d_1,d_2\in\N$, such that \begin{footnote}{Of course the second condition is only relevant when $M = M_1 = M_2 = 1$.}\end{footnote} \[\begin{aligned} \phi\psi(-1) &= (-1)^l, \\ (\phi, \psi, l) &\neq (\mathbf{1}, \mathbf{1}, 2), (\mathbf{1}, \mathbf{1}, k-2), \\ d_1M_1&\mid N_T,\\ d_1 M_1 d_2 M_2 &= N.\end{aligned}\] We define $Q_k(N)$ to be the vector space generated by \begin{equation}\label{eq:final-form-of-products} E_{l}^{\phi,\psi}|B_{d_1}\cdot E_{k-l}^{\overline{\phi},\overline{\psi}}|B_{d_2} \end{equation} for all $(\phi, \psi, l, d_1, d_2) \in B'(N)$. \end{dfn} The above calculation shows that $Q_k(N)$ and $P_k(N)$ have the same projection onto the new subspace $\mathcal{S}_k^{\mathrm{new}}(N)$. Using the spaces $Q_k(N)$ and their lifts we can extend Theorem \ref{thm:prods} to the full space $\mathcal{S}_k(N)$: \begin{thm}\label{thm:prods-full-space} Let $N$ be as in Theorem \ref{thm:eichler-shimura-paqb} and $\mathcal{Q}_k(N)=\bigcup\limits_{N_0 d | N}Q_k(N_0)|B_d$ be the subspace of $\mathcal{M}_k(N)$ generated by the products \[ E_{l}^{\phi,\psi}|B_{d_1d}\cdot E_{k-l}^{\overline{\phi},\overline{\psi}}|B_{d_2d} \] for $(\phi, \psi, l, d_1,d_2) \in B'(N_0)$ (as in Definition \ref{dfn:QkN}). Then for $k\geq 4$ \[ \mathcal{M}_k(N) = \mathcal{Q}_k(N)+\mathcal{E}_k(N).
\] \end{thm} \begin{proof} This follows from Theorem \ref{thm:prods}, the previous calculations, and an inductive argument using the fact that \[\mathcal{S}_k(N) = \bigoplus_{N_0 | N}\bigoplus_{d\mid N/N_0}\mathcal{S}^{\mathrm{new}}_k(N_0)|B_d.\]\end{proof} To treat the case $k=2$ we need one more result. \begin{prop}\label{prop:oldforms-orthogonal-to-prods} Let $f\in\mathcal{S}_2^{\mathrm{new}}(N_0)$ be a newform of level $N_0\mid N$ with $L(f,1)=0$, and let $d$ be such that $dN_0\mid N$. Then $f| B_d$ is orthogonal to $P_2(N)$. \end{prop} \begin{proof} It suffices to show that $f| B_d$ is orthogonal to each of the generators of $P_2(N)$, so we fix a product $ (E_1^{\textbf{1},\alpha} E_1^{\mathbf{1}, \overline{\alpha_N}}) | W_S^N $ where $\alpha$ is a primitive odd character modulo $M \mid N$ and $S \subset T$ is a subset of the primes $p$ with $v_p(N)=1$. Since $W_S^N$ is self-adjoint, \[ \langle (E_1^{\textbf{1},\alpha} E_1^{\mathbf{1}, \overline{\alpha_N}}) | W_S^N,f|B_d\rangle= \langle E_1^{\textbf{1},\alpha} E_1^{\mathbf{1}, \overline{\alpha_N}},f|B_d | W_S^N\rangle. \] Using Lemma \ref{lem:al-lift-swap} and the fact that $f$ is an eigenfunction of all $W_{S'}^{N_0}$ for sets $S'\subseteq T$ of prime divisors of $N_0$, we see that $f|B_d | W_S^N$ is a multiple of $f|B_{d'}$ for some $d'|d$. Arguing as in Proposition \ref{prop:inner-prod-as-l-values}, \[ \langle E_1^{\textbf{1},\alpha} E_1^{\mathbf{1}, \overline{\alpha_N}}(\cdot, s), f|B_{d'} \rangle = \frac{\Gamma(s + 1)}{d'^{s+1}(4 \pi)^{s + 1} } \sum_{n \geq 1} \frac{a_n \sigma_{0,\textbf{1},\alpha}(d'n)}{n^{s + 1}}, \] where $a_n$ are the Fourier coefficients of $f$ (note that $f^c = f$ in this case), and $\Re ~ s \gg 0$. Let $d'=\prod p^{e_p}$. Then \begin{align} \sum_{n \geq 1} \frac{a_n \sigma_{0,\textbf{1},\alpha}(d'n)}{n^{s + 1}}=\sum_{\gcd(n,d')=1}\frac{a_n \sigma_{0,\textbf{1},\alpha}(n)}{n^{s + 1}}\prod_{p\mid d'}\left(\sum_{b=0}^{\infty}\frac{a_{p^b} \sigma_{0,\textbf{1},\alpha}(p^{b+e_p})}{(p^{b})^{s + 1}}\right). \end{align} The first sum over $n$ coprime to $d'$ is, up to the Euler factors corresponding to the prime divisors of $d'$, given in the proof of Proposition \ref{prop:inner-prod-as-l-values}: \[\sum_{(n,d')=1} \frac{a_n \sigma_{0, \mathbf{1}, \alpha}(n)}{n^{s+1}} = \frac{L(f, s+1) L(f_{\alpha}, s+1)}{L(\alpha, 2s + 1)}\prod_{p|d'} \frac{ L_p(\alpha, 2s + 1)}{L_p(f, s+1) L_p(f_{\alpha}, s+1)} ,\] where the Euler factors are $L_p(\alpha,2s+1) =(1-\alpha(p)p^{-(2s+1)})^{-1} $, $L_p(f,s+1)=(1-a_p p^{-(s+1)}+p^{1-2(s+1)})^{-1}$, and $L_p(f_{\alpha},s+1)=(1- \alpha(p)a_p p^{-(s+1)}+\alpha(p)^2 p^{1-2(s+1)})^{-1}$. Noting that the coefficients $a_p$ are algebraic integers because $f$ is a newform, we see that all the Euler factors are holomorphic at $s=0$ and do not vanish there. The same is true for $L(\alpha,2s+1)$, as $\alpha$ is primitive and non-trivial. Since we assume $L(f,1)=0$ the sum $\sum_{(n,d')=1} a_n \sigma_{0, \mathbf{1}, \alpha}(n)n^{-(s+1)}$ vanishes at $s=0$. It remains to show that the sums \[ f_p(s)=\sum_{b=0}^{\infty}\frac{a_{p^b} \sigma_{0,\textbf{1},\alpha}(p^{b+e_p}) }{(p^{b})^{s + 1}}= \sum_{b=0}^{\infty}\frac{a_{p^b} \sigma_{0,\textbf{1},\alpha}(p^{b})}{(p^{b})^{s + 1}}+\sum_{b=0}^{\infty}\frac{a_{p^b} \alpha(p^b)}{(p^{b})^{s+1}}(\alpha(p)+\ldots+\alpha(p^{e_p})) \] can be analytically continued to $s=0$ for all $p$ dividing $d'$.
The first sum corresponds to the Euler factors at $p$ of the quotient of $L$-functions given in Proposition \ref{prop:inner-prod-as-l-values} and hence has analytic continuation to $s=0$. The second sum equals \[ (\alpha(p)+\ldots+\alpha(p^{e_p}))L_p(f_\alpha,s+1), \] which can again be analytically continued to $s=0$. \end{proof} We now define a space $\mathcal{S}_{2,\mathrm{rk}=0}(N)$ that projects onto $\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(N)$ from Theorem \ref{thm:prods}, by \[\mathcal{S}_{2,\mathrm{rk}=0}(N) = \bigoplus_{N_0 | N}\bigoplus_{d\mid N/N_0}\mathcal{S}^{\mathrm{new}}_{2,\mathrm{rk}=0}(N_0)|B_d.\] By Proposition \ref{prop:oldforms-orthogonal-to-prods} $P_2(N)$ is contained in $\mathcal{S}_{2,\mathrm{rk}=0}(N)$ and by Theorem \ref{thm:prods-full-space} the two spaces have the same projection to the new space of $\mathcal{S}_2(N)$. This projection is equal to the projection of $Q_2(N)$ and so we can again use induction to prove: \begin{thm}\label{thm:prods-full-space-weight-2} Let $N$ be as in Theorem \ref{thm:eichler-shimura-paqb} and let $\mathcal{Q}_2(N) = \cup_{N_0 d \mid N} Q_2(N_0) | B_d$ be the subspace of $\mathcal{M}_2(N)$ generated by the products \[ E_{1}^{\phi,\psi}|B_{d_1d}\cdot E_{1}^{\overline{\phi},\overline{\psi}}|B_{d_2d} \] for all $(\phi, \psi,1,d_1, d_2) \in B'(N_0)$. Then \[ \mathcal{S}_{2,\mathrm{rk}=0}(N)+\mathcal{E}_2(N) = \mathcal{Q}_2(N)+\mathcal{E}_2(N). \] \end{thm} Let $B$ be a basis of $\mathcal{S}_2(N)$ consisting of forms $f_i(d z)$, where $f_i$ is a newform for level $M_i|N$ and $d$ a divisor of $N/M_i$. Let $f$ be a cusp form that is given as a linear combination $f(z) = \sum_i \sum_{d|N/M_i} a_{i,d} f_i(dz)$. If $N$ is as in Theorem \ref{thm:eichler-shimura-paqb}, then Theorem \ref{thm:prods-full-space-weight-2} states that $f$ can be written as a linear combination of products of Eisenstein series in $\mathcal{Q}_2(N)$ and Eisenstein series in $\mathcal{E}_2(N)$ if and only if $a_{i,d}=0$ for all $i$ with $L(f_i,1)=0$. \section{Fourier expansions at arbitrary cusps}\label{scn:Fourier expansions} Let $f\in \mathcal{M}_k(\Gamma_0(N))$ and $\alpha = \frac{a}{c}\in\Q\cup\{\infty\}$ with $(a,c)=1$. In this section we discuss how a representation of $f$ as a linear combination of products of Eisenstein series can be used to obtain a Fourier expansion of $f$ at $\alpha$. The method described here applies to modular forms for any congruence subgroup but we focus on $\Gamma_0(N)$ because our main theorem, Theorem \ref{thm:prods-full-space}, applies to $\Gamma_0(N)$. Choose a matrix $\gamma\in\SL_2(\Z)$ that maps $\infty$ to $\alpha$. The form $f|\gamma$ is invariant under $T^w$, where $w = \frac{N}{\gcd(c^2,N)}$ is the width of the cusp $\alpha$. Hence $f|\gamma$ has a Fourier expansion in $q_w = e^{\frac{2\pi i\tau}{w}}$: \[ f|\gamma(\tau) = \sum_{n=0}^\infty a_n^\gamma(f)q_w^n. \] We will call this a \textit{Fourier expansion of $f$ at the cusp $\alpha$}. Note that the coefficients $a_n^\gamma(f)$ depend on the choice of $\gamma$. Indeed, if $\gamma\infty = \alpha$, then also $\gamma T^m \infty = \alpha$, and $a_n^{\gamma T^m}(f) = \zeta_w^{nm}a_n^\gamma(f)$, where $\zeta_w$ is a primitive $w$-th root of unity. Now assume we have a representation of $f$ as a linear combination of Eisenstein series and products of two Eisenstein series.
By Theorem \ref{thm:prods-full-space} and Theorem \ref{thm:prods-full-space-weight-2}, for $N = p^aq^b N'$ with $N'$ squarefree we can find such an expansion with products of the form $ E_{l}^{\phi,\psi}|B_{d_1d}\cdot E_{k-l}^{\overline{\phi},\overline{\psi}}|B_{d_2d}, $ as long as $f$ is in $\mathcal{S}_{2,\mathrm{rk}=0}(N)+\mathcal{E}_2(N)$ if $k=2$. Since the slash operator $|\gamma$ is linear and satisfies \[ (E_{l}^{\phi,\psi}|B_{d_1d}\cdot E_{k-l}^{\overline{\phi},\overline{\psi}}|B_{d_2d})|\gamma = E_{l}^{\phi,\psi}|B_{d_1d}\gamma\cdot E_{k-l}^{\overline{\phi},\overline{\psi}}|B_{d_2d}\gamma, \] the problem of finding the Fourier expansion of $f|\gamma$ reduces to the problem of finding the expansion of $E_k^{\phi,\psi}|B_d\gamma$ for primitive characters $\phi,\psi$ and $d\in\mathbb{N}$. This is discussed in the second chapter of \cite{Weisinger1977}, although there are several mistakes in the formulas. A corrected version was communicated to the authors by Henri Cohen \cite{Cohen2017unpublished}. We have implemented the above method of finding Fourier expansions of modular forms at arbitrary cusps in Sage \cite{sage} and present several examples of Fourier expansions below. Our program is available at \cite{Github_mneururer}. In our examples we will calculate Atkin--Lehner eigenvalues and the expansions at cusps of the form $\frac{1}{d}$ for $d|N$, although the expansions at other cusps can be obtained by the same method. \begin{comment} By \cite[Theorem 1.3.1]{StevensArithmetic} all other cusps of $\Gamma_0(N)$ are Galois conjugates of these and so the Fourier coefficients of a modular form at any cusp can be obtained by applying an automorphism of $\C$ to the Fourier coefficients at a cusp $\frac{1}{d}$. \end{comment} A matrix that maps $\infty$ to $\frac{1}{d}$ is given by $\gamma_d=\sabcd{1}{0}{d}{1}$. \begin{enumerate} \item Let $f_{49}$ be the unique newform of level $49$ and weight $2$. It is a linear combination of the products $E_1^{\textbf{1},\phi}E_1^{\textbf{1},\overline{\phi}}$ and $E_1^{\textbf{1},\phi^3}E_1^{\textbf{1},\overline{\phi}^3}$, where $\phi$ is the character modulo $49$ that maps $3$ to $\zeta_{42}$: \begin{multline*} f_{49} = \frac{1}{28}(-20 \zeta_{21}^{11} + 5 \zeta_{21}^{10} + \zeta_{21}^{9} - 19 \zeta_{21}^{8} + 10 \zeta_{21}^{7} + 9 \zeta_{21}^{6} - 11 \zeta_{21}^{5} - 5 \zeta_{21}^{4} + 15 \zeta_{21}^{3} - 9 \zeta_{21}^{2} + \zeta_{21} + 21 )E_1^{\textbf{1},\phi}E_1^{\textbf{1},\overline{\phi}}\\ +\frac{1}{28}(4 \zeta_{21}^{11} - 2 \zeta_{21}^{10} + \zeta_{21}^{9} + 3 \zeta_{21}^{8} - 4 \zeta_{21}^{7} + \zeta_{21}^{6} + 3 \zeta_{21}^{5} - 2 \zeta_{21}^{4} - 2 \zeta_{21}^{3} + 5 \zeta_{21}^{2} - 5 \zeta_{21} - 6 )E_1^{\textbf{1},\phi^3}E_1^{\textbf{1},\overline{\phi}^3}. \end{multline*} Using this we obtain the Fourier expansion of $f_{49}$ at $\alpha=0$: \begin{equation}\label{eqn:f49 at 0} f_{49}|S = \frac{1}{49}(-q_{49} - q_{49}^2 + q_{49}^4 + O(q_{49}^8)), \end{equation} where $S = \sabcd{0}{-1}{1}{0}$. We deduce that $f_{49}$ has $W_{49}$-eigenvalue $-1$. Indeed, \eqref{eqn:f49 at 0} implies \[ f_{49}|W_{49}(z) = f_{49}|SB_{49}(z) = 49 f_{49}|S(49z) = -q + O(q^2) = -f_{49}(z). \] We can see this already from the fact that $f_{49}$ has an expansion in terms of Eisenstein series and products of two Eisenstein series. According to Theorem \ref{thm:prods-full-space-weight-2} that implies that $L(f_{49},1)\neq 0$.
Denoting the $W_{49}$-eigenvalue of $f_{49}$ by $\lambda$, the completed $L$-function $\Lambda(f_{49},s)$ of $f_{49}$ satisfies the functional equation \[ \Lambda(f_{49},s) = 7^s\Gamma(s)(2\pi)^{-s}L(f_{49},s) = -\lambda\Lambda(f_{49},2-s). \] So if $L(f_{49},1)\neq 0$ we must have $\lambda=-1$. The expansion at the cusp $1/7$ is given by \begin{align*} f_{49}|\gamma_7 = \frac{1}{7}\left((-2 \zeta_{7}^{5} - 4 \zeta_{7}^{4} - 6 \zeta_{7}^{3} - 8 \zeta_{7}^{2} - 3 \zeta_{7} - 5 )q + (6 \zeta_{7}^{5} - 2 \zeta_{7}^{4} + 4 \zeta_{7}^{3} + 3 \zeta_{7}^{2} + 2 \zeta_{7} + 1)q^2 + O(q^3)\right) \end{align*} \item $N = 8, ~k=16$. There are two rational newforms in $\mathcal{S}_{16}(8)$, \[ f_1 = q - 3444 q^3 + 313358q^5 +O(q^7)\text{ and } f_2 = q + 2700 q^3 - 251890q^5+O(q^7). \] Using products of Eisenstein series we find that both have $W_8$-eigenvalue $-1$. The expansions at the cusps of $\Gamma_0(8)$ other than $0$ and $\infty$ are \begin{align} f_1|\gamma_2 &=\frac{i}{256}\left(q_2 +3444q_2^3 + 313358 q_2^5 +O(q_2^7)\right),~f_1|\gamma_4 = -f_1,\\ f_2|\gamma_2 &=\frac{i}{256}\left(q_2 -2700 q_2^3-251890 q_2^5 +O(q_2^7)\right),~f_2|\gamma_4 = -f_2. \end{align} We see that $f_1|\gamma_2B_2 = if_{1,\chi_4}$ and $f_2|\gamma_2B_2 = if_{2,\chi_4}$, where $\chi_4$ is the primitive character of conductor $4$. \item $N= 36,~ k=8$. There is one newform $f_{36} = q - 270q^5 + O(q^6)\in \mathcal{S}_8(36)$. The Atkin-Lehner operators are $W_{\{2\}}= \sabcd{1}{1}{-9}{-8}B_4$ and $W_{\{3\}} = \sabcd{1}{1}{8}{9}B_9$. We find \[ f_{36}|W_{36} = f_{36},~ f_{36}|W_{\{2\}} = -f_{36},~ f_{36}|W_{\{3\}} = -f_{36}. \] \item $N=3^5 = 243,~k=4$. We find the first few coefficients for $f_{243} = q -3q^2 +q^4+ 3q^5 -10q^7 + 21q^8 + O(q^{10})$ at the cusps $\frac13$, $\frac19$, $\frac{1}{27}$, and $\frac{1}{81}$. \begin{align*} f_{243}|\gamma_3 &= \frac{1}{729}\left((-\zeta_{162}^{29} + \zeta_{162}^{2})q_{27} -3 \zeta_{162}^{31} q_{27}^2 + (-\zeta_{162}^{35} + \zeta_{162}^{8})q_{27}^4 + 3 \zeta_{162}^{37} q_{27}^5 + O(q_{27}^{6})\right),\\ f_{243}|\gamma_9 &= \frac19\left((\zeta_{54}^{14} - \zeta_{54}^{5})q_3 +3(- \zeta_{54}^{10} + \zeta_{54})q_3^2 +\zeta_{54}^{11} q_3^4 +3 \zeta_{54}^{7}q_3^5 + O(q_3^6)\right),\\ f_{243}|\gamma_{27} &=\zeta_{9}q - 3 \zeta_{9}^{2}q^{2} + \zeta_{9}^{4}q^{4} + 3 \zeta_{9}^{5}q^{5}+O(q^6),\\ f_{243}|\gamma_{81} &= -(\zeta_{3} + 1)q -3 \zeta_{3}q^2 -(\zeta_{3} + 1)q^4 +3 \zeta_{3} q^5+ O(q^6). \end{align*} \end{enumerate} \textbf{Acknowledgements.} We thank Andrew Booker for asking the off-the-cuff question that led us to study this problem, and for helpful remarks. We are grateful to Fredrik Str\"omberg for many remarks and corrections, Christian Wuthrich for discussions on Fourier coefficients of newforms at cusps, and Henri Cohen for providing us with formulas for the Fourier coefficients of Eisenstein series at cusps. We further thank Nikos Diamantis, Winfried Kohnen, and Martin Raum for their comments on the paper. The authors were funded by the EPSRC grants EP/M016838/1 ``Arithmetic of hyperelliptic curves" and EP/N007360/1 ``Explicit methods for Jacobi forms over number fields" respectively. \bibliographystyle{plain}
\section{Introduction} \IEEEPARstart{B}{lind} identification of the signal parameters of a transmitter from received signals has important applications in both military and civilian communication systems. In the context of military applications, this parametric knowledge can help an attacker to carry out electronic warfare operations, such as surveillance and jamming signal selection. Moreover, blind identification has found wide applications in civilian reconfigurable systems, including software-defined and cognitive radios\cite{Survey_Signal_Identification}. Multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) technologies are adopted in cellular and WiFi standards because they achieve high spectral efficiency. Different from the identification of single-antenna systems, the blind identification of MIMO or MIMO-OFDM signals requires the enumeration of the number of transmit antennas\cite{AIC_MDL,eigenvector_nt_est,PET,WME,HOM_TD_Nt_est} and the identification of MIMO schemes\cite{Likelihood_Based,correlator_function,higher_order_cyclic,blind_recognition_STBC,Hierarchical_STBC, Fourth_order_TC,Second_Order_cyclic,K_S_test,Classify_STBC_Over_FS,STBC_cyclic_2015_ICC,Blind_MIMO_OFDM,Blind_MIMO_OFDM_SM_AL,Identification_SM_AL_OFDM_cyclic,blind_SFBC,My_paper_Globecom,My_paper_TWC2,My_paper_TVT} for single-carrier or OFDM systems. The identification of MIMO schemes is the process of classifying the spatial multiplexing (SM) or transmit diversity (TD) codes, namely space-time block codes (STBC), which utilize space-time redundancy to reduce the error rate. Previous works on the identification of MIMO schemes include \cite{Likelihood_Based,correlator_function,higher_order_cyclic,blind_recognition_STBC,Hierarchical_STBC, Fourth_order_TC,Second_Order_cyclic,K_S_test,Classify_STBC_Over_FS,STBC_cyclic_2015_ICC} for single-carrier systems and \cite{Blind_MIMO_OFDM,Blind_MIMO_OFDM_SM_AL,Identification_SM_AL_OFDM_cyclic,blind_SFBC,My_paper_Globecom,My_paper_TWC2,My_paper_TVT} for OFDM systems. Regarding the identification of MIMO schemes for single-carrier systems, previous works follow either likelihood-based \cite{Likelihood_Based} or feature-based \cite{correlator_function,higher_order_cyclic,blind_recognition_STBC,Hierarchical_STBC, Fourth_order_TC,Second_Order_cyclic,K_S_test,Classify_STBC_Over_FS,STBC_cyclic_2015_ICC} methods. The former relies on the likelihood function of the received signals to quantify the space-time redundancy and classify different STBCs. The latter detects the presence of the space-time redundancy at some specific time-lag locations based on features of signal statistics or cyclic statistics; however, it can only identify a small number of STBC types because several STBCs exhibit identical features. Although \cite{STBC_cyclic_2015_ICC} can identify 11 types of STBCs utilizing second-order cyclostationary features, it requires the number of transmit antennas and the channel coefficients as \textit{a priori} information. As for OFDM systems, there are two main approaches to combining TD codes with OFDM signals. The first approach is STBC-OFDM, where the diversity STBC is implemented over consecutive OFDM symbol intervals. STBC-OFDM has been adopted in several indoor MIMO wireless standards, such as WiFi\cite{IEEE802_11,STBC_on_80211}, owing to its excellent performance.
Another TD coding scheme used in OFDM systems is space-frequency block coding (SFBC)-OFDM, where the diversity SFBC is employed over consecutive subcarriers of an OFDM symbol. Several cellular wireless standards supporting high mobility, such as LTE\cite{sesia2009lte} and WiMAX \cite{IEEE802_16}, favor SFBC-OFDM over STBC-OFDM because of its superior performance in a high-mobility environment \cite{ICI_affects_OSTBC}. The identification approaches for single-carrier systems fail to identify MIMO schemes of OFDM systems under frequency-selective fading channels due to multipath effects. Previous works on the identification of STBC-OFDM systems include \cite{Blind_MIMO_OFDM,Blind_MIMO_OFDM_SM_AL,Identification_SM_AL_OFDM_cyclic}, which detect the presence of space-time redundancy based on the peaks of the cross-correlation functions between two receive antennas in the time domain. Specifically, \cite{Blind_MIMO_OFDM,Blind_MIMO_OFDM_SM_AL} use different cross-correlation functions, while \cite{Identification_SM_AL_OFDM_cyclic} employs a cyclic cross-correlation function with a specific time-lag during adjacent OFDM symbols to detect the space-time redundancy. However, the approaches for identifying STBC-OFDM signals cannot be directly applied to SFBC-OFDM signals since the peaks of the cross-correlation functions between two adjacent OFDM symbols are difficult to detect. The previous works on the identification of SFBC-OFDM systems include \cite{blind_SFBC,My_paper_Globecom,My_paper_TWC2,My_paper_TVT}. In \cite{blind_SFBC}, the idea of detecting the peak of the cross-correlation function between two receive antennas is extended to identify SFBC-OFDM signals, using a specific time-lag during the same OFDM symbol. In \cite{My_paper_Globecom,My_paper_TWC2}, we utilize a cross-correlation function between two receive antennas at adjacent OFDM subcarriers to improve the identification performance by detecting both the space and frequency redundancy. In \cite{My_paper_TVT}, we use random matrix theory to identify 5 types of SFBCs by detecting the space-frequency redundancy at adjacent OFDM subcarriers. However, most previous works can only discriminate between a few MIMO schemes since they only consider detecting the presence of the redundancy. Specifically, a widely-used TD code, namely the frequency switched transmit diversity (FSTD)\cite{sesia2009lte,IEEE802_16}, and several non-orthogonal STBCs/SFBCs cannot be discriminated by the previous non-likelihood-based methods. Although the likelihood-based method \cite{Likelihood_Based} can identify more STBCs, it does not work in a frequency-selective fading environment. In the existing literature, the enumeration of the number of transmit antennas and the identification of MIMO schemes are handled as two independent problems. In general, the enumeration of the number of transmit antennas is formulated as the enumeration of independent channel paths between the transmit and receive antennas. Previous works on the identification of the number of transmit antennas mainly fall into two classes, namely second-order \cite{AIC_MDL,eigenvector_nt_est,PET,WME} and higher-order statistics-based methods \cite{HOM_TD_Nt_est}. Basically, the second-order statistics-based methods analyze the eigenvalues or eigenvectors of the covariance matrix of the received signals to determine the number of transmit antennas by distinguishing between the signal and noise subspaces.
These methods, in turn, fall under two categories: information-theoretic criteria-based algorithms \cite{AIC_MDL} and hypothesis-testing-based algorithms \cite{eigenvector_nt_est,PET,WME}. Methods in the first category determine the number of transmit antennas by minimizing the Kullback-Leibler distance metric. Reference \cite{AIC_MDL} introduces two classical calculations of the Kullback-Leibler metric, the Akaike information criterion (AIC) and the minimum description length (MDL), for the enumeration of the number of transmit antennas. Methods in the second category transform the problem into a detection problem that compares an elaborately constructed statistic with a threshold. Furthermore, for the class of higher-order statistics-based methods, the sole existing algorithm \cite{HOM_TD_Nt_est} constructs a fourth-order decision statistic of the received signals with only one receive antenna by exploiting time-varying block-fading channels. To the best of our knowledge, no method exists in the literature for the joint blind identification of the number of transmit antennas and MIMO schemes. It is our main goal in this paper to fill this research gap. Artificial neural networks (ANN) have been applied to signal identification problems, such as automatic modulation classification (AMC) \cite{AMC_ANN,MIMOAMC_ANN,AMC_CNN,AMC_LSTM}, since they are suitable for non-linear fitting and classification problems and, unlike other prediction techniques, do not impose any restrictions on the input variables. Traditional ANNs require expert features, while modern deep learning neural networks can directly learn the statistical features from training data. References \cite{AMC_CNN,AMC_LSTM} use deep learning neural networks on the raw in-phase and quadrature phase (IQ) data to solve the AMC problem. However, they target single-antenna systems. For MIMO systems, it is difficult to directly employ deep learning on the raw IQ data since the overlapping MIMO signals obscure the statistical features. Given that the feed-forward neural network (FNN) is a popular family of ANN owing to its simple structure and strong fitting ability, it has been used to develop high-performance signal identification solutions \cite{AMC_ANN, MIMOAMC_ANN}. As mentioned earlier, the quantification of the space-time/frequency redundancy can classify more STBCs/SFBCs, since several STBCs/SFBCs place their redundancy in the same locations. On the other hand, this quantification is also needed in the enumeration of the number of transmit antennas. In this paper, a subspace-rank feature-based joint blind identification algorithm of the number of transmit antennas and MIMO schemes is proposed. Three different subspace-rank features for the number of transmit antennas and the redundancy are derived from the eigenvalue analysis of the covariance matrix of the received signals at adjacent symbols or OFDM symbols/subcarriers with multiple receive antennas. A Gerschgorin radii-based method and an FNN are applied to calculate these features, and a minimal weighted norm-1 distance metric is proposed to determine the number of transmit antennas and MIMO schemes. The proposed algorithm does not require \textit{a priori} knowledge of the signal parameters, such as channel coefficients, modulation type or noise power. The main contributions of this paper are the following: \begin{itemize} \item The proposed algorithm jointly identifies the number of transmit antennas and MIMO schemes, which has not been considered in previous works.
\item The scenarios of single-carrier and OFDM, including STBC-OFDM and SFBC-OFDM, are all investigated in this paper, unlike previous works. \item Unlike the existing algorithms, more STBC/SFBC types, such as the orthogonal STBCs/SFBCs (OSBC) with the same rate, FSTD, quasi-orthogonal STBC/SFBC (QOSBC) and non-orthogonal STBCs/SFBCs (SBC), are identified by the proposed algorithm thanks to the analysis of subspace-rank features. \item A Gerschgorin radii-based method and an FNN are efficiently combined to calculate the subspace-rank features. Furthermore, we extend the investigation to OFDM systems. \item The computational complexity of the proposed algorithm is analyzed and shown to be comparable to those of the enumeration algorithm of the number of transmit antennas in \cite{AIC_MDL} and the identification algorithm of MIMO schemes in \cite{Likelihood_Based}. \item Simulation results are presented to demonstrate the viability of the proposed algorithm both in single-carrier and OFDM systems, with different system parameters. \end{itemize} This paper is organized as follows. In Section II, the system model is introduced. Then, Section III derives the three subspace-rank features. The proposed algorithm is described in Section IV. The simulation results are presented in Section V. Finally, conclusions are drawn in Section VI. {\bf Notation:} The following notation is used throughout the paper. The superscripts ${ (\cdot ) ^ * }$, ${ (\cdot) ^ {T} }$ and ${ (\cdot) ^ {H} }$ denote the complex conjugate, transposition and conjugate transposition, respectively. $\Pr \left( B \right)$ represents the probability of the event $B$. ${\rm{E}}\left[ \cdot \right]$ indicates the statistical expectation. $\Re \left\{ \cdot \right\}$ and $\Im \left\{ \cdot \right\}$ denote the real and imaginary parts, respectively. $\bf{I}$, $\bf{0}$ and $\bf{O}$ denote the identity matrix, zero vector and zero matrix, respectively. $\mathbb{N}$, $\mathbb{Z}^+$ and $\mathbb{C}$ are the sets of natural numbers, positive integers and complex numbers, respectively. The notation ${\rm{card}}(A)$ denotes the cardinality of the set $A$. ${\rm Tr} (\cdot)$ denotes the trace of a matrix. Conventionally, $e$ and $\rm{log}$ denote Euler's number and the natural logarithm, respectively. Finally, ${\cal O}(\cdot)$ denotes the complexity order. \section{System Model} \subsection{Signal Model of MIMO Single-Carrier System} \begin{figure*} \centering \includegraphics[width=0.98\textwidth,height=0.5\textheight]{Fig1_system_model.eps}\\ \caption{System structures and signal mappings of STBC/SFBC.}\label{fig1} \end{figure*} We consider a MIMO single-carrier wireless communication system employing TD or SM with ${N_t}$ transmit antennas and ${N_r}$ $( {{N_r} > {N_t}} )$ receive antennas, as shown in Fig. \ref{fig1} (a). As a special case, single-antenna systems are also considered. The transmitted data symbols are drawn from an ${M}$-PSK (Phase-Shift-Keying) or ${M}$-QAM (Quadrature Amplitude Modulation), $M \ge 4$, signal constellation. Subsequently, the modulated symbol stream is parsed into data blocks of ${N_s}$ symbols, denoted by the vector ${{\bf{x}}_b} = {[ {{x_{b,0}}, \cdots ,{x_{b,{N_s} - 1}}} ]^T}$ ($b \in \mathbb{N}$). A TD/SM encoder takes the rows of an $N_t \times T$ codeword matrix, denoted by ${\bf{C}}( {{{\bf{x}}_b}} )$, to span $T$ consecutive time slots and maps every column of the matrix into the $N_t$ different transmit antennas.
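For concreteness, consider the AL (Alamouti) code, one of the schemes in the pool considered below: for a data block ${{\bf{x}}_b} = {[ {{x_{b,0}},{x_{b,1}}} ]^T}$, the codeword matrix is \begin{equation*} {\bf{C}}( {{{\bf{x}}_b}} ) = \left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^ * }\\ {{x_{b,1}}}&{x_{b,0}^ * } \end{array}} \right] \end{equation*} so that $N_t = T = 2$: the first column is transmitted from the two antennas in the first time slot, and the second column in the following time slot.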
In this paper, the codewords include the single-antenna, Alamouti (AL), SM, 7 types of OSBC\cite{STBC_Tarokh,STBC_Ganesan,STBC_4_ant}, one type of QOSBC\cite{QOSTBC}, FSTD in LTE\cite{sesia2009lte} and 3 types of SBC in WiMAX\cite{IEEE802_16} (see Appendix A). Then, the mapped signals are transmitted after the pulse shaping and carrier modulation operations. To simplify the analysis, the receiver is assumed to successfully synchronize the received signals; the sensitivity to model mismatches is analyzed later, in Section V. We construct an ${N_t} \times 1$ transmit vector and an ${N_r} \times 1$ receive vector, denoted by ${\bf{s}}(n)$ and ${\bf{y}}(n)$, which represent the transmitted and received signals at the $n$-th ($n \in \mathbb{Z}^+$) time slot, respectively. The channel is assumed to be flat-fading and characterized by an ${N_r} \times {N_t}$ matrix of Rayleigh fading coefficients, denoted by \begin{equation} {{\bf{H}}} = \left[ {\begin{array}{*{20}{c}} {h^{( {1,1} )}}& \cdots &{h^{( {{N_t},1} )}}\\ \vdots & \ddots & \vdots \\ {h^{( {1,{N_r}} )}}& \cdots &{h^{( {{N_t},{N_r}} )}} \end{array}} \right] \label{eq1} \end{equation} where ${h^{( {f_1,f_2} )}}$ represents the channel coefficient between the $f_1$-th transmit antenna and the $f_2$-th receive antenna. The channel matrix ${{\bf{H}}}$ is assumed to be of full-column rank, and the channel gains remain constant over the observation interval. Then, the $n$-th received signal is described by the following model \begin{equation} {{\bf{y}}}( n ) = {{\bf{H}}} {{\bf{s}}}( n ) + {{\bf{w}}}( n ) \label{eq2} \end{equation} where the vector ${\bf{w}} \left( n\right) $ represents a white Gaussian noise vector with zero-mean and covariance $\sigma _w^2{{\bf{I}}_{{N_r}}}$. The first processed sample is assumed to be the start of a TD code block, which allows simplifications of the following mathematical expressions. However, extensions of the proposed methods can be easily obtained when this assumption does not hold. \subsection{Signal Model of MIMO-OFDM System} \subsubsection{STBC-OFDM System} Consider a MIMO-OFDM wireless communication system with $N$ subcarriers and a cyclic prefix (CP) of length $\nu$, as shown in Fig. \ref{fig1} (b). Different from the single-carrier system, the TD/SM encoder puts $N$ data blocks, denoted by $ {{\bf{x}}_b , \cdots ,{\bf{x}}_{b + N - 1} } $, on $N$ consecutive subcarriers with the same operation as in the single-carrier system. At the receiver side, the carrier type is assumed to be successfully estimated, which can be achieved based on the cyclic cumulants \cite{OFDM_v_single}. The received OFDM symbol is converted into a frequency-domain block via an $N$-point fast Fourier transform (FFT) after removing the CP. We construct an ${N_t} \times 1$ transmit vector and an ${N_r} \times 1$ receive vector, denoted by ${\bf{s}}_k(n)$ and ${\bf{y}}_k(n)$, which represent the transmitted and received signals at the $n$-th time slot and $k$-th ($1 \leq k \leq N$) subcarrier, respectively. The channel is assumed to be frequency-selective fading and the $k$-th subchannel is characterized by an ${N_r} \times {N_t}$ full-rank matrix of fading coefficients, denoted by ${\bf{H}}_k$.
Then, the $n$-th received signal at the $k$-th subcarrier is described by the following model \begin{equation} {{\bf{y}}_k}( n ) = {{\bf{H}}_k} {{\bf{s}}_k}( n ) + {{\bf{w}}_k}( n ) \label{eq4} \end{equation} where the ${N_r} \times 1$ vector ${\bf{w}}_k \left( n\right)$ represents a frequency-domain white Gaussian noise vector at the $k$-th subcarrier. \subsubsection{SFBC-OFDM System} The SFBC-OFDM system model is similar to the STBC-OFDM system model, with the difference that the SFBC encoder takes the rows of the codeword matrix ${\bf{C}}( {{{\bf{x}}_b}} )$ to span $T$ consecutive subcarriers directly ($T$ is the number of columns of ${\bf{C}}( {{{\bf{x}}_b}} )$). The three mappings are shown in Fig. \ref{fig1} (c). \section{Subspace-Rank Features of Different Numbers of Transmit Antennas and MIMO Schemes} In this section, three subspace-rank features are defined as discriminating features of different MIMO schemes, by considering the dimension of the subspace of the restructured received signals. Without loss of generality, we consider the single-carrier system first and then extend the analysis to the OFDM system. \subsection{Number of Transmit Antennas Feature} Let us construct a time-domain receive window to observe the received signals at adjacent time slots. The window length is set to two since it is the finest granularity that quantifies the features of different MIMO schemes, while a larger window length results in a failure to distinguish many MIMO schemes. By using \eqref{eq2}, the $l$-th received signal block inside the window is expressed as \begin{equation} {{\bf{Y}}} ( l ) = {{\bf{H}}} {{\bf{S}}} ( l ) + {{\bf{W}}} ( l ) \label{eq5} \end{equation} where the transmitted signal block of adjacent time slots is denoted by an ${N_t} \times 2$ matrix ${\bf{S}}( l ) = [ {{\bf{s}}( l ),{\bf{s}}( {l + 1} )} ]$, and the noise block is denoted by ${{\bf{W}}} ( l ) = [ {{{\bf{w}}} ( l ),{{\bf{w}}} ( l+1 )} ]$. The covariance matrix of the received block is \begin{equation} {{\bf{\Sigma }}_{\bf{Y}}} ( l ) = {\rm{E}} [ {{{\bf{Y}}} ( l ){\bf{Y}}^H ( l )} ] = {{\bf{H}}} {{\bf{\Sigma }}_{\bf{S}}} ( l ) {\bf{H}}^H + 2\sigma _w^2{{\bf{I}}_{{N_r}}} \label{eq6} \end{equation} where ${{\bf{\Sigma }}_{\bf{S}}} ( l ) = {\rm{E}} [ {{{\bf{S}}} ( l ){\bf{S}}^H ( l )} ]$ is the covariance matrix of the $l$-th transmitted block. Denote the eigenvalues of ${{\bf{\Sigma }}_{\bf{Y}}} ( l )$, sorted in descending order, by $\varphi _1 ( l ) \ge \varphi _2 ( l ) \ge \cdots \ge \varphi _{{N_r}} ( l )$. \emph{Proposition 1:} The smallest ${N_r} - {N_t}$ ordered eigenvalues of ${{\bf{\Sigma }}_{\bf{Y}}}$ are all equal to $2 \sigma _w^2$, i.e., $\varphi _{{N_t} + 1} = \cdots = \varphi _{{N_r}} = 2 \sigma _w^2$ if the rank of ${{\bf{\Sigma }}_{\bf{S}}}$ is $N_t$. \emph{Proof:} See Appendix B. The eigenvectors corresponding to the eigenvalues $ \varphi _1\left( l \right) ,\varphi _2\left( l \right), \cdots ,\varphi _{N_t}\left( l \right) $ form a basis for the signal subspace. Define the following subset of signal subspace eigenvalues of ${{\bf{\Sigma }}_{\bf{Y}}} ( l )$ \begin{equation} {{\cal A}_l} = \left\{ \varphi _1\left( l \right) ,\varphi _2\left( l \right), \cdots ,\varphi _{N_t}\left( l \right) \right\}. \label{eq8} \end{equation} By sliding the window, the cardinality of the set ${\cal A}_l$ with even subscript ($l = 2m$, $m \in \mathbb{Z}^+$) can be used as the first type of subspace-rank feature, since the cardinality of ${\cal A}_{2m-1}$ for FSTD is equal to two.
Therefore, we define this cardinality as the number of transmit antennas feature (NTAF), denoted by \begin{equation} {\alpha } = {\rm card} ({{\cal A}_l}), \quad l = 2m, m \in \mathbb{Z}^+. \label{eq9} \end{equation} Note that the NTAF, $\alpha$, is the discriminating feature for different numbers of transmit antennas. \subsection{Number of Independent Complex Symbols Feature} Let us vectorize the $l$-th signal and noise block inside the window in \eqref{eq5} as follows \begin{gather} \mathbf{\bar{y}}\left( l \right) =\left[ \begin{array}{c} \mathbf{y}\left( l \right)\\ \mathbf{y}\left( l+1 \right)\\ \end{array} \right], \quad \mathbf{\bar{s}}\left( l \right) =\left[ \begin{array}{c} \mathbf{s}\left( l \right)\\ \mathbf{s}\left( l+1 \right)\\ \end{array} \right] \notag \\ \mathbf{\bar{w}}\left( l \right) =\left[ \begin{array}{c} \mathbf{w}\left( l \right)\\ \mathbf{w}\left( l+1 \right)\\ \end{array} \right] . \label{eq10} \end{gather} Then, the $l$-th received vectorized block is rewritten as \begin{equation} {{{\bf{\bar y}}}} ( l ) = {{{\bf{\bar H}}}} {{{\bf{\bar s}}}} ( l ) + {{{\bf{\bar w}}}} ( l ) \label{eq12} \end{equation} where the $2{N_r} \times 2{N_t}$ matrix ${{{\bf{\bar H}}}}$ is \begin{equation} {{{\bf{\bar H}}}} = \left[ {\begin{array}{*{20}{c}} {{{\bf{H}}}}&{\bf{O}}\\ {\bf{O}}&{{{\bf{H}}}} \end{array}} \right]. \label{eq13} \end{equation} Like \eqref{eq6}, the covariance matrix of the $l$-th received vectorized block is \begin{equation} {{\bf{\Sigma }}_{{\bf{\bar y}}}} ( l ) = {\rm{E}} [ {{{{\bf{\bar y}}}} ( l ){\bf{\bar y}}^H ( l )} ] = {{{\bf{\bar H}}}} {{\bf{\Sigma }}_{{{{\bf{\bar s}}}}}} ( l ) {\bf{\bar H}}^H + \sigma _w^2{{\bf{I}}_{2{N_r}}} \label{eq14} \end{equation} where ${{\bf{\Sigma }}_{{{{\bf{\bar s}}}}}} ( l ) = {\rm{E}} [ {{{{\bf{\bar s}}}} ( l ){\bf{\bar s}}^H ( l )} ]$ is the covariance matrix of the $l$-th transmitted vectorized block. The eigenvalues of ${{\bf{\Sigma }}_{{\bf{\bar y}}}} ( l )$, in descending order, are $\phi _1 ( l ) \ge \phi _2 ( l ) \ge \cdots \ge \phi _{2{N_r}} ( l )$. Next, we define the notion of linearly independent random symbols. Let ${\bf{v}}_1, \cdots, {\bf{v}}_n$ be the vector observations of the random variables $v_1, \cdots, v_n$. Then, we define $v_1, \cdots, v_n$ as linearly independent random symbols if the equation $c_1{\bf{v}}_1 + c_2{\bf{v}}_2 + \cdots + c_n{\bf{v}}_n = 0$ can only be satisfied by $c_i = 0$ for $i = 1, \cdots, n$ \cite{linear_indep_v}. For example, assume ${\bf{u}} = [ {v_1},{v_2},{{v_3} /{\sqrt 2 }}, - v_2^ * ,v_1^ * ,{{v_3} /{\sqrt 2 }} ]^T$, whose elements are complex random variables; then ${\bf{u}}$ has five linearly independent complex random symbols, i.e., ${v_1}$, ${v_2}$, ${{v_3} /{\sqrt 2 }}$, ${v_1^ *}$ and ${- v_2^ * }$. Similarly, if $\mathbf{u}= [ \Re \left( v_1 \right), \Im \left( v_1 \right), \Re \left( v_2 \right), \Im \left( v_2 \right), \Re \left( -v_{2}^{*} \right), \Im \left( -v_{2}^{*} \right), \Re \left( v_{1}^{*} \right), \\ \Im \left( v_{1}^{*} \right) ] ^T$, then $\mathbf{u}$ has four linearly independent real random symbols, i.e., ${\Re ( {{v_1}} )}$, ${\Im ( {{v_1}} )}$, ${\Re ( {{v_2}} )}$ and ${\Im ( {{v_2}} )}$. \emph{Proposition 2:} Assume that $M_c$ is the number of linearly independent complex random symbols of a transmitted vectorized block. The smallest ${2N_r} - {M_c}$ ordered eigenvalues of ${{\bf{\Sigma }}_{{\bf{\bar y}}}}$ are all equal to $ \sigma _w^2$, i.e., $\phi _{{M_c} + 1} = \cdots = \phi _{2{N_r}} = \sigma _w^2$. \emph{Proof:} See Appendix B.
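As a numerical illustration of Proposition 2, the following minimal simulation sketch (our own illustration, not part of the proposed algorithm; the dimensions, constellation and noise power are arbitrary choices) verifies that, for the AL code with $N_t=2$ and $N_r=4$, a vectorized block aligned with a code block carries $M_c=4$ linearly independent complex symbols, so the smallest $2N_r-M_c=4$ eigenvalues of ${{\bf{\Sigma }}_{{\bf{\bar y}}}}$ approach $\sigma _w^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, L, sigma2 = 2, 4, 100000, 0.1

# Flat Rayleigh-fading channel (full column rank with probability one)
H = (rng.standard_normal((Nr, Nt))
     + 1j * rng.standard_normal((Nr, Nt))) / np.sqrt(2)

# 4-PSK data and Alamouti pairs: s(l) = [x1, x2]^T, s(l+1) = [-x2*, x1*]^T
x = np.exp(1j * (np.pi / 2 * rng.integers(0, 4, (2, L)) + np.pi / 4))
s1, s2 = x, np.vstack([-x[1].conj(), x[0].conj()])

noise = lambda: np.sqrt(sigma2 / 2) * (rng.standard_normal((Nr, L))
                                       + 1j * rng.standard_normal((Nr, L)))
y1, y2 = H @ s1 + noise(), H @ s2 + noise()

ybar = np.vstack([y1, y2])                 # vectorized blocks, 2*Nr x L
Sigma = ybar @ ybar.conj().T / L           # sample covariance matrix
print(np.sort(np.linalg.eigvalsh(Sigma)))  # 4 smallest values ~ sigma2
\end{verbatim}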
Similarly, let us define a subset of the signal subspace eigenvalues of ${{\bf{\Sigma }}_{{\bf{\bar y}}}}$ as \begin{equation} {\cal B}_l = \left\{ \phi _1\left( l \right) ,\phi _2\left( l \right), \cdots ,\phi _{M_c}\left( l \right) \right\} . \label{eq17} \end{equation} By sliding the window, we can select the cardinalities of three sets, namely ${\cal B}_{l_1}$, ${\cal B}_{l_2}$ and ${\cal B}_{l_3}$ with subscripts $l_1 = 4m-3$, $l_2=4m-2$, and $l_3=4m-1$ ($m \in \mathbb{Z}^+$), respectively, as the second type of subspace-rank features to identify more MIMO schemes in the MIMO scheme pool. Then, we define three linearly independent complex random-symbol features (ICSFs), which represent the cardinalities of the three sets, denoted by \begin{subequations} \label{eq18} \begin{align} &\beta _1=\text{card}\left( \mathcal{B}_{l_1} \right) ,\quad l_1=4m-3,m\in \mathbb{Z}^+ \label{eq18a}\\ &\beta _2=\text{card}\left( \mathcal{B}_{l_2} \right) ,\quad l_2=4m-2,m\in \mathbb{Z}^+ \label{eq18b}\\ &\beta _3=\text{card}\left( \mathcal{B}_{l_3} \right) ,\quad l_3=4m-1,m\in \mathbb{Z}^+.\label{eq18c} \end{align} \end{subequations} \subsection{Number of Independent Real Symbols Feature} By stacking the real and imaginary parts of the block inside the window in \eqref{eq10}, we obtain a transmitted stacked block, a received stacked block and a noise block as \begin{gather} \mathbf{\tilde{y}}\left( l \right) =\left[ \begin{array}{c} \Re \left( \mathbf{y}\left( l \right) \right)\\ \Im \left( \mathbf{y}\left( l \right) \right)\\ \Re \left( \mathbf{y}\left( l+1 \right) \right)\\ \Im \left( \mathbf{y}\left( l+1 \right) \right)\\ \end{array} \right], \,\,\mathbf{\tilde{s}}\left( l \right) =\left[ \begin{array}{c} \Re \left( \mathbf{s}\left( l \right) \right)\\ \Im \left( \mathbf{s}\left( l \right) \right)\\ \Re \left( \mathbf{s}\left( l+1 \right) \right)\\ \Im \left( \mathbf{s}\left( l+1 \right) \right)\\ \end{array} \right] \notag \\ \mathbf{\tilde{w}}\left( l \right) =\left[ \begin{array}{c} \Re \left( \mathbf{w}\left( l \right) \right)\\ \Im \left( \mathbf{w}\left( l \right) \right)\\ \Re \left( \mathbf{w}\left( l+1 \right) \right)\\ \Im \left( \mathbf{w}\left( l+1 \right) \right)\\ \end{array} \right]. \label{eq19} \end{gather} Similarly, the $l$-th received stacked block is rewritten as \begin{equation} {{{\bf{\tilde y}}}} ( l ) = ( {{{\bf{I}}_2} \otimes {{{\bf{\tilde H}}}}} ){{{\bf{\tilde s}}}} ( l ) + {{{\bf{\tilde w}}}} ( l ) \label{eq21} \end{equation} where the $2{N_r} \times 2{N_t}$ matrix ${{\bf{\tilde H}}}$ is given by \begin{equation} {{{\bf{\tilde H}}}} = \left[ {\begin{array}{*{20}{c}} {\Re ( {{{\bf{H}}}} )}&{ - \Im ( {{{\bf{H}}}} )}\\ {\Im ( {{{\bf{H}}}} )}&{\Re ( {{{\bf{H}}}} )} \end{array}} \right ] \label{eq22} \end{equation} and $ \otimes $ denotes the Kronecker product. Then, the covariance matrix of ${\bf{\tilde y}} ( l )$ is \begin{align} {{\bf{\Sigma }}_{{\bf{\tilde y}}}} ( l ) &= {\rm{E}} [ {{{{\bf{\tilde y}}}} ( l ){\bf{\tilde y}}^T ( l )} ] \notag \\ &= ( {{{\bf{I}}_2} \otimes {{{\bf{\tilde H}}}}} ){{\bf{\Sigma }}_{{\bf{\tilde s}}}} ( l ) ( {{{\bf{I}}_2} \otimes {\bf{\tilde H}}^T} ) + \frac{{\sigma _w^2}}{2}{{\bf{I}}_{4{N_r}}} \label{eq23} \end{align} where ${{\bf{\Sigma }}_{{\bf{\tilde s}}}} ( l ) = {\rm{E}} [ {{{{\bf{\tilde s}}}} ( l ){\bf{\tilde s}}^T ( l )} ]$ is the covariance matrix of the $l$-th transmitted stacked block. The eigenvalues of ${{\bf{\Sigma }}_{{\bf{\tilde y}}}} ( l )$, in descending order, are $\eta _1 ( l ) \ge \eta _2 ( l ) \ge \cdots \ge \eta _{4{N_r}} ( l )$.
\emph{Proposition 3:} Assume that $M_r$ is the number of linearly independent real random symbols of a transmitted stacked block. The smallest ${4N_r} - {M_r}$ ordered eigenvalues of ${{\bf{\Sigma }}_{{\bf{\tilde y}}}} ( l )$ are all equal to $ {{\sigma _w^2}/ 2}$, i.e., $\eta _{{M_r} + 1} = \cdots = \eta _{4{N_r}} = {{\sigma _w^2} /2}$. \emph{Proof:} See Appendix B. Let us define the following subset of the signal subspace eigenvalues of ${{\bf{\Sigma }}_{{\bf{\tilde y}}}} ( l )$ as \begin{equation} {\cal C}_l = \left\{ \eta _1\left( l \right) ,\eta _2\left( l \right), \cdots ,\eta _{M_r}\left( l \right) \right\} . \label{eq27} \end{equation} By sliding the window, the cardinalities of the sets ${\cal C}_{l_1}$ and ${\cal C}_{l_2}$ with subscripts $l_1 = 4m-3$ and $l_2 = 4m-1$ ($m \in \mathbb{Z}^+$), respectively, are chosen as the third type of subspace-rank features, since the cardinalities of the other sets cannot be used to identify several TD codes with different transmission rates. Then, we define two linearly independent real random-symbol features (IRSFs) representing these cardinalities, given by \begin{subequations} \label{eq28} \begin{align} &\gamma_1 =\text{card}\left( \mathcal{C}_{l_1} \right) ,\quad l_1=4m-3,m\in \mathbb{Z}^+ \label{eq28a}\\ &\gamma_2 =\text{card}\left( \mathcal{C}_{l_2} \right) ,\quad l_2=4m-1,m\in \mathbb{Z}^+. \label{eq28b} \end{align} \end{subequations} The ICSFs and IRSFs quantify the space-time redundancy. For different MIMO schemes with the same number of transmit antennas, the transmitted signal blocks inside the window contain different amounts of space-time redundancy; in other words, a block may transmit some identical symbols due to the structure of the codeword matrix. Therefore, smaller ICSF and IRSF values are calculated at the receiver since these symbols are linearly dependent. The signal features for different numbers of transmit antennas and common MIMO schemes are listed, and a representative example is described, in Appendix C. \subsection{Subspace-Rank Features in the OFDM System} \subsubsection{STBC-OFDM Case} Since STBC-OFDM is a direct extension of single-carrier STBC, we can use the $l$-th received signal block at the $k$-th subcarrier, denoted by $\mathbf{Y}_k\left( l \right)$, to derive the NTAF, ICSFs and IRSFs as described previously. \subsubsection{SFBC-OFDM Case} Regarding the SFBC-OFDM system, we construct and slide a frequency-domain receive window to observe the received signals at adjacent subcarriers and calculate the subspace-rank features. Thus, the $l$-th received block, vectorized block and stacked block in \eqref{eq5}, \eqref{eq10} and \eqref{eq19} inside the window are, respectively, modified to \begin{gather} \mathbf{Y}_l\left( n \right) =\left[ \mathbf{y}_l\left( n \right) ,\mathbf{y}_{l+1}\left( n \right) \right] , \quad \mathbf{\bar{y}}_l\left( n \right) =\left[ \begin{array}{c} \mathbf{y}_l\left( n \right)\\ \mathbf{y}_{l+1}\left( n \right)\\ \end{array} \right] \notag \\ \mathbf{\tilde{y}}_l\left( n \right) =\left[ \begin{array}{c} \Re \left( \mathbf{y}_l\left( n \right) \right)\\ \vdots\\ \Im \left( \mathbf{y}_{l+1}\left( n \right) \right)\\ \end{array} \right]. \label{eq32} \end{gather} \section{Proposed Blind Identification Algorithm} In this section, we use a Gerschgorin radii-based method and an FNN to calculate the subspace-rank features, and employ a minimal weighted norm-1 distance metric to discriminate between these features.
Different from the original Gerschgorin radii-based method in \cite{Gerschgorin}, the radii of the circles are compressed after a similarity transformation, and then an FNN is used to calculate the subspace-rank features. Additionally, extensions to the STBC-OFDM and SFBC-OFDM systems are proposed by combining the calculations from different subcarriers after an analysis of the channel response. \subsection{Proposed Algorithm for Single-Carrier System} The covariance matrix estimators are given by \begin{subequations} \label{eq34} \begin{align} &\mathbf{\hat{\Sigma}}_{\alpha }=\frac{1}{L/2-1}\sum_{m=1}^{L/2-1}{\mathbf{Y}\left( 2m \right) \cdot \mathbf{Y}^H\left( 2m \right)} \label{eq34a}\\ &\mathbf{\hat{\Sigma}}_{{\beta }_1}=\frac{1}{L/4}\sum_{m=1}^{L/4}{\mathbf{\bar{y}}\left( 4m-3 \right) \cdot \mathbf{\bar{y}}^H\left( 4m-3 \right)} \label{eq34b}\\ &\mathbf{\hat{\Sigma}}_{{\beta }_2}=\frac{1}{L/4}\sum_{m=1}^{L/4}{\mathbf{\bar{y}}\left( 4m-2 \right) \cdot \mathbf{\bar{y}}^H\left( 4m-2 \right)}\label{eq34c}\\ &\mathbf{\hat{\Sigma}}_{{\beta }_3}=\frac{1}{L/4}\sum_{m=1}^{L/4}{\mathbf{\bar{y}}\left( 4m-1 \right) \cdot \mathbf{\bar{y}}^H\left( 4m-1 \right)} \label{eq34d}\\ &\mathbf{\hat{\Sigma}}_{{\gamma_1 }}=\frac{1}{L/4}\sum_{m=1}^{L/4}{\mathbf{\tilde{y}}\left( 4m-3 \right) \cdot \mathbf{\tilde{y}}^T\left( 4m-3 \right)} \label{eq34e}\\ &\mathbf{\hat{\Sigma}}_{{\gamma_2 }}=\frac{1}{L/4}\sum_{m=1}^{L/4}{\mathbf{\tilde{y}}\left( 4m-1 \right) \cdot \mathbf{\tilde{y}}^T\left( 4m-1 \right)} \label{eq34f} \end{align} \end{subequations} where $L$ is the number of observed symbols. For convenience, we employ a common notation, ${\bf{\hat{\Sigma }}}$, to represent the estimated covariance matrices in \eqref{eq34}. Assume that ${\bf{\hat{\Sigma }}}$ is a $J \times J$ matrix. First, we partition the estimated covariance matrix as \begin{equation} {\bf{\hat \Sigma }} = \left [ {\begin{array}{*{20}{c}} {{a_{11}}}&{{a_{12}}}& \cdots &{{a_{1J}}}\\ {{a_{21}}}&{{a_{22}}}& \cdots &{{a_{2J}}}\\ \vdots & \vdots & \ddots & \vdots \\ {{a_{J1}}}&{{a_{J2}}}& \cdots &{{a_{JJ}}} \end{array}} \right] = \left [ {\begin{array}{*{20}{c}} {{{\bf{\Sigma }}_1}}&{\bf{a}}\\ {{{\bf{a}}^H}}&{{a_{JJ}}} \end{array}} \right] \label{eq35} \end{equation} where the reduced covariance matrix ${{{\bf{\Sigma }}_1}}$ is the $(J - 1) \times (J - 1)$ leading principal submatrix of ${{\bf{\hat \Sigma }}}$ obtained by removing the last row and column of ${{\bf{\hat \Sigma }}}$. Then, the reduced covariance matrix ${{{\bf{\Sigma }}_1}}$ can be decomposed by its eigenstructure as follows \begin{equation} {{\bf{\Sigma }}_1} = {{\bf{Q}}_1}{{\bf{\Lambda }}_1}{\bf{Q}}_1^H \label{eq36} \end{equation} where ${{\bf{\Lambda }}_1}$ is a diagonal matrix constructed from the eigenvalues of ${{{\bf{\Sigma }}_1}}$ as \begin{equation} {{\bf{\Lambda }}_1} = {\rm{diag}} ( {{\mu _1},{\mu _2}, \cdots ,{\mu _{J - 1}}} ) \label{eq37} \end{equation} and ${{\bf{Q}}_1}$ is a $(J - 1) \times (J - 1)$ unitary matrix formed by the corresponding eigenvectors as follows \begin{equation} {{\bf{Q}}_1} = \left[ {{{\bf{q}}_1},{{\bf{q}}_2}, \cdots ,{{\bf{q}}_{J - 1}}} \right]. \label{eq38} \end{equation} Then, we construct the following $J \times J$ unitary transformation matrix \begin{equation} {{\bf{Q}}_2} = \left[ {\begin{array}{*{20}{c}} {{{\bf{Q}}_1}}&{\bf{0}}\\ {{{\bf{0}}^T}}&1 \end{array}} \right] \label{eq39} \end{equation} where ${{\bf{Q}}_2}{\bf{Q}}_2^H = {\bf{I}}$.
The transformation of ${{\bf{\hat \Sigma }}}$ is \begin{align} \mathbf{R}=\mathbf{Q}_2\mathbf{\hat{\Sigma}Q}_{2}^{H} &=\left[ \begin{matrix} \mathbf{\Lambda }_1& \mathbf{Q}_{1}^{H}\mathbf{a}\\ \mathbf{a}^H\mathbf{Q}_1& a_{JJ}\\ \end{matrix} \right] \notag \\ &=\left[ \begin{matrix} \mu _1& 0& \cdots& 0& \rho _1\\ 0& \mu _2& \cdots& 0& \rho _2\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0& 0& \cdots& \mu _{J-1}& \rho _{J-1}\\ \rho _{1}^{*}& \rho _{2}^{*}& \cdots& \rho _{J-1}^{*}& a_{JJ}\\ \end{matrix} \right] \label{eq40} \end{align} where ${\rho _i} = {\bf{q}}_i^H{\bf{a}}$ for $i = 1, 2, \cdots , J - 1$. Thus, the radius of the $i$-th Gerschgorin circle is \begin{equation} {r_i} = | {{\rho _i}} | = | {{\bf{q}}_i^H{\bf{a}}}| . \label{eq41} \end{equation} In order to scale the radii of the Gerschgorin circles in proportion, we construct the diagonal matrix \begin{equation} {\bf{P}} = {\rm{diag}} ( {{\mu _1},{\mu _2}, \cdots ,{\mu _{J - 1}},{\mu _{J}}} ) \label{eq42} \end{equation} where $\mu _{J}$ is the mean of the eigenvalues $\mu_1, \cdots, \mu_{J-1}$ given by \begin{equation} {\mu _J} = \frac{1}{{J - 1}}\sum\limits_{i = 1}^{J - 1} {{\mu _i}} . \label{eq43} \end{equation} The matrix ${\bf{R}}$ can be similarly transformed into \begin{align} \mathbf{R'}&=\mathbf{PRP}^{-1} \notag \\ &=\left[ \begin{matrix} \mu _1& 0& \cdots& 0& \frac{\mu _1}{\mu _J}\rho _1\\ 0& \mu _2& \cdots& 0& \frac{\mu _2}{\mu _J}\rho _2\\ \vdots& \vdots& \ddots& \vdots& \vdots\\ 0& 0& \cdots& \mu _{J-1}& \frac{\mu _{J-1}}{\mu _J}\rho _{J-1}\\ \frac{\mu _J}{\mu _1}\rho _{1}^{*}& \frac{\mu _J}{\mu _2}\rho _{2}^{*}& \cdots& \frac{\mu _J}{\mu _{J-1}}\rho _{J-1}^{*}& a_{JJ}\\ \end{matrix} \right] . \label{eq44} \end{align} In practice, the centers of the Gerschgorin circles are fixed while their radii are relatively squeezed by the factor ${\mu_i}/{\mu_J}$. The Gerschgorin circles of the noise subspace are squeezed more than those of the signals since the noise-subspace eigenvalues are generally smaller than $\mu _J$. Then, the radii of the compressed Gerschgorin circles, for ${i = 1, \cdots ,J - 1 }$, are denoted by \begin{equation} {R_i} = \left| {\frac{{{\mu _i}{\rho _i}}}{{{\mu _J}}}} \right| = \frac{{{\mu _i}}}{{{\mu _J}}}{r_i}. \label{eq45} \end{equation} \begin{figure} \centering \includegraphics[width=.45\textwidth]{Fig2_NN.eps} \caption{FNN for the calculation of the Gerschgorin radii.} \label{NN} \end{figure} After extracting the radii of the compressed Gerschgorin circles, the identification problem can be considered as a fitting problem that determines how many Gerschgorin circles belong to the signal subspace. However, the radii of the Gerschgorin circles of the signal and noise subspaces have a wide range under different conditions, including the signal-to-noise ratio (SNR), the number of processed samples and the number of receive antennas, which results in a non-linear mapping between the inputs and outputs of the learning system. In this paper, since an FNN can fit any finite input-output mapping problem and has a simple structure, we use a three-layer FNN, as shown in Fig. \ref{NN}, to determine the number of signal-subspace Gerschgorin circles. The FNN includes an input layer, a hidden layer and an output layer. The hidden layer has 10-20 neurons using the sigmoid transfer function, while the output layer only has one linear neuron. After the SNR is normalized, the Levenberg-Marquardt backpropagation algorithm \cite{more1978levenberg} is utilized to train the FNN on training data.
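To make the feature-extraction chain of \eqref{eq35}-\eqref{eq45} concrete, the following is a minimal sketch (the function name and the FNN placeholder in the final comment are ours, for illustration only):
\begin{verbatim}
import numpy as np

def compressed_gerschgorin_radii(Sigma):
    """Compressed Gerschgorin radii of a J x J Hermitian sample
    covariance matrix, following the partitioning, eigendecomposition
    and compression steps above."""
    J = Sigma.shape[0]
    Sigma1 = Sigma[:J - 1, :J - 1]   # leading principal submatrix
    a = Sigma[:J - 1, -1]            # last column of Sigma without a_JJ
    mu, Q1 = np.linalg.eigh(Sigma1)  # eigenpairs (mu_i, q_i) of Sigma1
    r = np.abs(Q1.conj().T @ a)      # original radii r_i = |q_i^H a|
    mu_J = mu.mean()                 # reference eigenvalue mu_J
    return (mu / mu_J) * r           # compressed radii R_i

# A trained FNN then maps the radii to a feature value, e.g.
# alpha_hat = fnn_alpha(compressed_gerschgorin_radii(Sigma_alpha))
\end{verbatim}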
To avoid overfitting, we use a large set of data to train the FNNs, as described later in Section V. Then, the feature value is a fitting function of the radii of the compressed Gerschgorin circles given by \begin{equation} Q=f\left( R_1,R_2,\cdots ,R_{J-1} \right) . \label{eq46} \end{equation} The quantity $Q$ represents the calculated feature, $\hat{\alpha}$, $\hat{\beta}_1$, $\hat{\beta}_2$, $\hat{\beta}_3$, $\hat{\gamma}_1$ or $\hat{\gamma}_2$, depending on the corresponding covariance matrix in Equations \eqref{eq34a}-\eqref{eq34f}. Since the sizes of the covariance matrices, and hence the eigenvalues after decomposition, differ substantially among the three types of subspace-rank features, the number and the value distribution of the radii at the FNN inputs are significantly different for different features. To enhance performance, we use three separately trained FNNs to determine $\{ \hat{\alpha} \}$, $\{ \hat{\beta}_1, \hat{\beta}_2, \hat{\beta}_3 \}$ and $\{ \hat{\gamma}_1, \hat{\gamma}_2 \}$, respectively. Finally, since the MIMO scheme $C$ contains the information on the number of transmit antennas $N_t$, they are simultaneously determined by a minimal weighted norm-1 distance metric given by \begin{align} \hat{N}_t,\hat{C}=\text{arg}\underset{C\in \left\{ \text{CODE} \right\}}{\min} (& 24 \cdot \left| \hat{\alpha}-\alpha \right|+4 \cdot \sum_i{\left| \hat{\beta}_i-\beta _i \right|} \notag \\ &+3 \cdot \sum_i{\left| \hat{\gamma}_i-\gamma _i \right|} ) \label{eq47} \end{align} where the notation \{CODE\} refers to the set of all schemes listed in Table \ref{table1} (see Appendix C). The reason for employing a norm-1 distance metric is that it is more robust against outliers than other distance metrics \cite{L1_distance}. The weighting coefficients are chosen to eliminate the bias caused by the features with larger values: for the single-antenna case, the NTAF, the sum of the ICSFs and the sum of the IRSFs equal 1, 6 and 8, respectively; dividing their least common multiple, 24, by these values yields the weighting coefficients 24, 4 and 3, respectively.\footnote{ It would be unfair to the coefficient of the NTAF if a MIMO scheme with space-time redundancy were chosen as the reference here, since the NTAF does not quantify the space-time redundancy and has the same value for different MIMO schemes with the same number of transmit antennas.} For clarity, the main steps of the proposed algorithm are summarized subsequently. \begin{algorithm} \renewcommand{\thealgorithm}{} \caption{} \begin{algorithmic}[1] \renewcommand{\algorithmicrequire}{\textbf{Input: }} \renewcommand{\algorithmicensure}{\textbf{Output:}} \REQUIRE The observed sequence $\bf{y}$. \ENSURE The number of transmit antennas $\hat{N}_t$ and MIMO scheme $\hat{C}$. \STATE{ Construct the received block ${\bf{Y}}$, the vectorized block ${\bf{\bar y}}$ and the stacked block ${\bf{\tilde y}}$ using \eqref{eq5}, \eqref{eq10} and \eqref{eq19}, respectively.} \STATE{ Compute the covariance matrices defined in \eqref{eq34}.} \STATE{ Compute ${\bf{\Lambda }}_1$ and ${\bf{Q}}_1$ using the eigenvalue decomposition.}
\STATE{ Compute the radii of the original Gerschgorin circles ${r_i}$ using \eqref{eq41}.} \STATE{ Compute the radii of the compressed Gerschgorin circles ${R_i}$ using \eqref{eq45}.} \STATE{ Calculate the subspace-rank features $\{ \hat{\alpha} \}$, $\{ \hat{\beta}_1, \hat{\beta}_2, \hat{\beta}_3 \}$ and $\{ \hat{\gamma}_1, \hat{\gamma}_2 \}$ by the three trained FNNs, respectively.} \STATE{ Compute $\hat{N}_t$ and $\hat{C}$ using the weighted norm-1 distance formula in \eqref{eq47}.} \RETURN { $\hat{N}_t$, $\hat{C}$.} \end{algorithmic} \end{algorithm} \subsection{Extension to OFDM Systems} For an OFDM system, each frequency subchannel can be reasonably assumed to be a quasi-static flat-fading channel, since the subchannel width is designed to be much smaller than the channel's coherence bandwidth. The frequency responses of adjacent subchannels are close to each other, especially when the number of subchannels is increased under a given total bandwidth constraint. Therefore, we can rewrite the subchannel frequency response at the $(k+1)$-th subcarrier as \begin{equation} {{\bf{H}}_{k + 1}} = {{\bf{H}}_k} + \Delta {\bf{H}} \label{eq48} \end{equation} where the small increment, $\Delta {\bf{H}}$, has the form \begin{equation} \Delta {\bf{H}} = \left[ {\begin{array}{*{20}{c}} {\Delta {h^{ ( {1,1} )}}}& \cdots &{\Delta {h^{ ( {{N_t},1} )}}}\\ \vdots & \ddots & \vdots \\ {\Delta {h^{ ( {1,{N_r}} )}}}& \cdots &{\Delta {h^{ ( {{N_t},{N_r}} )}}} \end{array}} \right]. \label{eq49} \end{equation} \begin{figure} \centering \includegraphics[width=.5\textwidth]{Fig3_channel_response.eps} \caption{Channel response of a frequency-selective fading channel consisting of 3 independent taps for $N=64$. } \label{fig2} \end{figure} Fig. \ref{fig2} shows the frequency response of a frequency-selective fading channel that consists of 3 independent taps with an exponential power delay profile\cite{Blind_MIMO_OFDM_SM_AL}, $\sigma _t^2 = {e^{ - {t / 5}}}$. In this figure, the number of subchannels is set to 64, and the two y-axes represent the amplitude and phase responses, respectively. From the figure, we can reasonably assume that four consecutive subcarriers have similar responses, expressed as \begin{equation} \mathbf{H}_k\approx \mathbf{H}_{k+1}\approx \mathbf{H}_{k+2}\approx \mathbf{H}_{k+3}. \label{eq50} \end{equation} \subsubsection{STBC-OFDM Case} For convenience, let us use a new variable to rearrange the subscript indices of the subcarriers, denoted by $p=\lceil {k}/{4} \rceil$, where $\lceil \cdot \rceil$ represents the ceiling function. Assume that $N_b$ OFDM symbols are observed at the receiver. Signals at four consecutive subcarriers are serially incorporated into a data block, and the $p$-th data block is denoted by \begin{equation} \mathbf{\dot{y}}_p=\left[ \mathbf{y}_k\left( 1 \right) ,\cdots ,\mathbf{y}_k\left( N_b \right) ,\mathbf{y}_{k+1}\left( 1 \right), \cdots ,\mathbf{y}_{k+3}\left( N_b \right) \right] . \label{eq52} \end{equation} According to the assumption in \eqref{eq50}, the block $\mathbf{\dot{y}}_p$ can be restructured into a received block, a vectorized block and a stacked block via \eqref{eq5}, \eqref{eq10} and \eqref{eq19}, respectively, and then processed by the operations from \eqref{eq35} to \eqref{eq45} to compute the radii of the compressed Gerschgorin circles, denoted by $\left\{ R_{1,p},R_{2,p},\cdots ,R_{J-1,p} \right\}$.
Since different data blocks are transmitted over independent channels and carry independent data, we can combine the results from the detectors operating on the data blocks $\mathbf{\dot{y}}_p$ for different $p$, expressed as \begin{equation} R_{i}=\frac{1}{ {N_d} }\sum_{p=1}^{ {N_d} }{R_{i,p}} \label{eq53} \end{equation} where $N_d$ is the number of detectors. Finally, the minimal weighted norm-1 distance metric in \eqref{eq47} decides on the number of transmit antennas and the MIMO scheme using the trained FNNs. \subsubsection{SFBC-OFDM Case} Here, we use ${\bf{\dot y}}$ to represent $\mathbf{Y}$, $\mathbf{\bar{y}}$ and $\mathbf{\tilde{y}}$. The estimators of the different covariance matrices need to be rewritten as \begin{equation} \mathbf{\hat{\Sigma}} _{l}=\frac{1}{N_b}\sum_{n=1}^{N_b}{\mathbf{\dot y}_l\left( n \right) \cdot \mathbf{\dot y}_{l}^{H}\left( n \right)} \label{eq54} \end{equation} where the subscript $l$ satisfies $1\le l\le N$ and takes different values for different features, as in \eqref{eq34}. We use detectors to calculate the radii of the compressed Gerschgorin circles, $\left\{ R_{1,l},R_{2,l},\cdots ,R_{J-1,l} \right\}$, for each pair of adjacent subcarriers and combine the $N_d$ detectors as \begin{equation} R_i=\frac{1}{N_d}\sum_{l=1}^{N_d}{R_{i,l}}. \label{eq55} \end{equation} Then, the radii are input into the trained FNNs and the features are computed. Finally, the number of transmit antennas and the MIMO scheme are determined by the minimal weighted norm-1 distance metric in \eqref{eq47}. \section{Simulation Results} \subsection{Simulation Setup} \subsubsection{Training} The training data fed to the FNNs are the radii of the compressed Gerschgorin circles, and the targets are the NTAF, ICSFs and IRSFs. For the single-carrier system, equiprobable 0/1 data are generated and fed to the transmit diversity encoder after being modulated as $4$-PSK symbols. Then, the symbols are transmitted through MIMO Rayleigh fading channels. After timing and frequency synchronization, the receiver decomposes the received signals and generates the training data via \eqref{eq34}-\eqref{eq45}. For the OFDM system, the differences are the additional OFDM operations, the frequency-selective fading channels, and the generation of the final radii via \eqref{eq53}. To achieve a better performance, we retrained the FNNs for the OFDM system, since the radii have different distributions in the single-carrier and OFDM systems. This process was repeated 200 times using the Monte Carlo method for each scheme at each SNR. The SNR was varied from -5 dB to 20 dB. The training parameters, namely the number of receive antennas, observed samples and OFDM symbols, the number of subcarriers and detectors, the CP length, the modulation type, the MIMO scheme pool and the channel parameters, are listed in Table \ref{default_sim}.
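Before turning to the results, the decision step can be summarized by the following minimal sketch of the weighted norm-1 metric in \eqref{eq47}; the two table entries shown are derived from the feature definitions in Section III for SM and AL with $N_t=2$ and are given for illustration only, the authoritative values being those of Table \ref{table1} in Appendix C:
\begin{verbatim}
# Feature table {scheme: (Nt, alpha, (b1, b2, b3), (g1, g2))}; only two
# illustrative entries are shown here (see Table I in Appendix C).
FEATURE_TABLE = {
    "SM-2": (2, 2, (4, 4, 4), (8, 8)),  # no space-time redundancy
    "AL":   (2, 2, (4, 4, 4), (4, 4)),  # real/imag parts are reused
    # ... remaining schemes of the 17-type pool ...
}

def identify(alpha_hat, beta_hat, gamma_hat):
    """Minimal weighted norm-1 decision over the scheme pool."""
    def distance(entry):
        _, a, b, g = entry
        return (24 * abs(alpha_hat - a)
                + 4 * sum(abs(x - y) for x, y in zip(beta_hat, b))
                + 3 * sum(abs(x - y) for x, y in zip(gamma_hat, g)))
    scheme = min(FEATURE_TABLE, key=lambda s: distance(FEATURE_TABLE[s]))
    return FEATURE_TABLE[scheme][0], scheme  # (Nt_hat, C_hat)
\end{verbatim}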
\subsubsection{Simulation Setting} \begin{table*} \centering \caption{Default system parameters used in the training and simulations.} \label{default_sim} \begin{threeparttable} \begin{tabular}{@{}ccccccccc@{}} \toprule System &$N_r$ & $L$/$N_b$ & $N$ & $N_d$ & CP & Modulation & Scheme pool & Channel \\ \midrule Single- & \multirow{2}{*}{8} & 2048\tnote{*} & \multirow{2}{*}{-} &\multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{4-PSK} & \multirow{2}{*}{17 types} & \multirow{2}{*}{Flat-fading with Rayleigh fading coefficients} \\ carrier&& 256\tnote{**}&&&&&& \\ \midrule \multirow{2}{*}{OFDM} & \multirow{2}{*}{8} & 500\tnote{*} & \multirow{2}{*}{256} & 64\tnote{*} & \multirow{2}{*}{10} & \multirow{2}{*}{4-PSK} & \multirow{2}{*}{17 types} & \multirow{2}{8cm}{Frequency-selective fading consisting of 4 independent complex Gaussian taps with power delay profile $\sigma _t^2 = {e^{ - {t / 5}}}$} \\ &&100\tnote{**} & &32\tnote{**}&&&&\\ \bottomrule \end{tabular} \begin{tablenotes} \footnotesize \item[*] Default parameters used in the training. \item[**] Default parameters used in the simulations. \end{tablenotes} \end{threeparttable} \end{table*} \begin{table}[htbp] \centering \caption{Comparison of the size of the MIMO scheme pool between existing algorithms and the proposed algorithm.} \label{pool_comp} \begin{tabular}{@{}ccc@{}} \toprule System & Algorithm & MIMO scheme pool \\ \midrule \multirow{4}{*}{single-carrier} & \cite{correlator_function,higher_order_cyclic,blind_recognition_STBC,Hierarchical_STBC, Fourth_order_TC,Second_Order_cyclic,K_S_test,Classify_STBC_Over_FS} & $\le 5$ types \\ & \cite{STBC_cyclic_2015_ICC} & $ 11$ types \\ & \cite{Likelihood_Based} & $ 13$ types \\ & Proposed algorithm & $ 17$ types \\ \midrule \multirow{2}{*}{STBC-OFDM} & \cite{Blind_MIMO_OFDM,Blind_MIMO_OFDM_SM_AL,Identification_SM_AL_OFDM_cyclic} & $\le 3$ types \\ & Proposed algorithm & $ 17$ types \\ \midrule \multirow{2}{*}{SFBC-OFDM} & \cite{blind_SFBC,My_paper_Globecom,My_paper_TWC2,My_paper_TVT} & $\le 5$ types \\ & Proposed algorithm & $ 17$ types \\ \bottomrule \end{tabular} \end{table} Monte Carlo simulations are conducted to evaluate the performance of the proposed algorithm. We consider three systems, namely, the single-carrier, STBC-OFDM, and SFBC-OFDM systems. Unless otherwise mentioned, the default system parameters are as listed in Table \ref{default_sim}. The noise is assumed zero-mean additive white Gaussian with variance $\sigma _n^2$. The total power of the transmitted signals is constrained to $P=\left( 1/L \right) \text{E}\left[ \text{Tr}\left( \mathbf{C}\left( \mathbf{x}_b \right) \mathbf{C}^H\left( \mathbf{x}_b \right) \right) \right] $ regardless of the number of transmit antennas $N_t$, and the SNR is defined as $10\log _{10}\left( P/\sigma _{n}^{2} \right) $ \cite{Likelihood_Based}. The average probabilities of correct identification of the number of transmit antennas and of the MIMO scheme are calculated as \begin{subequations} \label{eq56} \begin{align} &{\rm{P}}{{\rm{r}}_1} = \frac{1}{4}\sum {{\rm{Pr}}({{\hat{N}_t}}|{N_t})} \\ &{\rm{P}}{{\rm{r}}_2} = \frac{1}{{17}}\sum {{\rm{Pr}}({ \hat C}|C)} \end{align} \end{subequations} respectively, and are used as performance measures. The MIMO scheme pool is defined to contain the 17 types of MIMO schemes listed in Table \ref{table1} (see Appendix C). The simulation of each MIMO scheme was run for 1000 trials.
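As a small illustration of this power normalization (a sketch under our own naming conventions; the expectation is approximated by a Monte Carlo average over drawn codewords):
\begin{verbatim}
import numpy as np

def noise_variance(codewords, snr_db):
    """sigma_n^2 for a target SNR = 10*log10(P / sigma_n^2), where the
    total transmit power P is estimated by averaging Tr(C C^H) over
    randomly drawn codeword matrices C."""
    slots = sum(C.shape[1] for C in codewords)
    P = sum(np.trace(C @ C.conj().T).real for C in codewords) / slots
    return P / 10.0 ** (snr_db / 10.0)
\end{verbatim}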
\subsection{Simulation Results for Single-Carrier System} We first compare the proposed algorithm with the conventional algorithms, and then evaluate the validity of the proposed algorithm with different system parameters and transmission impairments. \subsubsection{Performance Evaluation} First, we compare the size of the MIMO scheme pool between existing algorithms and the proposed algorithm, as shown in Table \ref{pool_comp}. The data on the existing algorithms are based on the survey \cite{Survey_Signal_Identification}. Table \ref{pool_comp} shows that the feature-based algorithms\cite{correlator_function,higher_order_cyclic,blind_recognition_STBC,Hierarchical_STBC, Fourth_order_TC,Second_Order_cyclic,K_S_test,Classify_STBC_Over_FS} are only able to identify fewer than 5 types of MIMO schemes, since many MIMO schemes have the same redundancy locations. The algorithm in \cite{STBC_cyclic_2015_ICC} can identify 11 types of MIMO schemes owing to a pre-processing operation that leads to a finer discriminatory capability. However, this capability depends on \textit{a priori} information including the number of transmit antennas and channel coefficients. The algorithm in \cite{Likelihood_Based} utilizes the code rate to construct a likelihood function that quantifies MIMO schemes with different code rates. There are three approaches introduced in \cite{Likelihood_Based}. The first two approaches require \textit{a priori} information including the number of transmit antennas, channel coefficients and noise power, while the third approach, referred to as codes-parameter (COP) based, only requires the number of transmit antennas. The proposed algorithm can identify 17 types of MIMO schemes, and thus has a wider applicability. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig4_Performance_comp.eps} \caption{Performance comparison between the proposed algorithm and the algorithms AIC/MDL combined with COP based on the average probability of correct identification $\rm{Pr}$.} \label{fig5} \end{figure} \begin{table}[htbp] \centering \caption{Complexity of the proposed algorithm and comparison.} \label{complexity} \begin{tabular}{@{}ccc@{}} \toprule \multirow{2}{*}{\centering Algorithm } & \multirow{2}{2cm}{\centering Computational complexity} & \multirow{2}{2cm}{\centering Matlab runtime (Average)} \\ & & \\ \midrule AIC+COP & $2{\cal{O}}(L+N_{r}^{3})$ & $2.97\ ms$ \\ MDL+COP & $2{\cal{O}}(L+N_{r}^{3})$ & $3.12\ ms$ \\ Proposed & ${\cal{O}}(L+N_{r}^{3})$ & $1.55\ ms$ \\ Proposed (OFDM) & ${\cal{O}}{(N_d}(L+N_{r}^{3}))$ & $31.48\ ms$ \\ \bottomrule \end{tabular} \end{table} For a fair comparison, we compare the combination of AIC/MDL \cite{AIC_MDL} and COP \cite{Likelihood_Based} with the proposed algorithm for the identification of both the number of transmit antennas and the MIMO scheme, using the same parameters of the single-carrier system described in Table \ref{default_sim}. Fig. \ref{fig5} shows that the algorithms in \cite{AIC_MDL,Likelihood_Based} have a better performance in the low-SNR regime. The reason is that these algorithms use precise mathematical expressions to describe and classify the discriminating features under Rayleigh fading channels, which leads to an accurate eigenvalue analysis, while the proposed algorithm employs a heuristic method, which gives it a wider applicability.
In addition, the probabilities of correct identification of AIC and COP do not converge to one, due to the inconsistency of AIC \cite{AIC_MDL} and to the smaller pool size of COP (it only identifies 13 types). From a practical point of view, it is important to analyze the computational complexity of the proposed algorithm, which is ${\cal{O}}(L+N_{r}^{3})$, where $N_{r}^{3}$ floating point operations are needed for the eigenvalue decomposition. It is worth noting that this complexity has the same order as those of AIC/MDL or COP, since they require similar operations including the covariance matrix construction and eigenvalue decomposition. We also verify the practical runtime of these algorithms using a computer with a Core i7-7700T CPU, 16 GB RAM (the simulation software is MATLAB$^{\copyright}$ 2017b). The runtime is evaluated using the default parameters listed in Table \ref{default_sim} through 1000 trials. We recorded the runtime for the proposed algorithms, AIC+COP and MDL+COP for each trial and then averaged the runtimes. The average runtime of the proposed algorithm is about 1.55 $ms$, while the combination of AIC/MDL and COP takes about 2.97 $ms$ or 3.12 $ms$. The complexity results are shown in Table \ref{complexity}. \subsubsection{Effect of the Number of Processed Samples} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig5_STBC_N.eps} \caption{Effect of the number of processed samples on the average probability of correct identification $\rm{Pr}$.} \label{fig6} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig6_STBC_Nr.eps} \caption{Effect of the number of receive antennas on the average probability of correct identification $\rm{Pr}$.} \label{fig7} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig7_STBC_Mod.eps} \caption{Effect of the modulation type on the average probability of correct identification $\rm{Pr}$. } \label{fig8} \end{figure} Fig. \ref{fig6} shows the performance of the proposed algorithm for different observation intervals. In all three cases, the performance improves with the number of processed samples $L$, because the estimation of the sample covariance matrix in \eqref{eq28} becomes more accurate when $L$ increases. \subsubsection{Effect of the Number of Receive Antennas} Fig. \ref{fig7} demonstrates that the probability of correct identification increases with the number of receive antennas for the assumed default simulation parameters. In fact, a large number of receive antennas enhances the estimation performance of the signal subspace dimension, since signals are mapped into a higher-dimensional space where it is easier to discriminate between the features. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig8_STBC_timeoffset.eps} \caption{Effect of the timing offset on the average probability of correct identification $\rm{Pr}$. } \label{fig9} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig9_STBC_freqoffset.eps} \caption{Effect of the frequency offset on the average probability of correct identification $\rm{Pr}$. } \label{fig10} \end{figure} \subsubsection{Effect of Modulation Type} Fig. \ref{fig8} shows the effect of the modulation type on the identification performance. We evaluate the performance with four modulation schemes: 4PSK, 8PSK, 16QAM, 64QAM. These modulations are mandatory for most of the wireless standards. The results demonstrate that the performance does not depend on the modulation type.
The reason is that the modulation type does not affect the Gerschgorin circles of the signal subspace since the rank of the signal subspace is independent of the modulation type. \subsubsection{Effect of Timing Offset} The results presented so far assumed perfect timing synchronization. Now, we evaluate the performance of the proposed algorithm under timing offset. The effect of the timing offset can be modeled as a two-path channel $[1-\zeta, \zeta]$ \cite{time_offset}, where $0 \le \zeta < 1$ is the timing offset normalized to the sampling period. Fig. \ref{fig9} shows that the timing offset has a small effect on the performance of the identification of the number of transmit antennas, while the effect on the performance of the identification of the MIMO scheme can be significant. The reason is that the timing offset destroys the orthogonality of the STBC, which introduces extra terms for the ICSF and IRSF and leads to the overestimation of the features. The timing synchronization parameters can be blindly recovered by algorithms as in \cite{synch_symbol,symbol_recovery} for single-carrier systems and \cite{Cyclic_of_OFDM,OFDM_parameters} for OFDM systems using the cyclostationarity principle. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig10_STBC_Doppler.eps} \caption{Effect of the maximum Doppler frequency on the average probability of correct identification $\rm{Pr}$. } \label{fig11} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig11_Non_Gaussian.eps} \caption{Effect of the non-Gaussian noise on the average probability of correct identification $\rm{Pr}$. } \label{fig14} \end{figure} \subsubsection{Effect of Frequency Offset} Fig. \ref{fig10} illustrates the identification performance of the proposed algorithm for different frequency offsets. The frequency offset, $\Delta f$, is normalized to the signal bandwidth. The identification of the MIMO scheme is impacted by the frequency offset, while the enumeration of the number of transmit antennas is robust with respect to this model mismatch. This is because the frequency offset rotates complex signals and destroys the orthogonality of STBCs, while it does not impact the independence of channels between transmit and receive antennas. In addition, the frequency offset can also be blindly compensated by an algorithm utilizing a kurtosis-type criterion as in \cite{blind_freq_offset}. \subsubsection{Effect of Doppler Frequency} The previous analysis assumed constant channel coefficients over the observation period. Next, we consider the effect of the Doppler spreads on the proposed algorithm. Fig.~\ref{fig11} shows the identification performance of the proposed algorithm for different Doppler frequencies. Here, the maximum Doppler frequency $|f_d|$ is normalized to the signal bandwidth. The results show a good robustness for $|f_d| < 10^{-4}$. In addition, the Doppler frequency for MIMO signals can also be estimated using a maximum likelihood estimator as in \cite{Blind_Doppler}. \subsubsection{Effect of Non-Gaussian Noise} Fig. \ref{fig14} shows the effect of non-Gaussian noise on the proposed algorithm.
Here, the impulsive noise is modeled as Gaussian mixture noise with the probability density function (PDF) given by $p(t) = (1- \varepsilon ){\cal N}(0, \sigma^2) + \varepsilon{\cal N}(0, \eta \sigma^2)$, where $\varepsilon$ is the probability of impulses in the noise and ${\cal N}(0, \sigma^2)$ and ${\cal N}(0, \eta \sigma^2)$ denote zero mean Gaussian PDFs with variances $\sigma^2$ and $\eta \sigma^2$, respectively \cite{Non_Gaussian}. The results indicate that the proposed algorithm has a relatively good robustness against the impulsive noise since the Gerschgorin circle-based method can reduce the effect of non-Gaussian noise. \subsection{Simulation Results for OFDM System} Our proposed algorithm can identify a larger MIMO scheme pool than existing algorithms, as shown in Table \ref{pool_comp}. In addition, the complexity of the proposed algorithm is ${\cal{O}}{(N_d}(L+N_{r}^{3}))$ due to the use of $N_d$ detectors, as shown in Table \ref{complexity}. \subsubsection{STBC-OFDM Case} Fig. \ref{fig12} demonstrates the viability of the proposed algorithm for STBC-OFDM systems and presents the identification performance for different numbers of detectors, denoted by $N_d$. The performance improves significantly as the number of detectors increases from 1 to 16, with diminishing performance gains beyond 16 detectors. This result indicates that the combination in \eqref{eq49} converges fast with increasing $N_d$. It should also be mentioned that employing one detector makes the proposed algorithm degenerate to the single-carrier case. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig12_STBC_OFDM_Nd.eps} \caption{Effect of the number of detectors on the average probability of correct identification $\rm{Pr}$ for STBC-OFDM system. } \label{fig12} \end{figure} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Fig13_SFBC_OFDM_Nd.eps} \caption{Effect of the number of detectors on the average probability of correct identification $\rm{Pr}$ for SFBC-OFDM system.} \label{fig13} \end{figure} \subsubsection{SFBC-OFDM Case} Fig. \ref{fig13} verifies the viability of the proposed algorithm for the SFBC-OFDM system and illustrates the identification performance for different numbers of detectors. It can be observed that the proposed algorithm for the SFBC-OFDM system has a performance close to that for the STBC-OFDM system. This is because the detector combinations (see Equations \eqref{eq49} and \eqref{eq53}) are the same for these two systems with the same parameters. \section{Conclusion and Future Work} We proposed a novel joint blind identification algorithm for the number of transmit antennas and MIMO schemes. After restructuring the received signals, three subspace-rank features based on the dimension of the signal subspace, namely, NTAF, ICSF and IRSF, are derived to discriminate between different numbers of transmit antennas and MIMO schemes. Then, we proposed a neural-network Gerschgorin radii-based method to calculate the three features and used a minimal weighted norm-1 distance metric for decision making. Taking advantage of the subspace-rank features and the neural-network Gerschgorin radii-based method, the proposed algorithm can identify a large number of MIMO schemes and applies to both single-carrier and OFDM systems. In addition, the proposed algorithm has an acceptable computational complexity and does not require \textit{a priori} information on the channel coefficients, modulation type or noise power.
The simulation results demonstrated the viability of the proposed algorithm for a short observation period both in the single-carrier and OFDM systems, and showed an acceptable performance in the presence of non-Gaussian noise, small timing and frequency offsets and Doppler effects. The transmission impairments are very challenging problems for the blind identification of MIMO signals and limit the applicability of such algorithms. Future works include devising robust identification algorithms for MIMO signals under relatively large timing and frequency offsets and Doppler effects. Since the analytical expressions of the MIMO signal model break down under these impairments, we believe that heuristic approaches are better suited to address these issues. In addition, deep learning can be a promising approach to the MIMO blind identification problem and we will investigate it in our future work. Furthermore, off-the-air data are planned to be used in future work. \appendices \section{TD Examples} The code matrices of the SM and AL are, respectively, defined as \begin{align} & {{\bf{C}}^{{\rm{SM}}}} ( {{{\bf{x}}_b}} ) = { [ {{x_{b,0}}, \cdots ,{x_{b,{N_t} - 1}}} ]^T} \notag \\ & {{\bf{C}}^{{\rm{AL}}}} ( {{{\bf{x}}_b}} ) = \left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^ * }\\ {{x_{b,1}}}&{x_{b,0}^ * } \end{array}} \right]. \notag \end{align} The SBCs using ${N_t} = 3$ transmit antennas are defined by the following coding matrices\cite{STBC_Tarokh,STBC_Ganesan,IEEE802_11} \begin{equation} {{\bf{C}}^{{\rm{OSBC}}{{\rm{3}}^1}}}({{\bf{x}}_b}) = {\left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{{x_{b,1}}}&{{x_{b,2}}}\\ { - {x_{b,1}}}&{{x_{b,0}}}&{ - {x_{b,3}}}\\ { - {x_{b,2}}}&{{x_{b,3}}}&{{x_{b,0}}}\\ { - {x_{b,3}}}&{ - {x_{b,2}}}&{{x_{b,1}}}\\ {x_{b,0}^*}&{x_{b,1}^*}&{x_{b,2}^*}\\ { - x_{b,1}^*}&{x_{b,0}^*}&{ - x_{b,3}^*}\\ { - x_{b,2}^*}&{x_{b,3}^*}&{x_{b,0}^*}\\ { - x_{b,3}^*}&{ - x_{b,2}^*}&{x_{b,1}^*} \end{array}} \right]^T} \notag \end{equation} \begin{equation} {{\bf{C}}^{{\rm{OSBC}}{{\rm{3}}^2}}}({{\bf{x}}_b}) = \left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&0&{{x_{b,1}}}&{ - {x_{b,2}}}\\ 0&{{x_{b,0}}}&{x_{b,2}^*}&{x_{b,1}^*}\\ { - x_{b,1}^*}&{ - {x_{b,2}}}&{x_{b,0}^*}&0 \end{array}} \right] \notag \end{equation} \begin{equation} {{\bf{C}}^{{\rm{OSBC}}{{\rm{3}}^3}}}({{\bf{x}}_b}) = \left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^*}&{x_{b,2}^*}&0\\ {{x_{b,1}}}&{x_{b,0}^*}&0&{ - x_{b,2}^*}\\ {{x_{b,2}}}&0&{ - x_{b,0}^*}&{x_{b,1}^*} \end{array}} \right] \notag \end{equation} \begin{equation} \mathbf{C}^{{\rm{OSBC3}}^4}\left( \mathbf{x}_b \right) =\left[ \begin{matrix} x_{b,0}& x_{b,1}& \frac{x_{b,2}}{\sqrt{2}}\\ -x_{b,1}^{*}& x_{b,0}^{*}& \frac{x_{b,2}}{\sqrt{2}}\\ \frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{-x_{b,0}-x_{b,0}^{*}+x_{b,1}-x_{b,1}^{*}}{2}\\ \frac{x_{b,2}^{*}}{\sqrt{2}}& -\frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{x_{b,1}+x_{b,1}^{*}+x_{b,0}-x_{b,0}^{*}}{2}\\ \end{matrix} \right]^T \notag \end{equation} \begin{equation} {{\bf{C}}^{{\rm{SBC3}}}}\left( {{{\bf{x}}_b}} \right) = {\left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^*}&{{x_{b,2}}}\\ {{x_{b,1}}}&{x_{b,0}^*}&{{x_{b,3}}} \end{array}} \right]^T} .
\notag \end{equation} The SBCs and FSTD using ${N_t} = 4$ transmit antennas are defined by the following coding matrices\cite{STBC_Tarokh,STBC_4_ant,QOSTBC,IEEE802_11,sesia2009lte} \begin{equation} \mathbf{C}^{\text{OSBC}4^1}\left( \mathbf{x}_b \right) =\left[ \begin{matrix} x_{b,0}& x_{b,1}& x_{b,2}& x_{b,3}\\ -x_{b,1}& x_{b,0}& -x_{b,3}& x_{b,2}\\ -x_{b,2}& x_{b,3}& x_{b,0}& -x_{b,1}\\ -x_{b,3}& -x_{b,2}& x_{b,1}& x_{b,0}\\ x_{b,0}^{*}& x_{b,1}^{*}& x_{b,2}^{*}& x_{b,3}^{*}\\ -x_{b,1}^{*}& x_{b,0}^{*}& -x_{b,3}^{*}& x_{b,2}^{*}\\ -x_{b,2}^{*}& x_{b,3}^{*}& x_{b,0}^{*}& -x_{b,1}^{*}\\ -x_{b,3}^{*}& -x_{b,2}^{*}& x_{b,1}^{*}& x_{b,0}^{*}\\ \end{matrix} \right] ^T \notag \end{equation} \begin{figure*}[hb] \normalsize \vspace*{4pt} \hrulefill \begin{equation} \mathbf{C}^{\text{OSBC}4^2}\left( \mathbf{x}_b \right) =\left[ \begin{matrix} x_{b,0}& x_{b,1}& \frac{x_{b,2}}{\sqrt{2}}& \frac{x_{b,2}}{\sqrt{2}}\\ -x_{b,1}^{*}& x_{b,0}^{*}& \frac{x_{b,2}}{\sqrt{2}}& -\frac{x_{b,2}}{\sqrt{2}}\\ \frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{-x_{b,0}-x_{b,0}^{*}+x_{b,1}-x_{b,1}^{*}}{2}& \frac{x_{b,0}-x_{b,0}^{*}-x_{b,1}-x_{b,1}^{*}}{2}\\ \frac{x_{b,2}^{*}}{\sqrt{2}}& -\frac{x_{b,2}^{*}}{\sqrt{2}}& \frac{x_{b,1}+x_{b,1}^{*}+x_{b,0}-x_{b,0}^{*}}{2}& \frac{-x_{b,0}-x_{b,0}^{*}-x_{b,1}-x_{b,1}^{*}}{2}\\ \end{matrix} \right] ^T \notag \end{equation} \begin{equation} \mathbf{C}^{\text{OSBC}4^3}\left( \mathbf{x}_b,\mathbf{x}_{b+1} \right) =\left[ \begin{matrix} \mathbf{C}^{\text{AL}}\left( \mathbf{x}_b \right)& \mathbf{C}^{\text{AL}}\left( \mathbf{x}_{b+1} \right)\\ -\left[ \mathbf{C}^{\text{AL}}\left( \mathbf{x}_{b+1} \right) \right] ^*& \frac{\left[ \mathbf{C}^{\text{AL}}\left( \mathbf{x}_{b+1} \right) \right] ^*\mathbf{C}^{\text{AL}}\left( \mathbf{x}_b \right) \mathbf{C}^{\text{AL}}\left( \mathbf{x}_{b+1} \right)}{\lVert \mathbf{x}_{b+1} \rVert ^2}\\ \end{matrix} \right] \notag \end{equation} \end{figure*} \begin{equation} \mathbf{C}^{\text{QOSBC}4}\left( \mathbf{x}_b \right) =\left[ \begin{matrix} x_{b,0}& x_{b,1}& x_{b,2}& x_{b,3}\\ -x_{b,1}^{*}& x_{b,0}^{*}& -x_{b,3}^{*}& x_{b,2}^{*}\\ -x_{b,2}^{*}& -x_{b,3}^{*}& x_{b,0}^{*}& x_{b,1}^{*}\\ x_{b,3}& -x_{b,2}& -x_{b,1}& x_{b,0}\\ \end{matrix} \right] \notag \end{equation} \begin{equation} {{\bf{C}}^{{\rm{SBC}}{4^1}}}\left( {{{\bf{x}}_b}} \right) = {\left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^*}&{{x_{b,2}}}&{ - x_{b,3}^*}\\ {{x_{b,1}}}&{x_{b,0}^*}&{{x_{b,3}}}&{x_{b,2}^*} \end{array}} \right]^T} \notag \end{equation} \begin{equation} {{\bf{C}}^{{\rm{SBC}}{4^2}}}\left( {{{\bf{x}}_b}} \right) = {\left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{ - x_{b,1}^*}&{{x_{b,2}}}&{{x_{b,4}}}\\ {{x_{b,1}}}&{x_{b,0}^*}&{{x_{b,3}}}&{{x_{b,5}}} \end{array}} \right]^T} \notag \end{equation} \begin{equation} {{\bf{C}}^{\rm FSTD}}\left( {{{\bf{x}}_b}} \right) = \left[ {\begin{array}{*{20}{c}} {{x_{b,0}}}&{{x_{b,1}}}&0&0\\ 0&0&{{x_{b,2}}}&{{x_{b,3}}}\\ { - x_{b,1}^*}&{x_{b,0}^*}&0&0\\ 0&0&{ - x_{b,3}^*}&{x_{b,2}^*} \end{array}} \right]. \notag \end{equation} \section{Proof of the Propositions} \subsection{Proof of Proposition 1} Clearly, the rank of ${{\bf{\Sigma }}_{\bf{S}}}$ is $N_t$, which implies that the rank of the first term on the right hand side of \eqref{eq5} is equal to $N_t$. Thus, all of the smallest ${N_r} - {N_t}$ ordered eigenvalues of ${{\bf{\Sigma }}_{\bf{Y}}}$ are equal to $2 \sigma _w^2$. Q.E.D. 
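The eigenvalue structure asserted in Proposition 1 is easy to check numerically. The following Python sketch is our illustration (not part of the original derivation, and with arbitrarily chosen parameter values): it draws a random full-rank channel and verifies that the $N_r - N_t$ smallest eigenvalues of the sample covariance settle at the noise power $2\sigma_w^2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
Nr, Nt, L, s2 = 8, 3, 200000, 0.1     # s2: noise variance per real part
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
S = rng.standard_normal((Nt, L)) + 1j * rng.standard_normal((Nt, L))
W = np.sqrt(s2) * (rng.standard_normal((Nr, L))
                   + 1j * rng.standard_normal((Nr, L)))
Y = H @ S + W
Sigma_Y = (Y @ Y.conj().T) / L        # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(Sigma_Y))
print(eig[:Nr - Nt])                  # all close to 2*s2 = 0.2
\end{verbatim}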
\subsection{Proof of Proposition 2 \& 3} Assume that an $N \times 1$ random vector $\bf s$ has ${M}$ linearly independent random symbols, and is denoted by ${\bf{ s}} = { [ {{x_1} , \cdots ,{x_{M}} ,{{x'}_1} , \cdots ,{{x'}_{{N} - {M}}} } ]^T}$. Then, ${\bf \Sigma} _{\mathbf{s}}$ is given by \begin{equation} {\bf \Sigma} _{\mathbf{s}}=\text{E}\left[ \mathbf{ss}^H \right] =\left[ \begin{matrix} \text{E}\left[ |x_1|^2 \right]& \cdots& \text{E}\left[ x_1\left( x'_{N-M} \right) ^* \right]\\ \vdots& \ddots& \vdots\\ \text{E}\left[ x_{1}^{*}x'_{N-M} \right]& \cdots& \text{E}\left[ |x'_{N-M}|^2 \right]\\ \end{matrix} \right] . \end{equation} According to the definition of the linearly independent random symbol, the vector observations ${\bf X'}_1, \cdots, {\bf X'}_{N-M}$ of the random variables ${x'}_1 , \cdots ,{x'}_{N - M}$ can be represented as linear combinations of ${\bf X}_1, \cdots, {\bf X}_{M}$, i.e., ${\bf X'}_i=\sum_{j=1}^{M}{c_j\cdot {\bf X}_j}$ for $i = 1, \cdots, N-M$, where the $c_j$ are real constants. Then, ${\bf \Sigma} _{\mathbf{s}}$ is transformed into a matrix of rank ${M}$ by elementary row operations resulting in the matrix \begin{equation} {\bf \Sigma }_{\mathbf{s}}=\left[ \begin{matrix} \text{E}\left[ |x_1|^2 \right]& \cdots& \text{E}\left[ x_1\left( x'_{N-M} \right) ^* \right]\\ \vdots& \ddots& \vdots\\ \text{E}\left[ x_{1}^{*}x_M \right]& \cdots& \text{E}\left[ x_M\left( x'_{N-M} \right) ^* \right]\\ \mathbf{0}& \mathbf{O}& \mathbf{0}\\ \end{matrix} \right] . \end{equation} Assume that $\bf H$ is a $P \times N$ full-rank matrix and $\bf y = Hs+ w$, where $\text{E}\left[ \mathbf{ww}^H \right]= \sigma ^2 {\bf I}_P$. Clearly, all of the smallest $P-M$ ordered eigenvalues of ${{\bf{\Sigma }}_{\bf{y}}} = \text{E}\left[ \mathbf{yy}^H \right]$ are equal to $\sigma ^2$. Q.E.D.
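Propositions 2 and 3 admit the same kind of numerical sanity check. The sketch below, again only an illustration under an iid 4-PSK assumption, stacks the real and imaginary parts of an Alamouti block and confirms that the covariance rank equals the number of linearly independent real random symbols ($\gamma_1 = 4$ in Table \ref{table1}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
L = 100000
x = np.exp(1j * (np.pi / 2) * rng.integers(0, 4, (2, L)))  # 4-PSK symbols
# AL block stacked over two symbol periods: s = [x1, x2, -x2*, x1*]^T
s = np.vstack([x[0], x[1], -x[1].conj(), x[0].conj()])
s_ri = np.vstack([s.real, s.imag])    # real/imaginary stacking (8 x L)
Sigma = (s_ri @ s_ri.T) / L
rank = np.sum(np.linalg.eigvalsh(Sigma) > 1e-2)
print(rank)                           # prints 4
\end{verbatim}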
\section{Subspace-Rank Features for Common MIMO Schemes and a Representative Example} \begin{table*}[htbp] \centering \caption{Features of signals using different numbers of transmit antennas and MIMO schemes.} \label{table1} \begin{tabular}{@{}ccccccccc|ccccccccc@{}} \toprule \multirow{2}{*}{$N_t$} &MIMO &Code & \multirow{2}{*}{$\alpha$} & \multirow{2}{*}{$\beta _1$} & \multirow{2}{*}{$\beta _2$} & \multirow{2}{*}{$\beta _3$} & \multirow{2}{*}{$\gamma_1$} & \multirow{2}{*}{$\gamma_2$} & \multirow{2}{*}{$N_t$} &MIMO &Code & \multirow{2}{*}{$\alpha$} & \multirow{2}{*}{$\beta _1$} & \multirow{2}{*}{$\beta _2$} & \multirow{2}{*}{$\beta _3$} & \multirow{2}{*}{$\gamma_1$} & \multirow{2}{*}{$\gamma_2$} \\ &schemes &rate & & & & & & & &schemes &rate & & & & & \\ \midrule \multirow{2}{*}{1} &Single- & \multirow{2}{*}{1} & \multirow{2}{*}{1} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{2} & \multirow{2}{*}{4} & \multirow{2}{*}{4} & 3 & ${\rm{SM}}^3$ & 3 & 3 & 6 & 6 & 6 & 12 & 12 \\ &antenna & & & & & & & &4 & ${\rm{OSBC4}}^1$ & 1/2 & 4 & 4 & 4 & 4 & 8 & 8 \\ 2 & $\rm{AL}$ & 1 & 2 & 4 & 4 & 4 & 4 & 4 & 4 & ${\rm{OSBC4}}^2$ & 3/4 & 4 & 5 & 5 & 5 & 6 & 6 \\ 2 & ${\rm{SM}}^2$ & 2 & 2 & 4 & 4 & 4 & 8 & 8 & 4 & ${\rm{OSBC4}}^3$ & 1 & 4 & 8 & 6 & 8 & 8 & 8 \\ 3 & ${\rm{OSBC3}}^1$ & 1/2 & 3 & 4 & 4 & 4 & 8 & 8 & 4 & ${\rm{QOSBC}}$ & 1 & 4 & 8 & 4 & 8 & 8 & 8 \\ 3 & ${\rm{OSBC3}}^2$ & 3/4 & 3 & 3 & 5 & 5 & 6 & 6 & 4 & ${\rm{FSTD}}$ & 1 & 4 & 4 & 4 & 4 & 4 & 4 \\ 3 & ${\rm{OSBC3}}^3$ & 3/4 & 3 & 5 & 3 & 3 & 6 & 6 & 4 & ${\rm{SBC4}}^1$ & 2 & 4 & 8 & 8 & 8 & 8 & 8 \\ 3 & ${\rm{OSBC3}}^4$ & 3/4 & 3 & 5 & 5 & 3 & 6 & 6 & 4 & ${\rm{SBC4}}^2$ & 3 & 4 & 8 & 8 & 8 & 12 & 12 \\ 3 & ${\rm{SBC3}}$ & 2 & 3 & 6 & 6 & 6 & 8 & 8 & 4 & ${\rm{SM}}^4$ & 4 & 4 & 8 & 8 & 8 & 16 & 16 \\ \bottomrule \end{tabular} \end{table*} Table \ref{table1} shows all signal features using different numbers of transmit antennas and common MIMO schemes. \emph{Case of AL:} Assuming $m=1$, we have ${\bf S}(2) = \left[-x_2^*, x_1^*; x_3, x_4 \right]$ ($l=2$). Then, the covariance matrix ${{\bf{\Sigma }}_{\bf{S}}} (2) = 2{\sigma_s^2}{\bf I}_2$ (assume that the average power of signals is ${\sigma_s^2}$). From {\emph{Proposition 1}}, the rank of the covariance matrix ${{\bf{\Sigma }}_{\bf{Y}}} (2) $ and the cardinality of the set ${\cal A}_2$ are both equal to two. Hence, we have $\alpha = 2$. Subsequently, $\mathbf{\bar{s}}\left( 1 \right) = \left[ x_1, x_2, -x_2^*, x_1^* \right]^T$ ($l_1 = 1$). The covariance matrix ${{\bf{\Sigma }}_{{\bf{\bar s}}}} (1) = {\sigma_s^2}{\bf I}_4$. From {\emph{Proposition 2}}, the rank of the covariance matrix ${{\bf{\Sigma }}_{{\bf{\bar y}}}} (1) $ and the cardinality of the set ${\cal B}_1$ are both equal to four. Thus, we have $\beta_1 = 4$. Furthermore, $\mathbf{\tilde{s}}\left( 1 \right) = \left[ \Re(x_1), \Re(x_2), \Re(-x_2^*), \Re(x_1^*), \Im(x_1), \Im(x_2), \Im(-x_2^*), \Im(x_1^*) \right]^T$ ($l_1=1$). Then, we have ${{\bf{\Sigma }}_{{\bf{\tilde s}}}} (1) = \left[ {\sigma _s^2{{\bf{I}}_4}/2, {{\bf{O}}_4}; {{\bf{O}}_4}, {{\bf{O}}_4}} \right]$. Clearly, the rank of the covariance matrix ${{\bf{\Sigma }}_{{\bf{\tilde y}}}} (1)$ and the cardinality of the set ${\cal C}_1$ are both equal to four. Thus, we have $\gamma_1 = 4$. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} Wavelet theory has known great success since the eighties of the last century. It provides, for function spaces as well as for time series, good bases allowing the decomposition of the studied object into pieces associated with different horizons, known as the levels of decomposition. A wavelet basis is a family of functions obtained from one function, known as the mother wavelet, by translations and dilations. Due to the power of their theory, wavelets have many applications in different domains such as mathematics, physics, electrical engineering and seismic geology. This tool permits the representation of $L^2$-functions in a basis well localized in time and in frequency. Hence, wavelets are special functions characterized by special properties that may not be satisfied by other functions. In the present context, our aim is to develop new wavelet functions based on some special functions such as the Bessel one. Bessel functions form an important class of special functions and are applied almost everywhere in mathematical physics. They are also known as cylindrical functions, or cylindrical harmonics, because of their strong link to the solutions of the Laplace equation in cylindrical coordinates. We aim precisely to apply the generalized $q$-Bessel function introduced in the context of $q$-theory, which constitutes a general variant of the Bessel, modified Bessel and $q$-Bessel functions. To organize this paper, we will briefly review wavelet theory on the real line in Section 2. In Section 3, the basic concepts of Bessel wavelets are presented. Section 4 is devoted to the presentation of the extension to the $q$-Bessel wavelets. Section 5 is concerned with the development of our new extension to the case of generalized $q$-Bessel wavelets. \section{Brief review of wavelets on $\mathbb{R}$} In this part, we recall some basic definitions and properties of wavelets on $\mathbb{R}$. For more details we may refer to \cite{Olivier}, \cite{Prasad}. In $L^2(\mathbb{R})$, a wavelet is a function $\psi\in L^{2}(\mathbb{R})$ satisfying the so-called admissibility condition $$ C_{\psi}=\displaystyle\int_{0}^{\infty}\dfrac{|\widehat{\psi}(\xi)|^{2}}{\xi}d\xi<\infty. $$ From translations and dilations of $\psi$, we obtain a family of wavelets $\{\psi_{a,b}\}$ \begin{equation}\label{0.1} \psi_{a,b}(x)=\dfrac{1}{\sqrt{a}}\,\psi\left(\dfrac{x-b}{a}\right),\;b\in\mathbb{R}, a>0. \end{equation} $\psi$ is called the mother wavelet. $\textbf{a}$ is the parameter of dilation (or scale) and $\textbf{b}$ is the parameter of translation (or position). The continuous wavelet transform of a function $f\in L^{2}(\mathbb{R})$ at the scale $a$ and the position $b$ is given by $$ C_{f}(a,b)=\displaystyle\int_{-\infty}^{+\infty}f(t)\,\overline{\psi}_{a,b}(t)\,dt. $$ The wavelet transform $C_{f}(a,b)$ has several properties. \begin{itemize} \item[$\bullet$] It is linear, in the sense that $$ C_{(\alpha f_{1}+\beta f_{2})}(a,b)=\alpha C_{{f_{1}}}(a,b)+\beta C_{{f_{2}}}(a,b),\;\forall\;\alpha,\beta\in \mathbb{R}\;\mbox{and}\; f_{1},f_{2}\in L^{2}(\mathbb{R}). $$ \item[$\bullet$] It is translation-invariant: $$ C_{(\tau_{b'}f)}(a,b)=C_{f}(a,b-b') $$ where $\tau_{b'}$ refers to the translation of $f$ by $b'$ given by $$ (\tau_{b'}f)(x)=f(x-b'). $$ \item[$\bullet$] It is dilation-invariant, in the sense that, if $f$ satisfies the dilation invariance property $f(x)=\lambda f(rx)$ for some $\lambda,r>0$ fixed, then $$ C_{f}(a,b)=\lambda C_{f}(ra,rb).
$$ \end{itemize} As in Fourier or Hilbert analysis, wavelet analysis provides a Plancherel-type relation which permits the reconstruction of the analyzed function from its wavelet transform. More precisely we have \begin{equation}\label{0.2} \langle f,\,g\rangle\,=\,\displaystyle\frac{1}{C_{\psi}}\displaystyle\int_{a>0}\displaystyle\int_{b\in\mathbb{R}}C_{f}(a,b)\,\overline{C_{g}(a,b)}\,\dfrac{da\,db}{a^{2}},\;\;\;\forall\,f,g\;\in L^{2}(\mathbb{R}) \end{equation} which in turn permits reconstructing the analyzed function $f$ in the $L^2$ sense from its wavelet transform $C_{f}(a,b)$ as \begin{equation}\label{0.3} f(x)=\frac{1}{C_{\psi}}\displaystyle\int_{a>0}\displaystyle\int_{b\in\mathbb{R}}\;C_{f}(a,b)\;\psi(\dfrac{x-b}{a})\;\dfrac{da\,db}{a^{2}}. \end{equation} \section{Bessel wavelets} There are in the literature several approaches to introduce Bessel wavelets. We refer for instance to \cite{Pathak},\;\cite{Pandey}. As its name indicates, Bessel wavelets are related to special functions, namely the Bessel one. Historically, special functions differ from elementary ones such as powers, roots, trigonometric functions and their inverses, mainly by the limitations of these latter classes. Many fundamental problems, such as orbital motion, simultaneous oscillatory chains and the gravitational potential of spherical bodies, were not well described using elementary functions. This made it necessary to extend the classes of elementary functions to more general ones that may describe such unresolved problems well. The present section aims to present basics about Bessel wavelets. For $1\leq p<\infty$ and $\mu>0$, denote $$ L_{\sigma}^{p}(\mathbb{R}_{+}):=\left\{f\;\;\mbox{such that}\;\;\;\Arrowvert f \Arrowvert_{p,\sigma}^p=\displaystyle\int_{0}^{\infty}|f(x)|^{p}\,d\sigma(x)<\infty\right\}, $$ where $d\sigma(x)$ is the measure defined by $$ d\sigma(x)=\dfrac{x^{2\mu}}{2^{\mu-\frac{1}{2}}\,\Gamma(\mu+\frac{1}{2})}\,dx. $$ Denote also $$ j_{\mu}(x)=2^{\mu-\frac{1}{2}}\,\Gamma(\mu+\frac{1}{2})\,x^{\frac{1}{2}-\mu}\,J_{\mu-\frac{1}{2}}(x), $$ where $J_{\mu-\frac{1}{2}}(x)$ is the Bessel function of order $v=\mu-\frac{1}{2}$ given by $$ J_{v}(x)=(\dfrac{x}{2})^{v}\sum_{k=0}^{\infty}\dfrac{(-1)^{k}}{k!\,\Gamma(k+v+1)}(\dfrac{x}{2})^{2k}. $$ Denote next $$ D(x,y,z)=\displaystyle\int_{0}^{\infty}j_{\mu}(xt)\,j_{\mu}(yt)\,j_{\mu}(zt)\,d\sigma(t). $$ For a 1-variable function $f$, we define a translation operator $$ \tau_{x}f(y)=\widetilde{f}(x,y)=\displaystyle\int_{0}^{\infty}\,D(x,y,z)\,f(z)\,d\sigma(z),\;\;\forall\; 0 < x,y <\infty. $$ and for a function $f$ of two variables, we define a dilation operator $$ D_{a}f(x,y)=a^{-2\mu-1}\,f(\frac{x}{a},\frac{y}{a}). $$ Recall that $$ \displaystyle\int_{0}^{\infty}j_{\mu}(zt)\,D(x,y,z)\,d\sigma(z)=\,j_{\mu}(xt)\,j_{\mu}(yt),\;\;\;\forall\; 0<x,y<\infty,\;0\leq t <\infty $$ and $$ \displaystyle\int_{0}^{\infty}\,D(x,y,z)\,d\sigma(z)=1. $$ (See \cite{Pathak}). The Bessel wavelet copies $\Psi_{a,b}$ are defined from the mother Bessel wavelet $\Psi\in L_{\sigma}^{2}(\mathbb{R_{+}})$ by \begin{equation}\label{0.4} \Psi_{a,b}(x)=D_{a}\tau_{b}\Psi(x)=a^{-2\mu-1}\,\displaystyle\int_{0}^{\infty}D(\frac{b}{a},\frac{x}{a},z)\,\Psi(z)\,d\sigma(z),\;\;\forall a,b\geq 0.
\end{equation} As in the classical wavelet theory on $\mathbb{R}$, we define here also the continuous Bessel wavelet transform (CBWT) of a function $f\in L_{\sigma}^{2}(\mathbb{R_{+}})$, at the scale $a$ and the position $b$, by \begin{equation}\label{0.5} (B_{\Psi}f)(a,b)=a^{-2\mu-1}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}f(t)\,\overline{\Psi}(z)\,D(\frac{b}{a},\frac{t}{a},z)\,d\sigma(z)\,d\sigma(t). \end{equation} It is well known in Bessel wavelet theory that such a transform is a continuous function of the variable $(a,b)$. The following result is a variant of the Parseval/Plancherel rules for the case of Bessel wavelet transforms. \begin{theorem}\cite{Pathak} Let $\Psi \in L_{\sigma}^{2}(\mathbb{R}_{+})$ and $f,g\in L_{\sigma}^{2}(\mathbb{R}_{+})$. Then \begin{equation}\label{1.7} a^{-2\mu-1}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}(B_{\Psi}f)(a,b)\,(\overline{B_{\Psi}g})(a,b)\,d\sigma(a)\,d\sigma(b)=C_{\Psi}\;\langle f,g \rangle, \end{equation} whenever $$ C_{\Psi}=\displaystyle\int_{0}^{\infty}t^{-2\mu-1}\;|\widehat{\Psi}(t)|^{2}\;d\sigma(t)<\infty. $$ \end{theorem} Indeed, $$ \begin{array}{lll} (B_{\Psi}f)(a,b)&=&\displaystyle\int_{0}^{+\infty}f(t)\,\overline{\Psi_{a,b}}(t)\,d\sigma(t)\\ &=&\dfrac{1}{a^{2\mu+1}}\,\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}f(t)\,\overline{\Psi}(z)\,D(\frac{b}{a},\frac{t}{a},z)\,d\sigma(z)\,d\sigma(t).\\ \end{array} $$ Now observe that $$ D(\frac{b}{a}, \frac{t}{a}, z)=\displaystyle\int_{0}^{+\infty} j_{\mu}(\frac{b}{a}u)\,j_{\mu}(\frac{t}{a}u)\,j_{\mu}(zu)\,d\sigma(u). $$ Hence, $$ \begin{array}{lll} (B_{\Psi}f)(a,b)&=&\dfrac{1}{a^{2\mu+1}}\displaystyle\int_{\mathbb{R}^{3}_{+}}f(t)\overline{\Psi}(z)j_{\mu}(\frac{b}{a}u)j_{\mu}(\frac{t}{a}u)j_{\mu}(zu)d\sigma(u)d\sigma(z)d\sigma(t)\\ &=&\dfrac{1}{a^{2\mu+1}}\displaystyle\int_{\mathbb{R}^{2}_{+}}\widehat{f}(\frac{u}{a})\overline{\Psi}(z)j_{\mu}(\frac{b}{a}u)j_{\mu}(zu)d\sigma(u)d\sigma(z)\\ &=&\dfrac{1}{a^{2\mu+1}}\displaystyle\int_{\mathbb{R}_{+}}\widehat{f}(\frac{u}{a})\overline{\widehat{\Psi}(u)}j_{\mu}(\frac{b}{a}u)d\sigma(u)\\ &=&\displaystyle\int_{\mathbb{R}_{+}}\widehat{f}(\eta)\overline{\widehat{\Psi}(a\eta)}j_{\mu}(b\eta)d\sigma(\eta). \end{array} $$ As a result, $$ \begin{array}{lll} &&\displaystyle\int_{\mathbb{R}_{+}^{2}}(B_{\Psi}f)(a,b)\overline{(B_{\Psi}g)}(a,b)\dfrac{d\sigma(a)}{a^{2\mu+1}}d\sigma(b)\\ &=&\displaystyle\int_{\mathbb{R}_{+}^{2}}\widehat{f}(\eta)\overline{\widehat{\Psi}(a\eta)}\,\overline{\widehat{g}(\eta)}\,\widehat{\Psi}(a\eta)d\sigma(\eta)\dfrac{d\sigma(a)}{a^{2\mu+1}}\\ &=&\displaystyle\int_{\mathbb{R}_{+}}\widehat{f}(\eta)\overline{\widehat{g}(\eta)}\left(\displaystyle\int_{\mathbb{R}_{+}}|\widehat{\Psi}(a\eta)|^{2}\dfrac{d\sigma(a)}{a^{2\mu+1}}\right)d\sigma(\eta)\\ &=& C_{\Psi}\displaystyle\int_{\mathbb{R}_{+}}\widehat{f}(\eta)\overline{\widehat{g}(\eta)}d\sigma(\eta)\\ &=& C_{\Psi}\langle \widehat{f},\widehat{g}\rangle\\ &=&C_{\Psi}\langle f,g\rangle. \end{array} $$ \section{q-Bessel wavelets} At the beginning of the twentieth century Jackson introduced the theory of $q$-analysis by defining the notions of $q$-derivative and $q$-integral and giving $q$-analogues of certain special functions such as Bessel's one. By virtue of their utilities, special functions and $q$-special functions continue to be a fascinating research topic. Many of these special functions are related to mathematical physics and play an important role in mathematical analysis.
This is particularly the case for $q$-Bessel functions, which represent one of the most important examples of $q$-special functions. In the present section we propose to review the basic developments of $q$-Bessel wavelets, which are the starting point for developing next our extension to the generalized $q$-Bessel case. Backgrounds on $q$-theory and $q$-wavelets may be found in \cite{Dhaouadi}, \cite{Ahmed}, \cite{DhaouadiAtia}, \cite{DhaouadiFitouhiElKamel}, \cite{FitouhoiBettaibi} and the references therein. For $0<q<1$, denote $$ \mathbb{R}_{q}=\{\pm q^{n},\;\;n\in\mathbb{Z}\}\;\;\hbox{and}\;\;\widetilde{\mathbb{R}}_{q}^{+}=\mathbb{R}_{q}^{+}\bigcup \{0\}. $$ On $\widetilde{\mathbb{R}}_{q}^{+}$, the $q$-Jackson integrals from 0 to $a$ and from 0 to $+\infty$ are defined respectively by $$ \displaystyle\int_{0}^{a}f(x)d_{q}x=(1-q)\,a\sum_{n\geq0}f(aq^{n})\,q^{n} $$ and $$ \displaystyle\int_{0}^{\infty}f(x)d_{q}x=(1-q)\sum_{n\in\mathbb{Z}}f(q^{n})\,q^{n} $$ provided that the sums converge absolutely. On $[a,b]$ the integral is given by $$ \displaystyle\int_{a}^{b}f(x)d_{q}x=\displaystyle\int_{0}^{b}f(x)d_{q}x-\displaystyle\int_{0}^{a}f(x)d_{q}x. $$ (See \cite{Ahmed}, \cite{Slim}). This allows us to introduce the following functional space $$ \mathcal{L}_{q,p,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})=\{f: \,\|f\|_{q,p,\alpha}<\infty\}, $$ where $$ \|f\|_{q,p,\alpha}=\left[\displaystyle\int_{0}^{\infty}|f(x)|^{p}\,x^{2\alpha+1}\,d_{q}x\right]^{\frac{1}{p}}, $$ where $\alpha>-\dfrac{1}{2}$ is fixed. Denote next $C_{q}^{0}(\widetilde{\mathbb{R}}_{q}^{+})$ the space of functions defined on $\widetilde{\mathbb{R}}_{q}^{+}$, continuous at 0 and vanishing at $+\infty$, equipped with the induced topology of uniform convergence such that $$ \|f\|_{q,\infty}=\sup_{x\in\widetilde{\mathbb{R}}_{q}^{+}}|f(x)|<\infty. $$ Finally, $C_{q}^{b}(\widetilde{\mathbb{R}}_{q}^{+})$ designates the space of functions that are continuous at $0$ and bounded on $\widetilde{\mathbb{R}}_{q}^{+}$. The $q$-derivative of a function $f\in\mathcal{L}_{q,p,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$ is defined by $$ D_{q}f(x)=\begin{cases} \dfrac{f(x)-f(qx)}{(1-q)x},\;\;&\;x\neq 0\\ f'(0)\;,& else. \end{cases} $$ The $q$-derivative of a function is a linear operator. However, for the product of functions we have a different form, $$ D_{q}\left(fg\right)(x)=f(qx)D_{q}g(x)+D_{q}f(x)g(x), $$ and whenever $g(x)\neq 0$ and $g(qx)\neq 0$, we have $$ D_{q}\left(\frac{f}{g}\right)(x)=\frac{g(qx)D_{q}f(x)-f(qx)D_{q}g(x)}{g(qx)g(x)}. $$ In $q$-theory, we possess an analogue of the integration by parts rule (\cite{Ali}). $$ \displaystyle\int_{a}^{b}g(x)\,D_{q}f(x)\,d_{q}(x)=\left[f(b)g(b)-f(a)g(a)\right]-\displaystyle\int_{a}^{b}\,f(qx)\,D_{q}g(x)\,d_{q}(x), $$ where the integration is understood in the $q$-Jackson sense. We now introduce the normalized $q$-Bessel function (\cite{Manel}) \begin{equation}\label{0.7} j_{\alpha}(x,q^{2})=\sum_{n\geq 0}(-1)^{n}\dfrac{q^{n(n+1)}}{(q^{2\alpha+2},q^{2})_{n}\,(q^{2},q^{2})_{n}}\,x^{2n}, \end{equation} where the $q$-shifted factorials are defined by $$ (a,q)_{0}=1,\;\;\;(a,q)_{n}=\prod_{k=0}^{n-1}(1-aq^{k}),\;\;\;(a,q)_{\infty}=\prod_{k=0}^{+\infty}(1-aq^{k}). $$ We also recall the $q$-Bessel operator defined for all $f$ by \begin{equation}\label{0.8} \Delta_{q,\,\alpha}f(x)=\dfrac{f(q^{-1}x)-(1+q^{2\alpha})f(x)+q^{2\alpha}f(qx)}{x^{2}},\;\;\;\;\forall x\neq 0.
\end{equation} The $q$-Bessel operator is related to the normalized $q$-Bessel function by the eigenvalue equation $$ \Delta_{q,\,\alpha}j_{\alpha}(\lambda x,q^{2})=-\lambda^{2}j_{\alpha}(\lambda x,q^{2}). $$ More precisely, $x\mapsto j_{\alpha}(\lambda x,q^{2})$ is the unique solution of the Laplace eigenvalue problem for $\lambda\in\mathbb{C}$, $$ \begin{cases} \Delta_{q,\,\alpha}u(x)= -\lambda^{2}\,u(x),\\ u(0)=1,\;u'(0)=0. \end{cases} $$ The following relations are easy to show. The first is an analogue of Stokes rule and states that for $f,g\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$ such that $\Delta_{q,\alpha}f,\Delta_{q,\alpha}g\in \mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$, we have \begin{equation}\label{0.10} \displaystyle\int_{0}^{\infty}\Delta_{q,\alpha}f(x)\,g(x)\,x^{2\alpha+1}\,d_{q}x=\displaystyle\int_{0}^{\infty}f(x)\,\Delta_{q,\alpha}g(x)\,x^{2\alpha+1}\,d_{q}x. \end{equation} As a result, we get an orthogonality relation for the normalized $q$-Bessel function (\cite{Radouan}) as $$ \displaystyle\int_{0}^{\infty}j_{\alpha}(xt,q^{2})\,j_{\alpha}(yt,q^{2})\,t^{2\alpha+1}\,d_{q}t=\dfrac{1}{c_{q,\alpha}^{2}}\,\delta_{q,\alpha}(x,y) $$ where \begin{equation}\label{0.9} c_{q,\alpha}=\dfrac{1}{1-q}\dfrac{(q^{2\alpha+2},q^{2})_{\infty}}{(q^{2},q^{2})_{\infty}} \end{equation} and $$ \delta_{q,\alpha}(x,y)= \begin{cases} \dfrac{1}{(1-q)\,x^{2(\alpha+1)}};\;\;\;&\;if\;x=y\\ 0 \;\;\;& else. \end{cases} $$ We now recall the $q$-Bessel Fourier transform $\mathcal{F}_{q,\alpha}$ already defined in (\cite{Ahmed}) as \begin{equation}\label{0.11} \mathcal{F}_{q,\alpha}f(x)=c_{q,\alpha}\displaystyle\int_{0}^{\infty}f(t)\,j_{\alpha}(xt,q^{2})\,t^{2\alpha+1}\,d_{q}t, \end{equation} where $c_{q,\alpha}$ is given by $(\ref{0.9})$ and the $q$-Bessel translation operator defined next by \begin{equation}\label{0.12} T_{q,x}^{\alpha}f(y)=c_{q,\alpha}\displaystyle\int_{0}^{\infty}\mathcal{F}_{q,\alpha}f(t)\,j_{\alpha}(xt,q^{2})\,j_{\alpha}(yt,q^{2})\,t^{2\alpha+1}\,d_{q}t. \end{equation} Such a translation operator satisfies for all $f\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$ a Fourier invariance property (\cite{Radouan}) \begin{equation}\label{0.13} \mathcal{F}_{q,\alpha}(T_{q,x}^{\alpha}f)(\lambda)=j_{\alpha}(\lambda x,q^{2})\,\mathcal{F}_{q,\alpha}f(\lambda),\;\;\forall \lambda,x\in\widetilde{\mathbb{R}}_{q}^{+}. \end{equation} It satisfies also for $f\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$, $$ T_{q,x}^{\alpha}f(y)=T_{q,y}^{\alpha}f(x)\;\;\mbox{and}\;\; T_{q,x}^{\alpha}f(0)=f(x) $$ and $$ T_{q,x}^{\alpha}j_{\alpha}(ty,q^{2})=\,j_{\alpha}(tx,q^{2})\,j_{\alpha}(ty,q^{2}),\;\forall t,x,y\in \mathbb{\widetilde{R}}_{q}^{+}. $$ \begin{definition}\cite{Ahmed} A q-Bessel wavelet is an even function $\Psi\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$ satisfying the following admissibility condition: $$ C_{\alpha,\Psi}=\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,\alpha}\Psi(a)|^{2}\,\dfrac{d_{q}a}{a}<\infty.
$$ The continuous $q$-Bessel wavelet transform of a function $f\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$ is defined by $$ C_{q,\Psi}^{\alpha}(f)(a,b)=c_{q,\alpha}\,\displaystyle\int_{0}^{\infty}f(x)\,\overline{\Psi_{(a,b)}^{\alpha}}(x)\,x^{2\alpha+1}\,d_{q}x,\;\forall a\in \mathbb{R}_{q}^{+},\;\forall b\in \widetilde{\mathbb{R}}_{q}^{+} $$ where $$ \Psi_{(a,b)}^{\alpha}(x)=\sqrt{a}T_{q,b}^{\alpha}(\Psi_{a});\;\forall a,b\in\mathbb{R}_{q}^{+}, $$ and $$ \Psi_{a}(x)=\dfrac{1}{a^{2\alpha+2}}\,\Psi(\dfrac{x}{a}) $$ \end{definition} It is straightforward that for any $f\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$, the function $(a,b)\mapsto\,C_{q,\Psi}^{\alpha}(f)(a,b)$ is continuous. The following result is a variant of the Parseval--Plancherel theorems for the case of $q$-Bessel wavelet transforms. \begin{theorem}\cite{Ahmed} Let $\Psi$ be a $q$-Bessel wavelet in $\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$. \begin{enumerate} \item $\forall\,f,g\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$, there holds that $$ \displaystyle\int_{0}^{\infty}f(x)\,\overline{g}(x)\,x^{2\alpha+1}\,d_{q}x=\dfrac{1}{C_{\alpha,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{\alpha}(f)(a,b)\,\overline{C_{q,\Psi}^{\alpha}}(g)(a,b)\;b^{2\alpha+1}\dfrac{d_{q}a\,d_{q}b}{a^{2}}. $$ \item $\forall\,f\in\mathcal{L}_{q,2,\alpha}(\widetilde{\mathbb{R}}_{q}^{+})$, it holds that $$ f(x)=\dfrac{c_{q,\alpha}}{C_{\alpha,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{\alpha}(f)(a,b)\,\Psi_{(a,b)}^{\alpha}(x)\,b^{2\alpha+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}};\;\forall x \in \mathbb{R}_{q}^{+}. $$ \end{enumerate} \end{theorem} The proof is easy and may be gathered from \cite{Ahmed} and \cite{Slim}. \section{Generalized q-Bessel wavelets} In this part, the purpose is to generalize the previous results on $q$-Bessel wavelets to the case of generalized $q$-Bessel wavelets by replacing the $q$-Bessel function with a more general one. The latter has been introduced in \cite{Manel}. The reader may refer to this reference for backgrounds on such a function and its properties. In the present work, we will not review all such properties. We will recall in a brief way just what we need here. Instead, we propose to introduce new wavelet functions and new wavelet transforms and we will prove some associated famous relations such as the Plancherel/Parseval ones as well as the reconstruction formula. For $\alpha,\,\beta\in\mathbb{R}$, we put $$ v=(\alpha,\beta)\;,\;\;\;\overline{v}=(\beta,\alpha) $$ $$ v+1=(\alpha+1,\beta)\;,\;\;\;|v|=\alpha+\beta $$ and for $1\leq p<\infty$, we put $$ \mathcal{L}_{q,p,v}(\widetilde{\mathbb{R}}_{q}^{+})=\left\{f: \,\|f\|_{q,p,v}=\left[\displaystyle\int_{0}^{\infty}|f(x)|^{p}\,x^{2|v|+1}\,d_{q}x\right]^{\frac{1}{p}}<\infty\right\}. $$ Throughout this part we will fix $0<q<1$ and $\alpha+\beta>-1$. We refer to \cite{Manel} for the definitions, notations and properties. Denote next \begin{equation}\label{0.32} \widetilde{j}_{q,v}(x,q^{2})=x^{-2\beta}\,j_{\alpha-\beta}(q^{-\beta}x,q^{2}). \end{equation} \begin{definition} The generalized q-Bessel Fourier transform $\mathcal{F}_{q,v}$ is defined by \begin{equation}\label{0.46} \mathcal{F}_{q,v}f(x)=c_{q,v}\displaystyle\int_{0}^{\infty}\,f(t)\,\widetilde{j}_{q,v}(tx,q^{2})\,t^{2|v|+1}\,d_{q}t,\;\;\; \forall f\in\mathcal{L}_{q,p,v}(\mathbb{R}_{q}^{+}).
\end{equation} where $$ c_{q,v}=\dfrac{q^{n(\alpha+n)}}{1-q}\,\dfrac{(q^{2\alpha+2},q^{2})_{\infty}}{(q^{2},q^{2})_{\infty}\,(q^{2\alpha+2},q^{2})_{n}}. $$ \end{definition} We are now able to introduce the context of wavelets associated with the new generalized $q$-Bessel function. \begin{definition} A generalized $q$-Bessel wavelet is an even function $\Psi\in\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$ satisfying the following admissibility condition: \begin{equation}\label{0.51} C_{v,\Psi}=\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}\Psi(a)|^{2}\,\dfrac{d_{q}a}{a}<\infty. \end{equation} \end{definition} To introduce the continuous generalized $q$-Bessel wavelet transform of a function $f\in\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$ at the scale $a\in\mathbb{R}_{q}^{+}$ and the position $b\in \widetilde{\mathbb{R}}_{q}^{+}$, we first need a translation operator and a dilation one acting on the wavelet function $\Psi$. A generalized $q$-Bessel translation operator associated with the generalized $q$-Bessel function has already been defined in \cite{Manel} by \begin{equation}\label{0.50} T_{q,x}^{v}f(y)=c_{q,v}\displaystyle\int_{0}^{\infty}\mathcal{F}_{q,v}f(t)\,\widetilde{j}_{q,v}(yt,q^{2})\,\widetilde{j}_{q,v}(xt,q^{2})\,t^{2|v|+1}\,d_{q}t,\;\;\forall x,y \in \widetilde{\mathbb{R}}_{q}^{+}. \end{equation} It is easy to show that $$ T_{q,x}^{v}f(y)=T_{q,y}^{v}f(x)\;\;\hbox{and}\;\; T_{q,x}^{v}f(0)=f(x). $$ \begin{definition} The continuous generalized $q$-Bessel wavelet transform of a function $f\in\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$ at the scale $a\in\mathbb{R}_{q}^{+}$ and the position $b\in \widetilde{\mathbb{R}}_{q}^{+}$ is defined by $$ C_{q,\Psi}^{v}(f)(a,b)=c_{q,v}\,\displaystyle\int_{0}^{\infty}f(x)\,\overline{\Psi_{(a,b),v}}(x)\,x^{2|v|+1}\,d_{q}x,\;\forall a\in\mathbb{R}_{q}^{+},\;\forall b\in\widetilde{\mathbb{R}}_{q}^{+}, $$ where $$ \Psi_{(a,b),v}(x)=\sqrt{a}T_{q,b}^{v}(\Psi_{a}) \qquad\hbox{and}\qquad \Psi_{a}(x)=\dfrac{1}{a^{2|v|+2}}\,\Psi(\dfrac{x}{a}). $$ \end{definition} \textbf{Remark} $$ C_{q,\Psi}^{v}(f)(a,b)=\sqrt{a}\,q^{-4|v|-2}\,\mathcal{F}_{q,v}[\mathcal{F}_{q,v}(f)\cdot\mathcal{F}_{q,v}(\Psi_{a})](b). $$ The following result shows some properties of the generalized $q$-Bessel continuous wavelet transform. \begin{theorem}\label{continuityofgeneralq-besselwt} Let $\Psi$ be a generalized $q$-Bessel wavelet in $\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$. Then for all $f\in\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$ and all $a\in\mathbb{R}_{q}^{+}$ the function $C_{q,\Psi}^{v}(f)(a,.)$ is continuous on $\widetilde{\mathbb{R}}_{q}^{+}$ and $$ \displaystyle\lim_{b\rightarrow\infty}C_{q,\Psi}^{v}(f)(a,b)=0. $$ Furthermore, we have $$ |C_{q,\Psi}^{v}(f)(a,b)|\leq \dfrac{c_{q,v}}{(q,q^{2})^{2}_{\infty}\,a^{|v|+\frac{1}{2}}}\,\|\Psi\|_{q,2,v}\,\|f\|_{q,2,v}. $$ \end{theorem} The proof is based on the following preliminary Lemmas. \begin{lemme}\label{operateurdeltaqv} Define the $(q,v)$-delta operator by $$ \delta_{q,v}(x,y)= \begin{cases} \dfrac{1}{(1-q)\,x^{2(|v|+1)}};\;\;\;&\;if\;x=y\\ 0\;\;\;& else. \end{cases} $$ The following assertions hold. \begin{enumerate} \item For all $f\in\mathcal{L}_{q,2,v}(\mathbb{R}_{q}^{+})$ and all $t\in\mathbb{R}_{q}^{+}$, we have $$ f(t)=\displaystyle\int_{0}^{\infty}\,f(x)\,\delta_{q,v}(x,t)\,x^{2|v|+1}\,d_{q}x.
$$ \item For $x,y\in\mathbb{R}_{q}^{+}$, we have $$ c_{q,v}^{2}\displaystyle\int_{0}^{\infty}\,\widetilde{j}_{q,v}(tx,q^{2})\,\widetilde{j}_{q,v}(ty,q^{2})\,t^{2|v|+1}\,d_{q}t=\,\delta_{q,v}(x,y), $$ \end{enumerate} \end{lemme} \textbf{Proof.} (1) From the definition of the $q$-Jackson integral we have $$ \begin{array}{lll} \displaystyle\int_{0}^{\infty}f(x)\delta_{q,v}(x,t)x^{2|v|+1}d_qx&=&(1-q)\displaystyle\sum_{n\in\mathbb{Z}}f(q^n) \delta_{q,v}(q^n,t)q^{n(2|v|+2)}\\ &=&(1-q)f(q^k)\delta_{q,v}(q^k,t)q^{k(2|v|+2)}\\ &=&f(q^k)\\ \end{array} $$ where $k$ is the unique integer such that $t=q^k$.\\ (2) Let $x,y\in\mathbb{R}_{q}^{+}$. We have $$ \begin{array}{lll} &&\displaystyle\int_{0}^{\infty}\,\widetilde{j}_{q,v}(tx,q^{2})\,\widetilde{j}_{q,v}(ty,q^{2})\,t^{2|v|+1}\,d_{q}t\\ &=&\displaystyle\int_{0}^{\infty}(xy)^{2n}j_{\alpha+n}(q^{n}tx,q^{2})\,j_{\alpha+n}(q^{n}ty,q^{2})\,t^{2(\alpha-n)+1}\,t^{4n}\,d_{q}t\\ &=&(xy)^{2n}\,\displaystyle\int_{0}^{\infty}j_{\alpha+n}(ux,q^{2})\,j_{\alpha+n}(uy,q^{2})\,u^{2(\alpha+n)+1}\,q^{-2n(\alpha+n)}\,d_{q}u\\ &=&(xy)^{2n}\,q^{-2n(\alpha+n)}\,\displaystyle\int_{0}^{\infty}j_{\alpha+n}(ux,q^{2})\,j_{\alpha+n}(uy,q^{2})\,u^{2(\alpha+n)+1}\,d_{q}u,\\ \end{array} $$ where $u=q^{n}t$. So $$ c_{q,v}^{2}\displaystyle\int_{0}^{\infty}\,\widetilde{j}_{q,v}(tx,q^{2})\,\widetilde{j}_{q,v}(ty,q^{2})\,t^{2|v|+1}\,d_{q}t=\,\delta_{q,v}(x,y). $$ \begin{lemme}\label{FourierqvIsometrie} For all $f\in L_{q,2,v}(\widetilde{\mathbb{R}}_q^+)$, we have $$ \|\mathcal{F}_{q,v}f\|_{q,2,v}=\|f\|_{q,2,v}. $$ \end{lemme} Indeed, denote for simplicity $$ \mathcal{K}_{q,v}(x,t,s)=\widetilde{j}_{q,v}(xt,q^2)\overline{\widetilde{j}_{q,v}(xs,q^2)}. $$ We have $$ \begin{array}{lll} \|\mathcal{F}_{q,v}f\|_{q,2,v}^2 &=&c_{q,v}^2\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}f(t)\overline{f(s)}\mathcal{K}_{q,v}(x,t,s)(tsx)^{2|v|+1}d_qtd_qsd_qx\\ &=&c_{q,v}^2\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}f(t)\overline{f(s)}\displaystyle\int_{0}^{\infty}\mathcal{K}_{q,v}(x,t,s)x^{2|v|+1}d_qx(ts)^{2|v|+1}d_qtd_qs\\ &=&\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}f(t)\overline{f(s)}\delta_{q,v}(t,s)t^{2|v|+1}s^{2|v|+1}d_qtd_qs\\ &=&\displaystyle\int_{0}^{\infty}f(t)t^{2|v|+1}\displaystyle\int_{0}^{\infty}\overline{f(s)}\delta_{q,v}(t,s)s^{2|v|+1}d_qsd_qt\\ &=&\displaystyle\int_{0}^{\infty}|f(t)|^2t^{2|v|+1}d_qt\\ &=&\|f\|_{q,2,v}^2. \end{array} $$ The second and the fourth equalities are simple applications of Fubini's rule. The third one follows from Assertion 2 in Lemma \ref{operateurdeltaqv}. Finally, the fifth equality results from Assertion 1 in Lemma \ref{operateurdeltaqv}. \begin{lemme}\label{lemme5.1} For all $f\in L_{q,2,v}(\widetilde{\mathbb{R}}_q^+)$, the following assertions are true. \begin{enumerate} \item\qquad $\displaystyle\|\mathcal{T}_{q,x}^{v}\Psi\|_{q,2,v}\leq \frac{1}{(q,q^{2})_{\infty}^{2}}\,\|\Psi\|_{q,2,v}$.\vskip0.25cm \item\qquad $\displaystyle\|\Psi_{a}\|_{q,2,v}^{2}=\dfrac{1}{a^{2|v|+2}}\,\|\Psi\|_{q,2,v}^{2}$. \end{enumerate} \end{lemme} \textbf{Proof.} (1) Denote as in Lemma \ref{FourierqvIsometrie} $$ \widetilde{\mathcal{K}}_{q,v}(x,y,t,s)=\mathcal{K}_{q,v}(x,t,s)\mathcal{K}_{q,v}(y,t,s) $$ and $$ \mathcal{Q}_{q,v}f(t,s)=\mathcal{F}_{q,v}f(t)\overline{\mathcal{F}_{q,v}f(s)}.
$$ We have $$ \begin{array}{lll} \medskip&&\|T_{q,x}^vf\|_{q,2,v}^2\\ &=&c_{q,v}^2\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}\mathcal{Q}_{q,v}f(t,s)\widetilde{\mathcal{K}}_{q,v}(x,y,t,s)(tsy)^{2|v|+1}d_qtd_qsd_qy\\ &=&c_{q,v}^2\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}\mathcal{Q}_{q,v}f(t,s)\displaystyle\int_{0}^{\infty}\widetilde{\mathcal{K}}_{q,v}(x,y,t,s)y^{2|v|+1}d_qy(ts)^{2|v|+1}d_qtd_qs\\ &=&\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}\mathcal{Q}_{q,v}f(t,s)\delta_{q,v}(t,s)\mathcal{K}_{q,v}(x,t,s)(ts)^{2|v|+1}d_qtd_qs\\ &=&\displaystyle\int_{0}^{\infty}\mathcal{F}_{q,v}f(t) \widetilde{j}_{q,v}(xt,q^2)t^{2|v|+1} \displaystyle\int_{0}^{\infty}\overline{\mathcal{F}_{q,v}f(s)} \delta_{q,v}(t,s)\overline{\widetilde{j}_{q,v}(xs,q^2)}s^{2|v|+1}d_qsd_qt\\ &=&\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}f(t)|^2|\widetilde{j}_{q,v}(xt,q^2)|^2t^{2|v|+1}d_qt. \end{array} $$ As previously, the second and the fourth equalities are simple applications of Fubini's rule. The third and the fifth ones are applications of the second and the first assertions in Lemma \ref{operateurdeltaqv} respectively. Next, observing that $$ |\widetilde{j}_{q,v}(xt,q^2)|\leq\displaystyle\frac{1}{(q,q^2)_\infty^2}, $$ we get $$ \begin{array}{lll} \medskip\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}f(t)|^2|\widetilde{j}_{q,v}(xt,q^2)|^2t^{2|v|+1}d_qt &\leq&\displaystyle\frac{1}{(q,q^2)_\infty^4} \displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}f(t)|^2t^{2|v|+1}d_qt\\ \medskip&=&\displaystyle\frac{1}{(q,q^2)_\infty^4} \|\mathcal{F}_{q,v}f\|_{q,2,v}^2, \end{array} $$ which, together with Lemma \ref{FourierqvIsometrie}, yields the first assertion. (2) Recall that $$ \begin{array}{lll} \|\Psi_{a}\|_{q,2,v}^2&=&\displaystyle\int_{0}^{\infty}|\Psi_{a}(x)|^{2}x^{2|v|+1}d_{q}(x)\\ &=&\dfrac{1}{a^{4|v|+4}}\displaystyle\int_{0}^{\infty}|\Psi(\frac{x}{a})|^{2}x^{2|v|+1}d_{q}(x). \end{array} $$ Which by setting $u=\dfrac{x}{a}$ yields that $$ \begin{array}{lll} \|\Psi_{a}\|_{q,2,v}^2&=&\dfrac{1}{a^{2|v|+2}}\displaystyle\int_{0}^{\infty}|\Psi(u)|^{2}u^{2|v|+1}d_{q}(u)\\ &=&\dfrac{1}{a^{2|v|+2}}\|\Psi\|_{q,2,v}^2. \end{array} $$ \textbf{Proof of Theorem \ref{continuityofgeneralq-besselwt}} For $a\in\mathbb{R}_{q}^{+}$ and $b\in\widetilde{\mathbb{R}}_{q}^{+}$, we have $$ C_{q,\Psi}^{v}(f)(a,b) =c_{q,v}\displaystyle\int_{0}^{\infty}f(x)\overline{\Psi_{(a,b),v}}(x)x^{2|v|+1}d_{q}x. $$ Observing that $$ \Psi_{(a,b),v}(x)=\sqrt{a}\mathcal{T}_{q,b}^{v}(\Psi_{a}) $$ we get $$ C_{q,\Psi}^{v}(f)(a,b)=c_{q,v}\sqrt{a}\displaystyle\int_{0}^{\infty}f(x)\overline{T_{q,b}^{v}\Psi_{a}}(x)x^{2|v|+1}d_{q}x. $$ Next, H\"older's inequality yields that $$ \left|C_{q,\Psi}^{v}(f)(a,b)\right|\leq c_{q,v}\sqrt{a}\|f\|_{q,2,v}\|T_{q,b}^{v}\Psi_{a}\|_{q,2,v}. $$ Which by Lemma \ref{lemme5.1} implies that $$ \left|C_{q,\Psi}^{v}(f)(a,b)\right|\leq \dfrac{c_{q,v}}{(q,q^{2})^{2}_{\infty}\,a^{|v|+\frac{1}{2}}}\,\|\Psi\|_{q,2,v}\,\|f\|_{q,2,v}. $$ The following result shows Plancherel and Parseval formulas for the generalized $q$-Bessel wavelet transform. \begin{theorem}\label{PlancherelParseval} Let $\Psi$ be a generalized $q$-Bessel wavelet in $\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$. Then we have \begin{enumerate} \item $\forall f\in \mathcal{L}_{q,2,v}(\mathbb{R}_{q}^{+})$, $$ \dfrac{1}{C_{v,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}|C_{q,\Psi}^{v}(f)(a,b)|^{2}\;b^{2|v|+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}} =\|f\|_{q,2,v}^{2}.
$$ \item $\forall f,g\in\mathcal{L}_{q,2,v}(\widetilde{\mathbb{R}}_{q}^{+})$, $$ \displaystyle\int_{0}^{\infty}f(x)\overline{g}(x)\,x^{2|v|+1}\,d_{q}x=\dfrac{1}{C_{v,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{v}(f)(a,b)\,\overline{C_{q,\Psi}^{v}}(g)(a,b)\;b^{2|v|+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}}. $$ \end{enumerate} \end{theorem} \textbf{Proof.} (1) We have $$ \begin{array}{lll} && q^{4|v|+2}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}|C_{q,\Psi}^{v}(f)(a,b)|^{2}\;b^{2|v|+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}}\\&=& q^{4|v|+2}\displaystyle\int_{0}^{\infty}\left(\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}(f)(x)|^{2}|\mathcal{F}_{q,v}(\overline{\Psi_{a}})|^{2}(x)x^{2|v|+1}d_{q}x\right)\dfrac{d_{q}a}{a}\\ &=&\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}(f)(x)|^{2}\left(\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}(\Psi)(ax)|^{2}\dfrac{d_{q}a}{a}\right)x^{2|v|+1}d_{q}x\\ &=&C_{v,\Psi}\displaystyle\int_{0}^{\infty}|\mathcal{F}_{q,v}(f)(x)|^{2}x^{2|v|+1}d_{q}x\\ &=& C_{v,\Psi}\|f\|_{q,2,v}^{2}. \end{array} $$ Hence, the first assertion is proved.\\ (2) may be deduced from the first assertion by replacing $f$ with $f+g$ and using the linearity of the wavelet transform. \begin{theorem} Let $\Psi$ be a generalized $q$-Bessel wavelet in $\mathcal{L}_{q,2,v}(\mathbb{R}_{q}^{+})$. Then, for all $f \in \mathcal{L}_{q,2,v}(\mathbb{R}_{q}^{+})$, we have \begin{equation}\label{0.61} f(x)=\dfrac{ c_{q,v}}{C_{v,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{v}(f)(a,b)\,\Psi_{(a,b),v}(x)\,b^{2|v|+1}\dfrac{d_{q}b \,d_{q}a}{a^{2}};\;\forall x \in \mathbb{R}_{q}^{+}. \end{equation} \end{theorem} \textbf{Proof} For $x\in\mathbb{R}_{q}^{+}$, consider the function $g=\delta_{q,v}(x,.)$. It is straightforward that $$ C_{q,\Psi}^{v}(g)(a,b)=c_{q,v}\overline{\Psi_{(a,b),v}}(x). $$ Consequently, the right hand part of assertion 2 in Theorem \ref{PlancherelParseval} becomes $$ \dfrac{c_{q,v}}{C_{v,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{v}(f)(a,b)\Psi_{(a,b),v}(x)\;b^{2|v|+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}}. $$ On the other hand, with the choice of $g$ above, it follows from Lemma \ref{operateurdeltaqv} that for all $f\in L_{q,2,v}(\mathbb{R}_q^+)$, $$ \displaystyle\int_{0}^{\infty}f(x)\overline{g}(x)\,x^{2|v|+1}\,d_{q}x=f(x). $$ Consequently, $$ f(x)=\dfrac{c_{q,v}}{C_{v,\Psi}}\displaystyle\int_{0}^{\infty}\displaystyle\int_{0}^{\infty}C_{q,\Psi}^{v}(f)(a,b)\Psi_{(a,b),v}(x)\;b^{2|v|+1}\dfrac{d_{q}a \,d_{q}b}{a^{2}}. $$
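To close this section, the $q$-Jackson integral underlying all of the above formulas is easy to experiment with numerically. The following Python sketch is our illustration (not drawn from the references), with the bilateral sum truncated where the integrand is negligible; it evaluates $\int_0^{\infty}f(x)\,d_qx=(1-q)\sum_{n\in\mathbb{Z}}f(q^{n})q^{n}$ for $f(x)=xe^{-x}$ and shows that the result approaches the classical value $\int_0^{\infty}xe^{-x}dx=1$ as $q\to 1^{-}$.
\begin{verbatim}
import numpy as np

def q_jackson(f, q, xmax=50.0, xmin=1e-12):
    """Truncated q-Jackson integral of f on (0, infinity)."""
    n_lo = int(np.ceil(np.log(xmax) / np.log(q)))    # most negative n kept
    n_hi = int(np.floor(np.log(xmin) / np.log(q)))   # largest n kept
    x = q ** np.arange(n_lo, n_hi + 1, dtype=float)  # grid points q^n
    return (1.0 - q) * np.sum(f(x) * x)

f = lambda x: x * np.exp(-x)
for q in (0.9, 0.99, 0.999):
    print(q, q_jackson(f, q))    # tends to Gamma(2) = 1 as q -> 1
\end{verbatim}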
\section{Introduction} In this paper, we continue to investigate the optical spectra of galactic infrared (IR) sources identified with highly evolved stars at the short-lived post-asymptotic giant branch (post--AGB) evolutionary stage, at which intermediate-mass ($3\div8\,\mathcal{M}_{\sun}$) stars rapidly pass into the planetary-nebula phase. Our comprehensive studies of supergiants with large IR excesses have led to the determination (or refinement) of their evolutionary status. One of the results of our spectroscopy for a sample of high luminosity stars, PPN candidates, is the conclusion reached by Klochkova (2012) about the inhomogeneity of this sample. Apart from PPNe, the sample produced on the basis of IR photometry and low-resolution spectroscopy includes young pre-main sequence stars, high-luminosity stars of various types, from low-mass semiregular variables to hypergiants. The fact that PPNe belong to the post--AGB stage makes them extremely interesting both in investigating the final evolutionary stages of intermediate-mass stars and in studying the chemical evolution of stars and galaxies as a whole. The atmospheres of stars at such an advanced evolutionary stage have chemical peculiarities attributable to the successive change of energy-releasing nuclear reactions accompanied by a change in the structure and chemical composition of the stellar envelope, the mixing of matter, and the dredge--up of nuclear--reaction products to the surface layers of the atmosphere. A small homogeneous subgroup of PPNe with evolutionary overabundances of carbon and heavy metals found in the atmospheres of their central stars has been identified over the last two decades during the spectroscopy of a sample of PPN candidates at the world’s largest telescopes (Klochkova 1995, 2013; Za\v{c}s et al. 1995; Reddy 1997, 1999, 2002; Klochkova et al. 1999, 2000a, 2000b; van Winckel and Reyniers 2000; Kipper and Klochkova 2006). The circumstellar envelopes of these objects have a complex morphology and are generally enriched in carbon, which manifests itself in the presence of carbon-containing C$_2$, C$_3$, CN, CO, etc. molecular bands in their IR, radio, and optical spectra. These PPNe belong to those few objects whose spectra exhibit the 21\,$\mu$ envelope emission band (Kwok et al. 1999; Hrivnak et al. 2009). Despite an active search for appropriate chemical agents, there is no ultimate identification of this extremely rarely observed feature at present. However, its presence in the spectra of PPNe with carbon-enriched envelopes suggests that this emission may be due to the presence of a complex carbon-based molecule in the envelope (for details and references, see Hrivnak et al. 2009). A circumstellar gas--dust envelope manifests itself in peculiarities of the radio, IR, and optical spectra of post--AGB supergiants. The optical spectra of PPNe differ from those of classical massive supergiants by the presence of molecular bands superimposed on the spectrum of an F--G supergiant and by the anomalous behavior of the profiles for selected spectral features. These can be the complex emission--absorption profiles of HI, NaI, and HeI lines, the profiles of strong absorptions distorted by emissions or splitting, and metal emissions. The manifestations of the circumstellar envelope in the optical spectra of PPNe are considered in more detail in Klochkova (2014).
The previous results of our spectroscopy for PPNe with the 6-m BTA telescope were published in a series of original papers and are summarized in the reviews by Klochkova (1997, 2012, 2014). In this paper, we present new results of high-resolution spectroscopy for the post--AGB star identified with the IR--source IRAS\,23304+6147 (below referred to as IRAS\,23304). The central star of IRAS\,23304 is a rather faint (B\,=\,15$\lefteqn{.}^{\rm m}$52, V\,=\,13$\lefteqn{.}^{\rm m}$15) supergiant of spectral type G2\,Ia lying near the Galactic plane (b\,=\,0$\lefteqn{.}^{\rm o}$58). According to the high-spatial-resolution Hubble Space Telescope observations by Sahai et al. (2007), the circumstellar envelope in this system has a complex morphology including a multipolar structure and an extended halo with arc-shaped features. The first study of the optical spectrum for the central star of IRAS\,23304, which belongs to the group of stars with atmospheres enriched in carbon and heavy metals, and the calculations of elemental abundances in its atmosphere were performed by Klochkova et al. (2000a), who determined the main parameters of the star: its effective temperature Teff\,=\,5900\,K, its surface gravity, a considerably reduced metallicity relative to the Sun, [Fe/H]\,=\,$-0.61$, and the abundances of 25 other chemical elements. Van Winckel and Reyniers (2000) found similar chemical peculiarities based on a higher-resolution spectrum. However, in both publications, aimed mainly at studying the fundamental parameters of the star and the chemical composition of its atmosphere, little attention was given to the peculiarities of its spectrum, the pattern of radial velocities, and their variability with time. In this paper, the peculiarities of the optical spectrum for the central star of IRAS\,23304 and their variability are considered in more detail. Our observational data are briefly described in Section~2. The peculiarities of the profiles for the H$\alpha$ and NaI~D lines, the metal lines detected from the high-resolution spectra, and the molecular bands, as well as the data on the velocity field in the supergiant's atmosphere and envelope, are considered in Section~3. Main conclusions are presented in Section~4. \section{Observational data} In this paper, we use the spectra taken at the Nasmyth focus with the NES echelle spectrograph (Panchuk et al. 2007, 2009) on October 12, 2013. In combination with an image slicer, the NES spectrograph provides a spectral resolution R\,$\approx$\,60000. A 2048$\times$4096-pixel CCD array has been used at the NES spectrograph since 2011, which has allowed the recorded spectral range to be extended considerably. In addition, for comparison, we used the spectra taken with the PFES echelle spectrograph (Panchuk et al. 1998) at the prime focus of the BTA telescope with a resolution R\,$\approx$\,15000 during several observational sets in 1997. The details of our spectrophotometric and positional measurements of the spectra were described in previously published papers; the corresponding references are given in Klochkova (2014). Note that applying the image slicer required a significant modification of the standard ECHELLE context of the MIDAS software package. The data were extracted from two-dimensional echelle spectra with the software package described by Yushkin and Klochkova (2005).
The DECH\,20 code (Galazutdinov 1992), which allows, in particular, the radial velocities to be measured from individual features of complex lines typical of the spectra of the program stars, was used to reduce the extracted spectra. \section{Peculiarities of the optical spectrum} \begin{figure} \includegraphics[angle=0,width=0.6\columnwidth,bb=20 30 570 780,clip]{fig1.ps} \caption{\it H$\alpha$ (thin curve) and NaI~D2 (thick curve) line profiles in the spectrum of IRAS\,23304. The arrow marks the envelope velocity inferred from the C$_2$ Swan band, Vr(CS)\,=$-41$\,km/s. The crosses mark the positions of two interstellar (IS) and circumstellar (CS) components of the NaI~D2 line. The vertical dashed line indicates the systemic velocity. Here and in Fig.\,2, the intensity of the normalized continuum along the vertical axis is taken as 100.} \end{figure} It follows from our spectroscopy for the sample of PPNe that the following main types of spectral features are observed in their optical spectra: (1) low- or moderate-intensity metal absorptions whose symmetric profiles have no apparent distortions; (2) complex neutral hydrogen line profiles changing with time and including absorption and emission components; (3) the strongest metal absorptions with a low lower-level excitation potential, whose variable profiles are often distorted by envelope features, causing an asymmetry of the profile or its splitting into components; (4) absorption or emission bands of molecules, mostly carbon-containing ones; (5) envelope components of the NaI and KI resonance lines; and (6) narrow permitted or forbidden emission lines of metals originating in the envelopes. The main difference between the spectra of PPNe and massive supergiants is the presence of features of types 2--6. All the main peculiarities of the optical spectra for PPNe are contained in the spectra of the post--AGB star HD\,56126. By its combination of observed properties (a typical double-humped spectral energy distribution; an F--supergiant spectrum with a variable absorption--emission H$\alpha$ line profile; the presence of C$_2$ Swan molecular bands in the optical spectrum, originating in an outflowing extended envelope; large overabundances of carbon and of heavy metals synthesized during the star's evolution through the s--process and dredged up to the surface layers of the atmosphere through mixing), the latter may be considered a canonical post--AGB star. As follows from Fig.\,2 in the spectral atlas (Klochkova et al. 2007) based on a long-term monitoring of HD\,56126 with BTA, the H$\alpha$ profile in its spectrum displayed all of the profile types listed above: an asymmetric core, a direct or inverse P\,Cyg profile, and a profile with two emissions in the wings. In the subgroup of PPNe with the 21\,$\mu$m emission feature, the central star of the IR source IRAS\,23304 is among the coolest (its spectral type is G2\,Ia), with the effective temperature Teff\,=\,5900\,K. The temperatures of the central stars of IRAS\,22272+5435 (G5\,Ia) and IRAS\,20000+3239 (G8\,Ia) are even lower: Teff\,=\,5650\,K (Klochkova et al. 2009) and 5000\,K (Klochkova and Kipper 2006), respectively. \begin{table} \bigskip \caption{\footnotesize\it Observational data and heliocentric radial velocities Vr, km/s, measured from various spectral features.
The number of lines used to determine the mean velocity is given in parentheses} \medskip \begin{tabular}{ l| c| c| c| c| c| c| c } \hline Date& $\Delta\lambda$, & \multicolumn{6}{c}{\small Vr, km/s}\\ \cline{3-8} & \AA{} & Absorptions & H$\beta$& H$\alpha$ & C$_2$ & NaI & DIB \\ & & (297) & & & (24) & (2) & (4) \\ \hline 12.10.2013 & 4500--6980 & $-25.7\pm0.2$ & $-27.8$ & $-26.6$ & &$-26.0$ & \\ & & & & & &$-61.6$ & \\ & & & & & $-41.3\pm 0.2$ &$-41.0$ & \\ & & & & & &$-13.2$ & $-14.0\pm1.3$ \\ \hline \end{tabular} \end{table} {\bf The H$\alpha$ line}. The H$\alpha$ line in the PPN spectra has complex (combining emission and absorption components), time-varying profiles of various types: with an asymmetric core, P\,Cyg or inverse P\,Cyg ones, and with two emission components in the wings. The presence of an emission in the H$\alpha$ line points to a high mass-loss rate and is one of the criteria for searching for and identifying PPNe. The H$\alpha$ profile in the spectrum of IRAS\,23304 is also subject to significant changes: in Fig.\,1 from Klochkova et al. (2000a), a strong emission is superimposed on the short-wavelength wing of the absorption profile typical of G--supergiants. The profile in the spectrum taken in 2013 and presented in Fig.\,1 in ``relative intensity--Vr'' coordinates has an absorption core of the same depth as that in the 1997 spectrum, but it contains no peculiarities. {\bf Molecular features}. In their paper devoted to investigating the molecular component of the PPN spectra, Bakker et al. (1997) point out the presence of C$_2$ bands for the source IRAS\,23304 whose positions in the spectrum correspond to the circumstellar envelope velocity Vr(CS)\,=$-39.7$\,km/s, which leads to the envelope expansion velocity Vexp\,=\,13.9\,km/s. The observations of two envelope CO bands give envelope expansion velocities Vexp\,=\,9.2 and 10.3\,km/s (Hrivnak et al. 2005). We emphasize that Bakker et al. (1997) pointed out the presence of Swan bands in the form of absorption features. The (0;\,0) 5165\,\AA{} band was also recorded in our 1997 and 2013 spectra. However, the intensity in the head of the (0;\,1) band in both spectra appreciably exceeds the local continuum level near 5635\,\AA{}, i.e., we detected a complex absorption--emission profile of the (0;\,1) 5635\,\AA{} band. The table gives the radial velocity Vr(C$_2$)\,=$-41.3\pm0.2$\,km/s that we measured from the positions of the 24 rotational lines of the Swan 5165\,\AA{} band in the 2013 spectrum. Taking into account the systemic velocity Vsys(CO)\,=$-25.8$\,km/s from Woodsworth et al. (1990), we obtain the envelope expansion velocity Vexp\,=\,$|$Vr(C$_2$)$\,-\,$Vsys$|$\,=\,$|-41.3+25.8|$\,=\,15.5\,km/s. A velocity Vr(C$_2)\approx -50$\,km/s was derived in Klochkova et al. (2000a) from the positions of the band heads because of the lower spectral resolution: when an asymmetric band head shaded toward the violet is convolved with a low-resolution instrumental profile, the position of the head is shifted toward shorter wavelengths (more negative velocities), which explains the difference between the measurements in the 1997 and 2013 spectra. {\bf The NaI doublet resonance lines and diffuse bands}. The heliocentric radial velocities for the main components of the NaI~D lines presented in Fig.\,1 are Vr\,=$-61.6$, $-41.0$, $-26.0$, and $-13.2$\,km/s (see the table). Here, it should be noted that the velocities for these components differ from those in our previous publication (Klochkova et al. 2000a).
The moderate resolution (R\,=\,15000) of the spectra used in that paper was insufficient for the identification of individual components in the complex NaI~D line profile. The component of the NaI doublet lines whose position corresponds to the velocity Vr\,=$-26.0$\,km/s originates in the stellar atmosphere, because its position agrees with the positions of the overwhelming majority of symmetric stellar absorptions in the spectrum of the optical counterpart of IRAS\,23304. The longer-wavelength component, Vr\,=$-13.2$\,km/s, is interstellar and originates in the Local arm of the Galaxy. The shortest-wavelength component, Vr\,=$-61.6$\,km/s, of the NaI doublet is also interstellar; it originates in the interstellar medium of the Perseus arm. The presence of an analogous interstellar component with Vr$\approx -63$\,km/s in the spectra of the B--stars HD\,4841, HD\,4694, and Hiltner~62, whose positions in the Galaxy are close to the longitude of IRAS\,23304, serves as an argument for this. The spectra of these stars, members of the Cas\,OB7 association, were studied by Miroshnichenko et al.~(2009). The presence of the component with Vr\,=$-61.6$\,km/s allows us to consider the distance to the Cas\,OB7 association, d\,=\,2.5\,kpc from Cazzolato and Pineault (2003), as a lower limit for the distance to IRAS\,23304. Regarding the component of the NaI~D lines with Vr\,=$-41.0$\,km/s, it is natural to assume that it originates in the expanding circumstellar envelope of IRAS\,23304, where the Swan bands are also formed (the close radial velocity Vr(CS)\,=$-39.7$\,km/s corresponds to their positions). Thus, we obtain an envelope expansion velocity Vexp$\approx$13\,km/s typical for PPNe (Loup et al. 1993; Klochkova 2014). According to the results of Miroshnichenko et al. (2009), there are diffuse interstellar bands (DIBs) with velocities in the range $-11 \div -14$\,km/s in the spectra of the hot stars HD\,4841, HD\,4694, and Hiltner\,62 mentioned above. It is also natural to expect the presence of DIBs in the spectra of the optical counterpart of IRAS\,23304. Luna et al.~(2008) provide their measurements of the positions of DIBs that have a very large spread of Vr in the spectrum of IRAS\,23304, from $-26$ to +5\,km/s. We measured the positions of five absorptions that could be identified with the 5797, 6196, 6203, 6207, and 6613\,\AA{} DIBs. The mean velocity for them is Vr(DIBs)\,=$-15$\,km/s. If, however, we discard the 6613\,\AA{} band, blended in the spectrum of the supergiant with a strong YII line, then we obtain a mean velocity Vr(DIBs)\,=$-14.0\pm 1.3$\,km/s close to Vr\,=$-13.2$\,km/s inferred from the longest-wavelength component of the NaI~D lines. We see that, for a more definite conclusion about the positions of DIBs in the spectrum of a cool supergiant, it is necessary to have observational data with an ultrahigh spectral resolution, R$\ge$100000. {\bf Asymmetry of the profiles for strong absorptions of ionized metals}. Thanks to their high spectral resolution (R\,=\,60000), the 2013 observations allowed us to detect one more, previously unknown peculiarity of the optical spectrum for IRAS\,23304: complex (asymmetric or split) profiles of the strongest metal absorptions. This peculiarity is clearly seen in Fig.\,2, where the YII\,5200\,\AA{}, LaII\,6390\,\AA{}, BaII\,6141, 6496\,\AA{}, and SiII\,6347\,\AA{} line profiles are presented.
The BaII absorptions in the spectrum of IRAS\,23304 are enhanced to such an extent that their equivalent widths W$_{\lambda}$ are comparable to those for the neutral hydrogen lines: W$_{\lambda}$(6141)\,=\,0.76\,\AA{}, W$_{\lambda}$(H$\alpha$)\,=\,0.84\,\AA{}. Let us consider the detected effect in slightly more detail using the selected lines presented in Fig.\,2 in ``relative intensity -- Vr'' coordinates as an example. As can be seen from Fig.\,2 and the data in the table, the profiles of these lines include a component whose position coincides with the positions of symmetric absorptions in the spectrum and a short--wavelength component whose position corresponds to the velocity inferred from the C$_2$ Swan band. The proposed interpretation of the complex profile is confirmed by our comparison of the line profiles in Figs.\,1 and 2. The position of the short--wavelength component also coincides with the position of the circumstellar component in the NaI~D1 profile. Thus, it can be asserted that, apart from the photospheric component, the complex BaII line profile contains a component originating in the circumstellar envelope, suggesting an efficient dredge--up into the envelope of the heavy metals produced during the preceding evolution of this star. The separation between the atmospheric and circumstellar line components is about 15\,km/s. All the lines of heavy-metal ions (BaII, YII, LaII) in which a profile asymmetry was detected are distinguished by a low lower-level excitation potential, $\chi_{low} < 1$\,eV. As the spectral resolution is reduced, the intensity of the envelope components will be added to the intensity of the components originating in the atmosphere, which will lead to an overestimation of the heavy-element abundances determined from strong absorptions. The abundances determined from low- and moderate-intensity lines will be more realistic. \begin{figure} \includegraphics[angle=0,width=0.6\columnwidth,bb=20 30 570 780,clip]{fig2.ps} \caption{\it Profiles of selected lines in the optical spectrum for the central star of IRAS\,23304. The lower group of lines: BaII\,6141 (thick solid curve), BaII\,6496 (dotted curve), and YII\,5200 (dashed curve). The upper group: LaII\,6390 (thin solid curve) and SiII\,6347 (dotted curve). The arrow marks the envelope velocity Vr(CS)\,=$-41$\,km/s inferred from the C$_2$ Swan band. The vertical dashed line indicates the systemic velocity.} \end{figure} A complex profile for the absorptions of heavy-metal ions that, apart from the photospheric component, also contains a circumstellar one was found previously in the spectra of the related post--AGB stars V354\,Lac\,=\,IRAS\,22272+5435 (Klochkova 2009), V448\,Lac\,=\,IRAS\,22223+4327 (Klochkova et al. 2010), and V5112\,Sgr\,=\,IRAS\,19500$-$1709 (Klochkova 2013). The envelope effect in the spectrum of the high--latitude supergiant V5112\,Sgr, which belongs to the group of PPNe with atmospheres enriched in carbon and heavy metals and whose IR spectrum contains the 21\,$\mu$m emission feature, is of greatest interest. An asymmetry and splitting of strong absorptions with a low lower-level excitation potential were detected in the spectra of V5112\,Sgr taken with the NES echelle spectrograph at the 6--meter telescope (Klochkova 2013). The effect is maximal for the BaII lines, whose profile is split into three components. The shape of the profiles for the split lines and their positions change with time.
Our analysis of the velocity field led us to conclude that both short-wavelength components of the split absorptions originate in the structured circumstellar envelope of V5112\,Sgr. We emphasize that the strong SiII\,6347 and 6371\,\AA{} lines in the spectrum of the optical counterpart of IRAS\,23304 are also asymmetric (Fig.\,2). Apart from the photospheric component, both these lines include a weak short-wavelength component whose position points to its formation in the stellar gaseous envelope. This peculiarity of the SiII lines is consistent with a significant silicon overabundance in the stellar atmosphere (Klochkova et al. 2000a; van Winckel and Reyniers 2000). Thus, for the first time, we have detected the dredge--up into the envelope of not only s--process elements but also silicon. The synthesis of silicon is possible through the capture of protons by heavier nuclei in the hot layers of the convective envelope in massive AGB stars with initial masses higher than 4\,${\mathcal M}_{\sun}$. A description of this so-called ``hot bottom burning'' (HBB) and the necessary references are available in Ventura et al. (2011), devoted to the synthesis of Mg, Al, and Si through HBB. \section{The velocity field in the atmosphere and envelope} The heliocentric radial velocity of IRAS\,23304 that we measured from a large set of visually symmetric absorptions is Vr(abs)\,=$-25.7\pm 0.2$\,km/s. First, this velocity coincides with the systemic velocity Vsys(CO)\,=$-25.8$\,km/s inferred from the radio CO observations performed by Woodsworth et al. (1990) for a sample of PPNe with the 21\,$\mu$m band. Second, the velocity we found agrees well with the velocity estimated from absorptions for two epochs of observations of IRAS\,23304 in 1994 (Vr\,=$-26$, $-26$\,km/s from the data of van Winckel and Reyniers (2000)) and three epochs of its observations in 1997 (Vr\,=$-23.4$, $-24.9$, $-25.3$\,km/s from the measurements by Klochkova et al. (2000a)). This constancy of the velocity leads us to the preliminary conclusion about the absence of pulsations and signatures of binarity in the system. However, it should be noted that Hrivnak et al. (2010) revealed a weak brightness variability of the object with an amplitude of about 0$\lefteqn{.}^{\rm m}$2 and a period P\,$\approx$85\,days as a result of their long--term photometric monitoring of IRAS\,23304. The parameters of the brightness and color variability in IRAS\,23304 derived by Hrivnak et al. (2010) are typical for PPNe with a temperature close to that of IRAS\,23304. As regards the Vr variability, the weakness of the velocity variations inferred from the available (so far scarce) data distinguishes this object among the remaining PPNe with enriched atmospheres. Having studied the variability for seven such PPNe, Hrivnak et al. (2011) found a pulsational Vr variability with amplitudes of $\approx$10\,km/s. It is pertinent to dwell on the methodological aspects of studying the velocity field in the atmospheres of PPNe. The overwhelming majority of the radial velocities used by Hrivnak et al. (2013) were obtained with CORAVEL--type spectrometers. For definiteness, consider the results obtained by these authors for V354\,Lac (IRAS\,22272+5435). The measurements were performed predominantly by various cross--correlation techniques. A fundamental shortcoming of the single-channel methods is a mismatch between the scales of the spectrum and the shifted spectral mask arising at any change in the stellar radial velocity. In the period 1991--1995, Hrivnak et al.
(2013) used the DAO cross--correlation photometer (Fletcher et al. 1982; McClure et al. 1985). The above mismatch between the scales was partly compensated for by a special design of the mask (with a system of tall slits in the range 4000--4600\,\AA{}, whose slope changes with distance from the mask center) and a special law of mask motion (at 45$^{\rm o}$ to the spectrum axis). In their description of the DAO correlation photometer, Fletcher et al. (1982) point out a systematic zero-point drift for the radial velocities (1\,km/s in 2\,h), which they managed to reduce by half, to 0.5\,km/s, by frequent zero-point calibration based on a comparison spectrum. Since the correlation photometer at the coud\'{e} focus of the DAO 1.2--m telescope is not subjected to any mechanical deformations, the above zero-point drift for the radial velocities is completely determined by the filling of the spectrograph aperture, variable with the object's horizontal coordinates. The Vr measurements in the period 2007--2011 performed from the spectra taken with a CCD array in the range 4350--4500\,\AA{} are free from the mismatch of the scales, because the mask is no longer mechanical but digital; besides, the wavelength range is four times shorter. The longer the spectrum interval intercepted by the mask, the more pronounced the mismatch between the mask and spectrum scales. This problem was solved by T.\,Walraven and J.\,Walraven (1972) by using short spectral orders, i.e., by applying echelle correlation photometers. The third series of radial-velocity measurements for IRAS\,22272+5435 was performed with an echelle correlation photometer whose design, in fact, copies the photometer of Tokovinin (1987). For this photometer, Upgren et al. (2002) provided the errors calculated from the ``cross-correlation dip area--radial-velocity error'' relationship taken from Jasniewicz and Mayor (1988), based on 153 individual measurements performed for 149 stars. Excluding the errors of more than 1.1\,km/s, we obtained a mean error of 0.83\,km/s for 142 stars from Table\,1 in Upgren et al. (2002). For Tokovinin's correlation photometer, the error due to construction flexure reaches 0.6\,km/s, but it can be reduced to 0.1\,km/s by orienting the slit parallel to the vertical circle (Tokovinin 1987). However, the errors due to atmospheric dispersion, which shifts the centers of the star's monochromatic images across the slit, increase in this case. The angular width of the correlation photometer slit is $\approx$1\,arcsec; the centers of the monochromatic images for the extreme wavelengths of the simultaneously recorded range (4000--6000\,\AA{}) will be separated by virtually the same amount at a zenith distance of 45$^{\rm o}$. The estimate was obtained from the tables of differential refraction calculated by Filippenko (1982) for the altitude h\,=\,2\,km. The differential refraction effect for the Moletai Observatory (h\,=\,0.2\,km) is more pronounced. Let us estimate the radial-velocity error due to inaccurate centering of the star on the slit. For an autocollimation photometer, the scales on the entrance slit and on the mask are identical. For $\lambda$\,=\,4400\,\AA{}, a shift of the spectrum by 0.0074\,mm corresponds to a Doppler shift of 1\,km/s (Tokovinin 1987). This means that a radial-velocity measurement error of 1\,km/s can result from an error of 0.01\,arcsec in centering the star on the slit. An optical element ``scrambling'' the rays along the slit width was used as the slit in Tokovinin's photometer.
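For reference, the quoted scale of 0.0074\,mm per 1\,km/s is consistent with the ordinary Doppler relation: $$ \Delta\lambda=\lambda\,\frac{v}{c}=4400\,\mbox{\AA{}}\times\frac{1\ \mbox{km/s}}{3\times10^{5}\ \mbox{km/s}}\approx 0.015\,\mbox{\AA{}}, $$ which implies a reciprocal dispersion of $\approx$\,0.015/0.0074\,$\approx$\,2\,\AA{}/mm at the mask (an estimate obtained here from the two numbers above).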
The slit width in the photometer used by Hrivnak et al. (2013) is 0.11\,mm (Upgren et al. 2002). The standard star and the program star cannot be unambiguously set on the slit with an accuracy better than 0.1\,arcsec even in the presence of an autoguider. On the whole, it can be asserted that the instrumental effects of the single-channel correlation techniques limit the accuracy of Vr measurements to 0.8\,km/s. This value should be kept in mind when interpreting the peculiarities of the radial velocity curve for IRAS\,22272+5435 provided by Za\v{c}s et al. (2009). Recall that periodic radial velocity oscillations with a semi-amplitude of 3.1\,km/s were detected in that paper, with the overall scatter of individual measurements from the mean heliocentric velocity reaching $\pm5$\,km/s. Note that, when measuring the position of a single line in our spectra, we obtain an accuracy of $\approx$0.8\,km/s, typical for cross-correlation techniques (Klochkova et al. 2010). In the spectra of F--supergiants, the accuracy of Vr from an ensemble of several hundred symmetric absorptions is an order of magnitude better. In addition, correlation techniques of Vr measurement disregard the possible peculiarities of the profiles for strong lines attributable to a complex pattern of the velocity field in the atmospheres of these stars and to the influence of the circumstellar envelope on the profiles. Based on the spectra taken with BTA in a wide wavelength range, we found an asymmetry and splitting of the profiles for low-excitation lines in the spectra of the post--AGB stars V5112\,Sgr (Klochkova 2013), V354\,Lac (Klochkova 2009), and V448\,Lac (Klochkova et al. 2010). This is primarily observed in the BaII, YII, and LaII resonance absorption lines. A temporal variability of the profiles for the above lines was revealed. The set of peculiarities of these profiles can be explained by a superposition of spectral features: absorptions originating in the stellar atmosphere and an envelope emission. The anomalies of the profiles and their variability can significantly affect the conclusions about the pulsation properties. For instance, for V448\,Lac, Klochkova et al. (2010) detected differential line shifts reaching 8\,km/s and a very low pulsation amplitude $\Delta$Vr$\approx$1--2\,km/s, while Hrivnak et al. (2013) found an amplitude exceeding this value manyfold. The large spread of Vr for close epochs of observations of V354\,Lac and V448\,Lac (Hrivnak et al. 2013), despite a more regular brightness variation of the stars, can probably be explained by the neglect of both the instrumental effects and these subtle kinematic effects. The amplitude of the differential shifts in the atmosphere of the post--AGB star HD\,56126 is even more significant (Klochkova and Chentsov 2007): they reach 15\,km/s for metal lines. Thus, apart from the pulsational variability with time, the pattern of Vr variations in the case of PPNe can also be complicated by differential motions in the extended atmospheres of the program objects. A detailed analysis of Vr based on high spectral- and time-resolution spectra for the selected, brightest PPNe allows the differences in the behavior of Vr determined from lines with different degrees of excitation, originating at different depths in the stellar atmosphere, to be detected.
\section{Conclusions} Based on high-spectral-resolution observations performed with the echelle spectrograph of the 6-m telescope, we studied the peculiarities of the spectrum and the details of the velocity field in the atmosphere and envelope of a faint supergiant, the central star of the IR--source IRAS\,23304+6147. Our comparison of the radial velocity Vr\,=$-25.7$\,km/s inferred from numerous low- and moderate-intensity symmetric absorptions with previously published results points to the absence of significant variations in the velocity and its coincidence with the systemic velocity deduced from radio data. Based on our measurements of the positions for 24 rotational lines of the C$_2$ Swan (0;\,0) $\lambda$\,5165\,\AA{} band, we determined the envelope expansion velocity Vexp\,=\,15.5\,km/s, typical for post-AGB stars. A complex emission--absorption profile was detected for the Swan (0;\,1) 5635\,\AA{} band. Our analysis of the multicomponent NaI\,D doublet line profile revealed interstellar components with velocities V(IS)\,=$-61.6$ and $-13.2$\,km/s as well as a circumstellar component with V(CS)\,=$-41.0$\,km/s whose position corresponds to the velocity inferred from C$_2$ features. The shortest-wavelength component (Vr\,=$-61.6$\,km/s) of the NaI~D lines originates in the interstellar medium of the Perseus arm. Its presence allows d\,=\,2.5\,kpc to be considered as a lower limit for the distance to IRAS\,23304. Based on four features identified with diffuse interstellar bands (DIBs), we found the mean velocity Vr(DIBs)\,=$-14.0\pm 1.3$\,km/s close to Vr\,=$-13.2$\,km/s inferred from the long-wavelength interstellar component of the NaI~D lines. An asymmetry of the profiles for strong absorptions of ionized metals (YII, BaII, LaII, SiII), attributable to the presence in these lines of a short-wavelength component originating in the circumstellar envelope, has been detected in the optical spectrum of IRAS\,23304 for the first time. The overabundance of silicon, whose synthesis is possible through hot bottom burning in the hot layers of the convective envelope of massive AGB stars, suggests that the star being investigated had an initial mass higher than 4\,$\mathcal{M}_{\sun}$. \section*{Acknowledgments} This work was supported by the Russian Foundation for Basic Research (project no.\,14--02--00291\,a). We used the SIMBAD and ADS astronomical databases. \section*{References} \begin{itemize} \item{} E.J. Bakker, E.F. van Dishoeck, L.B.F.M. Waters, and T. Schoenmaker, Astron. Astrophys. 323, 469 (1997). \item{} F. Cazzolato and S. Pineault, Astron. J. 125, 2050 (2003). \item{} A.V. Filippenko, Publ. Astron. Soc. Pacif. 94, 715 (1982). \item{} J.M. Fletcher, H.C. Harris, R.D. McClure, and C.D. Scarfe, Publ. Astron. Soc. Pacif. 94, 1017 (1982). \item{} G.A. Galazutdinov, Preprint Spec. Astrophys. Observ. RAN, No.\,92 (1992). \item{} B.J. Hrivnak and J.H. Bieging, Astrophys. J. 624, 331 (2005). \item{} B.J. Hrivnak, K. Volk, and S. Kwok, Astrophys. J. 694, 1147 (2009). \item{} B.J. Hrivnak, W. Lu, R.E. Maupin, and B.D. Spitzbart, Astrophys. J. 709, 1042 (2010). \item{} B.J. Hrivnak, W. Lu, J. Sperauskas, H. van Winckel, D. Bohlender, and L. Za\v{c}s, Astrophys. J. 766, 116 (2013). \item{} G. Jasniewicz and M. Mayor, Astron. Astrophys. 203, 329 (1988). \item{} V.G. Klochkova, Mon. Not. R. Astron. Soc. 272, 710 (1995). \item{} V.G. Klochkova, Astrophys. Bull. 44, 5 (1997). \item{} V.G. Klochkova, Astron. Lett. 35, 457 (2009). \item{} V.G. Klochkova, Astrophys. Bull.
67, 385 (2012). \item{} V.G. Klochkova, Astron. Lett. 39, 765 (2013). \item{} V.G. Klochkova, Astrophys. Bull. 69, 279 (2014). \item{} V.G. Klochkova and E.L. Chentsov, Astron. Rep. 51, 994 (2007). \item{} V.G. Klochkova and T. Kipper, Baltic Astron. 15, 395 (2006). \item{} V.G. Klochkova, R. Szczerba, V.E. Panchuk, and K. Volk, Astron. Astrophys. 345, 905 (1999). \item{} V.G. Klochkova, R. Szczerba, and V.E. Panchuk, Astron. Lett. 26, 88 (2000a). \item{} V.G. Klochkova, R. Szczerba, and V.E. Panchuk, Astron. Lett. 26, 439 (2000b). \item{} V.G. Klochkova, E.L. Chentsov, N.S. Tavolganskaya, and M.V. Shapovalov, Astrophys. Bull. 62, 162 (2007). \item{} V.G. Klochkova, V.E. Panchuk, and N.S. Tavolganskaya, Astrophys. Bull. 64, 155 (2009). \item{} V.G. Klochkova, V.E. Panchuk, and N.S. Tavolganskaya, Astron. Rep. 54, 234 (2010). \item{} S. Kwok, K. Volk, and B.J. Hrivnak, IAU Symp. No.\,191, 297 (1999). \item{} C. Loup, T. Forveille, A. Omont, and J.F. Paul, Astron. Astrophys. Suppl. Ser. 99, 291 (1993). \item{} R.D. McClure, J.M. Fletcher, W.A. Grundman, and E.H. Richardson, IAU Coll. No.\,88, 49 (1985). \item{} A.S. Miroshnichenko, E.L. Chentsov, V.G. Klochkova, S.V. Zharikov, K.N. Grankin, A.V. Kusakin, T.L. Gandet, G. Klingenberg, et al., Astrophys. J., 209 (2009). \item{} V. Panchuk, V. Klochkova, M. Yushkin, and I. Najdenov, in Proceedings of the Joint Discussion No.\,4 during the IAU General Assembly of 2006, Ed. by I. Gomez de Castro and M.A. Barstow (Editorial Complutense, Madrid, 2007), p. 179. \item{} V.E. Panchuk, V.G. Klochkova, M.V. Yushkin, and I.D. Naidenov, J. Opt. Technol. 76, 87 (2009). \item{} B.E. Reddy, M. Parthasarathy, G. Gonzalez, and E.J. Bakker, Astron. Astrophys. 328, 331 (1997). \item{} B.E. Reddy, E.J. Bakker, and B.J. Hrivnak, Astrophys. J. 524, 831 (1999). \item{} B.E. Reddy, D.L. Lambert, G. Gonzalez, and D. Yong, Astrophys. J. 564, 482 (2002). \item{} R. Sahai, M. Morris, C. Sanchez Contreras, and M. Claussen, Astron. J. 134, 2200 (2007). \item{} A.A. Tokovinin, Sov. Astron. 31, 98 (1987). \item{} A.R. Upgren, J. Sperauskas, and R.P. Boyle, Baltic Astron. 11, 91 (2002). \item{} P. Ventura, R. Carini, and F. D'Antona, Mon. Not. R. Astron. Soc. 415, 3865 (2011). \item{} T. Walraven and J.H. Walraven, Auxiliary Instrumentation for Large Telescopes, Ed. by S. Lautsen and A. Reiz (ESO, 1972), p. 175. \item{} H. van Winckel and M. Reyniers, Astron. Astrophys. 354, 135 (2000). \item{} A.W. Woodsworth, S. Kwok, and S.J. Chan, Astron. Astrophys. 228, 503 (1990). \item{} M.V. Yushkin and V.G. Klochkova, Preprint Spec. Astrophys. Observ. RAN, No.\,206 (2005). \item{} L. Za\v{c}s, V.G. Klochkova, and V.E. Panchuk, Mon. Not. R. Astron. Soc. 275, 764 (1995). \item{} L. Za\v{c}s, J. Sperauskas, F.A. Musaev, O. Smirnova, T.C. Yang, W.P. Chen, and M. Schmidt, Astrophys. J. 695, L203 (2009). \end{itemize} \end{document}
\section{Introduction} \label{sec:introduction} For practical application of machine learning algorithms, usually not only the original features but also their interactions play an important role. However, taking all the interactions into consideration will lead to an extremely high-dimensional input. For example, even if only the product of two numerical features is considered, there will be $O(p^2)$ such interactions for $p$ main effects. As the size of data is growing rapidly nowadays, adding all pairwise interactions into the input may be computationally infeasible, let alone higher-order interactions. What's more, adding all the interactions without selection is possibly harmful to the prediction model, since too much redundant information is brought in. It is a well-established practice among statisticians to fit models on interactions as well as the original features. For example, \cite{bien13} add a set of convex constraints to the lasso that honour the hierarchy restriction; \cite{hao14} tackle the difficulty by forward-selection-based procedures; \cite{hao18} consider two-stage LASSO and a new regularization method named \textit{RAMP} to compute a hierarchy-preserving regularization solution path efficiently; \cite{agrawal19} propose to speed up inference in Bayesian linear regression with pairwise interactions by using a Gaussian process and a kernel interaction trick. However, these methods are based on the hierarchy assumption. That is, an interaction will be useful only if its lower-order components are also useful. So the theoretical analyses lose validity, and the practical performance of these methods may be unsatisfactory, in the case where the assumption does not hold. There is also some work free of the hierarchy assumption. \cite{Thanei18} propose the \textit{xyz} algorithm, where the underlying idea is to transform interaction search into a closest pair problem which can be solved efficiently in subquadratic time. Instead of the hierarchy principle, \cite{yu19} come up with the reluctant principle, which says that one should prefer main effects over interactions given similar prediction performance. The above-mentioned work mainly aims to select pairwise interactions. A drawback of these methods is that potentially informative higher-order interactions are overlooked. \textit{Random intersection trees} \citep{shah14} gets over this difficulty by starting with a maximal interaction that includes all variables, and then gradually removing variables if they fail to appear in randomly chosen observations of a class of interest. Another approach is \textit{Backtracking} \citep{shah16}. It can be incorporated into many existing high-dimensional methods based on penalty functions, and works by building increasing sets of candidate interactions iteratively. An alternative approach for higher-order interaction selection is extracting interactions from rules. Decision trees, such as ID3 \citep{Quinlan86}, C4.5 \citep{Quinlan93} and CART \citep{Breiman84}, are widely used models for generating comprehensible rules. They work by partitioning the input space according to whether the features satisfy some conditions, and then assigning a constant to each region. After the tree is built, each path connecting the root node and a leaf node can be transformed into a decision rule by combining the split decisions. The predictions of the leaf nodes are discarded and only the splits are used in the decision rules.
\textit{RuleFit} \citep{Friedman08} uses these decision rules as binary features, then fits a linear model on them. According to \cite{qu16}, however, the exploration ability of tree-based models is restricted for high-dimensional categorical features due to the low usage rate of categorical features. Besides tree-based models, there is another family of algorithms that generate rules based on the data, namely association rule mining. For a categorical feature $X$, and one of its possible values $x$, we can treat the pattern ``$X=x$'' as an item. In this way we can transform an observation in the data set into a record of items. Then association rule mining algorithms can be applied. Mining association rules between items from a large database is one of the most important and well-researched topics of data mining. Let $I=\{i_1, i_2, ..., i_m\}$ be a set of items; subsets of $I$ are called itemsets. Association rule mining aims to extract rules in the form of ``$X\to Y$'', where $X\subset I$, $Y\subset I$, $X\cap Y=\emptyset$. Calling $X$ the antecedent and $Y$ the consequent, the rule means that $X$ implies $Y$. The support of an itemset $X$ is the number of records that contain $X$. For an association rule ``$X\to Y$'', its support is defined as the fraction of records that contain $X\cup Y$ relative to the total number of records in the database, and its confidence is the number of cases in which the rule is correct relative to the number of cases in which it is applicable, or equivalently, support($X\cup Y$)/support($X$). The association rule mining problem was first stated by \cite{agrawal93}, and further studied by many researchers. Apriori \citep{agrawal94}, FP-growth \citep{han00} and H-mine \citep{pei01} are some well-known algorithms for association rule mining. If the antecedents are meaningful for the target, it seems reasonable to use them as features for another classification model rather than as a classifier themselves, just like how \textit{RuleFit} makes use of decision rules. Although the common association rule mining algorithms are much faster than a brute-force search, many of them are not suitable for ``big data'' since they have to go through the whole database multiple times. On the contrary, \textit{Random Intersection Trees} gets over this difficulty by regarding the intersection of a set of random samples as a frequent pattern. To be more specific, it generates $M$ trees of depth $D$, in which each node but the leaf nodes has $B$ child nodes, where $B$ follows a pre-specified distribution. The root node contains the items in a uniformly chosen observation from the database, and each node except the root consists of the intersection of its parent node and a uniformly chosen observation. The patterns in the leaf nodes are finally used as interactions. One of the shortcomings of \textit{Random Intersection Trees} is that the selection is relatively crude, because the subpatterns of the found frequent patterns are neglected, while they are actually more frequent. What's more, \textit{Random Intersection Trees} aims to find the patterns that are frequent in a class of interest but infrequent in other classes, which are exactly ``confident rules''. But the precise quantities of ``frequency'' or ``confidence'' are not provided, which makes it difficult to make a more careful comparison among different patterns. Another problem is that \textit{Random Intersection Trees} can only deal with binary features.
A categorical feature of high cardinality should be one-hot encoded before applying the algorithm, which may result in a very high-dimensional and sparse input. Inspired by the idea of \textit{Random Intersection Trees}, we suggest a method that can select useful interactions of categorical features for classification tasks, called \textit{Random Intersection Chains}. The road map of \textit{Random Intersection Chains} is listed below. \begin{enumerate} \item Generate chains for different classes separately by random intersections; \item Calculate the frequency of the patterns in the tail nodes as well as their subpatterns by maximum likelihood estimation; \item Select the most frequent patterns; \item Calculate the confidence of the most frequent patterns by Bayes' theorem; \item Select the most confident patterns. \end{enumerate} Our main contributions in this paper can be summarized as follows: (1) an interaction selection method for categorical features in classification tasks, named \textit{Random Intersection Chains}, is proposed; (2) we show that \textit{Random Intersection Chains} can find all the frequent patterns while leaving out the infrequent ones if the parameters are appropriately chosen; (3) the computational complexity, including the space complexity and time complexity, is analyzed; (4) we prove that the estimated frequency and confidence converge to their true values; (5) a series of experiments are conducted to verify the effectiveness and efficiency of \textit{Random Intersection Chains}. The rest of the paper is organized as follows. In Section~\ref{sec:preliminaries}, a brief introduction to some related concepts is given. In Section~\ref{sec:algorithm} we introduce \textit{Random Intersection Chains}, our algorithm for interaction selection, in detail. This is followed by some analyses of computational complexity in Section~\ref{sec:computation_complexity}. In Section~\ref{sec:convergence}, we theoretically analyze the convergence of the estimated frequency and confidence. In Section~\ref{sec:experiments} we report the results of a series of experiments to verify the effectiveness of the algorithm. Finally, the paper is concluded in Section~\ref{sec:conclusion}. The proofs of some main results are relegated to the Appendix. \section{Preliminaries} \label{sec:preliminaries} In this paper, we consider classification tasks involving high-dimensional categorical predictors. Usually the categorical features have a number of possible values. A common approach to deal with such features is one-hot encoding, which transforms a categorical feature into a large number of binary variables. But this method will make the input extremely high-dimensional and sparse. To avoid this difficulty, label encoding is adopted in this paper, which maps each category to a specific integer. For example, there may be three categories, namely ``red'', ``green'' and ``blue'', in a feature representing colors. Then we replace ``red'', ``green'' and ``blue'' with 1, 2 and 3, respectively. It's worth noting that these integers can only be used to check whether two values are identical or different, while their numerical relationships should be ignored. Suppose $C_1, C_2, ..., C_p$ are $p$ categorical features, and $C$ is the set of classification labels. The given data set is in the form of $D=\{\boldsymbol{X}, \boldsymbol{y}\}$, where $\boldsymbol{X}\in \mathbb{N}^{N\times p}$ contains the records of $N$ observations, and $\boldsymbol{y}\in C^N$ indicates the labels of these observations.
The $i$-th row of $\boldsymbol{X}$ and the $i$-th component of $\boldsymbol{y}$ are denoted by $X_i$ and $y_i$, respectively. Suppose $X_i=[c_1, c_2, ..., c_p]$ is an observation in the data set; then it can be viewed in two ways. First, if we treat $c_1, c_2, ..., c_p$ as integers, then $X_i$ is naturally a vector of dimension $p$. Alternatively, $c_1, c_2, ..., c_p$ can be seen as items, and then $X_i$ is a record consisting of $p$ items. Therefore, $\boldsymbol{X}$ can be regarded as a data set for machine learning algorithms, or as a database for data mining algorithms. For a variable $C_j$ and one of its possible values $c_j$, we use ``$C_j=c_j$'' to represent $\mathbbm{1}_{\{C_j=c_j\}}$, a binary feature that indicates whether the value of variable $C_j$ is $c_j$. ``$C_j=c_j$'' also stands for an item that only appears in the records where the value of variable $C_j$ is $c_j$. Similarly, for $\{j_1, j_2, ..., j_k\}\subseteq \{1, 2, ..., p\}$, suppose $C_{j_1}, C_{j_2}, ..., C_{j_k}$ are $k$ variables and $c_{j_1}, c_{j_2}, ..., c_{j_k}$ are their corresponding possible values. Then a pattern $s$=``$C_{j_1}=c_{j_1}, C_{j_2}=c_{j_2}, ..., C_{j_k}=c_{j_k}$'' can be understood as a logical expression, a binary feature $\mathbbm{1}_{\{s\subseteq X\}}$, or an itemset containing $k$ items. We call such an expression a ``$k$-order interaction''. This definition coincides with the term ``interaction'' used by \cite{shah14}, and will reduce to the latter if $c_j=1$ for all $j\in \{j_1, j_2, ..., j_k\}$. Also, the interaction defined here is a non-additive interaction \citep{Friedman08, Sorokina08, Tsang18}, since it cannot be represented by a linear combination of lower-order interactions. In this paper, we use the terms ``interaction'', ``itemset'' and ``pattern'' interchangeably to describe such expressions. When there is no ambiguity about the classification label $c$, the expression ``$s\to c$'' is also referred to as a ``rule''. The frequency of an interaction $s$ for class $c$ is defined as the ratio of the number of records containing $s$ with label $c$ to the number of all records with label $c$, and is denoted by \begin{equation} p_s^{(c)}=\mathbb{P}_N(s\subseteq X|Y=c)\coloneqq \frac{1}{|I^{(c)}|}\sum_{i\in I^{(c)}}\mathbbm{1}_{\{s\subseteq X_i\}}, \end{equation} where $I^{(c)}$ is the set of observations in class $c$. The confidence of an interaction $s$ for class $c$ is defined as the ratio of the number of records containing $s$ with label $c$ to the number of all records containing $s$, and is denoted by \begin{equation} q_s^{(c)}=\mathbb{P}_N(Y=c|s\subseteq X)\coloneqq \frac{1}{|I_s|}\sum_{i\in I^{(c)}}\mathbbm{1}_{\{s\subseteq X_i\}}, \end{equation} where $I_s$ is the set of observations containing the interaction $s$. An interaction is said to be frequent if its frequency is large, and confident if its confidence is large for a class. The main goal of this paper is to efficiently detect the interactions that are both frequent and confident for some classes. Then these interactions are used as the input for a succeeding classification model. Since irrelevant predictors in the original input are dropped and useful high-order interactions are explicitly added, interaction detection is likely to be beneficial for the prediction performance. \section{Random Intersection Chains} \label{sec:algorithm} In this section we first give a naive version of \textit{Random Intersection Chains}, and show that there exist appropriate parameters for it to find all the frequent patterns while the infrequent ones are left out.
Then a modification is provided to prevent the exponential computational burden of selecting the most frequent subpatterns from a given pattern. \subsection{A Basic Algorithm} Drawing inspiration from association rule mining and \textit{Random Intersection Trees}, we suggest an algorithm that can efficiently detect useful interactions, named \textit{Random Intersection Chains}. Like other data mining algorithms, frequent itemsets are discovered at first and then confident rules are generated. But instead of scanning the complete database, we adopt random intersections to mine the frequent itemsets; then their frequency and confidence are calculated with the assistance of maximum likelihood estimation and Bayes' theorem. The first node of a chain, called the head node, contains the items in a randomly chosen instance. Each of the other nodes in the chain contains the intersection of its previous node and a new randomly chosen instance. We repeatedly choose random instances until the length of a chain reaches the pre-defined threshold, or the number of items in the last node (named the tail node) is sufficiently small. Finally, the itemsets in the tail nodes as well as their subsets are regarded as frequent itemsets. For example, if we want to generate a chain consisting of three nodes, and the chosen instances are $[c_1, c_2, c_3]$, $[c_1, c_2', c_3]$, $[c_1, c_2, c_3']$, where $c_2\neq c_2'$ and $c_3\neq c_3'$, then the chain is $[C_1=c_1, C_2=c_2, C_3=c_3]\to [C_1=c_1, C_3=c_3]\to [C_1=c_1]$. The procedure of generating $M$ such chains of length $D$ can be seen directly from Algorithm \ref{alg:generate_chain}. \begin{algorithm} \caption{GenerateChain: generate chains by intersection} \label{alg:generate_chain} \begin{algorithmic}[1] \REQUIRE $\{(X_i, y_i)\}_{i\in I^{(c)}}$(observations in class $c$); \\ D(length of a chain); \\ M(number of chains);\\ \ENSURE chains for class c; \FOR{$m=1~to~M$} \STATE{Draw a random observation $X_{i_1}$ from the observations} \STATE{$S_{1,m}^{(c)}\leftarrow X_{i_1}$} \FOR{$d=2~to~D$} \STATE{Draw a random observation $X_{i_d}$ from the observations} \STATE{$S_{d,m}^{(c)}\leftarrow S_{d-1,m}^{(c)}\cap X_{i_d}$} \ENDFOR \ENDFOR \STATE{return $\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$} \end{algorithmic} \end{algorithm} The larger frequency a pattern has, the more likely it is to appear in a uniformly chosen instance. Therefore, it's reasonable to assume the patterns in the tail nodes are more frequent than others. After detecting the frequent patterns, \cite{shah14} adopt an estimator based on min-wise hashing to obtain their frequency. Though this estimator enjoys reduced variance compared to that which would be obtained using subsampling, it seems somewhat redundant and unnatural, because it is independent of \textit{Random Intersection Trees}. However, \textit{Random Intersection Chains} can estimate the frequency by themselves. For a pattern $s$ with frequency $p_s^{(c)}$, denote by $k_{s,m}^{(c)}$ the number of nodes of the $m$-th chain for class $c$ in which $s$ appears. The likelihood of observing this chain is \begin{equation} \mathbb{P}(k_{s,m}|p_s)=\begin{cases} p_s^{k_{s,m}}(1-p_s), \hfill \mbox{if~} k_{s,m}<D\\ p_s^{k_{s,m}}, \hfill \mbox{if~} k_{s,m}=D \end{cases}, \end{equation} where we omit the superscript ``$(c)$'' to keep the notation uncluttered.
Then the likelihood of observing $M$ chains is as shown in Equation \ref{eq:likelihood}, \begin{equation} \label{eq:likelihood} \begin{aligned} \mathbb{P}(\{k_{s,m}\}_{m=1}^M|p_s)&=\prod_{m:k_{s,m}<D}p_s^{k_{s,m}}(1-p_s)\cdot \prod_{m:k_{s,m}=D}p_s^{k_{s,m}}\\ &=p_s^{K_s}(1-p_s)^{\chi_s}, \end{aligned} \end{equation} where $K_s=\sum_{m=1}^Mk_{s,m}$ and $\chi_s=\sum_{m=1}^M\mathbbm{1}_{\{k_{s,m}<D\}}$ (we write $\chi_s$ for this count, since $I_s$ already denotes the set of observations containing $s$). Thus the log-likelihood is \begin{equation} \label{eq:loglikelihood} \log \mathbb{P}(\{k_{s,m}\}_{m=1}^M|p_s)=K_s\log p_s+\chi_s\log (1-p_s). \end{equation} Setting the derivative of Equation \ref{eq:loglikelihood} with respect to $p_s$ equal to zero and rearranging, we obtain \begin{equation} \hat{p}_s=\frac{K_s}{K_s+\chi_s}=\frac{\bar{k}_s}{\bar{k}_s+\bar{\chi}_s}, \end{equation} where $\bar{k}_s=\frac{1}{M}\sum_{m=1}^Mk_{s,m}$, $\bar{\chi}_s=\frac{1}{M}\sum_{m=1}^M\mathbbm{1}_{\{k_{s,m}<D\}}$. Algorithm \ref{alg:frequency} estimates the frequency of a pattern by maximum likelihood estimation based on \textit{Random Intersection Chains}. Algorithm \ref{alg:confidence} provides an estimator of the confidence by Bayes' theorem once the frequency is available. \begin{algorithm} \caption{Frequency: estimate the frequency according to chains} \label{alg:frequency} \begin{algorithmic}[1] \REQUIRE $s$(a pattern); \\ $\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$(the chains); \\ \ENSURE $\hat{p}_s^{(c)}$(the frequency); \FOR{$m=1~to~M$} \STATE{$k_{s,m}^{(c)}\leftarrow \max\{d: s\subseteq S_{d,m}^{(c)}\}$ (set $k_{s,m}^{(c)}\leftarrow 0$ if no such $d$ exists)} \STATE{$\chi_{s,m}^{(c)}\leftarrow \mathbbm{1}_{\{k_{s,m}^{(c)}<D\}}$} \ENDFOR \STATE{$\hat{p}_s^{(c)}\leftarrow \frac{\bar{k}_s^{(c)}}{\bar{k}_s^{(c)}+\bar{\chi}_s^{(c)}}=\frac{\sum_{m=1}^M{k_{s,m}^{(c)}}}{\sum_{m=1}^M[{k_{s,m}^{(c)}}+{\chi_{s,m}^{(c)}}]}$} \STATE{return $\hat{p}_s^{(c)}$} \end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Confidence: estimate the confidence by Bayes' theorem} \label{alg:confidence} \begin{algorithmic}[1] \REQUIRE $\{\hat{p}_s^{(c)}\}_{c\in C}$(frequency); \\ $\{p^{(c)}\}_{c\in C}$(prior probabilities); \\ \ENSURE $\hat{q}_s^{(c)}$(the confidence); \STATE{$\hat{q}_s^{(c)}\leftarrow \frac{\hat{p}_s^{(c)}p^{(c)}}{\sum_{c'\in C}\hat{p}_s^{(c')}p^{(c')}}$} \STATE{return $\hat{q}_s^{(c)}$} \end{algorithmic} \end{algorithm} After the chains are generated, we estimate the frequency of the patterns in the tail nodes. Then the confidence of these patterns is calculated, after which the confident patterns are returned as the useful interactions. We formally describe the basic version of \textit{Random Intersection Chains} in Algorithm \ref{alg:ric}, which combines the characteristics of both random intersections and association rule mining.
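For illustration, the chain generation and the two estimators above (Algorithms \ref{alg:generate_chain}--\ref{alg:confidence}) can be prototyped in a few lines of Python. The sketch below is our own illustration (the function names are hypothetical, not part of the algorithm statements); patterns and records are represented as sets of (feature, value) pairs: \begin{verbatim}
import random

def generate_chain(observations, D):
    # Algorithm 1: the head node is a random record; every further node
    # is the intersection of the previous node with a new random record.
    node = set(random.choice(observations))
    chain = [node]
    for _ in range(1, D):
        node = node & set(random.choice(observations))
        chain.append(node)
    return chain

def estimate_frequency(pattern, chains):
    # Algorithm 2: MLE p_hat = K / (K + chi), where k_{s,m} is the number
    # of nodes of chain m containing the pattern (a prefix of the chain)
    # and chi_{s,m} = 1 if the pattern vanished before the tail node.
    K = chi = 0
    for chain in chains:
        k = 0
        for node in chain:
            if pattern <= node:   # subset test
                k += 1
            else:
                break
        K += k
        chi += int(k < len(chain))
    return K / (K + chi) if K + chi > 0 else 0.0

def estimate_confidence(freqs, priors, c):
    # Algorithm 3: Bayes' theorem with class priors p^(c).
    denom = sum(freqs[cp] * priors[cp] for cp in priors)
    return freqs[c] * priors[c] / denom if denom > 0 else 0.0

# Tiny usage example for a single class:
data = [{("C1", 1), ("C2", 0)}, {("C1", 1), ("C2", 1)}, {("C1", 0), ("C2", 1)}]
chains = [generate_chain(data, D=3) for _ in range(2000)]
print(estimate_frequency({("C1", 1)}, chains))  # close to the true value 2/3
\end{verbatim} Note that $k_{s,m}$ is computed as the length of the prefix of the chain containing the pattern, which matches the fact that a pattern survives the $d$-th intersection only if it is present in all $d$ sampled records.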
\begin{algorithm} \caption{Random Intersection Chains} \label{alg:ric} \begin{algorithmic}[1] \REQUIRE $\{(X_i, y_i)\}_{i=1}^{N}$(database); \\ D(length of a chain); \\ M(number of chains);\\ $\xi$(threshold of confidence);\\ \ENSURE $\{L^{(c)}\}_{c\in C}$(returned patterns); \STATE{$p^{(c)}\leftarrow |I^{(c)}|/N$ for $c \in C$} \STATE{$S^{(c)}\leftarrow \emptyset$, for $c \in C$} \FORALL{$c \in C$} \STATE{$\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M\leftarrow$ GenerateChain($\{(X_i, y_i)\}_{i\in I^{(c)}}, D, M$)} \STATE{$S^{(c)}\leftarrow \bigcup_{m=1}^M\{s|s\subseteq S_{D,m}^{(c)}\}$} \ENDFOR \FORALL{$c \in C$} \STATE{$L^{(c)}\leftarrow \emptyset$} \FORALL{$s \in S^{(c)}$} \STATE{$\hat{p}_s^{(c')}\leftarrow$ Frequency($s, \{\{S_{d,m}^{(c')}\}_{d=1}^D\}_{m=1}^M$), for all $c'\in C$} \STATE{$\hat{q}_s^{(c)}\leftarrow$ Confidence($\{\hat{p}_s^{(c')}\}_{c'\in C}, \{p^{(c')}\}_{c'\in C}$)} \IF{$\hat{q}_s^{(c)}\ge \xi$} \STATE{$L^{(c)}\leftarrow L^{(c)}\cup \{s\}$} \ENDIF \ENDFOR \ENDFOR \STATE{return $\{L^{(c)}\}_{c\in C}$} \end{algorithmic} \end{algorithm} We now explain Algorithm \ref{alg:ric} line by line. There are three parameters in the algorithm, named $D$, $M$ and $\xi$, among which the first two are inherited from random intersections and the last comes from association rule mining. $D$ represents the length of a chain, $M$ stands for the number of chains, and $\xi$ is the threshold of confidence. Line 1 calculates the proportion of each class, where $I^{(c)}$ represents the indices of the observations in class $c$ and $C$ is the set of class labels. These proportions will be used later to calculate the confidence. Line 2 initializes the set of frequent patterns, which is initially empty, for each class. For every class, $M$ chains are generated at Line 4; then the patterns in the tail nodes as well as all their subpatterns are added to the sets created at Line 2. The frequency of a pattern in every class is calculated at Line 10, after which the confidence is calculated at Line 11. If the confidence of a pattern is larger than the pre-defined threshold, this pattern will be included in the resulting set at Line 13. Finally, the resulting set is returned at Line 17. The longer a chain is, the harder it is for a pattern to appear in its tail node. Conversely, the more chains there are, the more likely a pattern is to be observed in at least one of their tail nodes. By adjusting $D$ and $M$ carefully, we can control which patterns will be considered ``frequent''. As proven in Theorem~\ref{thm:exist_M_D}, there actually exist choices of parameters such that the returned set contains frequent patterns with arbitrarily high probability, while including the infrequent ones with probability lower than any given threshold. \begin{theorem} \label{thm:exist_M_D} \it Given $\eta_1, \eta_2 \in (0,1]$, for any $\theta \in (0,1]$, there exist choices of $M$, $D$ such that the set $L^{(c)}$ returned by Algorithm \ref{alg:ric} contains $s$ with probability at least $1-\eta_1$ if $P(s\subseteq X|Y=c)\ge \theta$, and with probability at most $\eta_2$ if $P(s\subseteq X|Y=c)< \theta$. \hfill \end{theorem} From the proof of Theorem \ref{thm:exist_M_D}, it follows that $M$ and $D$ meet the requirements if \[\frac{\log(1-p_1^{D})}{\log(1-p_2^{D})}\ge \frac{a}{b},\] \[M\ge \frac{a}{\log[1/(1-p_1^{D})]},\] where \[ p_1=\min\{p_s:p_s\ge \theta\},\] \[ p_2=\max\{p_s:p_s<\theta\},\] \[ a=\log \eta_1^{-1},\] \[ b=\log(1-\eta_2)^{-1}.\] So we have Corollary \ref{crl:choice_M_D}.
So we have Corollary \ref{crl:choice_M_D}. \begin{corollary} \label{crl:choice_M_D} \it M and D meet the requirements in Theorem \ref{thm:exist_M_D} if $M\ge M^*$ and $D\ge D^*$, where \begin{equation} \label{D} D^*=\lceil\max\left\{ \frac{\log(b+1)-\log a}{\log(1/p_1)}, \frac{\log 2+\log a-\log b}{\log(1/p_2)-\log(1/p_1)} \right\}\rceil, \end{equation} \begin{equation} M^*= \lceil \frac{a}{\log[1/(1-p_1^{D^*})]}\rceil. \end{equation} \hfill \end{corollary} Usually $\eta_1$ and $\eta_2$ are small, thus $a$ is large and $b$ is small. Assume $a\ge b+1$; then the first term in the braces of Equation~\ref{D} is no greater than 0. If $a\ge \frac{1}{2}b$, the second term in the braces of Equation~\ref{D} is no less than 0. In this case, Corollary \ref{crl:choice_M_D_2} holds. \begin{corollary} \label{crl:choice_M_D_2} \it If $a\ge \max\left\{b+1, \frac{1}{2}b \right\}$, then M and D meet the requirements in Theorem \ref{thm:exist_M_D} if $M\ge M^*$ and $D\ge D^*$, where \begin{equation} D^*=\lceil \frac{\log 2+\log a-\log b}{\log(1/p_2)-\log(1/p_1)}\rceil, \end{equation} \begin{equation} M^*= \lceil \frac{a}{\log[1/(1-p_1^{D^*})]}\rceil. \end{equation} \hfill \end{corollary} Compared with \textit{Random Intersection Trees}, it is more convenient to apply \textit{Random Intersection Chains} to multi-class classification tasks. The former is originally designed for binary classification, and detects the interesting patterns for one class at a time; it is unable to tell which patterns are the most useful ones among different classes. On the contrary, since \textit{Random Intersection Chains} not only detects the frequent patterns but also estimates their frequency and confidence, we can directly compare the frequency or confidence of patterns in different classes. Thus we can select the patterns from all the classes simultaneously. \subsection{Random Intersection Chains with Priority Queues} \label{sec:ric_queue} Because it avoids scanning the complete database, the naive version of \textit{Random Intersection Chains} is more efficient than traditional methods on large data sets. Another advantage is that it can extract high-order interactions without discovering all lower-order interactions beforehand. However, this characteristic is also the source of a drawback. Obviously, any subpattern of a frequent pattern is also frequent. Since the pattern in the tail node of a chain may contain many components, we have in fact obtained a huge number of frequent patterns. For example, if an interaction in a tail node contains $k$ components, then every combination of these components is frequent, which yields $2^k-1$ frequent patterns. It is computationally infeasible to calculate the frequency and confidence of every frequent pattern in this sense; even enumerating all these subpatterns is impossible due to the exponential complexity. \textit{Random Intersection Trees} \citep{shah14} gets over this difficulty by giving priority to the largest frequent patterns. That is, the authors only consider the patterns whose every superset is infrequent. They limit their attention to the patterns in the leaf nodes, but ignore their subpatterns. This may not be a satisfactory solution, since a subpattern is more frequent than the complete pattern, and sometimes can also have higher confidence. Overlooking them may cause informative interactions to be missed.
Fortunately, by taking advantage of a data structure named the priority queue, we find an approach that selects the most frequent patterns from the power set of a given interaction in polynomial time. A priority queue is a data structure for maintaining a set of elements, each with an associated value called a key \citep{Cormen09}. In a priority queue, an element with high priority is served before an element with low priority. Like many implementations, we prefer the element enqueued earlier if two elements have the same priority. This may seem arbitrary, but it matters later in this work. In this paper, we give priority to elements with larger keys, so what we use is exactly a max-priority queue. A series of operations should be supported by such a queue, and the following are required in our method. \begin{itemize} \item INSERT($S, x, key$): inserts the element $x$ with $key$ into the set $S$, which is equivalent to the operation $S=S\cup \{x\}$; \item EXTRACT-MAX($S$): removes and returns the element with the largest key in $S$; \item COPY($S$): returns a copy of $S$. \end{itemize} We add a new attribute $size$ to a priority queue, which indicates its maximum capacity. In other words, we discard an element if its $key$ is not among the $S.size$ largest, which can be done by removing the element with the smallest key after inserting a new element into a full queue. If $S.size$ is zero, then this priority queue will always be empty. \begin{algorithm} \caption{Random intersection chains with priority queue} \label{alg:ric_improved} \begin{algorithmic}[1] \REQUIRE $\{(X_i, y_i)\}_{i=1}^{N}$(database); \\ $D$(length of a chain); \\ $M$(number of chains);\\ $d_{\rm freq}$(number of frequent patterns);\\ $d_{\rm conf}$(number of confident patterns);\\ \ENSURE $\{L^{(c)}\}_{c\in C}$(returned patterns); \STATE{$p^{(c)}\leftarrow |I^{(c)}|/N$, for $c \in C$} \FORALL{$c \in C$} \STATE{$\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M\leftarrow$ GenerateChain($\{(X_i, y_i)\}_{i\in I^{(c)}}, D, M$)} \STATE{Initialize $S^{(c)}$ as an empty priority queue of size $d_{\rm freq}$} \FOR{$m=1~to~M$} \STATE{InsertFreqSubset($S^{(c)},S_{D,m}^{(c)},\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$)} \ENDFOR \ENDFOR \FORALL{$c \in C$} \STATE{Initialize $L^{(c)}$ as an empty priority queue of size $d_{\rm conf}$} \FORALL{$s\in S^{(c)}$} \STATE{$\hat{p}_s^{(c')}\leftarrow$ Frequency($s, \{\{S_{d,m}^{(c')}\}_{d=1}^D\}_{m=1}^M$), for all $c'\in C$} \STATE{$\hat{q}_s^{(c)}\leftarrow$ Confidence($\{\hat{p}_s^{(c')}\}_{c'\in C}, \{p^{(c')}\}_{c'\in C}$)} \STATE{INSERT($L^{(c)},s,\hat{q}_s^{(c)}$)} \ENDFOR \ENDFOR \STATE{return $\{L^{(c)}\}_{c\in C}$} \end{algorithmic} \end{algorithm} With the priority queues defined above, we come up with Algorithm \ref{alg:ric_improved}. The main difference between Algorithm \ref{alg:ric} and Algorithm \ref{alg:ric_improved} lies in the approach to identifying which patterns are frequent or confident. In Algorithm \ref{alg:ric}, all the patterns in the tail nodes and their subpatterns are considered to be frequent, which may result in heavy computation. On the contrary, Algorithm \ref{alg:ric_improved} only takes the $d_{\rm freq}$ most frequent patterns into consideration. What's more, Algorithm \ref{alg:ric_improved} returns the $d_{\rm conf}$ most confident patterns, while Algorithm \ref{alg:ric} identifies confident patterns by a pre-defined threshold.
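A size-bounded max-priority queue of this kind is easy to realize. The sketch below is an illustration under our own naming conventions (a production implementation would use a heap rather than linear scans); earlier-enqueued elements win ties, and the smallest key is evicted on overflow.

\begin{verbatim}
import itertools

class BoundedMaxPQ:
    """Max-priority queue with maximum capacity `size`."""
    _counter = itertools.count()   # arrival order, used to break ties

    def __init__(self, size):
        self.size = size
        self._entries = []         # list of (key, arrival, item)

    def __len__(self):
        return len(self._entries)

    def __iter__(self):
        return (item for _, _, item in self._entries)

    def insert(self, item, key):
        self._entries.append((key, next(self._counter), item))
        if len(self._entries) > self.size:
            # evict the smallest key; on ties, the latest arrival goes
            victim = min(self._entries, key=lambda e: (e[0], -e[1]))
            self._entries.remove(victim)

    def extract_max(self):
        # largest key wins; on ties, the earliest arrival wins
        best = max(self._entries, key=lambda e: (e[0], -e[1]))
        self._entries.remove(best)
        return best[2]

    def copy(self):
        clone = BoundedMaxPQ(self.size)
        clone._entries = list(self._entries)
        return clone
\end{verbatim}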
\begin{algorithm} \caption{InsertFreqSubset: add the frequent subsets of an itemset} \label{alg:insert_freq_subset} \begin{algorithmic}[1] \REQUIRE $S^{(c)}$ (a priority queue); \\ $s$(an itemset); \\ $\{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$(chains);\\ \FORALL{$x \in s$} \STATE{$\hat{p}_x^{(c)}\leftarrow$ Frequency($\{x\}, \{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$)} \STATE{INSERT($S^{(c)},\{x\},\hat{p}_x^{(c)}$)} \ENDFOR \FOR{$k=2~to~|s|$} \STATE{$A\leftarrow$ COPY($S^{(c)}$)} \WHILE{$|A| > 1$} \STATE{$a\leftarrow $EXTRACT-MAX($A$)} \STATE{$A.size\leftarrow A.size-1$} \IF{$|a|=1$ and $a\in S^{(c)}$} \STATE{$B\leftarrow$ COPY($A$)} \WHILE{$|B| > 0$} \STATE{$b\leftarrow $EXTRACT-MAX($B$)} \STATE{$B.size\leftarrow B.size-1$} \IF{$|b|=k-1$ and $b\in S^{(c)}$ and $a\cap b=\emptyset$} \STATE{$\hat{p}_{a\cup b}^{(c)}\leftarrow$ Frequency($a\cup b, \{\{S_{d,m}^{(c)}\}_{d=1}^D\}_{m=1}^M$)} \STATE{INSERT($S^{(c)},a\cup b, \hat{p}_{a\cup b}^{(c)}$)} \STATE{INSERT($A,a\cup b, \hat{p}_{a\cup b}^{(c)}$)} \STATE{INSERT($B,a\cup b, \hat{p}_{a\cup b}^{(c)}$)} \ENDIF \ENDWHILE \ENDIF \ENDWHILE \ENDFOR \end{algorithmic} \end{algorithm} The most important improvement of Algorithm \ref{alg:ric_improved} lies in Line 6, where the subroutine named ``InsertFreqSubset'' is used to select the most frequent subsets among the power set of a given itemset. The algorithm works by selecting frequent itemsets level by level. As noted above, an itemset cannot be more frequent than any of its subsets. So if an itemset fails to be in the priority queue, so do its supersets. An itemset $s$ consisting of more than one item can be uniquely represented by $s=\{a\}\cup (s\setminus \{a\})$, where $a$ is the most frequent item in $s$ (if several items have the same frequency, choose the one enqueued earliest, which coincides with the implementation of our priority queue). $\{a\}$ is by definition at least as frequent as every singleton subset of $s\setminus \{a\}$, and thus at least as frequent as $s\setminus \{a\}$ itself. So if we extract the elements from the priority queue in order, $s\setminus \{a\}$ occurs later than $\{a\}$. This sheds some light on the search for frequent $k$-order itemsets. We first take care of the 1-order itemsets in the priority queue. For each 1-order itemset, we only pay attention to the ($k$-1)-order itemsets that are extracted later than it. If the 1-order itemset and a ($k$-1)-order itemset are disjoint, then their union is a candidate frequent $k$-order itemset, and we need only calculate the frequency of these $k$-order itemsets. The detailed InsertFreqSubset algorithm is given in Algorithm \ref{alg:insert_freq_subset}. Frequent 1-order itemsets are selected in Lines 1-4. This is followed by a for-loop, where the loop index variable $k$ represents the size of the candidate itemsets. Since extracting the elements from a queue changes the queue, we make a copy of the queue at Line 6 and Line 11, then apply the EXTRACT-MAX operation on the replica to prevent unwanted changes. The outer while-loop aims to find the 1-order itemsets in the priority queue. For a 1-order itemset, the inner while-loop is conducted to find its frequent $k$-order supersets. A ($k$-1)-order itemset in the remaining queue is caught at Line 15, then a candidate $k$-order itemset is generated by combining the 1- and ($k$-1)-order itemsets, and its frequency is estimated at Line 16. After that the candidate itemset is inserted into the resulting queue $S^{(c)}$ at Line 17.
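The level-wise search can also be sketched in Python. The version below deliberately simplifies Algorithm \ref{alg:insert_freq_subset}: instead of manipulating the queue in place, it keeps the $d_{\rm freq}$ most frequent subsets found so far in a dictionary and combines surviving singletons with surviving ($k$-1)-order itemsets at each level. Here \texttt{freq} is an oracle mapping an itemset to its estimated frequency (for instance a wrapper around the frequency estimator sketched earlier), and \texttt{queue} is a \texttt{BoundedMaxPQ} as above; both are illustrative assumptions.

\begin{verbatim}
def insert_freq_subsets(queue, s, freq, d_freq):
    """Insert the most frequent subsets of itemset `s` into `queue`
    (capacity d_freq), selecting candidates level by level."""
    kept = {frozenset([x]): freq(frozenset([x])) for x in s}
    for k in range(2, len(s) + 1):
        singles = [a for a in kept if len(a) == 1]
        parents = [b for b in kept if len(b) == k - 1]
        for a in singles:
            for b in parents:
                cand = a | b
                if a.isdisjoint(b) and cand not in kept:
                    kept[cand] = freq(cand)
        # prune: only the d_freq most frequent survive to the next level
        kept = dict(sorted(kept.items(), key=lambda kv: -kv[1])[:d_freq])
    for itemset, p in kept.items():
        queue.insert(itemset, p)
\end{verbatim}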
Returning to Algorithm \ref{alg:insert_freq_subset}: the changes of the queue capacity at Lines 9 and 14, as well as the INSERT operations at Lines 18 and 19, have no effect on the resulting queue. But they squeeze out the infrequent 1- or ($k$-1)-order itemsets as soon as possible, which reduces the number of candidates and speeds up the algorithm. \section{Computational Complexity} \label{sec:computation_complexity} In this section, we analyze the computational complexity of generating a chain and of selecting the most frequent subsets of a given itemset, from which we can see that the proposed algorithms are very efficient. \subsection{Complexity of Chain Generation} Most intuitively, a chain can be represented by recording the itemset at each node. But this not only wastes space, it also makes it troublesome to generate or check a chain. For example, to compute the intersection, \cite{shah14} check whether each component of the current interaction is in the new observation. Every such check costs $O(\log p)$ even if binary search is adopted. If most of the components are sufficiently frequent, the size of the interactions will stay close to $p$, so the time complexity of one intersection can be near $O(p\log p)$. The total time needed to generate a chain of length $D$ is then near $O(Dp\log p)$, and the memory required to store this chain is near $O(Dp)$. It is worth noting that the itemsets in a chain are nested. That is to say, the itemset at a node (except the head node) must be a subset of the itemset at the previous node. So rather than record the chain as a series of ordered itemsets, we can view it from the perspective of items. A chain can be represented by two $p$-dimensional vectors, where the first is a copy of the first randomly chosen observation, and the other records how many times the corresponding item occurs in the chain. For instance, the chain $[C_1=c_1, C_2=c_2, C_3=c_3]\to [C_1=c_1, C_3=c_3]\to [C_1=c_1]$ can be represented by \{$[C_1=c_1, C_2=c_2, C_3=c_3]$, [3, 1, 2]\}, or simply \{$[c_1, c_2, c_3]$, [3, 1, 2]\}. The memory needed to store a chain is thus $O(p)$, independent of its length, and the memory required to store $M$ chains is the same as for $2M$ observations. For large data sets, the number of observations $N$ can be huge, so an additional $2M$ observations usually have little influence on the storage requirement. An item is in the $i$-th node if and only if it occurs at least $i$ times. When adding a new node to the chain, we need only pay attention to the items whose number of occurrences equals the current length of the chain. If such an item also appears in the newly chosen random observation, the intersection is formed simply by adding one to its number of occurrences. In this way, the time complexity of adding a node is $O(p)$, and generating a chain of length $D$ has a time complexity of $O(pD)$.
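For illustration, the following Python sketch implements this item-count representation; the data layout (observations as dicts mapping feature names to values) and all function names are our own assumptions. On the example above, intersecting with two further observations leaves the counts $[3, 1, 2]$, from which any node can be read off.

\begin{verbatim}
import random

def new_chain(observation):
    """Start a chain at a randomly drawn observation: keep the items of
    the head node plus, for each item, the number of nodes containing it."""
    items = dict(observation)        # feature -> value
    counts = {f: 1 for f in items}   # feature -> occurrences in the chain
    return items, counts

def add_node(chain, observation, depth):
    """Append one node by intersecting the tail node (the items whose
    count equals `depth`, the current chain length) with `observation`.
    Only O(p) work and no extra storage are needed."""
    items, counts = chain
    for f, v in items.items():
        if counts[f] == depth and observation.get(f) == v:
            counts[f] += 1

def node_at(chain, d):
    """Recover the itemset of the d-th node: items occurring >= d times."""
    items, counts = chain
    return frozenset((f, items[f]) for f in items if counts[f] >= d)

def generate_chain(data, D):
    """Generate one chain of length D from `data`, a list of observations."""
    chain = new_chain(random.choice(data))
    for depth in range(1, D):
        add_node(chain, random.choice(data), depth)
    return chain
\end{verbatim}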
Making use of this representation, we have Theorem~\ref{thm:space_complexity_chain} and Theorem~\ref{thm:time_complexity_chain}. \begin{theorem} \label{thm:space_complexity_chain} \it The space complexity of Algorithm \ref{alg:generate_chain} is $O(p|C|M)$. If $M$ and $D$ are chosen as $M^*$ and $D^*$ in Corollary \ref{crl:choice_M_D_2}, then the return meets the requirements in Theorem \ref{thm:exist_M_D}, while the space complexity is $O(p|C|\lceil \frac{a}{\log[1/(1-p_1^{D^*})]}\rceil)$. \hfill \end{theorem} \begin{theorem} \label{thm:time_complexity_chain} \it The time complexity of Algorithm \ref{alg:generate_chain} is $O(p|C|MD)$. If $M$ and $D$ are chosen as $M^*$ and $D^*$ in Corollary \ref{crl:choice_M_D_2}, then the return meets the requirements in Theorem \ref{thm:exist_M_D}, while the time complexity is $O(p|C|\lceil \frac{a}{\log[1/(1-p_1^{D^*})]}\rceil D^*)$.\\ \hfill \end{theorem} From Theorem~\ref{thm:space_complexity_chain} and Theorem~\ref{thm:time_complexity_chain} we can conclude that \textit{Random Intersection Chains} is efficient when $p_1$ is large and $p_2$ is small. In the ideal situation where $p_1$ approaches 1 and $p_2$ tends to 0, $M^*$ and $D^*$ are near 1, and both the space and time complexity of chain generation are linear in the number of features $p$ and the number of different labels in $C$. \subsection{Complexity of Subset Selection} As stated earlier in Section~\ref{sec:ric_queue}, finding a frequent pattern $s$ means we have actually found $O(2^{|s|})$ frequent patterns, since every subpattern of $s$ is frequent. It is not realistic to take all of them into consideration, or even to perform a traversal over them. Algorithm \ref{alg:insert_freq_subset} solves this problem with the help of priority queues. We provide Theorem \ref{thm:freq_subset} to guarantee the validity and time complexity of Algorithm \ref{alg:insert_freq_subset}. \begin{theorem} \label{thm:freq_subset} \it If the input priority queue of Algorithm \ref{alg:insert_freq_subset} is empty, then it contains the $d_{\rm freq}$ most frequent subsets of $s$ when the algorithm ends, and the number of frequency calculations is $O(|s|d_{\rm freq}^2)$. \hfill \end{theorem} If the input priority queue of Algorithm \ref{alg:insert_freq_subset} is not empty, e.g. $S^{(c)}=S'\neq \emptyset$, then during the running of the algorithm, itemsets with small frequency in $S'$ will be squeezed out by the subsets of the input itemset $s$, while itemsets with large frequency in $S'$ occupy a position in the priority queue from beginning to end. So at the end of the algorithm, the priority queue $S^{(c)}$ contains the $d_{\rm freq}$ most frequent itemsets in $S'\cup Pow(s)$. As a result, after the for-loop in Lines 5-7 of Algorithm \ref{alg:ric_improved}, $S^{(c)}$ contains the $d_{\rm freq}$ most frequent itemsets found by the chains for class $c$. \section{Convergence Analysis} \label{sec:convergence} The previous section analyzed the computational complexity of \textit{Random Intersection Chains}, from which we can see that it is efficient. Another main concern about \textit{Random Intersection Chains} is how effective it is. In other words, since the frequency and confidence are estimated on the basis of random intersections, one may wonder how well they approximate their true values. We find that these estimators have some good properties. To illustrate this point, the asymptotic behaviors of $\hat{p}_s^{(c)}$ and $\hat{q}_s^{(c)}$ are given in Theorem \ref{thm:freq} and Theorem \ref{thm:conf}. The derivations are given in the appendix, and are mainly based on the multivariate delta method. \begin{theorem} \label{thm:freq} \it $\hat{p}_s^{(c)}$ calculated by Algorithm \ref{alg:frequency} satisfies: \begin{equation} \sqrt{M}[\hat{p}_s^{(c)}-p_s^{(c)}]\stackrel{d}{\longrightarrow}n(0, \frac{p_s^{(c)}(1-p_s^{(c)})^2}{1-(p_s^{(c)})^D}). \end{equation} \hfill \end{theorem} \begin{theorem} \label{thm:conf} \it $\hat{q}_s^{(c)}$ calculated by Algorithm \ref{alg:confidence} satisfies: \begin{equation} \sqrt{M}[\hat{q}_s^{(c)}-q_s^{(c)}]\stackrel{d}{\longrightarrow} n(0, \tau^2),
\end{equation} where \[ \tau^2=\left[\frac{p_s^{(c)}p^{(c)}}{p_s^2}\right]^2\sum_{c'\in C}\frac{[1-p_s^{(c')}]^2p_s^{(c')}}{1-(p_s^{(c')})^{D}}\bigl(p^{(c')}\bigr)^2 +\left[\frac{p^{(c)}}{p_s^2}\right]^2 \frac{[1-p_s^{(c)}]^2p_s^{(c)}}{1-(p_s^{(c)})^{D}}\,p_s(p_s-2p_s^{(c)}p^{(c)}).\] \hfill \end{theorem} From Theorem \ref{thm:freq} we can see that $\hat{p}_s^{(c)}$ is asymptotically unbiased as $M$ goes to infinity, and the asymptotic variance of $\sqrt{M}[\hat{p}_s^{(c)}-p_s^{(c)}]$ is $\frac{p_s^{(c)}(1-p_s^{(c)})^2}{1-(p_s^{(c)})^D}$. This variance is monotonically decreasing in $D$. Remember that the space needed for the chains is independent of $D$, so if time permits, setting $D$ as large as possible seems a good choice. The variance tends to 0 if $p_s^{(c)}$ is close to either 0 or 1, which means the estimator is more accurate if the itemset is extremely frequent or extremely infrequent. Theorem \ref{thm:conf} leads to some similar results. $\hat{q}_s^{(c)}$ is also asymptotically unbiased as $M$ goes to infinity, though the asymptotic variance of $\sqrt{M}[\hat{q}_s^{(c)}-q_s^{(c)}]$ is more complex. This variance is again monotonically decreasing in $D$, which makes setting a larger $D$ more appealing. In general, a large $p_s$ (meaning $s$ is frequent in the whole database), a small $p^{(c)}$ (meaning $c$ is a minority class) and extremely large or small $p_s^{(c')}$ (meaning $s$ is either very frequent or very infrequent in each class) lead to a relatively small variance. \section{Numerical Studies} \label{sec:experiments} To verify the efficiency and effectiveness of \textit{Random Intersection Chains}, we give the results of several numerical examples. We first conduct a series of experiments on two benchmark data sets for click-through rate (CTR) prediction, aiming to illustrate the efficiency, consistency and effectiveness of \textit{Random Intersection Chains} on large-scale data sets. We also adopt the data sets used by \cite{shah14}, from whose experimental results two conclusions can be obtained: (1) \textit{Random Intersection Chains} can find almost all meaningful patterns for an ideal data set; (2) rather than acting as a classifier themselves, the detected patterns can lead to a better result if they serve as input features for another classification model. We also show that \textit{Random Intersection Chains} helps to find interactions among numerical features if they are transformed into discrete ones first, by comparing it to some existing interactive feature selection algorithms on another two UCI data sets. According to the discussion in Section~\ref{sec:convergence}, longer chains lead to better estimations. So we set $D$=100,000, and introduce the maximum order of interaction $K$ as an additional parameter. A chain stops growing if either its length is larger than $D$, or the number of items in its tail node is no larger than $K$. \subsection{Click-Through Rate Prediction} Click-through rate (CTR) prediction is an important application of machine learning algorithms, which aims to predict the ratio of clicks to impressions of a specific link. The input features are associated with either a user or an item, and many of them are categorical and of high cardinality. The label indicates whether a user clicked on an item. Usually few items are clicked by a user, which makes the data unbalanced. We conduct experiments on two public real-world data sets, named Criteo and Avazu.
The Criteo data set consists of a portion of Criteo's traffic over a period of 7 days. It contains 45 million users' click records on displayed ads, with the rows chronologically ordered, and has 26 categorical features and 13 numerical features. The Avazu data set contains records of whether a displayed mobile ad was clicked by a user or not. Click-through data of 10 days, ordered chronologically, is provided. It has 23 features, all of which are categorical, and the total number of samples is above 40 million. If one-hot encoding were applied, there would be 998,960 binary features for the Criteo data set and 1,544,428 features for the Avazu data set, which is obviously unacceptable. We first unify all the uncommon categories into a single category ``others''. A category is ``uncommon'' if the number of its occurrences is less than 10 for Criteo or 5 for Avazu. Then the categorical features are label encoded. As for numerical features, a value $z$ is transformed to $(\log z)^2$ if $z>2$. Finally, each data set is divided into 10 parts, where 8 parts are used for training, 1 part for validation and 1 part for testing. This procedure is the same as in the work of \cite{Song19}, and is also adopted by \cite{Song20, Tsang20}. As analyzed in Section~\ref{sec:computation_complexity} and Section~\ref{sec:convergence}, the more chains are generated, the more time and memory are needed, but the more accurate the estimations of frequency and confidence become. We apply \textit{Random Intersection Chains} on both data sets with $M$ ranging from 100 to 15,000 for each part of the training set. The running time is shown in Figure~\ref{running_time}. As for memory requirements, the additional space cost caused by \textit{Random Intersection Chains} is at most the same as that of $2\times (15,000\times 8)=240,000$ observations, which is very small when compared with the original 40 million observations. To show the consistency of \textit{Random Intersection Chains}, we adopt the Jaccard-index as the criterion of similarity between two sets $S$ and $S'$, which is defined as \begin{equation} {\rm J}(S, S')=\frac{|S\cap S'|}{|S\cup S'|}. \end{equation} We calculate the Jaccard-index between the interactions found with $M$=15,000 and the interactions found with smaller values of $M$, and the results are exhibited in Figure~\ref{Jaccard_index}. As can be seen from Figure~\ref{running_time}, the running time is linear in the number of chains, and it is relatively small considering the rather large size of the data sets. Figure~\ref{Jaccard_index} indicates that the returns of \textit{Random Intersection Chains} are very similar for large $M$, which verifies the consistency. \begin{figure}[!t] \centering \subfigure[Running time of random intersection chains.]{\includegraphics[width=2.7in]{running_time.png} \label{running_time}} \hfil \subfigure[Jaccard-index of the found interactions.]{\includegraphics[width=2.7in]{jaccard_index.png} \label{Jaccard_index}} \caption{Running time and Jaccard-index for Random Intersection Chains with different numbers of chains on the Criteo and Avazu data sets.} \label{fig:preformance_m} \end{figure}
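For reference, the consistency criterion above is a one-line computation; in the sketch below each argument is assumed to be a Python set of detected interactions (e.g.\ \texttt{frozenset}s of items).

\begin{verbatim}
def jaccard(S, S_prime):
    """Jaccard-index J(S, S') = |intersection| / |union|."""
    return len(S & S_prime) / len(S | S_prime)
\end{verbatim}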
One of the advantages of the interactions defined in this paper is their interpretability. Since the Avazu data set contains non-anonymized features, we list the 10 most confident interactions with their estimated and accurate frequency and confidence in Table~\ref{tab:interactions}, where we only list the names of the features and omit the specific value of each feature to keep the notation uncluttered. We can conclude that ``banner\_pos'' plays an important role in advertising. This observation coincides with the intuition that an advertisement is more likely to be clicked if it is exhibited in a good position. Many interactions contain a feature associated with ``app'' and a feature about ``device'', which indicates the relationship between an item and a user. Also, the estimations of frequency and confidence are quite accurate: the RMSEs of the estimations are $1.2\times 10^{-3}$, $1.9\times 10^{-3}$ and $1.1\times 10^{-3}$ for Frequency(-), Frequency(+) and Confidence, respectively. What's more, the corresponding Pearson correlation coefficients are 0.9987, 0.9977 and 0.9879, which means the numerical order of frequency or confidence is well preserved. Thus the patterns found by \textit{Random Intersection Chains} are likely to be the most frequent and confident ones. \begin{table}[!t]\scriptsize \caption{Ten most confident interactions for Avazu. Frequency(-) stands for the frequency in the negative class and Frequency(+) for the frequency in the positive class. ``Est.'' represents the values of the estimators and ``True'' represents the accurate values in the data sets.} \label{tab:interactions} \centering \begin{tabular}{ccccccc} \hline \multirow{2}{*}{interaction} & \multicolumn{2}{c}{Frequency(-)} & \multicolumn{2}{c}{Frequency(+)} & \multicolumn{2}{c}{Confidence}\\ \cline{2-7} & Est. & True & Est. & True & Est. & True \\ \hline device\_id, C20 & 0.3607 & 0.3610 & 0.4495 & 0.4518 & 0.2032 & 0.2038 \\ banner\_pos, app\_domain, app\_category, device\_conn\_type & 0.3646 & 0.3633 & 0.4486 & 0.4497 & 0.2011 & 0.2020 \\ banner\_pos, app\_category, device\_conn\_type & 0.3646 & 0.3633 & 0.4486 & 0.4497 & 0.2011 & 0.2020 \\ banner\_pos, app\_domain, app\_category & 0.3835 & 0.3821 & 0.4717 & 0.4735 & 0.2010 & 0.2022 \\ banner\_pos, app\_category & 0.3835 & 0.3821 & 0.4717 & 0.4735 & 0.2010 & 0.2022 \\ banner\_pos, app\_id, app\_domain & 0.3759 & 0.3748 & 0.4604 & 0.4619 & 0.2003 & 0.2014 \\ banner\_pos, app\_id & 0.3759 & 0.3748 & 0.4604 & 0.4619 & 0.2003 & 0.2014 \\ banner\_pos, app\_domain, device\_conn\_type & 0.3684 & 0.3669 & 0.4498 & 0.4510 & 0.1998 & 0.2009 \\ device\_type, device\_conn\_type, C20 & 0.3690 & 0.3684 & 0.4502 & 0.4537 & 0.1997 & 0.2012 \\ banner\_pos, app\_domain & 0.3882 & 0.3867 & 0.4731 & 0.4751 & 0.1995 & 0.2008 \\ \hline \end{tabular} \end{table} Finally, the interactions are used as binary features, based on which several popular CTR prediction models are trained. We adopt 5 models, namely Wide\&Deep \citep{Cheng16}, DeepFM \citep{Guo17}, xDeepFM \citep{Lian18}, Deep\&Cross \citep{Wang17} and AutoInt \citep{Song19}, to test whether adding the interactions to the input is helpful. The results are shown in Table~\ref{tab:ctr_performance}, where the performance of GLIDER, an interaction detection method proposed by \cite{Tsang20}, is also presented as a comparison. We can see that in most cases, adding the interactions found by \textit{Random Intersection Chains} leads to a significant improvement.
In fact, an improvement as small as 0.001 is desirable for the Criteo data set \citep{Cheng16, Guo17, Wang17, Song19, Tsang20}, and \textit{Random Intersection Chains} lives up to this expectation. Perhaps due to the better learning rate we used, our baselines are stronger than those reported for GLIDER. Despite the difference between baselines, \textit{Random Intersection Chains} makes an improvement comparable to GLIDER's. But according to \cite{Tsang20}, it takes several hours and more than 150 GB of memory to run GLIDER, while the requirements of \textit{Random Intersection Chains} are much lower. \begin{table} \caption{CTR prediction performance on two benchmark data sets. ``+RIC'' means adding the interactions found by \textit{Random Intersection Chains} to the input. All experiments were repeated 5 times, and the means are provided with standard deviations in parentheses. The results which yield a p-value less than 0.05 are shown in bold. Rows with * are reported by \cite{Tsang20}. ``+GLIDER'' means the inclusion of interactions found by GLIDER. } \label{tab:ctr_performance} \centering \begin{tabular}{lcccc} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Criteo} & \multicolumn{2}{c}{Avazu}\\ \cline{2-5} & AUC & logloss & AUC & logloss\\ \hline Wide\&Deep & 0.8087(4e-4) & 0.4427(3e-4) & 0.7826(2e-4) & 0.3781(1e-4) \\ +RIC & \textbf{0.8097(3e-4)} & \textbf{0.4418(2e-4)} & \textbf{0.7828(2e-4)} & \textbf{0.3779(1e-4)} \\ Wide\&Deep* & 0.8069(5e-4) & 0.4446(4e-4) & 0.7794(3e-4) & 0.3804(2e-4) \\ +GLIDER* & 0.8080(3e-4) & 0.4436(3e-4) & 0.7795(1e-4) & 0.3802(9e-5) \\ \hline DeepFM & 0.8087(4e-4) & 0.4427(3e-4) & 0.7819(3e-4) & 0.3785(2e-4) \\ +RIC & \textbf{0.8097(3e-4)} & \textbf{0.4418(2e-4)} & \textbf{0.7826(4e-4)} & \textbf{0.3781(2e-4)} \\ DeepFM* & 0.8079(3e-4) & 0.4436(2e-4) & 0.7792(3e-4) & 0.3804(9e-5) \\ +GLIDER* & 0.8097(2e-4) & 0.4420(2e-4) & 0.7795(2e-4) & 0.3802(2e-4) \\ \hline Deep\&Cross & 0.8084(7e-4) & 0.4430(7e-4) & 0.7824(2e-4) & 0.3782(1e-4) \\ +RIC & \textbf{0.8096(6e-4)} & \textbf{0.4419(5e-4)} & \textbf{0.7835(8e-4)} & \textbf{0.3776(4e-4)} \\ Deep\&Cross* & 0.8076(2e-4) & 0.4438(2e-4) & 0.7791(2e-4) & 0.3805(1e-4) \\ +GLIDER* & 0.8086(3e-4) & 0.4428(2e-4) & 0.7792(2e-4) & 0.3803(9e-5) \\ \hline xDeepFM & 0.8082(5e-4) & 0.4432(4e-4) & 0.7824(4e-4) & 0.3782(2e-4) \\ +RIC & \textbf{0.8103(2e-4)} & \textbf{0.4414(2e-4)} & 0.7825(4e-4) & 0.3781(2e-4) \\ xDeepFM* & 0.8084(2e-4) & 0.4433(2e-4) & 0.7785(3e-4) & 0.3808(2e-4) \\ +GLIDER* & 0.8097(3e-4) & 0.4421(3e-4) & 0.7787(4e-4) & 0.3806(1e-4) \\ \hline AutoInt & 0.8077(3e-4) & 0.4436(3e-4) & 0.7788(2e-4) & 0.3804(1e-4) \\ +RIC & \textbf{0.8090(4e-4)} & \textbf{0.4425(4e-4)} & \textbf{0.7795(3e-4)} & \textbf{0.3802(1e-4)} \\ AutoInt* & 0.8083 & 0.4434 & 0.7774 & 0.3811 \\ +GLIDER* & 0.8090(2e-4) & 0.4426(2e-4) & 0.7773(1e-4) & 0.3811(5e-5) \\ \hline \end{tabular} \end{table} \subsection{Tic-Tac-Toe Endgame Data} The Tic-Tac-Toe endgame data set \citep{Matheus89, Aha91} encodes the complete set of possible board configurations at the end of tic-tac-toe games. There are 958 instances and 9 categorical features in the data set. The possible values for each feature are ``x'', ``o'' and ``b'', which stand for ``black'', ``white'' and ``blank'', respectively. There are 8 possible ways to win for both players (3 horizontal lines, 3 vertical lines and 2 diagonal lines). Our target is to learn the rules that determine which player wins the game.
This is an ideal data set to test the effectiveness of \textit{Random Intersection Chains}: all the features are categorical, and the label is intrinsically determined by a set of rules, so we can make accurate predictions if we find all the interesting rules. We therefore attempt to extract all 16 valid rules, for different values of $d_{\rm freq}$. We set $K$=4 and $d_{\rm conf}$=10, so only the 10 most confident patterns for each player are kept. The range of $d_{\rm freq}$ we adopted is [20, 1000], and the number of interesting rules found is given in Figure~\ref{fig:valid_rule}. At the beginning, more interesting patterns are found as $d_{\rm freq}$ grows. In fact, for $d_{\rm freq}$ larger than 400, all the interesting patterns corresponding to x's victory are found, while one pattern is missing for ``o''. Checking the list of found patterns, we see that the missing one is ``a3=o, b3=o, c3=o''. Its support is 32/958=0.0334, indeed a very small value. When $d_{\rm freq}$ is too large, not only does the missing pattern still fail to occur, but some previously found patterns disappear as well. This is because some uncommon patterns have high confidence coincidentally. For example, ``a1=x, b1=o, b3=b'' occurs 30 times in the database, and all the instances containing this pattern happen to be in the positive class. Since the number of retained patterns is limited, these coincidental patterns squeeze out the interesting ones. The results indicate that the size of the priority queue influences the extracted patterns in two ways. On the one hand, a small $d_{\rm freq}$ may cause some meaningful patterns to be neglected. On the other hand, if it is too large, some coincidentally confident patterns will be chosen, which can be regarded as an ``overfitting'' phenomenon of \textit{Random Intersection Chains}. \begin{figure}[!t] \centering \includegraphics[width=3.5in]{valid_rule.png} \caption{Number of valid rules among the confident rules.} \label{fig:valid_rule} \end{figure} \subsection{Reuters RCV1 Text Data} The Reuters RCV1 text data contains the tf-idf (term frequency-inverse document frequency) weighted presence of 47,148 word-stems in each document \citep{Lewis04}. \cite{Lewis04} used a data set consisting of 23,149 documents as the training set. Like the processing approach adopted by \cite{shah14}, we only consider word-stems appearing in at least 100 documents and the topics that contain at least 200 documents. This leaves 2484 word-stems as predictor variables and 52 topics as prediction targets. Also, the tf-idf values are binarized, using 1 or 0 to indicate whether the value is positive. In this work, we split the original training data into a smaller training set and a test set. The first 13,149 instances are used for training and the rest for testing. For each topic $c$, the target is to predict whether a document belongs to this topic. Setting $d_{\rm freq}$=400, $d_{\rm conf}$=200 and $K$=4, $M$=300, we apply \textit{Random Intersection Chains} on the training set. Then we evaluate the interactions in two different ways. The first method, labeled ``Best-Rule'', classifies the instances directly by the best rule. The ``best'' rule is defined as the most confident one among the rules with support larger than $p^{(c)}/10$, the same criterion as used by \cite{shah14}. The other method, called ``Rules+LR'', treats the rules as selected features for a linear model.
We also fit a linear model on all 2484 features as a comparison, labeled ``LR''. The precision and recall for the models are given in Figure~\ref{fig:results_rcv1}, where the best rules found by \textit{Random Intersection Chains} are shown on the right side of the figure. \begin{figure}[!t] \centering \subfigure[Precision on the test data.]{\includegraphics[height=6in]{precision_rcv1.png} \label{precision_rcv1}} \hfil \subfigure[Recall on the test data.]{\includegraphics[height=6in]{recall_rcv1.png} \label{recall_rcv1}} \caption{Precision and recall on the test data.} \label{fig:results_rcv1} \end{figure} We can see that ``Best-Rule'' sometimes gives good precision or recall, but its performance is usually the worst among the three models. This is not surprising, because the information in a single rule is limited, and it is unreasonable to expect a single rule to be both general and reliable on a complicated data set. ``Rules+LR'' yields similar precision but generally smaller recall when compared with ``LR''. At first glance, it seems \textit{Random Intersection Chains} brings few benefits. But note that the input dimension of the linear model is 200 in ``Rules+LR'', while it is 2484 in ``LR''. The selected rules thus greatly reduce the data size and the computational burden, while the precision and recall fall only slightly. Also, the interpretability is enhanced since fewer features are used. \subsection{Other UCI Data} We also apply \textit{Random Intersection Chains} to the two data sets used by \cite{shah16}. The first is the Communities and Crime Unnormalized Data Set, ``CCU'' for short, which contains crime statistics for the year 1995 obtained from FBI data, and national census data from 1990. We take violent crimes per capita as our response, which makes it a regression task. We process the data in the same way as \cite{shah16}. This leads to a data set consisting of 1903 observations and 101 features. The second data set is ``ISOLET'', which consists of 617 features based on the speech waveforms generated from utterances of each letter of the English alphabet. We consider classification on the notoriously challenging E-set consisting of the letters ``B'', ``C'', ``D'', ``E'', ``G'', ``P'', ``T'', ``V'' and ``Z'', which finally gives 2700 observations spread equally among 9 classes. We use $l_1$-penalised linear regression as the base regression procedure, and penalised multinomial regression for the classification example. The regularization coefficient is determined by 5-fold cross-validation. To evaluate the procedures, we randomly select 2/3 of the instances for training and the rest for testing. This procedure is repeated 200 times for each of the data sets. Mean squared error is used as the criterion for the regression model and misclassification rate is used for the classification task. All the settings are exactly the same as those used by \cite{shah16}, except that we use an $l_2$-regularizer to penalise the multinomial regression instead of group Lasso. This is because we do not know how \cite{shah16} grouped the features. Since the inputs for these two data sets are numerical and the CCU data set corresponds to a regression task, \textit{Random Intersection Chains} cannot be applied directly. To handle this difficulty, the continuous features and response are transformed to a discrete version. The response of the CCU data set is split into 5 categories by quantiles, and all the continuous features are then split into 5 intervals according to information gain.
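As a concrete illustration of the response transformation (the information-gain split of the features follows the same interface but is omitted here), the sketch below bins a continuous vector by quantiles; the function name and the use of NumPy are our own choices.

\begin{verbatim}
import numpy as np

def quantile_discretize(y, n_bins=5):
    """Split a continuous response into n_bins categories by quantiles."""
    inner = np.linspace(0, 1, n_bins + 1)[1:-1]   # inner quantile levels
    edges = np.quantile(y, inner)
    return np.digitize(y, edges)                  # labels in {0,...,n_bins-1}
\end{verbatim}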
Setting $d_{\rm freq}$=2500, $d_{\rm conf}$=1225 and $K=5$, $M=300$, we add the product $X_{i_1}X_{i_2}\cdots X_{i_k}$ as an interactive feature to the input if there is a rule of the form ($X_{i_1}$=$x_{i_1}$, $X_{i_2}$=$x_{i_2}$, ..., $X_{i_k}$=$x_{i_k}$) for some $x_{i_1}$, $x_{i_2}$, ..., $x_{i_k}$ in the resulting rule sets. The results of models with and without adding the rules found by \textit{Random Intersection Chains} are shown in Table \ref{tab:ccu_isolet}, labeled ``RIC'' and ``Main''. We also list the results reported by \cite{shah16}, including base procedures (``Main*''), iterated Lasso fits (``Iterate''), Lasso following marginal screening for interactions (``Screening''), Backtracking, Random Forests \citep{Breiman01}, hierNet \citep{bien13} and MARS \citep{Friedman91}. For the CCU data set, our base model outperforms the one used by \cite{shah16}, which may be caused by a better penalty parameter. \textit{Random Intersection Chains} leads to comparable or better results compared with existing algorithms. As for the ISOLET data set, the result of our base model is not as good as the one used by \cite{shah16}. This is not surprising, since we simply use an $l_2$-regularizer while \cite{shah16} adopted group Lasso to penalise the model. But we can see that \textit{Random Intersection Chains} can run on this data set and leads to a good improvement, while some existing methods such as Screening, hierNet and MARS are inapplicable. We regard this as evidence of our method's efficiency. \begin{table}[!t] \caption{Results of CCU and ISOLET. Rows with * are reported by \cite{shah16}.} \label{tab:ccu_isolet} \centering \begin{tabular}{c||c|c} \hline \multirow{2}{*}{method} & \multicolumn{2}{c}{ERROR}\\ \cline{2-3} & Communities and crime & ISOLET\\ \hline Main & $0.404(5.7\times 10^{-3})$ & $0.0730(5.5\times 10^{-4}$) \\ RIC & $0.369(6.3\times 10^{-3}$) & $0.0665(5.3\times 10^{-4}$) \\ Main* & $0.414(6.5\times 10^{-3}$) & $0.0641(4.7\times 10^{-4}$)\\ Iterate* & $0.384(5.9\times 10^{-3}$) & $0.0641(4.7\times 10^{-4}$)\\ Screening* & $0.390(7.8\times 10^{-3}$) & - \\ Backtracking* & $0.365(3.7\times 10^{-3}$) & $0.0563(4.5\times 10^{-4}$)\\ Random Forest* & $0.356(2.4\times 10^{-3}$) & $0.0837(6.0\times 10^{-4}$)\\ hierNet* & $0.373(4.7\times 10^{-3}$) & - \\ MARS* & $5580.586(3.1\times 10^{3}$) & - \\ \hline \end{tabular} \end{table} \section{Conclusion} \label{sec:conclusion} Based on ideas from association rule mining and random intersections, we propose \textit{Random Intersection Chains} to discover meaningful categorical interactions. A number of chains are generated by intersections, and the patterns in the tail nodes are regarded as frequent patterns. Then the frequency and confidence of these patterns are estimated by maximum likelihood estimation and Bayes' theorem. Finally the most confident patterns are selected. An efficient algorithm for selecting the most frequent subpatterns of a given pattern in polynomial time is also provided. We prove that there exist appropriate parameters that keep all the frequent patterns while excluding the infrequent ones. The time and space complexities are analyzed, showing that the algorithm is both time- and memory-efficient. The asymptotic behavior of the estimators is also characterized: when the number of chains goes to infinity, the estimated frequency and confidence converge to their true values.
As a supplement, a series of experiments is conducted to verify the effectiveness of \textit{Random Intersection Chains}. By applying the algorithm to two CTR prediction data sets, we show that it detects the most frequent and confident patterns efficiently and consistently. The prediction results verify that adding these interactions is beneficial for CTR prediction. The ability to detect useful patterns is further tested on the Tic-Tac-Toe data, where almost all the meaningful rules are found if the parameters are appropriately chosen. The experiments on the Reuters RCV1 text data show that the found patterns can not only serve as a classifier themselves, but also be the input features for another model. We also compare our algorithm with some other interaction detection methods on several UCI data sets with continuous features or response. The results show that \textit{Random Intersection Chains} can help if the features or response are transformed into categorical ones beforehand. One limitation of \textit{Random Intersection Chains} is that it cannot be applied directly to numerical features or response. We are trying to extend its application domain to these cases. Another difficulty lies in the choice of parameters. Different parameter settings influence the prediction performance, but tuning the parameters by grid search is time-consuming. We hope to find a better approach to choose the parameters. \acks{This work was partially supported by the National Natural Science Foundation of China under grants 11671418 and 12071428 and by the Zhejiang Provincial Natural Science Foundation of China under grant LZ20A010002.} \newpage
\newcommand{\resection}[1]{\setcounter{equation}{0} \section{#1}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \newcommand{\tr}[1]{\mbox{\raisebox{-2.7mm}{$\stackrel{\displaystyle{\rm Tr}} {\scriptstyle{#1}}$}}} \newcommand{\NP}[1]{Nucl.\ Phys.\ {\bf #1}} \newcommand{\PL}[1]{Phys.\ Lett.\ {\bf #1}} \newcommand{\CMP}[1]{Comm.\ Math.\ Phys.\ {\bf #1}} \newcommand{\PR}[1]{Phys.\ Rev.\ {\bf #1}} \newcommand{\PRL}[1]{Phys.\ Rev.\ Lett.\ {\bf #1}} \newcommand{\MPL}[1]{Mod.\ Phys.\ Lett.\ {\bf #1}} \newcommand{\IJMP}[1]{Int.\ J.\ Mod.\ Phys.\ {\bf #1}} \newcommand{\JETP}[1]{Sov.\ Phys.\ JETP {\bf #1}} \newcommand{\TMP}[1]{Teor.\ Math.\ Phys.\ {\bf #1}} \newcommand{\JMP}[1]{J.\ Math.\ Phys.\ {\bf #1}} \newcommand{\JP}[1]{J.\ Phys.\ {\bf #1}} \hyphenation{Blen-cowe pa-ra-fer-mion pa-ra-fer-mio-nic pa-ra-fer-mions} \newcommand{\ubl}[1]{\{#1\}} \newcommand{\usbl}[1]{\left(#1\right)} \newcommand{\VEV}[1]{\langle #1\rangle} \newcommand{\ket}[1]{|#1\rangle} \newcommand{\bra}[1]{\langle #1|} \newcommand{\gam}[1]{\gamma\left({\scriptstyle{#1}}\right)} \newcommand{\Gam}[1]{\Gamma\left({\scriptstyle{#1}}\right)} \newcommand{\Cam}[1]{C^{\scriptscriptstyle{(#1)}}} \newcommand{\fracs}[2]{{\scriptstyle\frac{#1}{#2}}} \newcommand{\fract}[2]{{\textstyle\frac{#1}{#2}}} \newcommand{\opnup}[1]{\renewcommand{\\}{\\[50 pt]}} \renewcommand{\bar}{\overline} \renewcommand{\tilde}{\widetilde} \newcommand\rdilog{{\cal L}} \newcommand\IM{^{\rm IM}} \newcommand\re{\hbox{Re}\,} \newcommand\im{\hbox{Im}\,} \newcommand\sign{\hbox{Sign}\,} \newcommand\Arg{{\rm Arg}} \newcommand\LYM{{\cal M}(2/5)} \newcommand\Ebndry{E_{\rm bndry}} \newcommand\Eblk{{\cal E}_{\rm bulk}} \renewcommand\hat{\widehat} \begin{document} \begin{titlepage} \vskip 0.5cm \begin{flushright} DTP-97-67 \\ KCL-MTH-97-71 \\ SPhT-97/163 \\ {\tt hep-th/9712197}\\ December 1997 \\ (revised) \end{flushright} \vskip 1.2cm \begin{center} {\Large {\bf TBA and TCSA with boundaries and}} \\[5pt] {\Large {\bf excited states } } \end{center} \vskip 0.8cm \centerline{Patrick Dorey% \footnote{e-mail: {\tt P.E.Dorey@durham.ac.uk, A.J.Pocklington@durham.ac.uk}}, Andrew Pocklington$^1$, Roberto
Tateo\footnote{e-mail: {\tt Tateo@wasa.saclay.cea.fr}} and G\'erard Watts\footnote{e-mail: {\tt gmtw@mth.kcl.ac.uk}} } \vskip 0.6cm \centerline{${}^1$\sl Department of Mathematical Sciences,} \centerline{\sl University of Durham, Durham DH1 3LE, England\,} \vskip 0.2cm \centerline{${}^2$\sl Service de Physique Th\'eorique, CEA-Saclay,} \centerline{\sl F-91191 Gif-sur-Yvette Cedex, France\,} \vskip 0.2cm \centerline{${}^3$\sl Mathematics Department, } \centerline{\sl King's College London, Strand, London WC2R 2LS, U.K.} \vskip 0.9cm \begin{abstract} \vskip0.15cm \noindent We study the spectrum of the scaling Lee-Yang model on a finite interval {}from two points of view: via a generalisation of the truncated conformal space approach to systems with boundaries, and via the boundary thermodynamic Bethe ansatz. This allows reflection factors to be matched with specific boundary conditions, and leads us to propose a new (and non-minimal) family of reflection factors to describe the one relevant boundary perturbation in the model. The equations proposed previously for the ground state on an interval must be revised in certain regimes, and we find the necessary modifications by analytic continuation. We also propose new equations to describe excited states, and check all equations against boundary truncated conformal space data. Access to the finite-size spectrum enables us to observe boundary flows when the bulk remains massless, and the formation of boundary bound states when the bulk is massive.\\ \end{abstract} \end{titlepage} \setcounter{footnote}{0} \def\thefootnote{\fnsymbol{footnote}} \resection{Introduction} Integrable quantum field theories in domains with boundaries have attracted some attention of late, principally in the wake of a paper by Ghoshal and Zamolodchikov~\cite{GZa}. Such theories can be specified in terms of ultraviolet data consisting of the original boundary conformal field theory together with a specification of the particular perturbations chosen, both in the bulk and/or at the boundary, or else in terms of infrared data, which might consist of a bulk S-matrix together with a set of reflection factors encoding the scattering of each particle in the model off a single boundary. The reflection factors are constrained by various consistency conditions -- unitarity, crossing-unitarity~\cite{GZa}, and the boundary Yang-Baxter~\cite{Ca} and boundary bootstrap~\cite{FKa,GZa} equations -- and it is natural to explore the space of solutions to these conditions, and then attempt to match them with particular perturbed boundary conformal field theories. While the first aspect has been studied by many authors, the second is comparatively underdeveloped, and provides much of the motivation for the work to be described in this paper. The strategy (already employed with much success in boundaryless situations) will be to study finite-volume spectra by two different routes, one based on the ultraviolet data and one on the infrared data, and then to compare the results. The ultraviolet route will proceed via a generalisation of the truncated conformal space approach (TCSA)~\cite{YZa} to boundary situations. This seems to us to be of independent interest, and should be useful in non-integrable situations too. For brevity, we shall refer to this method as the BTCSA, for boundary truncated conformal space approach.
On the infrared side, the main weapon will be the `BTBA', a modification of the thermodynamic Bethe ansatz (TBA) equations~\cite{Zb} adapted to boundary situations, introduced in refs.~\cite{Zta,LMSSa}. We have made a study of the analytic structure of these modified equations, uncovering a number of surprises along the way. We have also found new equations, some of them encoding the ground state energy in regimes where the previously-proposed equations break down, and others the energies of excited states. These new equations are similar in form to those found in~\cite{BLZa,DTa} for excited states in the more traditional (boundaryless) TBA. The agreement between BTCSA and BTBA results offers support for both, and in addition allows us to obtain concrete predictions about the relationship between various parameters appearing at short and long distances. Thus far, most of our work has been confined to what appears to be the simplest nontrivial example, namely the boundary scaling Lee-Yang model, but the methods advocated certainly have wider applicability. In this paper the emphasis will be on the structure of the BTBA equations and the matching of their solutions with the results that we have obtained by means of the BTCSA. A more detailed description of the BTCSA method itself will appear in a companion paper, currently in preparation~\cite{Us2}. \resection{The model} The ultraviolet limit of the scaling Lee-Yang model (SLYM) is the $\LYM$ minimal model~\cite{Cardy85}, a non-unitary conformal field theory with central charge $c = -22/5$. The field content and the operator description depend on the geometry considered, but far from any boundary the local fields are those of the theory on a plane, and are left and right Virasoro descendents of the primary fields $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$, of scaling dimension $0$, and $\varphi$, of scaling dimension $x_\varphi =\Delta_{\varphi}{+}\overline\Delta_{\varphi}= -2/5$. Both of these are scalars, and the one non-trivial fusion rule is $\varphi \times \varphi = \hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}} + \varphi$. The conventional normalisation of $\varphi$ is \begin{equation} \varphi(z) \; \varphi(w) = |z-w|^{4/5} \;+\; C_{\varphi\varphi}^\varphi\,\varphi(w)\,|z-w|^{2/5} + \ldots\;, \label{eq:norm1} \end{equation} but this results in $C_{\varphi\varphi}^\varphi$ being purely imaginary, and many other structure constants in the boundary theory being imaginary as well. For this reason, we have chosen the non-standard normalisation \begin{equation} \varphi(z) \; \varphi(w) = - |z-w|^{4/5} \;+\; C_{\varphi\varphi}^\varphi\, \varphi(w)\,|z-w|^{2/5} + \ldots\;, \label{eq:norm2} \end{equation} where we take $C_{\varphi\varphi\varphi} {=} - C_{\varphi\varphi}^\varphi$ to be a positive real number. If the physical geometry has a boundary, then a conformally invariant boundary condition (CBC) must be assigned to each component of that boundary, and the possible local fields on any part of the boundary depend on the particular conformal boundary condition found there. Cardy has classified the possible boundary conditions~\cite{Cardy89} and the field content on each of these can be found by solving the consistency conditions given by Cardy and Lewellen~\cite{CL91}. The boundary fields fall into irreducible representations of a single Virasoro algebra, so all that is needed to specify the local field content of a particular conformally invariant boundary condition is the set of weights of the primary boundary fields.
It turns out that there are two conformal boundary conditions for $\LYM$, which by an abuse of notation we shall label by $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ and $\Phi$. The $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ boundary has only one primary boundary field, which is the identity $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$, while the $\Phi$ boundary has two, the identity and a field $\phi$ of scaling dimension $x_\phi = -1/5$. As a result, there are no relevant boundary perturbations of the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ boundary, and a single relevant perturbation of the $\Phi$ boundary, by the field $\phi$. As with equation (\ref{eq:norm2}), we choose the normalisation of $\phi$ to be \begin{equation} \phi(x) \; \phi(y) = - |x-y|^{2/5} \;+\; C_{\phi\phi}^\phi\, \phi(y)\,|x-y|^{1/5} + \ldots\;, \label{eq:norm3} \end{equation} with $C_{\phi\phi\phi}^{\vphantom{\phi}}= - C_{\phi\phi}^\phi$ a positive real number. Together equations (\ref{eq:norm2}) and (\ref{eq:norm3}) determine the bulk-boundary constant ${}^{\Phi}\!B_{\varphi}^\phi$ appearing in the expansion of $\varphi(x+iy)$ on the upper half plane with boundary condition $\Phi$ at $y=0$: \begin{equation} \varphi(x+iy) = {}^{\Phi}\!B_{\varphi}^\phi\,\phi(x)\,(2y)^{1/5} + \ldots\;. \end{equation} The only remaining independent structure constant appears in the operator product expansion of two boundary-changing operators, but is not needed to reproduce the results in this paper. This sketch will suffice for a description of the various boundary scaling Lee-Yang models that we shall be considering. Each can be regarded as a perturbation of one of the boundary conformal field theories just discussed. First, suppose that there is no boundary at all. There is only the bulk to perturb, and if the perturbation is to be relevant then there is only one bulk field, namely $\varphi$, to perturb by. The perturbed action \begin{equation} {\cal A}_{\rm SLYM}={\cal A}_{\LYM}+ \lambda\!\int^{\infty}_{-\infty}\!dy\int^{\infty}_{-\infty}\!dx\,\varphi(x,y) \label{arnold} \end{equation} is integrable, and, for $\lambda>0$, results in a massive scattering theory with a single particle type of mass $M$, and two-particle S-matrix~\cite{CMa} \begin{equation} S(\theta)=-\usbl{1}\usbl{2}~~,\quad \usbl{x}={\sinh\bigl({\theta\over 2}+{i\pi x\over 6}\bigr)\over \sinh\bigl({\theta\over 2}-{i\pi x\over 6}\bigr)}~. \label{asm} \end{equation} The exact relationship between $M$ and $\lambda$ was found in~\cite{Zg}. In the conventions implied by (\ref{eq:norm2}) and (\ref{arnold}), it is \begin{equation} M(\lambda)= {2^{19/12} \sqrt{\pi} \over 5^{5/16}} {\left( \Gamma(3/5) \Gamma(4/5) \right)^{5/12}\!\!\!\!\!\!\!\!\! \over \Gamma(2/3) \Gamma(5/6) }\;\;\;\;\;\lambda^{5/12} = (2.642944\dots)\lambda^{5/12}\;. \label{mlrel} \end{equation} Now add a single boundary along the imaginary axis $x=0$. Then, as explained in, for example,~\cite{GZa}, the S-matrix should be supplemented with a reflection factor encoding how the particle bounces off the boundary. There are four `minimal' possibilities, each of which satisfies all of the consistency conditions entailed by the S-matrix while minimising the number of poles and zeroes in the strip $0\le\im\theta\le\pi\,$\footnote{It might be more natural to minimise the number of poles and zeroes in the narrower strip $0\le\im\theta\le\pi/2\,$. 
In any event, we won't be imposing either version of minimality, but rather checking solutions directly against BTCSA data.}: \begin{eqnarray} R_{(1)}= \usbl{\fract{1}{2}}\usbl{\fract{3}{2}}\usbl{\fract{4}{2}}^{-1} \phantom{-} &,&~\quad R_{(2)}= \usbl{\fract{3}{2}}^{-1}\!\usbl{\fract{4}{2}}^{-1}\! \usbl{\fract{5}{2}}^{-1}\nonumber\\ R_{(3)}= -\usbl{\fract{1}{2}}\usbl{\fract{2}{2}}\usbl{\fract{3}{2}} {}~~\,&,&~\quad R_{(4)}= -\usbl{\fract{2}{2}}\usbl{\fract{3}{2}}^{-1}\!\usbl{\fract{5}{2}}^{-1} \label{lew} \end{eqnarray} The first two were given in~\cite{GZa}, while the second two are related to these by multiplication by the bulk S-matrix. (As observed in~\cite{Sa}, this procedure automatically generates further solutions to the consistency conditions.) In fact, an easing of the minimality requirement will be required in order to match all of the boundary conditions that will be encountered. For the time being we just note that the reflection factor \begin{equation} R_b(\theta)= \usbl{\fract{1}{2}} \usbl{\fract{3}{2}} \usbl{\fract{4}{2}}^{-1}\! \usbl{\fract{1-b}{2}}^{-1}\! \usbl{\fract{1+b}{2}} \usbl{\fract{5-b}{2}} \usbl{\fract{5+b}{2}}^{-1} \label{eq:rfb} \end{equation} is consistent with the bulk S-matrix for any value of the parameter $b$, and reduces to $R_{(1)}$, $R_{(2)}$ and $R_{(3)}$ for $b=0$, $-2$ and $1$ respectively. At this stage there is no way of telling which, if any, of these solutions is actually realised as the reflection factor of a concrete perturbed boundary conformal field theory. Such a theory, in the geometry currently under consideration, will have an action of the form \begin{equation} {\cal A}_{\rm BSLYM}={\cal A}_{\LYM+{\rm CBC}}+ \lambda\!\int^{\infty}_{-\infty}\!dy\int_{-\infty}^0\!dx\,\varphi(x,y)+ h\!\int^{\infty}_{-\infty}\!dy\,\phi_B(y)~, \label{arnie} \end{equation} where ${\cal A}_{\LYM+{\rm CBC}}$ is the action for the conformal field theory on the semi-infinite plane, with a definite conformal boundary condition at $x=0$, and $\phi_B(y)$ is one of the boundary fields allowed by that same boundary condition. We shall denote the boundary $\Phi$ with a term in the action $ h\!\oint ds\,\phi(s)$ by $\Phi(h)$. It is important to appreciate that since the bulk-boundary coupling ${}^{\Phi}\!B_{\varphi}^{\phi}$ is non-zero, the sign of $h$ is important, just as the fact that the bulk three-point coupling is non-zero means that the sign of $\lambda$ in (\ref{arnold}) and (\ref{arnie}) is important. As a final step, we add a second boundary to confine the system to a strip of finite width. The next section outlines how this situation can be analysed numerically, using a modification of the TCSA technique. \resection{Boundary TCSA} The TCSA method of~\cite{YZa} assumes periodic boundary conditions, and has to be adapted to our context of a strip of width $R$. A conformal mapping from the strip to the upper half plane sends the quantisation surface to a semi-circle of radius $1$, with the left and right boundary-perturbing operators sitting on the real axis at $-1$ and $1$ respectively. The Hamiltonian is then given in terms of the boundary conformal field theory as \begin{eqnarray} H_{\alpha\beta}(M,R) &=& \frac{\pi}{R} \, \left[\; L_0 - \frac{c}{24} \;+\; \lambda \left( \frac{R}{\pi}\right)^{2 - x_\varphi} \!\!\!\int_{0}^\pi\!\!d\theta\, \varphi( \exp(i \theta)) \right. \nonumber \\[3pt] && \;\;\;\;\;\;\;\;\;\;+\; \left. 
h_L \left(\frac{R}{\pi}\right)^{1-x_{\rm L}}\!\phi_{\rm L}(-1) \;+\; h_R \left(\frac{R}{\pi}\right)^{1-x_{\rm R}}\!\phi_{\rm R}(1) \;\right] \,,~~~~~ \label{eq:tcsa1} \end{eqnarray} where $\varphi$ is the bulk primary field and $\phi_{\rm L}$ and $\phi_{\rm R}$ are the boundary perturbations (if any) applied to the left and right boundaries of the strip. The scaling dimensions of these operators are $x_{\varphi}$, $x_{\rm L}$ and $x_{\rm R}$ respectively. The subscripts $\alpha$ and $\beta$ of $H_{\alpha\beta}(M,R)$ are included as reminders of the information residing in the left and right boundary conditions and perturbations, while $M$ is related to the bulk coupling $\lambda$ according to the relation (\ref{mlrel}). Since we are dealing here with relevant perturbations of the scaling Lee-Yang model, $x_{\varphi}$ is equal to $-2/5$, and each of $\phi_{\rm L}$ and $\phi_{\rm R}$ is either absent, or is the boundary field $\phi$ with scaling dimension $x_{\phi}=-1/5$. This latter option only arises when perturbing $\Phi$ boundaries -- as mentioned in the last section, the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ boundary does not support any relevant boundary perturbations. The essence of the BTCSA is to diagonalise the Hamiltonian (\ref{eq:tcsa1}) in a finite-dimensional subspace of the full Hilbert space, usually obtained by discarding those states of conformal weight larger than some cutoff. The matrix elements of the fields $\phi_{\rm L}(-1)$, $\phi_{\rm R}(1)$ and $\varphi(\exp(i\theta))$ can be found exactly, but in general the integral over $\theta$ in (\ref{eq:tcsa1}) has been performed numerically. If the Hilbert space of the model on a strip with conformal boundary conditions $\alpha$ and $\beta$ is ${\cal H}_{(\alpha,\beta)}\,$, and the irreducible Virasoro representation of weight $h$ is $V_h$, then using~\cite{Cardy89} we found \begin{equation} {\cal H}_{(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})} = V_0 \;,\;\;\;\; {\cal H}_{(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi)} = V_{-1/5} \;,\;\;\;\; {\cal H}_{(\Phi,\Phi)} = V_0 \oplus V_{-1/5} \;. \label{rowlf} \end{equation} The full set of structure constants and correlation functions necessary to evaluate (\ref{eq:tcsa1}) on these spaces can then be found using \cite{CL91}, and will be given in \cite{Us2}. We used the BTCSA to investigate the model with a pure boundary perturbation ($\lambda {=} 0$), and also with combined bulk / boundary perturbations. In the first case, perturbing by a relevant boundary field while leaving the bulk massless gives a renormalisation group flow which, while leaving the bulk properties of the conformal field theory unchanged, moves from one conformal boundary condition to another. The simplest context where we can study this phenomenon within the BTCSA is the bulk-conformal Lee-Yang model on a strip with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi)$ boundary conditions. Perturbing the $\Phi$ boundary by the $\phi$ field should provoke a flow to another conformal boundary condition with fewer relevant boundary operators. The only candidate is $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$, and that is indeed what we found for {\em positive} values of $h\equiv h_R$. Since the bulk is massless, the only length scale in the problem is provided by the boundary field $h$, and the crossover occurs as a function of the dimensionless quantity $r=h^{5/6}R$. 
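In practice the numerical core of the method is very simple once the conformal data has been assembled. The following Python fragment is a schematic sketch only: it assumes that the matrices of $L_0$ and of the bulk and boundary fields on the truncated basis have been precomputed (the structure constants needed for this will be given in \cite{Us2}), and simply builds and diagonalises the matrix of (\ref{eq:tcsa1}). \begin{verbatim}
import numpy as np

def btcsa_hamiltonian(L0, V_bulk, V_L, V_R, lam, hL, hR, R, c=-22/5):
    # Matrix of the strip Hamiltonian (eq:tcsa1) on the truncated basis:
    #   L0          diagonal matrix of conformal weights
    #   V_bulk      matrix of the integrated bulk field varphi
    #   V_L, V_R    matrices of phi_L(-1) and phi_R(1) (zero if absent)
    # The exponents below use x_varphi = -2/5 and x_phi = -1/5.
    dim = L0.shape[0]
    H = (L0 - (c / 24.0) * np.eye(dim)
         + lam * (R / np.pi) ** (12 / 5) * V_bulk
         + hL * (R / np.pi) ** (6 / 5) * V_L
         + hR * (R / np.pi) ** (6 / 5) * V_R)
    return (np.pi / R) * H

# The model is non-unitary, so H is not symmetric and its eigenvalues
# may come in complex conjugate pairs (as happens below for h < 0):
# levels = np.sort_complex(np.linalg.eigvals(H))
\end{verbatim}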
The best signal is provided by a plot of scaling functions, which we define in terms of the energy levels $E_n(h,R)$ by \begin{equation} E_n(h,R)= \frac{\pi}{R}F_n(h^{5/6}R)~. \label{polenta} \end{equation} In figure 1a we display the gaps $(F_n(r) - F_0(r))$ for the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions with $\log(r^{6/5}){=}\log(h R^{6/5})$ varying from $-4.9$ to $2.7$, using the BTCSA truncated to 98 states. We have used a logarithmic scale to highlight the crossover region, in which there is a smooth flow between the ${\cal H}_{(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi)}$ and ${\cal H}_{(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})}$ spectra. As should be clear from the plot, the levels move from the degeneracy pattern given by $\chi_{-1/5}\,$, the character of the $V_{-1/5}$ representation, to that given by $\chi_0\,$: \begin{equation} \begin{array}{ll} \!q^{1/60}\,\chi_{{-}1/5}(q) = \\[2pt] {}~~ 1 + q + {q^2} + {q^3} + 2\,{q^4} + 2\,{q^5} + 3\,{q^6} + 3\,{q^7} + 4\,q^8 + 5\,q^9 + 6\,q^{10} + 7\,q^{11} + O(q^{12})\;,~~~~ \\[7pt] \!q^{-11/60}\;\chi_{0}(q)\,= \\[2pt] {}~~ 1 ~~~+~~ {q^2} + {q^3} ~+~{q^4} ~+~ {q^5} + 2\,{q^6} + 2\,{q^7} + 3\,q^8 + 3\,q^9 + 4\,q^{10} + 4\,q^{11} + O(q^{12})\;. \end{array} \end{equation} With $h$ {\em negative}\ the energies become complex at large (real) $R$, just as happens in the bulk if $\lambda$ is negated~\cite{YZa}. However, in this case the spectrum remains identifiable. For large negative $h$, it splits into three components. There are two complex conjugate sets, the real parts of which are counted by the character $\chi_0(q)$; and a third set which is real and which is also counted by the character $\chi_{0}(q)$, but which appears to have a different asymptotic behaviour in $h$ and to decouple in the $h\to-\infty$ limit. To illustrate this point, in figure 1b we plot the first 53 scaling functions against $r^{6/5}=hR^{6/5}$, for $-12.2<hR^{6/5}{<}7.4$. The solid lines indicate real eigenvalues, and the dashed lines the real parts of complex eigenvalues. At the moment we are unsure quite what the interpretation of these complex eigenvalues is, or why the different components should be organised into representations of the Virasoro algebra. However, similar spectra have been found before; see for example ref.~\cite{BWeh1}. {\begin{figure}[ht] \[\begin{array}{ll} \epsfxsize=.47\linewidth\epsfbox{NewFig1a.eps} {}& \epsfxsize=.47\linewidth\epsfbox{NewFig1a2.eps} \\ \parbox[t]{.47\linewidth}{\small 1a) The excited state scaling function gaps as a function of $\log( r^{6/5})$ for the $M{=}0$ LY model on a strip with boundary conditions $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$, with $h=(r/R)^{6/5}$. The multiplicities are in parentheses.} {}~&~ \parbox[t]{.47\linewidth}{\small 1b) The first 53 scaling functions $F_n(r)$ plotted against $r^{6/5}$ for the $M{=}0$ LY model on a strip with boundary conditions $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$, with $h=(r/R)^{6/5}$. The multiplicities are in parentheses.} \end{array}\] \end{figure}} We have also used the BTCSA to investigate massless boundary flows for other unitary and non-unitary minimal models, with both integrable and non-integrable boundary perturbations. In unitary cases the spectrum remains real, but in other respects the picture just outlined seems to be rather general.
More work will be needed before the full picture is clear, but one particular point is worth stressing: if the bulk is {\it massless}, then a boundary perturbation cannot affect this, and hence {\it any} boundary perturbation, integrable or non-integrable, must flow to a conformal, and indeed integrable, boundary condition under the renormalisation group. This contrasts with the generic behaviour of a bulk perturbation, where fine-tuning is required if the infrared limit is to be anything other than massive. It also helps to explain why we were able to observe massless boundary flows in the BTCSA, despite the errors caused by the truncation to a finite number of levels. In analogous bulk situations, the TCSA makes a rather bad job of modelling bulk-massless flows, even in situations where the oppositely-perturbed, bulk-massive, flows are captured fairly well -- see for example section 5.2 of~\cite{LMCa}. \medskip As a second application of the BTCSA, we have investigated combined bulk / boundary perturbations in order to unravel the connection between the reflection factors (\ref{lew}), (\ref{eq:rfb}) and particular perturbed boundary conformal field theories. To this end it is only necessary to consider the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ and $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions, but we defer the presentation of these results until later in the paper, after the discussion of the BTBA method. \medskip Finally, we have used the BTCSA to investigate the complete partition function of the scaling Lee-Yang model on a cylinder of length $R$ and circumference $L$, with bulk mass $M$ and general boundary conditions $(\alpha,\beta)$ at the two ends of the cylinder. In terms of the spectrum of the strip Hamiltonian (\ref{eq:tcsa1}), this partition function is \begin{eqnarray} Z_{\alpha\beta}(M,R,L) &=&{\rm Tr}_{{\cal H}_{(\alpha,\beta)}} e^{-LH_{\alpha\beta}(M,R)}\nonumber\\[5pt] &=&\!\!\!\! \sum_{E_n\in\,{\rm spec}(H_{\alpha\beta}(M,R))}\!\!\!\!\!\! \exp( - L E_n ) \;. \label{bucket} \end{eqnarray} This partition function can also be evaluated by propagating states along the length of the cylinder, giving the large-$R$ asymptotic \begin{equation} Z_{\alpha\beta}(M,R,L) \sim_{R \to \infty}\, A_{\alpha\beta}(M,L)\, \exp( - R \,{E^{\rm p}_0(M,L)} ) \label{lrasympt} \end{equation} where $E^{\rm p}_0(M,L)$ is the ground state energy of the model on a circle of circumference $L$. The linear part of $\log A_{\alpha\beta}$ can be extracted by setting \begin{equation} \log(A_{\alpha\beta}(M,L)) =\log(\,g_{\alpha}(M,L)\,g_{\beta}(M,L)\,) + B_{\alpha\beta}\, L \;. \label{eq:aggb2} \end{equation} For the two boundary conditions at our disposal, the two functions $g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)$ and $g_{\Phi(h)}(M,L)$ can be expressed in terms of dimensionless combinations as \begin{equation} g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L) = f_1(\, M\!L\,) \;,\;\;\;\; g_{\Phi(h)}(M,L) = f_2(\, M\!L \,,\, h L^{6/5} \,) \;. \end{equation} The numbers $f_1(0)$ and $f_2(0,0)$ are the ``universal ground state degeneracies'' discussed in~\cite{AL} of the corresponding UV conformal boundary conditions, i.e. $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ and $\Phi(0)$ respectively, with values as given in table \ref{tab:lgg}. The opposite, $L\to\infty$, limit must be taken with more care, as results are different for the two cases of massless and massive bulk.
At $M{=}0$ (critical bulk), for $h>0$ the function $f_2(0, h L^{6/5})$ governs the crossover between conformal boundary conditions $\Phi(0)$ and $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$, so that we expect \cite{AL} \begin{equation} \lim_{L \to \infty\atop h>0} g_{\Phi(h)}(0,L) = \lim_{\kappa \to +\infty} f_2(0,\kappa ) = f_1(0) \;. \label{eq:lim1} \end{equation} However, for a massive bulk, we expect that the limit $L \to \infty$ will lead to a purely massive boundary theory, for which the ground state degeneracy is 1, so that in all cases for the Lee-Yang model, \begin{equation} \lim_{L \to \infty\atop M>0} g_{\alpha}(M,L) = 1 \;. \label{eq:lim2} \end{equation} The quantities that we have just been discussing arise in a limit where the formula (\ref{bucket}) might be expected to break down, at least when truncated to a finite number of levels. Nevertheless, it turned out to be possible to extract both $E^{\rm p}_0(M,L)/M$ and the product $g_{\alpha}(M,L)\,g_{\beta}(M,L)$ from the numerical approximation to the partition function as calculated from the spectrum of the BTCSA Hamiltonian. As a first example, we calculated $\log|E^{\rm p}_0(M,L)/M|$ from the BTCSA approximation to the partition function, and compared it with the result obtained directly by using the TCSA for a circle. The result should be independent of the boundary conditions, and that is indeed what we found. The results are given in figure 2a, in which we compare the direct TCSA (or TBA) evaluation of $\log|E^{\rm p}_0(M,L)/M|$ for the model on a circle (solid line) with the result obtained from the BTCSA evaluation of $Z_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\Phi(0)}$ with a truncation to 29 states (points), and of $Z_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\One}$ with a truncation to 26 states (open circles). As can be seen, the values for $E^{\rm p}_0(M,L)/M$ obtained via the BTCSA are indeed independent of the boundary conditions on the strip, and agree with the expected answers obtained by looking directly in the other channel. {\begin{figure}[htb] \[\begin{array}{c} {\epsfxsize=.57\linewidth\epsfbox[0 50 250 210]{NewFig1b.eps}}\\ \parbox[t]{.57\linewidth}{\small 2a) $\log |E^{\rm p}_0(M,L)/M|$ plotted against $\log(ML)$ for the SLYM on a circle: TCSA for the system on a circle (line) compared with results from the BTCSA with boundary conditions $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(0))$ (points) and $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ (circles).} \end{array}\] \end{figure}} For our second example we estimated the ground state degeneracy functions $\log(\, g_{\alpha}(M,L)\, g_{\beta}(M,L)\, )$ for $(\alpha,\beta)$ taking the two values $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ and $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(0))$. The BTCSA calculations are rather more sensitive to truncation error than in the previous example, and we took results from various truncations up to $98$ states, and then extrapolated in the truncation level. The numerical results at $(M\!L)=0.02$ are compared with the exact results at $L=0$ in table \ref{tab:lgg}. 
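Schematically, and suppressing the extrapolations in truncation level just mentioned, the extraction can be performed in a few lines of Python. The sketch below is ours rather than a description of any standard routine; the constant $B_{\alpha\beta}$ enters as an input, since in practice it has to be fitted separately (a point we return to below). \begin{verbatim}
import numpy as np

def log_Z(E, L):
    # Equation (bucket): E is the array of BTCSA strip energies E_n(M,R)
    return np.log(np.sum(np.exp(-L * E)))

def extract_gg(E1, E2, R1, R2, L, B=0.0):
    # Fit log Z ~ log A - R * E0p at two large widths R1 < R2, as in
    # (lrasympt), then remove the linear piece B*L of (eq:aggb2).
    z1, z2 = log_Z(E1, L), log_Z(E2, L)
    E0p = -(z2 - z1) / (R2 - R1)    # ground state energy on the circle
    log_A = z1 + R1 * E0p
    return log_A - B * L, E0p       # (log(g_a g_b), E^p_0(M,L))
\end{verbatim}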
{ \begin{table}[htb] {\renewcommand{\arraystretch}{1.6} \[ \begin{array}{c|c|c} \hbox{The b.c.s} & \hbox{The exact value of} & \hbox{The BTCSA estimate for} \\[-3mm] (\alpha,\beta) & \log(\, g_{\alpha}(M,0)\, g_{\beta}(M,0)\, ) & \log(\, g_{\alpha}(M,\frac{0.02}M)\, g_{\beta}(M,\frac{0.02}M)\, ) \\[1mm] \hline (\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}) & \fract{1}{2}\log((5{-}\sqrt5)/10)=-0.64296... & -0.643 \pm 0.001 \\ \hline (\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(0)) & \fract{1}{2}\log((5{+}\sqrt5)/10)=-0.161754... & -0.1616 \pm 0.0005 \end{array} \]}% \vspace{-2truemm} \caption{% The BTCSA estimates and exact values of $\log(\, g_{\alpha}(M,0)\, g_{\beta}(M,0)\, )$} \label{tab:lgg} \end{table} } We also performed a more detailed investigation of $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\,)$, and $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$. These should both flow from their UV values in table 1 to IR values of 0; but whereas there is a single flow for the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ boundary conditions, $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ depends on the dimensionless parameter $h M^{-6/5}$. For small $|h M^{-6/5}|$ this should still leave essentially a single crossover region from $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi)$ conformal boundary conditions in the UV to massive boundary conditions in the IR, but we should expect that for large $h M^{-6/5}$ there are two crossover regions -- first a massless boundary flow at $L h^{5/6} \sim 1$ from the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi)$ CBCs in the UV to effective $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ CBCs, and then a crossover from these CBCs to the IR boundary conditions which should agree with the single crossover for the true $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ case. In figure 2b we give plots of these functions against $\log( M\! L )$ for various values of $(h M^{-6/5})$. The two straight lines in this plot are the exact UV values from table 1. The lowermost curve is $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\,)$ from BTCSA at 43 states, showing a smooth flow from the UV value $-0.64...$ to the IR value $0$. The remaining curves are $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ from BTCSA at 36 states. The values of $hM^{-6/5}$, starting with the uppermost line and ending with the lowest line, are $-0.65$, $-0.5$, $-0.4$, $-0.25$, $-0.1$, 0, 0.1, 0.25, 0.5, 1.0, and 3.0 respectively. {\begin{figure}[htb] \[\begin{array}{c} {\epsfxsize=.8\linewidth\epsfbox[0 50 288 238]{flow5.eps}}\\ \parbox[t]{.8\linewidth}{\raggedright \small 2b) $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\,)$ and $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ plotted against $\log( M\! L )$ for various values of $(h M^{-6/5})$; details given in the text. Also shown are the exact UV values from table 1.
} \end{array}\] \end{figure}} There was one problem in finding numerical values for $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$, and this was estimating $B_{\alpha\beta}\, L$. It is only for large values of $M\!L$ that equation (\ref{eq:aggb2}) holds, and for large values of $|h M^{-6/5}|$ this is outside the region of convergence of the BTCSA method. We were only able to find $B_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\Phi(h)}\, L$ directly from equation (\ref{eq:aggb2}) for $|h M^{-6/5}| \lesssim 0.1$; for the remaining values of $h M^{-6/5}$, we estimated $B_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\Phi(h)}\,L$ by extrapolation of the data for $|h M^{-6/5}| \leq 0.0125$. To mark this loss of accuracy, for $ -0.25\leq (h M^{-6/5}) \leq 0.25$ we have plotted $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ for $-6< \log(M\!L) < 5$ with solid lines, but for the remaining values of $(h M^{-6/5})$ we have only given results for $\log(M\!L) <1$, and used dashed lines. We estimate the error in $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ at $M\!L=1$ arising from the extrapolation of $B_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\Phi(h)}\, L$ to be of the order of $0.01\%$ for $(hM^{-6/5})=0.3$, rising to $20\%$ for $(hM^{-6/5})=3.0$. For larger values of $(hM^{-6/5})$ the errors in extrapolation render our data meaningless. For small values of $|h M^{-6/5}|$ (less than about $0.3$), we expect our results to be quite accurate. While the quantitative results are less good for the larger values, we do still believe that there is good qualitative agreement. We now comment on the results in figure 2b. There are essentially three different behaviours shown in this figure. \begin{enumerate} \item{} For $ |h M^{-6/5}|\sim 0$ we found a single crossover to the massive case, with only small deviations due to the boundary field. \item{} For $ (h M^{-6/5}) \gtrsim 1.0$, we found clear signs of two crossover regions as $L$ varied from $0$ to $\infty$: the first (for small $L$) a massless boundary flow from the value with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(0))$ b.c.s to that with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ b.c.s, and then a second region (for larger $L$) with a crossover to zero as the bulk mass scale dominates. We were not able to go to large enough values of $ (h M^{-6/5}) $ to truly separate the two crossover regions, but we think that figure 2b is very suggestive of the behaviour we propose. \item{} For some value of $(h M^{-6/5})$ between $-0.8$ and $-0.6$, there is a critical value at which $\log(\, g_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}}(M,L)\, g_{\Phi(h)}(M,L)\,)$ diverges -- this was signalled by a pole in a rational fit to the data for $B_{\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}\Phi(h)}\,L$ for small $|h M^{-6/5}|$. The predicted position of this pole is in agreement with the critical value $(h_{\rm crit}M^{-6/5}) = {-}0.68529$ discussed, from a different point of view, at the end of section~6 below. \end{enumerate} \resection{Boundary TBA} For the rest of this paper we restrict our attention to cases where the bulk is massive, so that the bulk S-matrix is known. In situations where boundary reflection factors are also known (or conjectured), TBA-like equations for the ground-state energy on an interval have been put forward in refs.~\cite{Zta,LMSSa}. 
The derivation of these equations begins with the expression given in~\cite{GZa} for the boundary state $\ket{B_{\alpha}}$ corresponding to the boundary condition $\alpha$: \begin{equation} \ket{B_{\alpha}}=\exp\left[\int^{\infty}_{-\infty}\! d\theta K_{\alpha}(\theta)A(-\theta)A(\theta)\right]\ket{0}\,, \label{bstate} \end{equation} where $K_{\alpha}(\theta)$ is related to the reflection factor for the $\alpha$ boundary condition by $K_{\alpha}(\theta)= R_{\alpha}(\frac{i\pi}{2}{-}\theta)$, and $A(\theta)$ (denoted $A^{\dagger}(\theta)$ in \cite{LMSSa}) is the Faddeev-Zamolodchikov operator creating a single particle with rapidity $\theta$. (As in~\cite{LMSSa}, we ignore the possible presence of additional zero-momentum particles in this state. At least in some regimes, this will be retrospectively justified via a comparison with BTCSA data.) The next step is to express the partition function $Z_{\alpha\beta}(M,R,L)$ of the model on a cylinder of length $R$ and circumference $L$ as \begin{equation} Z_{\alpha\beta}(M,R,L)\sim {\phantom{|}}_L\!\bra{B_{\alpha}}\exp(-RH_{\rm p}(M,L))\ket{B_{\beta}}% \!\!{\phantom{|}}_L \end{equation} where $H_{\rm p}(M,L)$ is the Hamiltonian for the system living on a circle of circumference $L$, with periodic boundary conditions and bulk mass $M$, and $\ket{B_{\alpha}}\!\!{\phantom{|}}_L$ and $\ket{B_{\beta}}\!\!{\phantom{|}}_L$ are boundary states set up as in (\ref{bstate}), but on the circle rather than the infinite line. Expanding the expressions for these states and then making a saddle-point approximation (bearing in mind the quantisation conditions imposed on the momenta by the periodic boundary conditions) ultimately leads to an expression for $-(\log Z_{\alpha\beta}(M,R,l/M))/l$ which becomes exact in the limit $l\rightarrow\infty$. But in this limit -- the opposite to that considered in the discussion following equation (\ref{lrasympt}) -- the same quantity is given as $E_0^{\alpha\beta}(M,R)$, the sought-after ground-state energy of $H_{\alpha\beta}(M,R)$. The calculational details can be found in ref.~\cite{LMSSa}; the upshot is that $E_0^{\alpha\beta}(M,R)$ is expressed in terms of the solution $\varepsilon(\theta)$ to the following nonlinear integral equation: \begin{equation} \varepsilon(\theta)=2r\cosh\theta-\log\lambda_{\alpha\beta}(\theta)-\phi{*}L(\theta) \label{kermit} \end{equation} where $r=M\! R$, $L(\theta)=\log\bigl(1{+}e^{-\varepsilon(\theta)}\bigr)$, $f{*}g(\theta)=\frac{1}{2\pi}\int^{\infty}_{-\infty}\! d\theta'f(\theta{-}\theta')g(\theta')\,$, and \begin{equation} \lambda_{\alpha\beta}(\theta)=K_{\alpha}(\theta)K_{\beta}(-\theta)~,\qquad \phi(\theta)=-i\frac{\partial}{\partial\theta}\log S(\theta)~, \end{equation} with $S(\theta)$ the bulk S-matrix~(\ref{asm}), and $M$ is the particle mass (\ref{mlrel}). The solution to~(\ref{kermit}) for a given value of $r$ (and of any boundary-related parameters hidden inside the labels $\alpha$ and $\beta$) determines a function $c(r)$: \begin{equation} c(r)=\frac{6}{\pi^2}\int^{\infty}_{-\infty}\! d\theta\, r\cosh\theta L(\theta) \label{piggy} \end{equation} in terms of which $E_0^{\alpha\beta}(M,R)$ (which from here on we abbreviate to $E_0(R)$, when there is no danger of confusion) is \begin{equation} E_0(R)=\Ebndry+\Eblk R-\frac{\pi}{24 R}c(M\! R)~, \label{bath} \end{equation} where $\Ebndry$ is a possible constant contribution to $E_0(R)$ coming from the boundaries, and $\Eblk$ is the bulk energy per unit length. 
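Equations of this type are conveniently solved by iteration on a discretised rapidity grid. The following Python sketch does this directly for $\lambda_{\alpha\beta}(\theta)=K_{(1)}(\theta)K_{b}(-\theta)$, the case studied in the next section; the closed form used for the kernel $\phi$ follows from (\ref{asm}). It is a minimal implementation only, and its accuracy is limited by the singularity of the driving term at $\theta=0$, a point discussed at the end of the next section. \begin{verbatim}
import numpy as np

def block(x, t):
    # the block (x) of equation (asm), at (complex) rapidity t
    return np.sinh(t/2 + 1j*np.pi*x/6) / np.sinh(t/2 - 1j*np.pi*x/6)

def R_1(t):
    # R_(1) = (1/2)(3/2)(2)^{-1}, from (lew)
    return block(0.5, t) * block(1.5, t) / block(2.0, t)

def R_b(t, b):
    # the one-parameter family of reflection factors (eq:rfb)
    return (R_1(t) / block((1 - b)/2, t) * block((1 + b)/2, t)
            * block((5 - b)/2, t) / block((5 + b)/2, t))

def c_btba(r, b, n=2000, cut=20.0, sweeps=500):
    # Damped iteration of (kermit), then evaluation of c(r) via (piggy).
    # The even grid straddles theta = 0, where log(lambda) is singular.
    t = (np.arange(n) - (n - 1)/2) * (2*cut/(n - 1))
    dt = t[1] - t[0]
    lam = R_1(1j*np.pi/2 - t) * R_b(1j*np.pi/2 + t, b)  # K_(1)(t) K_b(-t)
    log_lam = np.log(lam).real      # lambda is real for real theta
    tt = t[:, None] - t[None, :]
    phi = -4*np.sqrt(3)*np.cosh(tt) / (1 + 2*np.cosh(2*tt))
    eps = 2*r*np.cosh(t) - log_lam
    for _ in range(sweeps):
        L = np.logaddexp(0.0, -eps)  # L(theta) = log(1 + exp(-eps))
        eps = 0.5*eps + 0.5*(2*r*np.cosh(t) - log_lam
                             - dt/(2*np.pi) * (phi @ L))
    return 6/np.pi**2 * dt * np.sum(r*np.cosh(t) * np.logaddexp(0.0, -eps))
\end{verbatim}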
We will also be working with a dimensionless ground state scaling function $F(r)$. With the bulk mass now nonzero, the discussion of BTBA results is most convenient if this is used to set the overall length scale, rather than the boundary magnetic field used in the definition (\ref{polenta}) of the last section. Thus we set \begin{equation} F(r)=\frac{R}{\pi}E_0(R) =\frac{r}{\pi M}\Ebndry+\frac{r^2}{\pi M^2}\Eblk-\frac{1}{24}c(r)~. \label{scooter} \end{equation} The equations (\ref{kermit})--(\ref{piggy}) are superficially very similar to the more familiar TBA equations found for periodic boundary conditions, but there are some important new features, as we now show. The identity $\lambda_{\alpha\beta}(\theta{-}i\pi/3) \lambda_{\alpha\beta}(\theta{+}i\pi/3)= \lambda_{\alpha\beta}(\theta)\,$ (which follows from the boundary bootstrap equations for the scaling Lee-Yang model) is enough to establish that the function $Y(\theta)=\exp(\varepsilon(\theta))$ solves the same $Y$-system as found in the standard Lee-Yang TBA, namely~\cite{Zf} \begin{equation} Y(\theta-i\pi/3)\,Y(\theta+i\pi/3)=1+Y(\theta)~. \label{blob} \end{equation} {}From there, the standard periodicity property $Y(\theta{+}5i\pi/3)=Y(\theta)$ follows by simple substitution. However the consequent $r$-dependence of the solutions is different, because the $\lambda_{\alpha\beta}(\theta)$ term in~(\ref{kermit}) gives the function $\varepsilon(\theta)$ a non-trivial behaviour near $\theta=0$, and this persists even in the $r\rightarrow 0$ limit. The limiting shape of $\varepsilon(\theta)$ is a pair of `kink' regions near to $\theta=\pm\log(1/r)$, separated by two plateaux from what might fancifully be called a `breather' region near to $\theta=0$. (This is in contrast to the situation in the usual TBA where there are just the two kink regions at $\theta\approx\pm\log(1/r)$, separated by a single plateau of length $2\log(1/r)$~\cite{Zb}.) As explained in \cite{Zf}, the $r$-dependence of the ultraviolet-limiting solutions creeps in via interactions between the regions where the $\theta$-dependence of $\varepsilon(\theta)$ remains non-trivial, and here these are separated by a distance $\log(1/r)$, {\em half} the value found for periodic boundary conditions. As a result, the regular part of the expansion of $c(r)$ is generically a power series in $r^{6/5}$, rather than the $r^{12/5}$ that is found when the boundary conditions are periodic. The other, `irregular', parts of the function $c(r)$ can be traced to the integral against $r\cosh\theta$ in (\ref{piggy}). The mechanism is as described in \cite{Zb}, though the arguments must be generalised to incorporate the effect of the central breather region. If, in terms of the blocks $(x)$ defined in equation (\ref{asm}), the reflection factors $R_{\alpha/\beta}$ are equal to $\prod_{x\in A_{\alpha/\beta}}\!\usbl{x}\,$, then the final result is that \begin{equation} c(r)=\frac{4\sqrt{3}}{\pi}\Biggl(\sum_{x\in A_{\alpha}\cup A_{\beta}} \!\!\!\sin\frac{x\pi}{3}\Biggr)r-\frac{2\sqrt{3}}{\pi}r^2+ \hbox{`regular terms'}, \label{llm} \end{equation} where `regular terms' refers to the already-discussed power series in $r^{6/5}$. As will be justified via a specific example in the next section, the scaling function $F(r)$ is expected to expand in powers of $r^{6/5}$ alone about $r=0$. This will be reproduced by the BTBA result of (\ref{scooter}) and (\ref{llm}), so long as the irregular terms in (\ref{llm}) cancel against the explicit bulk and boundary energies included in (\ref{scooter}).
This requirement determines the constants $\Eblk$ and $\Ebndry$ exactly: \begin{equation} \Ebndry=\frac{1}{2\surd{3}}% \Biggl(\sum_{x\in A_{\alpha}\cup A_{\beta}}% \!\!\!\sin\frac{x\pi}{3}\Biggr)M \qquad;\qquad \Eblk=-\frac{1}{4\surd 3}M^2\,. \label{beaker} \end{equation} The value of $\Eblk$ is the same as is found when the boundary conditions are periodic~\cite{Zb}. This is as it should be, since $\Eblk$ reflects a bulk property of the model. Finally we remark that the driving term in~(\ref{kermit}), $2r\cosh\theta-\log\lambda_{\alpha\beta}(\theta)\,$, is singular at the poles and zeroes of $\lambda_{\alpha\beta}(\theta)$. Therefore a set of $r$-independent points at which $Y(\theta)$ either vanishes or is infinite can be read directly off~(\ref{kermit}), at least in the strip $-\pi/3\le\im\theta\le\pi/3\,$\footnote{beyond this strip, the $Y$-system relation (\ref{blob}) should be used.}. Points of the first sort, at which $L(\theta)$ is also singular, are also seen in solutions of more usual TBA equations; they were called `type II' in~\cite{DTb}. Those of the second sort are a new feature of the boundary TBA, and will be dubbed `type~III'. \resection{The $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ ground state in the BTBA} We can now discuss the physical import of these results, with particular reference to the strip with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions mentioned in section~3. For this setup the Hamiltonian~(\ref{eq:tcsa1}) has $\phi_{\rm L}=0$, $2{-}x_{\varphi}=12/5$, and $1{-}x_{\rm R}=6/5$. Therefore, the conformal perturbation theory expansion of $F(r)$ does have a regular expansion in $r^{6/5}$, and so matches the behaviour derived in the last section within the BTBA. Next, we should decide which reflection factors describe the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ situation. We found that substituting $\lambda_{\alpha\beta}(\theta)=K_{(1)}(\theta)K_b(-\theta)$ into the basic BTBA equation (\ref{kermit}) enabled us to reproduce BTCSA data for ${-}0.68529<h<0.554411$, the necessary values of $b$ lying in the range $-2\le b\le 2$. The natural conclusion, which will receive further support from its consistency with results to be reported in the remainder of this paper, is to associate boundary conditions with reflection factors as follows: \begin{eqnarray} \hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}~~&\leftrightarrow&R_{(1)}(\theta)\label{waldorf}\\[2pt] \Phi(h)&\leftrightarrow&\,R_b(\theta)\label{astoria} \end{eqnarray} The precise relation between $h$ and $b$ will be discussed shortly, but in terms of $b$ the constant part of $E_0(R)$ at large $R$ now follows from the general result (\ref{beaker}), and is: \begin{equation} \Ebndry=\frac{1}{2}\left(\sqrt3-1+2\sin\frac{b\pi}{6}\right)\!M~. \label{gonzo} \end{equation} Our numerical work with this particular $\lambda_{\alpha\beta}(\theta)$ was complicated by the presence, for $-2<b<2$, of a double type~II singularity in $L(\theta)$ at $\theta{=}0$. (At $b{=}{-}2$ there is no singularity, and at $b{=}2$ it is of order $4$.) Therefore, in the general case $e^{-\varepsilon(\theta)}$ is singular at $\theta{=}0$, and the direct numerical integration of equation (\ref{kermit}) gives very unsatisfactory results: we estimated the accuracy for $c(r)$ at small $r$ to be between $10^{-2}$ and $10^{-3}$.
To alleviate this problem we defined \begin{equation} \hat{\varepsilon}(\theta)= \varepsilon(\theta)- q \log \tanh \fract{3}{4}\theta+ \log\hat\lambda_{\alpha\beta}(\theta)~~,~~ \hat{L}(\theta)= \log\left( \tanh^q\fract{3}{4} \theta + \hat\lambda_{\alpha\beta}(\theta) e^{-\hat{\varepsilon}(\theta)}\right) \label{eq2} \end{equation} where $q$ is the order of the pole at the origin ($0$, $2$ or $4$), \begin{equation} \hat\lambda_{\alpha\beta}(\theta)=\lambda_{\alpha\beta}(\theta) \left(\frac{S(\theta{-}i\pi/3)}{S(\theta{+}i\pi/3)}\right)^{q/2}\,, \label{eq3} \end{equation} and $\alpha=(1)$, $\beta=b$ for this particular case. (The motivation for these redefinitions came from analogous man\oe uvres performed in refs.~\cite{BLZa,DTb} for certain excited-state TBA equations for periodic boundary conditions.) Equations (\ref{kermit}) and (\ref{piggy}) can then be recast in the following form: \begin{equation} \hat{\varepsilon}(\theta)=2r\cosh\theta-\phi{*}\hat{L}(\theta)~~,~~ c(r)= {12\sqrt3\over \pi}qr+ \frac{6}{\pi^2}\int^{\infty}_{-\infty}\! d\theta\, r\cosh\theta\hat{L}(\theta) \label{eq5} \end{equation} These revised equations are nonsingular on the real axis, and so can be solved numerically with higher accuracy. The attempt to go beyond the range $-2<b<2$ exposes a couple of subtleties of the boundary TBA. We start with the situation as the point $b=-2$ is approached. A study of the singularities of $L(\theta)$ in the complex $\theta$-plane revealed a pair of $Y(\theta)=-1$ (`type I' in the language of~\cite{DTb}) singularities on the imaginary axis, at $\theta=\pm\theta_p$ say. They are confined to the segment $ 0 < |\im \theta_p| < (b{+}2)\pi/6 $ by the double type II singularity at the origin, and type III singularities at $\pm i(b{+}2)\pi/6$. As $b$ approaches $-2$ from above, the length of this segment shrinks to zero and the points $\pm\theta_p$ are forced towards the origin. In order to continue round $-2$ to smaller real values of $b$, a deformation of integration contours is therefore required. This is just as occurs during the analytic continuation (in $r$) of ordinary TBA equations, discussed in refs.~\cite{DTa,DTb}. When the deformed contours are returned to the real $\theta$-axis, residue terms are picked up~\cite{DTa}, and the singularities at $\pm\theta_p$ become `active', in that their positions appear explicitly in the analytically-continued equations. These equations are: \begin{eqnarray} &&\varepsilon(\theta)=2r\cosh\theta - \log\lambda_{(1)b}(\theta) + \log { S(\theta-\theta_p) \over S(\theta+\theta_p) } -\phi{*}L(\theta)\,, \label{sam}\\[2pt] &&c(r)=\frac{6}{\pi^2}\int^{\infty}_{-\infty}\! d\theta r\cosh\theta L(\theta) + i {24 r \over \pi} \sinh\theta_p\,. \label{eq7} \end{eqnarray} As in \cite{DTa}, we adopt the convention that $\theta_p$ has a positive imaginary part {\it after}\ the real $\theta$ axis has been crossed, and the corresponding singularity has become active. Its precise value appears as a free parameter in the equations, and must be fixed by imposing \begin{equation} \varepsilon(\theta_p) = i\pi\,. \label{zoot} \end{equation} These equations describe the ground state, for all real $r$, whenever $b$ is in the interval $-4<b<-2$. As before, the redefinitions (\ref{eq2}) can be used to put the equations into a form better suited to numerical work. At $b{=}2$, a different phenomenon occurs: two (inactive) type II singularities, at $\pm i(b{-}2)\pi/6$, hit the real $\theta$ axis. 
As explained in section 3.4 of ref.~\cite{DTb}, when in the standard TBA the real $\theta$ axis is crossed by type II singularities, the continued equations can always be recast into the form that they had before the crossing, although in general some of the active singularities may have to be relocated from their analytically-continued positions. The same is true here, and the mechanism is as follows. After the point $b{=}2$, the singularities at $\pm i(b{-}2)\pi/6$ are active and the initial analytic continuation of the basic BTBA equation (\ref{kermit}) is therefore \begin{equation} \varepsilon(\theta)=2r\cosh\theta-\log\lambda_{(1)b}(\theta) -\log\frac{S(\theta-i(b{-}2)\pi/6)}{S(\theta+i(b{-}2)\pi/6)} -\phi{*}L(\theta)\,. \end{equation} Equation (\ref{sam}) arose in just the same way, though there was a different sign for the extra term, because of the opposite signs of the residues for type I and type II singularities. This time there is no need for an equivalent of equation (\ref{zoot}), since the relevant singularities owe their existence to the factor $\lambda_{(1)b}(\theta)$ and so, as mentioned at the end of section~4, their positions are fixed irrespective of any other details of the solution. Now the identity \begin{equation} K_b(\theta) \frac{S(\theta-i(b{-}2)\pi/6)}{S(\theta+i(b{-}2)\pi/6)} =K_{4-b}(\theta) \end{equation} can be used to rewrite the analytically-continued equation as \begin{equation} \varepsilon(\theta)=2r\cosh\theta-\log\lambda_{(1)4-b}(\theta) -\phi{*}L(\theta)\,. \end{equation} As promised, this has the same form as the basic BTBA equation (\ref{kermit}), which applied before the point $b{=}2$ was passed, the only change being that the parameter $b$ has been replaced by $4{-}b$. It is straightforward to check that the expression (\ref{piggy}) for $c(r)$ behaves in an analogous fashion under the continuation, and so the system of equations has a (somewhat hidden) symmetry about $b{=}2$. Furthermore, as described in a related context in \cite{DTb}, there is no reason for the $b$-dependence to be in any way singular at this point: it is best thought of as a `coordinate singularity', with the apparently discontinuous behaviour residing solely in the BTBA equations, and not in the functions $Y$ and $c$ that they encode. If the original equations had been rewritten using contours shifted away from the real $\theta$ axis for all integrations, then the same functions would have been recovered, but from a system for which the point $b{=}2$ had no special status. Combining the symmetry about $b{=}2$ with the evident symmetry of the equations about $b{=}{-3}$ leads to the conclusion that the ground-state BTBA is periodic in the parameter $b$ with period $10$. This might be a little surprising, since from~(\ref{eq:rfb}) the periodicity in $b$ of the driving term $\lambda_{(1)b}(\theta)$ itself is $12$, but is well confirmed by our numerical results. Consider now the `regular' part of $c$, a function of $b$ and $r$ with an expansion in powers of $r^{6/5}$. By the result just established, each coefficient in this expansion must be a periodic function of $b$ with period $10$. An excellent numerical fit for the first coefficient turns out to be $-5.1041654269(9) \sin((b{+}0.5)\pi/5)$ (remarkably, all higher modes allowed by the periodicity appear to be absent). This coefficient can also be found in terms of $h$, and is equal to $ 24\,{\Gamma(2/5)^{1/2}\,\Gamma(6/5)^{1/2}\,}{\Gamma(4/5)^{-1}\,} (\pi\,M)^{-6/5}\,h =7.448186\,M^{-6/5}\,h\,$.
Combining the two expressions yields the relation between $b$ and $h$: \begin{equation} h(b)={-}0.685289983(9) \sin\Bigl((b{+}0.5)\pi/5\Bigr)M^{6/5}\,. \label{fozzie} \end{equation} With $h(b)$ given by this formula we found an excellent numerical agreement between the BTBA and BTCSA data at all real values of $b$. A particularly striking check came on setting $b{=}{-}0.5\,$: from (\ref{fozzie}), this should correspond to the `pure' $\Phi$ boundary condition, with no boundary field, and should therefore have an expansion in powers of $r^{12/5}$. Thus {\em all}\ odd powers of $r^{6/5}$ (and not just the first) should vanish in the regular expansion of $c(r)$ at this point, and within our numerical accuracy that is indeed what we found: \begin{eqnarray} c(r)|_{b=-0.5}&=& 0.4 + \fract{6(\sqrt3{-}1)(2{-}\sqrt2)}{\pi}\,r-\fract{2\sqrt3}{\pi}\,r^2\nonumber\\ &&~ {}-8.18{\times}10^{-12}\,r^{6/5} + 0.554031116\,r^{12/5} \nonumber\\ &&~ {}+1.14{\times}10^{-9}\,r^{18/5} - 0.0025228 r^{24/5}- 2.88{\times} 10^{-7}\,r^6 + \dots\qquad \label{animal} \end{eqnarray} \resection{The generalisation to excited states} We now turn to the excited states. In principle, the analytic continuation method of~\cite{DTa} can be used to derive the relevant generalisations of the basic BTBA equations. We have yet to complete this in detail, but knowledge of the method allowed us to make an educated guess as to the form that the equations would take, which we then verified by means of a direct comparison with BTCSA data. For the one-particle states on the strip, the equations turned out to have the same form as those found in~\cite{DTa} for {\em two}-particle states on a circle. With $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions they are: \begin{eqnarray} &&\varepsilon(\theta)=2r\cosh\theta-\log\lambda_{(1)b}(\theta) + \log {S(\theta-\theta_0) \over S(\theta-\bar{\theta}_0)} + \log {S(\theta+\theta_0) \over S(\theta+\bar{\theta}_0)} -\phi{*}L(\theta)\,,\qquad\quad \label{eq101}\\ &&c(r)=\frac{6}{\pi^2}\int^{\infty}_{-\infty}\! d\theta r\cosh\theta L(\theta) +i{24 r\over\pi}(\sinh\theta_0 - \sinh\bar{\theta}_0)\,,\\[2pt] &&\varepsilon(\theta_0)=(2n{+}1)\pi i~~,~~ \varepsilon(\bar\theta_0)=-(2n{+}1)\pi i\,,\label{r1} \end{eqnarray} where, as before, $\lambda_{(1)b}(\theta)=K_{(1)}(\theta)K_b(-\theta)$. As in the last section, we desingularised these equations before studying them numerically. For real $r$, $\bar\theta_0$ is equal to $\theta_0^{\,*}$, the complex conjugate of $\theta_0$, and so only one of the conditions (\ref{r1}) needs to be imposed. This is the generic form of the equations for $-2<b<2$. For levels for which the one-particle excited-state equations retain the form of (\ref{eq101})--(\ref{r1}) as $r\rightarrow\infty$, a large-$r$ asymptotic can be extracted much as in~\cite{BLZa,DTa}. The result is \begin{equation} E(R)-E_0(R)= M \cosh \beta_0 \label{bbae} \end{equation} where $\beta_0=\re(\theta_0)$ satisfies \begin{equation} 2r\sinh \beta_0 - i \log R_{(1)} (\beta_0)R_{b} (\beta_0)=2\pi k~, \label{bba} \end{equation} and the value of the integer $k$ is fixed by a combination of the quantisation condition (\ref{r1}) and the sign of $(\im(\theta_0){-}\pi/6)\,$: \begin{equation} k=2n +\fract{1}{2}(1{-}{\rm sign}(\im(\theta_0){-}\pi/6))\,. \end{equation} Equation (\ref{bba}) is just the `boundary Bethe ansatz' (BBA) quantisation condition for the rapidity $\pm\beta_0$ of a single particle bouncing between the two ends of the interval. 
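For numerical purposes the condition (\ref{bba}) is conveniently solved by scanning the total reflection phase along the real axis. The small Python sketch below (reusing \texttt{block}, \texttt{R\_1} and \texttt{R\_b} from the sketch of section~4) simply lists the crossings; the $2\pi$ branch of the phase, and hence the labelling by $k$, is fixed in the text through (\ref{r1}) rather than by the scan itself. \begin{verbatim}
import numpy as np   # block, R_1 and R_b as in the sketch of section 4

def bba_gaps(r, b, kmax=4, n=20001, cut=12.0):
    # Solve (bba): 2 r sinh(beta) + delta(beta) = 2 pi k for beta > 0,
    # with delta the unwrapped phase of R_(1)(beta) R_b(beta), and return
    # the one-particle gaps (E - E_0)/M = cosh(beta_0) of equation (bbae).
    beta = np.linspace(1e-6, cut, n)
    delta = np.unwrap(np.angle(R_1(beta + 0j) * R_b(beta + 0j, b)))
    lhs = 2*r*np.sinh(beta) + delta
    gaps = []
    for k in range(1, kmax + 1):
        s = lhs - 2*np.pi*k
        for i in np.nonzero(s[:-1] * s[1:] < 0)[0]:    # bracketed roots
            b0 = beta[i] - s[i]*(beta[i+1] - beta[i])/(s[i+1] - s[i])
            gaps.append(np.cosh(b0))
    return sorted(gaps)
\end{verbatim}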
Further analytic results can be obtained, but for the rest of this section we will instead discuss some general features that emerge as $b$ is varied. For $b$ in the range $-4<b<-2$, a man\oe uvre similar to that seen in the last section for the ground state in the same region appears to be necessary, resulting in the appearance of a further pair of active singularities in the system, situated on the imaginary axis. Even for real $r$, there are then two independent singularity positions to be fixed, making the equations harder to handle numerically. We therefore leave this regime to one side for the time being, and move on to the range $-2<b<-1$. In this regime, the basic one-particle equations (\ref{eq101})--(\ref{r1}) hold for all real $r$, for all one-particle levels. Figures 3a and 3b compare BTBA results (points) with the first few levels found using the BTCSA (continuous lines), at $b=-1.5$, $h{=}0.402803\,M^{6/5}$. Notice that the plotted points do not cover all of the lines visible on the figure: those missed correspond to states containing more than one particle in the infrared, and are presumably described by BTBA systems with more active singularities than the four (at $\pm\theta_0$ and $\pm\bar\theta_0$) present in (\ref{eq101})--(\ref{r1}). {\begin{figure}[t] \[\begin{array}{ll} \epsfxsize=.45\linewidth \epsfbox[0 50 288 238]{NewFig2a.eps} {}~~&~~ \epsfxsize=.445\linewidth \epsfbox[0 50 288 238]{NewFig2b.eps} \\ \parbox[t]{.45\linewidth}{\small 3a) Energy levels $E(R)/M$ plotted against $r$ for the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions, with $b{=}{-}1.5$, $h{=}0.402803\, M^{6/5}$. BTBA results (points) compared with the BTCSA (lines). } {}~~&~~ \parbox[t]{.45\linewidth}{\small 3b) The same as figure 3a, but showing the scaling functions $F(r)$ to exhibit the ultraviolet behaviour. } \end{array}\] \vskip -10pt \end{figure}} As $b$ passes $-1$, the lowest excited state as seen in plots of BTCSA data breaks away from the other excited levels and dips below the one-particle threshold. Physically the reason for this is clear: at $b{=}{-}1$ an extra pole in $R_b(\theta)$ enters the physical strip, signalling the appearance of a boundary bound state. After this point the infrared behaviour of this level changes from that of a free particle bouncing between the two boundaries to that of a particle trapped near to the $\Phi(h)$ boundary, a state with asymptotic gap $M\!\cos((b{+}1)\pi/6)$. All of this can be seen in the BTCSA data, with the value of $b$ related to $h$ via~(\ref{fozzie}). Since $h(b)$ had previously been obtained by a matching of data in the ultraviolet, this infrared agreement provides a nontrivial check on the consistency of our results. {\begin{figure}[ht] \[\begin{array}{ll} \epsfxsize=.45\linewidth \epsfbox[0 50 288 238]{NewFig3a.eps} {}~~&~~ \epsfxsize=.45\linewidth \epsfbox[0 50 288 238]{NewFig3b.eps} \\ \parbox[t]{.45\linewidth}{\small 4) Energy levels $E(R)/M$ plotted against $r$ for $b{=}{0.8}$, $h{=}{-}0.499555\,M^{6/5}$. Labelling as in figure 3a. } {}~~&~~ \parbox[t]{.45\linewidth}{\raggedright\small 5) Energy levels $E(R)/M$ plotted against $r$ for $b{=}{1.5}$, $h{=}{-}0.65175 M^{6/5}$. Labelling as in figure 3a, with BBA results plotted as ($\,\circ\,$). } \end{array}\] \vskip -10pt \end{figure}} Figure 4 shows the situation at $b{=}0.8$, $h{=}{-}0.499555\,M^{6/5}$, by which stage the dip in the first level has become quite pronounced. 
The points on the graph show BTBA results found using (\ref{kermit}) and (\ref{eq101})--(\ref{r1}), and it will be observed that the lowest set of excited points stops short. (The other sets persist to $r{=}\infty$ and for these the derivation of the BBA asymptotic (\ref{bbae}) is valid.) At $r\approx 4$, the basic excited-state BTBA (\ref{r1}) for the first excited level breaks down, with an initial transition similar to that observed in~\cite{DTb} for the second excited state of the $T_2$ model. This reflects the fact that the particle is becoming bound to the $\Phi(h)$ boundary. The BTBA equations become more complicated in this regime, and we postpone their detailed study to future work. However in some regimes preliminary predictions can be obtained, in the spirit of~\cite{KTWa}, by supposing that solutions to the BBA equation~(\ref{bba}) can be continued to complex values of $\beta_0$. This amounts to approximating the problem by the quantum {\it mechanics}\ of a single particle reflecting off the two walls with amplitudes $R_{(1)}$ and $R_b$. For $-1<b<0$, the first excited level as found from the BTCSA can be modelled with reasonable accuracy by a solution $\beta_0(r)$ to the BBA which starts off real at small $r$, reaches zero at some ($b$-dependent) point $r_c$, and then becomes purely imaginary, tending to $i (b{+}1)\pi/6$ from below as $r$ tends to infinity. However for $b>0$ the only candidate solution is satisfactory at best only for large $r$, which is why such points have been omitted from figure~4. (The relevant $\beta_0(r)$ tends to $i(b{+}1)\pi/6$ from above at large $r$, implying an approach to the asymptotic mass gap from below rather than above, but our large-$r$ BTCSA data is not yet precise enough to check on this prediction.) Continuing to increase $b$, the next development occurs at $b{=}1$, $h{=}{-}0.554411\,M^{6/5}$, where a second excited level starts to drop below threshold, this time tending to a gap of $M\!\cos((b{-}1)\pi/6)$ as $R\rightarrow\infty$. This heralds the arrival of a second boundary bound state pole in $R_b(\theta)$ onto the physical strip. Figure~5 shows the situation for $b{=}1.5$. For this value of $b$, the predicted dip in the second excited level is rather small (about $0.03\,M\,$) and is only seen at rather large values of $R$, where the BTCSA is at the limits of its useful range. A more accurate reflection of the infrared behaviour of this second excited level is probably provided by the analytically-continued BBA solution plotted on the figure as open circles. The BTBA equations (\ref{r1}) now break down for both the first and the second excited levels, at $r\approx 4$ and $r\approx 9$ respectively. Finally, at $b{=}2$, $h{=}h_{\rm crit}{=}{-}0.68529\,M^{6/5}$, the asymptotic mass gap of the lowest excited state hits zero. It is not possible to increase $h$ any further without making $b$ complex; if a larger value is used in the BTCSA then we find energy levels becoming complex at large $r$. This can be seen analytically by combining (\ref{fozzie}) and (\ref{gonzo}) to write $\Ebndry$ as a function of $h\,$: a square-root singularity is revealed at $h{=}h_{\rm crit}$, matching perfectly behaviour that can be seen directly in the BTCSA. Presumably, the boundary perturbation has destabilised the bulk vacuum, a situation that will most probably repay further study. 
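The square-root behaviour is easy to exhibit explicitly: combining (\ref{fozzie}) and (\ref{gonzo}) amounts, in Python, to the following small illustration (the branch of the inverse sine is the principal one, which covers the range of $b$ relevant here). \begin{verbatim}
import numpy as np

def E_bndry_over_M(h):
    # E_bndry/M as a function of h (in units of M^{6/5}), obtained by
    # inverting (fozzie) on its principal branch and inserting into (gonzo)
    b = 5/np.pi * np.arcsin(h / -0.685289983) - 0.5
    return 0.5 * (np.sqrt(3) - 1 + 2*np.sin(b*np.pi/6))

# The arcsin branch point at h = h_crit = -0.68529 M^{6/5} (where b = 2)
# is precisely the square-root singularity seen in the BTCSA data.
\end{verbatim}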
\resection{Other combinations of boundary conditions} Consideration of the strip with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions has sufficed to establish the key relation (\ref{fozzie}), and has also enabled us to check on the basic consistency of our associations (\ref{waldorf}) and (\ref{astoria}) of the reflection factors $R_{(1)}$ and $R_b$ with the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ and $\Phi(h(b))$ boundary conditions. However, it is natural to wonder how the two other possible pairs of boundary conditions can be described, and this is the topic of this section. First, to the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ situation. From the identifications made earlier, one might expect that the ground state would be described by the basic BTBA system (\ref{kermit}), with $\lambda_{\alpha\beta}(\theta) =K_{(1)}(\theta)K_{(1)}(-\theta)$. However, a calculation of the ultraviolet central charge as in \cite{LMSSa} yields the answer $c(0)=2/5$, instead of the $-22/5$ expected on the basis of the conformal result (\ref{rowlf}). Moreover, numerical comparisons of BTBA and BTCSA results show no agreement. To motivate the resolution of this problem, we recall from section~3 that the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ boundary condition can be obtained from the $\Phi(h)$ boundary condition by sending $h$ to ${+}\infty$. That was with the bulk massless, but if we consider an interval of length $R$ with $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ boundary conditions and take the $h/M^{6/5}\rightarrow{+}\infty$ limit while keeping $R$ of the same order as the bulk scale $1/M$, then it should be possible (after suitable renormalisations) to arrive at the correct BTBA. We hope to say more about this elsewhere, but for now we just observe, from (\ref{fozzie}), that at the level of the BTBA the desired continuation will be found on setting $b=-3+i\hat b$, and then varying $\hat b$ from zero to infinity. The starting-point for this continuation lies in the $-4<b<-2$ regime of the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(h))$ ground-state BTBA, and is hence described by the modified system (\ref{sam})--(\ref{zoot}). This observation gave us the hint that the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ ground state might also be described by a BTBA system in which two singularities are already active. We therefore returned to the numerical study of the modified equations (\ref{sam})--(\ref{zoot}), this time with $\lambda_{\alpha\beta}(\theta) = K_{(1)}(\theta)K_{(1)}(-\theta)$. The BTCSA data for the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}})$ ground state was matched perfectly over the full range of $r$, with a behaviour that is very similar to that of the first excited-state TBA for the SLYM on a circle~\cite{BLZa,DTa}. The equations exhibit a single transition between an infrared regime, where two active singularities lie on the imaginary $\theta$ axis, and an ultraviolet regime where these are split and join the kink systems at $\theta\approx\pm\log(1/M\! R)$. As a result, each of these kink systems has a type~II singularity sitting on the real $\theta$ axis, a fact which modifies the predicted value of the ultraviolet effective central charge $c(0)$. The calculation is just as described in refs.~\cite{BLZa,DTa}, and the desired result $c(0)=-22/5$ is indeed recovered.
For $(\Phi(h),\Phi(h'))$, calculations for the BTCSA are more involved, since two irreducible Virasoro representations appear in the decomposition (\ref{rowlf}) of ${\cal H}_{(\Phi,\Phi)}$. Work on this is still in progress, but in the meantime we have considered the one case where a simple prediction can be made against which to test conjectures for the BTBA, namely $h=h'=0$. With the boundary fields set to zero, the regular part of $c(r)$ should exceptionally expand in powers of $r^{12/5}$, rather than $r^{6/5}$, just as seen for the $(\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}},\Phi(0))$ boundary condition in (\ref{animal}). For this case the numerical results were unambiguously in favour of a BTBA of the modified type (\ref{sam})--(\ref{zoot}), with $\lambda_{\alpha\beta}(\theta)=K_{-0.5}(\theta)K_{-0.5}(-\theta)$. The fit of $c(r)$ to the two irregular terms plus a series in $r^{6/5}$ for this proposal yields \begin{eqnarray} c(r)|_{b=b'=-0.5}&=& 0.4 - \fract{12(\sqrt3{-}1)(\sqrt2{-}1)}{\pi}\,r-\fract{2\sqrt3}{\pi}\,r^2\nonumber\\ &&~ {} - 3.89{\times}{{10}^{-12}}\,{r^{{\frac{6}{5}}}} + 1.79288\,{r^{{\frac{12}{5}}}} \nonumber\\ &&~ {} - 4.48{\times}{{10}^{-9}}\,{r^{{\frac{18}{5}}}} - 0.414179\,{r^{{\frac{24}{5}}}} - 9.54{\times}{{10}^{-7}}\,{r^6} + \dots\qquad \end{eqnarray} As in the earlier fit (\ref{animal}), the numerical data is consistent with the coefficients of all odd powers of $r^{6/5}$ being zero, and this supports the idea that the $(\Phi(0),\Phi(0))$ ground state is indeed described by the modified BTBA system (\ref{sam})--(\ref{zoot}). However this conclusion must remain preliminary while a comparison with BTCSA data is lacking, and more work will be needed before we can make any definitive statements about the situation for other values of $h$ and $h'$. \resection{Conclusions} We hope to have shown that the combination of boundary truncated conformal space and boundary thermodynamic Bethe ansatz techniques allows a detailed analysis to be made of integrable boundary models. Even for a theory as simple as the scaling Lee-Yang model, a rich structure has been revealed. Our results support the contention that the integrable boundary conditions of the model are $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$, with reflection factor $R_{(1)}(\theta)$, and the one-parameter family $\Phi(h)$, with reflection factors $R_b(\theta)$. It is worth pointing out that these latter are a particular subset of the reflection factors given by Ghoshal in~\cite{Ga} for the lowest breather in the sine-Gordon model, considered at the Lee-Yang point. It is natural to suppose that a boundary variant of quantum group reduction is at work here, and it would be interesting to explore this aspect further. Perhaps even more natural would be to make a link with a reduction of the boundary Bullough-Dodd model, since in that case the classically integrable boundary conditions already lie in a couple of one-parameter families~\cite{BCDRa}. {}From BTCSA and BTBA results, and also from a simple consideration of their ultraviolet limits, it is clear that the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ boundary condition is not the same as the $\Phi(h)$ boundary condition for any finite value of $h$. Nevertheless, we note that $R_{(1)}=R_{b=0}$. Thus the infrared data provided by an S-matrix and reflection factor alone is not enough to characterise a boundary condition completely.
It is still possible that there is a special relationship between the $\hbox{{\rm 1{\hbox to 1.5pt{\hss\rm1}}}}$ and $\Phi(h(b{=}0))$ boundary conditions, but further numerical work will be required before we can make any definite claims one way or the other. Finally we would like to reiterate that the boundary scaling Lee-Yang model was studied in this paper just as a first example. The methods that we have used should be applicable in many other situations, and we intend to report on such matters in due course. \bigskip \bigskip \noindent{\bf Acknowledgements --- } PED thanks Ed Corrigan, and GMTW thanks M.~Blencowe, M.~Ortiz and I.~Runkel for conversations, and A.~Honecker for pointing out reference \cite{BWeh1}. We would also like to thank Jean-Bernard Zuber for helpful comments. The work was supported in part by a TMR grant of the European Commission, contract reference ERBFMRXCT960012, in part by a NATO grant, number CRG950751, and in part by an EPSRC grant GR/K30667. PED and GMTW thank the EPSRC for Advanced Fellowships, AJP thanks the EPSRC for a Research Studentship, and RT thanks the Mathematics Department of Durham University for a postdoctoral fellowship and SPhT Saclay for hospitality. \bigskip \noindent{\bf Note --- } for some recent work exploring related issues from a slightly different angle, see ref.~\cite{LSSa}. \bigskip \renewcommand\baselinestretch{0.95}
\section{Introduction} \label{sec:intro} Image inpainting is the task of completing missing regions in an image using information from the valid regions. Image inpainting techniques let us remove unwanted objects, text, or scratches. Following the rapid development of deep-learning-based imaging algorithms, most image inpainting models are trained on extensive collections of training images. Some methods have been proposed to handle arbitrary mask shapes~\cite{GatedConv, PartialConv} or to capture semantic structures using novel architectures or loss functions~\cite{EC, CA}. These models can learn good natural image priors from huge datasets and are good at processing textures and structures that frequently appear in the training dataset. \begin{figure}[t] \captionsetup[subfigure]{labelformat=empty} \centering \begin{subfigure}[c]{0.24\linewidth} \centering \includegraphics[width=\linewidth]{Images/Introduction/Introduction-Masked.jpg} \caption{\footnotesize{Input}} \end{subfigure} \begin{subfigure}[c]{.24\linewidth} \centering \includegraphics[width=\linewidth]{Images/Introduction/Introduction-GC.jpg} \caption{\footnotesize{GatedConv \cite{GatedConv}}} \end{subfigure} \begin{subfigure}[c]{.24\linewidth} \centering \includegraphics[width=\linewidth]{Images/Introduction/Introduction-EC.jpg} \caption{\footnotesize{EdgeConnect \cite{EC}}} \end{subfigure} \begin{subfigure}[c]{.24\linewidth} \centering \includegraphics[width=\linewidth]{Images/Introduction/Ours.jpg} \caption{\footnotesize{\textbf{AdaFill(Ours)}}} \end{subfigure} \caption{GatedConv~\cite{GatedConv} and EdgeConnect~\cite{EC} show splashing or diffusing artifacts and cannot grasp internal similarity. In contrast, our method recovers the image with fewer artifacts. } \label{fig:introduction} \end{figure} \begin{figure*}[!ht] \centering \begin{subfigure}[c]{1.0\textwidth} \centering \includegraphics[width=1.0\textwidth]{Images/Method/Scheme_and_Structure.pdf} \end{subfigure} \caption{\textbf{Left}: Overall flow. In the training phase, we begin from the pre-trained inpainting network $G$ (or random initialization for \textit{ZeroFill}). Next, for the test-time adaptation, we degrade a test image $x_d$ with random child masks $M_{c_{i=0, 1, 2, ...}}$ and feed them into the network. The output of the network must match the test image $x_d$ in the valid regions. After the test-time training, the test image $x_d$ is passed through the network along with its parent mask $M_p$ to obtain the final inpainted image. \textbf{Right}: Structure of the inpainting network $G$.} \label{fig:overall scheme} \end{figure*} However, these models have difficulty in recovering images whose patterns are totally different from those of the training images. This domain gap causes severe color artifacts, and these models cannot exploit internally similar patches that appear in the test images. As shown in Fig.~\ref{fig:introduction}, the recovered color patterns are not consistent with the surrounding regions, which causes artifacts. To cope with this problem, inspired by recent internal learning algorithms~\cite{DIP, ZeroShotSR}, we propose a test-time adaptation algorithm for image inpainting named \textit{AdaFill}. First, we modify the existing inpainting networks to fit a single test image, and pre-train the network on a large-scale dataset to acquire external image priors. Next, we train the model on the test image only, so that the network can focus on the internal pixel distribution by exploiting the valid regions explicitly.
With this simple scheme, our model can handle color artifacts caused by the domain gap and exploit the internal similarity of a test image. We also propose a non-pre-trained version of \textit{AdaFill}, called \textit{ZeroFill}, that shows performance comparable to the pre-trained models. To the best of our knowledge, this is the first work that tackles the distributional shift problem in image inpainting. Compared with other restoration tasks, the restoration performance in image inpainting is more dependent on the training dataset. Therefore, generalization to out-of-distribution images is an important issue for practical usage. \section{Related Works} \label{sec:preliminary section} Natural images have high internal similarity: similar structures or textures recur across various scales within an image. Several studies have previously verified that such internal similarity can be utilized for the single image super-resolution task~\cite{ZeroShotSR, zontak2011internal, glasner2009super}. They show that the internal statistical prior from a single image is powerful and often better than the generalized statistics from large-scale training. Our work is closely related to ZSSR~\cite{ZeroShotSR}, which performs image super-resolution from a single image via internal learning. It artificially generates training samples from the low-resolution test image by re-downsampling it. Compared with ZSSR, which exploits the whole degraded image, our method utilizes the valid regions as strong training cues. Similarly, DIP~\cite{DIP} proposes a method that implicitly learns the prior of a single image, and shows that this internal prior can be used to recover images with various types of degradations, including image inpainting. However, such an implicit internal prior has difficulty in recovering extreme degradations such as large holes or holes with extreme non-local patterns. In contrast, our method explicitly learns the internal prior using artificial training samples as well as the external prior using large-scale pre-training. \begin{table*}[t!]
\centering \resizebox{\textwidth}{!}{% \begin{tabular}{c|ccc|ccc|ccc|ccc} Model & \multicolumn{3}{c|}{Gated Conv~\cite{GatedConv}} & \multicolumn{3}{c|}{Edge Connect~\cite{EC}} & \multicolumn{3}{c|}{DIP~\cite{DIP}} & \multicolumn{3}{c}{AdaFill(Ours)} \\ \hline Dataset & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS & PSNR & SSIM & LPIPS \\ \hline\hline T91~\cite{OtherDataset} & 27.15 & 0.889 & 0.0755 & \textbf{27.85} & \textbf{0.901} & \textbf{0.0692} & 25.46 & 0.851 & 0.1047 & 27.26 & 0.870 & 0.0843 \\ Urban100~\cite{Urban100} & 23.14 & 0.854 & 0.0722 & 24.06 & \textbf{0.866} & 0.0721 & 22.23 & 0.805 & 0.1082 & \textbf{24.52} & 0.845 & \textbf{0.0699} \\ Google map~\cite{GoogleMap} & 24.65 & 0.846 & 0.0939 & 26.25 & 0.848 & \textbf{0.0888} & 24.47 & 0.836 & 0.1164 & \textbf{26.73} & \textbf{0.858} & 0.0910 \\ Facade~\cite{Facade} & 26.25 & 0.900 & 0.0570 & 26.02 & 0.886 & 0.0688 & 25.79 & 0.890 & 0.0691 & \textbf{28.55} & \textbf{0.921} & \textbf{0.0451} \\ BCCD~\cite{BCCD} & 34.25 & 0.956 & \textbf{0.0595} & \textbf{34.43} & 0.954 & 0.0662 & 30.47 & 0.948 & 0.0890 & 34.26 & \textbf{0.962} & 0.0631 \\ KLH~\cite{KLH} & 33.25 & \textbf{0.823} & \textbf{0.1162} & 20.91 & 0.791 & 0.3419 & 30.33 & 0.751 & 0.1579 & \textbf{33.48} & 0.781 & 0.1919 \\ Document~\cite{Document} & 19.67 & 0.910 & 0.0762 & 18.84 & 0.876 & 0.1316 & 17.49 & 0.865 & 0.1122 & \textbf{20.72} & \textbf{0.919} & \textbf{0.0585} \end{tabular}% } \vspace{-0.1cm} \caption{Quantitative comparison with GatedConv~\cite{GatedConv}, EdgeConnect~\cite{EC}, and DIP~\cite{DIP}} \vspace{-0.4cm} \label{tab:model comparison} \end{table*} \section{Method} \label{sec:method} Our overall framework and network structure are described in Fig.~\ref{fig:overall scheme}. In the training phase, we begin from the pre-trained inpainting network $G$ and fine-tune it on a single degraded image $x_d$. For \textit{ZeroFill}, we skip the pre-training process. We assume that the image $x_d$ is obtained from the clean image $x$ by a distortion function $d(\cdot)$ with a parent mask $M_p$, i.e., $x_d=d(x)$. The parent mask $M_p$ marks the invalid pixels of $x_d$ with value 1 and the valid pixels of $x_d$ with value 0. Therefore, we can write the distorted image as $ x_d = x \odot (1 - M_p) + M_p $, where $\odot$ denotes element-wise multiplication. To enable our network to learn the internal similarity and exploit it for inpainting, we define a similar distortion function $d'(\cdot)$ with a child mask $M_c$. As illustrated in Fig.~\ref{fig:overall scheme}, we degrade the given image using the child mask: $d'(x_d) = x_d \odot (1 - M_c) + M_c$. The child masks are randomly generated during training. We denote this double-distorted image by $x_{dd'}$. When the shape of the parent mask $M_p$ differs from the irregular or box masks, we also use the parent mask itself as a child mask, with random rotation and scaling, at a certain rate. Our goal is to make the network learn the mapping from the double-distorted image $x_{dd'}$ to the given single-distorted image $x_d$. \begin{equation} \bar{x} = G([x_{dd'}, M_c]) \label{equation:eq2} \end{equation} \noindent where $\bar{x}$ is a preliminary prediction, $G$ is our inpainting network, and $[\cdot, \cdot]$ is the concatenation operation. The predicted image $\bar{x}$ must match the given image $x_d$ at the valid pixels.
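To make the adaptation procedure concrete, the following is a minimal PyTorch-style sketch of one test-time training step under the scheme above. The network \texttt{G}, the mask sampler \texttt{sample\_child\_mask} and all tensor names are hypothetical stand-ins rather than released code, and the masked $L_1$ loss anticipates the loss defined next.
\begin{verbatim}
import torch

def adaptation_step(G, optimizer, x_d, M_p, sample_child_mask):
    # x_d : (1, 3, H, W) degraded test image in [0, 1]
    # M_p : (1, 1, H, W) parent mask, 1 = invalid (hole), 0 = valid
    M_c = sample_child_mask(M_p)          # random child mask M_c
    x_dd = x_d * (1 - M_c) + M_c          # double-distorted image x_dd'
    x_bar = G(torch.cat([x_dd, M_c], 1))  # preliminary prediction x_bar
    x_bar_d = x_bar * (1 - M_p) + M_p     # re-apply the parent distortion
    loss = (x_bar_d - x_d).abs().mean()   # L1; hole pixels cancel, so
    optimizer.zero_grad()                 # only valid pixels contribute
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}
After the adaptation iterations, a single forward pass with the parent mask, $G([x_d, M_p])$, produces the final inpainted image, as described below.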
Since we do not know the ground truth of the invalid pixels in $x_d$, we degrade $\bar{x}$ with the same distortion function, $\bar{x}_d = d(\bar{x})$. After this degradation, we use the following $L_1$ loss to train the inpainting network $G$. \begin{equation} \mathcal{L}_{L1} = \|\bar{x}_d - x_d\|_1 \label{equation:eq4} \end{equation} Through this training step, the network $G$ can learn the restoration patterns in the degraded image using the valid regions while ignoring the parent distortions. At the inference phase, we perform a single forward pass with the test image that was used for training, $\hat{x} = G([x_d, M_p])$, where $\hat{x}$ is our final result image. The structure of our inpainting network $G$ is described on the right side of Fig.~\ref{fig:overall scheme}. \noindent \vspace{-0.5cm} \section{Experiments} \label{sec:experiment} \noindent \textbf{Settings.} We pre-train the network on the Places365~\cite{Places365} dataset for one epoch with the settings in~\cite{EC}. For the test-time adaptation, we use the following hyper-parameters: batch size 8, learning rate 0.0001, the Adam~\cite{Adam} optimizer with $\beta_1 = 0.5, \beta_2 = 0.9$, and 1,000 training iterations. For \textit{ZeroFill}, we use 5,000 iterations without pre-training. We evaluate our model with LPIPS~\cite{LPIPS}, SSIM, and PSNR. We compare our model with models that learn only the external prior (the pre-trained GatedConv~\cite{GatedConv} and EdgeConnect~\cite{EC}) and with a model that learns only the internal prior (DIP~\cite{DIP}). \noindent \textbf{Dataset.} We use various datasets for evaluating our model: T91~\cite{OtherDataset}, Urban100~\cite{Urban100}, Google Map~\cite{GoogleMap}, Facade~\cite{Facade}, BCCD~\cite{BCCD}, KLH~\cite{KLH}, BSD200~\cite{BSD}, and Document~\cite{Document}. These are out-of-distribution with respect to the Places365~\cite{Places365} dataset, whose distribution is focused on images of various places; they instead contain small objects, natural scenes, artificial structures, medical images, satellite images, or text images. We subsample and pre-process each dataset and use two types of holes: box masks and irregular masks. For a detailed description, please refer to the supplementary materials. \begin{figure}[t] \begin{minipage}[b]{1.0\columnwidth} \centering \centerline{\includegraphics[width=1.0\textwidth]{Images/Results/Comparison-Internal_Similarity.jpg}} \end{minipage} \caption{Values above the images represent the internal similarity scores, and values below the images indicate the PSNR / SSIM / LPIPS~\cite{LPIPS}. The results show that higher internal similarity scores lead to better results with our model.} \label{fig:internal similarity} \end{figure} \subsection{Experimental Results} \label{ssec:quantitative result} Quantitative results are reported in Table~\ref{tab:model comparison}. Our model outperforms DIP on all datasets and metrics. In almost all cases, our model is also superior to the pre-trained models, even though it is a one-stage network. This reveals that exploiting the internal statistics of a test image is critical for image inpainting. When a dataset has strong internal similarity, such as Urban100, Google Map, and Facade, our model consistently performs better. In addition, when the distribution of a dataset is far from that of the training dataset, as for KLH and Document, the pre-trained models cannot recover well. Qualitative results are compared in Fig.~\ref{fig:compare}, where similar trends are observed.
In the case of large internal similarity within an image, our model perfectly recovers the hole regions, while other models show severe artifacts. \begin{figure*}[hbt!] \centering \rotatebox{90}{\hspace{-5.4cm}\footnotesize \textbf{AdaFill(Ours)} \quad\quad\quad\;\; DIP~\cite{DIP} \quad\quad\quad EdgeConnect~\cite{EC} \quad\quad GatedConv~\cite{GatedConv} \quad\quad\quad\quad\; Input} \begin{subfigure}[c]{0.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-GoogleMap.jpg} \caption*{\footnotesize{Google Map~\cite{GoogleMap}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-Facade.jpg} \caption*{\footnotesize{Facade~\cite{Facade}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-T91.jpg} \caption*{\footnotesize{T91~\cite{OtherDataset}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-Urban100.jpg} \caption*{\footnotesize{Urban100~\cite{Urban100}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-Document.jpg} \caption*{\footnotesize{Document~\cite{Document}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-BSD200.jpg} \caption*{\footnotesize{BSD200~\cite{BSD}}} \end{subfigure} \hspace{-0.15cm} \begin{subfigure}[c]{.135\textwidth} \centering \includegraphics[width=\textwidth]{Images/Results/Comparison-KLH.jpg} \caption*{\footnotesize{KLH~\cite{KLH}}} \end{subfigure} \caption{Qualitative comparison with the pre-trained GatedConv~\cite{GatedConv} and EdgeConnect~\cite{EC}, and with DIP~\cite{DIP}. The pre-trained models show color artifacts and a lower ability to capture the internal similarity within a single image.} \label{fig:compare} \end{figure*} \subsection{Internal Similarity} \label{sec:internal similiarty} Fig.~\ref{fig:internal similarity} shows that the higher the internal similarity score, the better the restoration performance achieved by our method. Our method perfectly recovers the hole regions when the internal similarity is high (the first column of Fig.~\ref{fig:internal similarity}). To compute the internal similarity, we first use a pre-trained VGG19~\cite{VGG} and extract features from the \texttt{relu 5-1} layer. Next, we calculate the pixel-wise similarities using the cosine similarity to obtain a similarity map of size $HW \times HW$, where $H$ and $W$ are the height and width of the feature map, respectively. Finally, we average the similarity map to obtain the final internal similarity score.
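The score can be computed in a few lines. Below is a minimal sketch (illustrative only): the feature slice \texttt{features[:30]}, which we take to end at \texttt{relu5\_1} in torchvision's VGG19 layer indexing, is an assumption, as is the function name, and the input is expected to be ImageNet-normalized.
\begin{verbatim}
import torch
import torch.nn.functional as F
import torchvision.models as models

@torch.no_grad()
def internal_similarity(x):
    # x: (1, 3, H, W) ImageNet-normalized image tensor
    vgg = models.vgg19(pretrained=True).features[:30].eval()
    f = vgg(x)                       # (1, C, h, w) relu5_1 features
    f = f.flatten(2).squeeze(0).t()  # (h*w, C): one vector per pixel
    f = F.normalize(f, dim=1)        # unit norm -> cosine similarity
    sim = f @ f.t()                  # (h*w, h*w) similarity map
    return sim.mean().item()         # averaged final score
\end{verbatim}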
\begin{table}[t!] \centering \resizebox{\linewidth}{!}{% \begin{tabular}{c|cc|cc|ccc} & PT & TTA & One St. & BN, NN & PSNR & SSIM & LPIPS \\ \hline\hline EC & $\surd$ & & & & 25.63 & 0.847 & 0.1283 \\ EC-TTA & $\surd$ & $\surd$ & & & 28.52 & \textbf{0.884} & 0.1095 \\ EC-TTA & $\surd$ & $\surd$ & $\surd$ & & 28.57 & 0.883 & 0.0882 \\ AdaFill & $\surd$ & $\surd$ & $\surd$ & $\surd$ & \textbf{28.57} & 0.882 & \textbf{0.0837} \\ ZeroFill & & $\surd$ & $\surd$ & $\surd$ & 27.47 & 0.878 & 0.1108 \end{tabular}% } \caption{Ablation study. PT: pre-training, TTA: test-time adaptation, One St.: one-stage network, BN: batch normalization, NN: nearest-neighbor upsampling with convolution.} \label{tab:ablation study} \end{table} \subsection{Ablation Study} \label{sec:ablation study} We conducted ablation studies to find the optimal structure for test-time adaptation with a single image; the results are compared in Table~\ref{tab:ablation study}. For the ablation experiments, we use the first 10 images from each dataset in Table~\ref{tab:model comparison}. We modified two aspects of the EdgeConnect~\cite{EC} baseline structure. The first is using only the second stage of EdgeConnect to reduce the number of parameters. The second is replacing instance normalization~\cite{InstanceNorm} $+$ transposed convolution with batch normalization~\cite{BatchNorm} $+$ nearest-neighbor upsampling with convolution. These modifications increase the perceptual restoration quality and substantially reduce color and other visual artifacts. The results also show that our non-pre-trained model, \textit{ZeroFill}, even performs slightly better than the pre-trained baseline. \section{Conclusion} \label{sec:conclution} We propose a simple test-time adaptation scheme for image inpainting called \textit{AdaFill}, together with \textit{ZeroFill} as an unsupervised variant. The results show that previous pre-trained models cannot generalize well to out-of-distribution images. In contrast, our methods can overcome this domain gap and fully exploit the internal similarity of test images. As future work, meta-learning~\cite{MZSR} could be adopted to reduce the test time for practical usage. \vspace*{\fill} \noindent\footnotesize\textbf{Acknowledgement. }This research was supported by R\&D program for Advanced Integrated-intelligence for Identification (AIID) through the National Research Foundation of KOREA(NRF) funded by Ministry of Science and ICT (NRF-2018M3E3A1057289). \normalsize \vfill \pagebreak \bibliographystyle{IEEEbib}
\title{Probing the Masses and Radii of Donor Stars in Eclipsing X-ray Binaries with the Swift Burst Alert Telescope} \author{Joel~B.~Coley\altaffilmark{1,2}, Robin~H.~D.~Corbet\altaffilmark{1,2}, Hans~A.~Krimm\altaffilmark{3,4}} \email{jcoley1@umbc.edu} \altaffiltext{1}{University of Maryland, Baltimore County, MD, USA} \altaffiltext{2}{CRESST/Mail Code 662, X-ray Astrophysics Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA} \altaffiltext{3}{Universities Space Research Association, 7178 Columbia Gateway Drive, Columbia, MD 21046, USA} \altaffiltext{4}{CRESST/Mail Code 661, X-ray Astroparticle Physics Laboratory, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA} \begin{document} \begin{abstract} Physical parameters of both the mass donor and compact object can be constrained in X-ray binaries with well-defined eclipses, as our survey of the wind-fed supergiant X-ray binaries (SGXBs) IGR J16393-4643, IGR J16418-4532, IGR J16479-4514, IGR J18027-2016 and XTE J1855-026 reveals. Using the orbital period and Kepler's third law, we express the eclipse half-angle in terms of the radius, inclination angle and the sum of the masses. Pulse-timing and radial velocity curves can give the masses of both the donor and compact object, as in the case of the ``double-lined" { binaries IGR J18027-2016 and XTE J1855-026}. The eclipse half-angles are { 15$^{+3}_{-2}$}, { 31.7$^{+0.7}_{-0.8}$}, { 32$\pm$2}, { 34$\pm$2} and { 33.6$\pm$0.7} degrees for IGR J16393-4643, IGR J16418-4532, IGR J16479-4514, IGR J18027-2016 and XTE J1855-026, respectively. In wind-fed systems, the requirement that the primary not exceed its Roche-lobe size provides an upper limit on the system parameters. In IGR J16393-4643, spectral types of B0 V or B0-5 III are found to be consistent with the eclipse duration and Roche-lobe, but the previously proposed donor stars in IGR J16418-4532 and IGR J16479-4514 were found to be inconsistent with the Roche-lobe size.
Stars with spectral types O7.5 I and earlier are possible. For IGR J18027-2016, the mass and radius of the donor star lie between { 18.6--19.4}\,$M_\sun$ and { 17.4--19.5}\,$R_\sun$. We constrain the neutron star mass between 1.37--{ 1.43}\,$M_\sun$. { We find the mass and radius of the donor star in XTE J1855-026 to lie between 19.6--20.2\,$M_\sun$ and 21.5--23.0\,$R_\sun$. The neutron star mass was constrained to 1.77--1.82\,$M_\sun$.} Eclipse profiles are asymmetric in IGR J18027-2016 and XTE J1855-026, which we attribute to { accretion wakes}. \end{abstract} \section{Introduction} High-Mass X-ray Binaries (HMXBs) are relatively young systems, which consist of a compact object (neutron star or black hole) and an early-type OB star orbiting the common center of mass. First discovered in the 1970s, HMXBs are split into two classes: Be X-ray binaries (BeXBs) and supergiant X-ray binaries (SGXBs). BeXBs are transient systems where the compact object is in a wide, typically eccentric, orbit (P$\gtrsim$10\,days) around a rapidly rotating non-supergiant B-type star. The primary mode of mass transfer occurs when the compact object passes through the circumstellar decretion disc of the mass donor. In SGXBs, the compact object is in a short ($\sim$1--42\,day) orbit around an OB-supergiant where the accretion mechanism is either via the powerful stellar wind and/or Roche-lobe overflow. While the eccentricity in most SGXBs with short orbital periods is near zero, some SGXBs host compact objects in highly eccentric orbits \citep[e.g. GX 301-2,][]{2014MNRAS.441.2539I}. In wind-accretors, the X-ray luminosity is on the order of 10$^{35}$--10$^{36}$\,erg s$^{-1}$. However, a much higher luminosity is found in systems where the donor fills its Roche-lobe, $\sim$10$^{38}$\,erg s$^{-1}$ \citep{2004RMxAC..21..128K}. Many HMXBs host a neutron star, and modulation is often seen at its rotation period. The mass of the neutron star can be constrained in eclipsing X-ray pulsars, which can lead to an improved understanding of the neutron star Equation of State \citep[][and references therein]{2011A&A...532A.124M}. Currently, the neutron star mass has been constrained in 10 XRBs where the lower limit for each object corresponds to edge-on orbital inclinations and the upper limit is calculated when the donor star just fills its Roche-lobe. While over 100 Equations of State have been proposed \citep{2006Msngr.126...27K}, only one model is physically correct \citep{2010A&A...509A..79M,2011A&A...532A.124M}. The mass ratio between the neutron star and donor star is equal to the ratio between the semi-amplitude of the radial velocities of the { donor star, $K_{\rm O}$, and the neutron star, $K_{\rm X}$}. Throughout the paper, we define the mass ratio, $q$, as the ratio of the compact object mass to that of the donor star mass { \citep{1984ARA&A..22..537J}}. The orbital period of the binary, $P$, and $K_{\rm X}$ can be calculated using pulse-timing analysis \citep[][and references therein]{2005A&A...441..685V}, { which is analogous to measuring the Doppler shift of spectral lines in the optical and/or near-infrared \citep{1984ARA&A..22..537J}.} The projected semi-major axis can be determined from the semi-amplitude of the radial velocity of the neutron star. The semi-amplitude of the radial velocity of the donor star can be determined using optical { and/or near-infrared} spectroscopic information.
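To illustrate how these pulse-timing quantities propagate into orbital constraints, the following is a minimal sketch (illustrative only; it assumes a circular orbit, and the input values at the end are placeholders rather than measurements from this work) that converts $K_{\rm X}$ and $P$ into the projected semi-major axis $a_{\rm X}\sin i$ and the donor mass function:
\begin{verbatim}
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
C_LIGHT = 2.998e8    # speed of light [m/s]

def orbit_from_pulse_timing(K_x_kms, P_days):
    """K_x_kms: pulsar RV semi-amplitude [km/s]; P_days: period [d].
    Returns (a_x sin i in light-seconds, mass function in M_sun),
    assuming a circular orbit."""
    K_x = K_x_kms * 1e3                            # [m/s]
    P = P_days * 86400.0                           # [s]
    ax_sini = K_x * P / (2 * math.pi)              # [m]
    f_M = P * K_x**3 / (2 * math.pi * G) / M_SUN   # f(M) [M_sun]
    return ax_sini / C_LIGHT, f_M

# Placeholder example values (not fits from this paper):
print(orbit_from_pulse_timing(K_x_kms=250.0, P_days=4.57))
\end{verbatim}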
{ Eclipse measurements can also be exploited as timing markers to determine the binary orbital evolution of HMXBs. A significant orbital period derivative, $\dot{P}$, was previously found in several eclipsing HMXBs \citep[e.g. 4U 1700-377; SMC X-1; Cen X-3; LMC X-4 and OAO 1657-415,][]{1996ApJ...459..259R,2010MNRAS.401.1532R,2015A&A...577A.130F} and can be used to investigate the orbital evolution over long periods of time. While several contending theories to explain the orbital decay have been investigated, the most probable explanations involve tidal interaction and rapid mass transfer between the components of the binary systems \citep[][and references therein]{2015A&A...577A.130F}.} The \textsl{Swift} Burst Alert Telescope (BAT), sensitive to X-rays in the 15--150\,keV band \citep{2005SSRv..120..143B}, provides an excellent way to study highly absorbed SGXBs. The large absorption found in these systems is problematic for instruments such as the \textsl{Rossi X-ray Timing Explorer (RXTE)} All Sky Monitor (ASM), which operated in the 1.5--12\,keV band \citep{2011ApJS..196....6L}. The sensitivity to higher energy X-rays allows \textsl{Swift}-BAT to peer through this absorption \citep[][and references therein]{2013ApJ...778...45C}. We present here constraints on the mass and radius of the donor star in eclipsing XRBs. The probability of an eclipse in a supergiant XRB with an orbital period less than 20\,d can be expressed in terms of the orbital period, mass of the donor star, and radius of the donor star \citep[Equation 1;][]{2002ApJ...581.1293R}. The probability of an eclipse in long-period XRBs is low. Using a literature search, { we determined} HMXB systems where BAT observations can { significantly} improve { the properties of the stellar components and the orbital evolution of the systems using eclipse properties. Five} eclipsing XRBs were identified: IGR J16393-4643, IGR J16418-4532, IGR J16479-4514, IGR J18027-2016 and XTE J1855-026, { which are all highly obscured SGXBs}. { We note that while} the masses of both the donor and compact object had previously been constrained in IGR J18027-2016, the error on the eclipse half-angle was large at 4.5$\degr$ \citep{2005A&A...439..255H}. The error { estimates concerning the radius and mass of the donor star are} significantly improved in our analysis. This paper is structured in the following order: \textsl{Swift} BAT observations and the description of the eclipse model are presented in Section 2; Section 3 focuses on individual sources that are known to be eclipsing. Section 4 presents a discussion of the results and the conclusions are outlined in Section 5. If not stated otherwise, the uncertainties and limits presented in the paper are at the 1$\sigma$ confidence level. \section{Data Analysis and Modelling} \subsection{Swift BAT} The BAT on board the \textsl{Swift} spacecraft is a hard X-ray telescope operating in the 15--150\,keV energy band \citep{2005SSRv..120..143B}. The detector is composed of CdZnTe where the detecting area and field of view (FOV) are 5240\,cm$^2$ and 1.4\,sr (half-coded), respectively \citep{2005SSRv..120..143B}. The BAT provides an all-sky hard X-ray survey with a sensitivity of $\sim$1\,mCrab \citep{2010ApJS..186..378T}. The Crab produces $\sim$0.045\,counts cm$^{-2}$ s$^{-1}$ over the entire energy band. We analyzed BAT data obtained during the time period MJD\,53416--56745 (2005 February 15--2014 March 29).
Light curves were retrieved from the BAT transient monitor data available on the NASA GSFC HEASARC website\footnote{http://heasarc.gsfc.nasa.gov/docs/swift/results/transients/} \citep{2013ApJS..209...14K}, which includes orbital and daily-averaged light curves. We used the orbital light curves in the 15--50\,keV energy band in our analysis, { which have typical exposures of $\sim$6\,min} (see Section~\ref{Five Eclipsing HMXBs}). { The short exposures, which are somewhat less than typical \textsl{Swift} pointing times ($\sim$20\,min), can arise due to the observing plan of \textsl{Swift} itself where BAT is primarily tasked to observe gamma-ray bursts \citep{2013ApJS..209...14K}.} The light curves were further screened to exclude bad quality points. We only considered data where the data quality flag (``DATA\_FLAG") was set to 0. Data flagged as ``good" are sometimes suspect, where a small number of data points with very low fluxes and implausibly small uncertainties were found \citep{2013ApJ...778...45C}. These points were removed from the light curves. { We corrected the photon arrival times to the solar system barycenter. We used the scripts made available on the Ohio State Astronomy webpage\footnote{http://astroutils.astronomy.ohio-state.edu/time/}. In this paper, the barycenter-corrected times are referred to as Barycenter Modified Julian Date (BMJD).} We initially derived the orbital period for each XRB in our sample using Discrete Fourier transforms (DFTs) of the light curves to search for periodicities in the data. We weighted the contribution of each data point to the power spectrum by its uncertainty, using the ``semi-weighting" technique \citep{2007PThPS.169..200C, 2013ApJ...778...45C}, where the error bars on each data point and the excess variability of the light curve are taken into account. We derived uncertainties on the orbital periods using the expression given in \citet{1986ApJ...302..757H}. \subsection{Eclipse Modeling} \label{Eclipse Modeling} { We initially modeled the eclipses using a symmetric ``step and ramp" function (see Table~\ref{Step and Ramp Model}) where the intensities are assumed to remain constant before ingress, during eclipse and after egress and follow a linear trend during the ingress and egress transitions \citep{2014ApJ...793...77C}. The count rates before ingress, during eclipse and after egress were each considered to be free parameters and were fit as follows: $C_{\rm ing}$ was fit from binary phase $\phi=$-0.2 to the start of ingress, $C_{\rm ecl}$ was fit during eclipse and $C_{\rm eg}$ was fit from the end of egress to phase $\phi=$0.2 (see Equation~\ref{Flux Equation}). While we find the eclipse profiles of IGR J16393-4643, IGR J16418-4532 and IGR J16479-4514 to be symmetric within errors, the profiles show some asymmetry in the cases of IGR J18027-2016 and XTE J1855-026. We note that a symmetric ``step and ramp" function could lead to systematic errors and we therefore fit the eclipse profiles using an asymmetric ``step and ramp" function (see Equation~\ref{Flux Equation} and Table~\ref{Asymmetric Step and Ramp Model}). The parameters in this model are as follows: the phases corresponding to the start of ingress and start of egress, $\phi_{\rm ing}$ and $\phi_{\rm egr}$, the duration of ingress, $\Delta{\phi}_{\rm ing}$, the duration of egress, $\Delta{\phi}_{\rm eg}$, the pre-ingress count rate, $C_{\rm ing}$, the post-egress count rate, $C_{\rm eg}$, and the count rate during eclipse, $C_{\rm ecl}$.
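Both variants are straightforward to evaluate. The following minimal sketch implements the asymmetric profile exactly as in Equation~\ref{Flux Equation} below (the symmetric case follows by tying the ingress and egress parameters together; any numbers fed to it here would be placeholders):
\begin{verbatim}
import numpy as np

def step_and_ramp(phi, phi_ing, dphi_ing, phi_egr, dphi_eg,
                  C_ing, C_ecl, C_eg):
    """Asymmetric step-and-ramp eclipse profile over phase [-0.2, 0.2]."""
    phi = np.asarray(phi, dtype=float)
    c = np.empty_like(phi)
    ing_end = phi_ing + dphi_ing
    egr_end = phi_egr + dphi_eg
    c[phi <= phi_ing] = C_ing                                  # pre-ingress
    m = (phi > phi_ing) & (phi <= ing_end)                     # ingress ramp
    c[m] = C_ing + (C_ecl - C_ing) * (phi[m] - phi_ing) / dphi_ing
    c[(phi > ing_end) & (phi <= phi_egr)] = C_ecl              # eclipse
    m = (phi > phi_egr) & (phi <= egr_end)                     # egress ramp
    c[m] = C_ecl + (C_eg - C_ecl) * (phi[m] - phi_egr) / dphi_eg
    c[phi > egr_end] = C_eg                                    # post-egress
    return c

# Derived quantities, as in Equations 2 and 3 below:
duration = lambda phi_ing, dphi_ing, phi_egr: phi_egr - (phi_ing + dphi_ing)
mid = lambda phi_ing, dphi_ing, phi_egr: 0.5 * (phi_egr + phi_ing + dphi_ing)
\end{verbatim}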
The eclipse duration and mid-eclipse phase are calculated using Equations~\ref{Half Angle Equation} and~\ref{Mid Eclipse Equation}, respectively.} \begin{equation} \label{Flux Equation} C = \left\{ \begin{array}{ll} C_{\rm ing}, & -0.2 \leq \phi \leq \phi_{\rm ing} \\ (\frac{C_{\rm ecl}-C_{\rm ing}}{\Delta{\phi}_{\rm ing}}) (\phi-\phi_{\rm ing})+C_{\rm ing}, & \phi_{\rm ing} \leq \phi \leq \phi_{\rm ing}+\Delta{\phi}_{\rm ing} \\ C_{\rm ecl}, & \phi_{\rm ing}+\Delta{\phi}_{\rm ing} \leq \phi \leq \phi_{\rm egr} \\ (\frac{C_{\rm eg}-C_{\rm ecl}}{\Delta{\phi}_{\rm eg}}) (\phi-\phi_{\rm egr})+C_{\rm ecl}, & \phi_{\rm egr} \leq \phi \leq \phi_{\rm egr}+\Delta{\phi}_{\rm eg} \\ C_{\rm eg}, & \phi_{\rm egr}+\Delta{\phi}_{\rm eg} \leq \phi \leq 0.2 \\ \end{array} \right. \end{equation} \begin{equation} \label{Half Angle Equation} \Delta{\phi}_{\rm ecl}=\phi_{\rm egr}-(\phi_{\rm ing}+\Delta{\phi}_{\rm ing}) \end{equation} \begin{equation} \label{Mid Eclipse Equation} \phi_{\rm mid}=\frac{1}{2}(\phi_{\rm egr}+(\phi_{\rm ing}+\Delta{\phi}_{\rm ing})) \end{equation} \begin{deluxetable}{cccccc} \tablecolumns{6} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{Eclipse Model Parameters, Assuming a Symmetric Eclipse Profile} \tablehead{ \colhead{Model Parameter} & \colhead{IGR J16393-4643} & \colhead{IGR J16418-4532} & \colhead{IGR J16479-4514} & \colhead{IGR J18027-2016} & \colhead{XTE J1855-026}} \startdata $\phi_{\rm ing}$ & { -0.079$_{-0.014}^{+0.006}$} & { -0.107$\pm$0.002} & { -0.114$_{-0.004}^{+0.003}$} & { -0.150$_{-0.003}^{+0.005}$} & { -0.131$_{-0.002}^{+0.001}$} \\ $\Delta{\phi}$ & { 0.040$_{-0.008}^{+0.009}$} & { 0.019$_{-0.003}^{+0.002}$} & { 0.029$_{-0.004}^{+0.003}$} & { 0.053$_{-0.003}^{+0.004}$} & { 0.038$\pm$0.002} \\ $C$$^a$ & { 1.27$\pm$0.04} & { 1.18$\pm$0.05} & { 1.05$\pm$0.06} & { 1.59$\pm$0.06} & { 2.64$\pm$0.06} \\ $\phi_{\rm egr}$ & { 0.048$_{-0.005}^{+0.004}$} & { 0.088$_{-0.001}^{+0.002}$} & { 0.092$_{-0.004}^{+0.003}$} & { 0.098$_{-0.004}^{+0.002}$} & { 0.094$\pm$0.002} \\ $C_{\rm ecl}$$^a$ & { 0.71$\pm$0.06} & { 0.00$\pm$0.05} & { 0.03$\pm$0.05} & { 0.17$\pm$0.04} & { -0.04$\pm$0.05} \\ \tableline $\Delta{\phi}_{\rm ecl}$ & { 0.09$^{+0.01}_{-0.02}$} & { 0.175$\pm$0.003} & { 0.177$^{+0.005}_{-0.007}$} & { 0.196$^{+0.007}_{-0.005}$} & { 0.187$\pm$0.003} \\ $P_{\rm orb}^b$ & { 4.23794$\pm$0.00007} & { 3.73880$\pm$0.00002} & { 3.31965$\pm$0.00006} & { 4.56999$\pm$0.00005} & { 6.07410$\pm$0.00004} \\ $\dot{P}_{\rm orb}^c$ & { -5$\pm$4} & { 0.7$\pm$1.0} & { 3$\pm$2} & { -2$\pm$2} & { 0.5$\pm$1.0} \\ $T_{\rm mid}^d$ & { 55074.99$_{-0.04}^{+0.02}$} & { 55087.714$\pm$0.006} & { 55081.571$^{+0.009}_{-0.012}$} & { 55083.79$_{-0.01}^{+0.02}$} & { 55079.055$_{-0.009}^{+0.010}$} \\ $\Theta_{\rm e}^e$ & { 16$^{+2}_{-3}$} & { 31.5$\pm$0.6} & { 31.9$^{+0.9}_{-1.3}$} & { 35$\pm$1} & { 33.6$^{+0.6}_{-0.5}$} \\ \tableline $\chi^2_\nu$ (dof) & { 1.13(77)} & { 0.84(77)} & { 1.03(77)} & { 0.93(77)} & { 1.23(77)} \\ \enddata \tablecomments{\\* $^a$ Units are 10$^{-3}$\,counts cm$^{-2}$ s$^{-1}$. \\* $^b$ Refined orbital periods using an $O-C$ analysis. Units are days. \\* $^c$ The orbital period derivative at the 90$\%$ confidence interval found using an $O-C$ analysis. Units are 10$^{-7}$\,d d$^{-1}$. \\* $^d$ Units are barycentered Modified Julian Time (BMJD). Phase 0 is defined as eclipse center.
\\* $^e$ Units are degrees.} \label{Step and Ramp Model} \end{deluxetable} \begin{deluxetable}{cccccc} \tablecolumns{6} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{{ Eclipse Model Parameters, Assuming an Asymmetric Eclipse Profile}} \tablehead{ \colhead{{ Model Parameter}} & \colhead{{ IGR J16393-4643}} & \colhead{{ IGR J16418-4532}} & \colhead{{ IGR J16479-4514}} & \colhead{{ IGR J18027-2016}} & \colhead{{ XTE J1855-026}}} \startdata { $\phi_{\rm ing}$} & { -0.092$_{-0.005}^{+0.008}$} & { -0.108$\pm$0.002} & { -0.118$\pm$0.003} & { -0.160$\pm$0.005} & { -0.136$\pm$0.002} \\ { $\Delta{\phi}_{\rm ing}$} & { 0.06$\pm$0.01} & { 0.020$\pm$0.003} & { 0.024$\pm$0.008} & { 0.08$\pm$0.01} & { 0.042$\pm$0.003} \\ { $\Delta{\phi}_{\rm eg}$} & { 0.039$\pm$0.006} & { 0.014$_{-0.004}^{+0.003}$} & { 0.026$_{-0.003}^{+0.007}$} & { 0.018$_{-0.004}^{+0.005}$} & { 0.038$\pm$0.003} \\ { $C_{\rm ing}$$^a$} & { 1.28$\pm$0.06} & { 1.07$_{-0.06}^{+0.07}$} & { 1.01$\pm$0.08} & { 1.43$\pm$0.08} & { 2.44$\pm$0.08} \\ { $C_{\rm eg}$$^a$} & { 1.26$\pm$0.06} & { 1.22$\pm$0.07} & { 1.04$\pm$0.08} & { 1.67$\pm$0.08} & { 2.79$_{-0.08}^{+0.09}$} \\ { $\phi_{\rm egr}$} & { 0.049$_{-0.006}^{+0.004}$} & { 0.089$\pm$0.002} & { 0.086$\pm$0.003} & { 0.112$\pm$0.002} & { 0.092$_{-0.002}^{+0.001}$} \\ { $C_{\rm ecl}$$^a$} & { 0.69$\pm$0.06} & { 0.00$\pm$0.05} & { 0.05$\pm$0.05} & { 0.14$\pm$0.05} & { -0.05$\pm$0.05} \\ \tableline { $\Delta{\phi}_{\rm ecl}$} & { 0.09$^{+0.02}_{-0.01}$} & { 0.176$\pm$0.004} & { 0.180$\pm$0.009} & { 0.19$\pm$0.01} & { 0.186$\pm$0.004} \\ { $P_{\rm orb}^b$} & { 4.23810$\pm$0.00007} & { 3.73881$\pm$0.00002} & { 3.31961$\pm$0.00004} & { 4.56988$\pm$0.00006} & { 6.07413$\pm$0.00004} \\ { $\dot{P}_{\rm orb}^c$} & { -6$\pm$4} & { 0.2$\pm$0.8} & { 3$\pm$2} & { -3$\pm$4} & { 0.5$\pm$1.9} \\ { $T_{\rm mid}^d$} & { 55074.99$\pm$0.03} & { 55087.721$_{-0.008}^{+0.007}$} & { 55081.57$\pm$0.02} & { 55083.88$\pm$0.03} & { 55079.07$\pm$0.01} \\ { $\Theta_{\rm e}^e$} & { { 15$^{+3}_{-2}$}} & { 31.7$^{+0.7}_{-0.8}$} & { 32$\pm$2} & { 34$^{+3}_{-2}$} & { 33.6$\pm$0.7} \\ \tableline { $\chi^2_\nu$ (dof)} & { 1.18(80)} & { 0.97(80)} & { 1.08(80)} & { 1.01(80)} & { 1.03(80)} \\ \enddata \tablecomments{\\* $^a$ Units are 10$^{-3}$\,counts cm$^{-2}$ s$^{-1}$. \\* $^b$ Refined orbital periods using an $O-C$ analysis. Units are days. \\* $^c$ The orbital period derivative at the 90$\%$ confidence interval found using an $O-C$ analysis. Units are 10$^{-7}$\,d d$^{-1}$. \\* $^d$ Units are BMJD. Phase 0 is defined as eclipse center. 
\\* $^e$ Units are degrees.} \label{Asymmetric Step and Ramp Model} \end{deluxetable} \begin{deluxetable}{ccccc} \tablecolumns{5} \tablewidth{0pc} \tablecaption{{ Eclipse Model Parameters, Including Historical Mid-Eclipse Times}} \tablehead{ \colhead{{ Model Parameter}} & \colhead{{ IGR J18027-2016$^{a,c}$}} & \colhead{{ IGR J18027-2016$^{a,d}$}} & \colhead{{ XTE J1855-026$^{b,c}$}} & \colhead{{ XTE J1855-026$^{b,d}$}}} \startdata { $\phi_{\rm ing}$} & { -0.147$_{-0.005}^{+0.004}$} & { -0.167$_{-0.004}^{+0.005}$} & { -0.131$_{-0.002}^{+0.001}$} & { -0.136$\pm$0.002} \\ { $\Delta{\phi}_{\rm ing}$} & { 0.053$_{-0.007}^{+0.005}$} & { 0.082$\pm$0.009} & { 0.038$\pm$0.002} & { 0.040$\pm$0.003} \\ { $\Delta{\phi}_{\rm eg}$} & { 0.053$_{-0.007}^{+0.005}$} & { 0.027$_{-0.006}^{+0.004}$} & { 0.038$\pm$0.002} & { 0.037$\pm$0.003} \\ { $C_{\rm ing}$$^e$} & { 1.59$\pm$0.05} & { 1.25$\pm$0.08} & { 2.64$\pm$0.06} & { 2.45$\pm$0.08} \\ { $C_{\rm eg}$$^e$} & { 1.59$\pm$0.05} & { 1.68$\pm$0.08} & { 2.64$\pm$0.06} & { 2.79$_{-0.08}^{+0.09}$} \\ { $\phi_{\rm egr}$} & { 0.099$\pm$0.003} & { 0.102$\pm$0.002} & { 0.094$\pm$0.002} & { 0.092$_{-0.002}^{+0.001}$} \\ { $C_{\rm ecl}$$^e$} & { 0.17$\pm$0.04} & { 0.14$\pm$0.04} & { -0.04$\pm$0.05} & { -0.03$\pm$0.05} \\ \tableline { $\Delta{\phi}_{\rm ecl}$} & { 0.193$^{+0.007}_{-0.009}$} & { 0.19$\pm$0.01} & { 0.187$\pm$0.003} & { 0.187$\pm$0.004} \\ { $P_{\rm orb}^f$} & { 4.56982$\pm$0.00003} & { 4.56993$\pm$0.00003} & { 6.07412$\pm$0.00003} & { 6.07414$\pm$0.00003} \\ { $\dot{P}_{\rm orb}^g$} & { 0.8$\pm$0.9} & { 0.2$\pm$1.1} & { -0.1$\pm$0.5} & { 0.0$\pm$0.5} \\ { $T_{\rm mid}^h$} & { 55083.78$\pm$0.01} & { 55083.82$\pm$0.01} & { 55079.056$\pm$0.009} & { 55079.07$\pm$0.01} \\ { $\Theta_{\rm e}^i$} & { 35$\pm$1} & { 34$\pm$2} & { 33.6$^{+0.5}_{-0.6}$} & { 33.7$\pm$0.7} \\ \tableline { $\chi^2_\nu$ (dof)} & { 1.20(77)} & { 0.93(80)} & { 1.02(77)} & { 1.05(80)} \\ \enddata \tablecomments{\\* $^a$ Includes the mid-eclipse times derived in \citet{2005A&A...439..255H}, \citet{2009RAA.....9.1303J} and \citet{2015A&A...577A.130F}. \\* $^b$ Includes the mid-eclipse times derived in \citet{2002ApJ...577..923C} and \citet{2015A&A...577A.130F}. \\* $^c$ Assuming a Symmetric Eclipse Profile. \\* $^d$ Assuming an Asymmetric Eclipse Profile. \\* $^e$ Units are 10$^{-3}$\,counts cm$^{-2}$ s$^{-1}$. \\* $^f$ Refined orbital periods using an $O-C$ analysis. Units are days. \\* $^g$ The orbital period derivative at the 90$\%$ confidence interval found using an $O-C$ analysis. Units are 10$^{-7}$\,d d$^{-1}$. \\* $^h$ Units are BMJD. Phase 0 is defined as eclipse center. \\* $^i$ Units are degrees.} \label{Historic Step and Ramp Model} \end{deluxetable} The eclipse duration, time of mid-eclipse, and eclipse half-angle ($\Theta_{\rm e}=\Delta{\phi}_{\rm ecl}$$\times$180$\degr$) from fitting the BAT folded light curves for each source are reported in Tables~\ref{Step and Ramp Model}--~\ref{Historic Step and Ramp Model}. For each source, we initially used an ephemeris based on our determination of the orbital period from the DFT and time of mid-eclipse. Using an `observed minus calculated' $O-C$ analysis (see { Figures~\ref{O-C Residuals}--\ref{O-C Historic}}), we refined the orbital periods and improved on the time of mid-eclipse for each XRB in our sample. { We note that no eclipses are visible in the unfolded light curves and it is necessary to observe multiple cycles of folded light curves in order for eclipses to be seen. 
We} divide the light curves { into} five equal time intervals ($\sim$670\,days), with the exception of IGR J16393-4643 (see Section~\ref{IGR J16393-4643 Results}), and calculate the mid-eclipse epoch for each interval { (see Table~\ref{O-C Table}). In the cases of IGR J18027-2016 and XTE J1855-026, we combine our derived mid-eclipse times with those reported in the literature \citep[][and references therein]{2015A&A...577A.130F}. We note that while mid-eclipse times were previously derived for both IGR J16393-4643 and IGR J16479-4514 (see Table~\ref{Historic O-C Table}), these were not used since no error estimate was reported \citep{2015MNRAS.446.4148I,2009AIPC.1126..319B}.} We then fit the mid-eclipse times using the orbital change function (see Equation~\ref{Orbital Change Function}), where $n$ is the number of binary orbits { given to the nearest integer}, $P_{\rm orb}$ is the orbital period in days, $\dot{P}_{\rm orb}$ is the period derivative at $T_0$, and the error on the linear term is the orbital period error. In all five cases, we improve the error estimate on the orbital period by nearly an order of magnitude (see Section~\ref{Five Eclipsing HMXBs}). We do not find a significant $\dot{P}_{\rm orb}$ for any source { (see Tables~\ref{Step and Ramp Model}--\ref{Historic Step and Ramp Model})}. \begin{equation} \label{Orbital Change Function} T_{\rm n}=T_0+n P_{\rm orb}+\frac{1}{2} n^2 P_{\rm orb} \dot{P}_{\rm orb} \end{equation} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april14-2015_BAT-OCcorrections_allsources.ps}} \figcaption[april14-2015_BAT-OCcorrections_allsources.ps]{ The observed minus calculated ($O-C$) eclipse time residuals for IGR J16393-4643 (top), IGR J16418-4532 (second panel), IGR J16479-4514 (middle), IGR J18027-2016 (fourth panel) and XTE J1855-026 (bottom) { fit using a symmetric step-and-ramp function}. We subtract the best linear polynomial fit for each source and correct the orbital periods accordingly. For IGR J16393-4643 (top) we only use the first four points to obtain a good fit (see Section~\ref{IGR J16393-4643 Results}). \label{O-C Residuals} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april14-2015_BAT-OCasymmetric_allsources.ps}} \figcaption[april14-2015_BAT-OCasymmetric_allsources.ps]{ The observed minus calculated ($O-C$) eclipse time residuals for IGR J16393-4643 (top), IGR J16418-4532 (second panel), IGR J16479-4514 (middle), IGR J18027-2016 (fourth panel) and XTE J1855-026 (bottom) fit { using an asymmetric step-and-ramp function}. We subtract the best linear polynomial fit for each source and correct the orbital periods accordingly. For IGR J16393-4643 (top) we only use the first four points to obtain a good fit (see Section~\ref{IGR J16393-4643 Results}). \label{O-C Asymmetric} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{may4-2015_BAT-OCcombined_allsources.ps}} \figcaption[may4-2015_BAT-OCcombined_allsources.ps]{ Derived $O-C$ eclipse time residuals for IGR J16393-4643 (top), IGR J16418-4532 (second panel), IGR J16479-4514 (middle), IGR J18027-2016 (fourth panel) and XTE J1855-026 (bottom) combined with those in the literature. We subtract the best linear polynomial fit for each source and correct the orbital periods accordingly. For IGR J16393-4643 (top) we only use the first four points to obtain a good fit (see Section~\ref{IGR J16393-4643 Results}).
\label{O-C Historic} } \end{figure} \begin{deluxetable}{cccc} \tablecolumns{4} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{{ Mid-eclipse Time Measurements For $O-C$ Analysis}} \tablehead{ \colhead{{ Source}} & \colhead{{ Orbital}} & \colhead{{ Mid-eclipse time$^a$}} & \colhead{{ Mid-eclipse time$^b$}} \\ \colhead{} & \colhead{{ Cycle (N)}} & \colhead{{ (MJD)}} & \colhead{{ (MJD)}}} \startdata { IGR J16393-4643} & { -313} & { 53748.46$\pm$0.03} & { 53748.44$\pm$0.03} \\ { IGR J16393-4643} & { -155} & { 54418.15$_{-0.02}^{+0.04}$} & { 54418.14$_{-0.03}^{+0.04}$} \\ { IGR J16393-4643} & { 1} & { 55079.22$_{-0.02}^{+0.07}$} & { 55079.22$_{-0.02}^{+0.03}$} \\ { IGR J16393-4643} & { 158} & { 55744.55$_{-0.03}^{+0.02}$} & { 55744.54$_{-0.08}^{+0.02}$} \\ { IGR J16393-4643} & { 315} & { \nodata$^c$} & { \nodata$^c$} \\ \tableline { IGR J16418-4532} & { -359} & { 53745.50$\pm$0.01} & { 53745.491$_{-0.009}^{+0.013}$} \\ { IGR J16418-4532} & { -179} & { 54418.461$^{+0.009}_{-0.010}$} & { 54418.468$\pm$0.009} \\ { IGR J16418-4532} & { 0} & { 55087.71$\pm$0.01} & { 55087.72$\pm$0.01} \\ { IGR J16418-4532} & { 179} & { 55756.963$^{+0.011}_{-0.008}$} & { 55756.967$\pm$0.009} \\ { IGR J16418-4532} & { 357} & { 56422.47$\pm$0.02} & { 56422.47$_{-0.01}^{+0.02}$} \\ \tableline { IGR J16479-4514} & { -402} & { 53747.19$_{-0.03}^{+0.02}$} & { 53747.21$_{-0.03}^{+0.02}$} \\ { IGR J16479-4514} & { -200} & { 54417.63$\pm$0.02} & { 54417.61$_{-0.02}^{+0.03}$} \\ { IGR J16479-4514} & { 0} & { 55081.47$^{+0.03}_{-0.02}$} & { 55081.48$_{-0.03}^{+0.02}$} \\ { IGR J16479-4514} & { 200} & { 55745.33$_{-0.04}^{+0.03}$} & { 55745.28$\pm$0.04} \\ { IGR J16479-4514} & { 401} & { 56412.60$_{-0.04}^{+0.06}$} & { 56412.59$_{-0.04}^{+0.03}$} \\ \tableline { IGR J18027-2016} & { -292} & { 53749.41$_{-0.03}^{+0.05}$} & { 53749.40$_{-0.05}^{+0.04}$} \\ { IGR J18027-2016} & { -143} & { 54430.32$\pm$0.02} & { 54430.34$_{-0.02}^{+0.04}$} \\ { IGR J18027-2016} & { 0} & { 55083.79$_{-0.03}^{+0.02}$} & { 55083.85$_{-0.03}^{+0.08}$} \\ { IGR J18027-2016} & { 146} & { 55751.02$\pm$0.03} & { 55751.06$_{-0.04}^{+0.03}$} \\ { IGR J18027-2016} & { 292} & { 56418.12$_{-0.03}^{+0.02}$} & { 56418.20$_{-0.04}^{+0.03}$} \\ \tableline { XTE J1855-026} & { -219} & { 53748.82$\pm$0.01} & { 53748.84$\pm$0.01} \\ { XTE J1855-026} & { -109} & { 54416.99$\pm$0.02} & { 54416.99$\pm$0.02} \\ { XTE J1855-026} & { 0} & { 55079.05$\pm$0.01} & { 55079.09$_{-0.03}^{+0.02}$} \\ { XTE J1855-026} & { 110} & { 55747.20$\pm$0.01} & { 55747.21$\pm$0.02} \\ { XTE J1855-026} & { 219} & { 56409.30$\pm$0.02} & { 56409.33$\pm$0.02} \\ \enddata \tablecomments{{ $^a$ Obtained using the symmetric step-and-ramp function. \\* $^b$ Obtained using the asymmetric step-and-ramp function.
\\* $^c$ A bad fit was obtained for the IGR J16393-4643 mid-eclipse time in the interval MJD\,56079--56745.}} \label{O-C Table} \end{deluxetable} \begin{deluxetable}{ccccc} \tablecolumns{5} \tabletypesize{\small} \tablewidth{0pc} \tablecaption{{ Historic Mid-eclipse Time Measurements For IGR J18027-2016 and XTE J1855-026}} \tablehead{ \colhead{{ Source}} & \colhead{{ Orbital}} & \colhead{{ Mid-eclipse time}} & \colhead{{ Satellite}} & \colhead{{ Reference}} \\ \colhead{} & \colhead{{ Cycle (N)}} & \colhead{{ (MJD)}} & \colhead{} & \colhead{}} \startdata { IGR J16393-4643} & { -391} & { 53417.955} & { \textsl{Swift}} & { \citet{2015MNRAS.446.4148I}} \\ { IGR J16479-4514} & { -161} & { 54547.05418} & { \textsl{Swift}} & { \citet{2009AIPC.1126..319B}} \\ \tableline { IGR J18027-2016} & { -638} & { 52168.22$\pm$0.12} & { \textsl{BeppoSAX}} & { \citet{2003ApJ...596L..63A}} \\ { IGR J18027-2016} & { -638} & { 52168.26$\pm$0.04} & { \textsl{BeppoSAX}} & { \citet{2005A&A...439..255H}} \\ { IGR J18027-2016} & { -471} & { 52931.37$\pm$0.04} & { \textsl{INTEGRAL}} & { \citet{2005A&A...439..255H}} \\ { IGR J18027-2016} & { -399} & { 53260.37$\pm$0.07} & { \textsl{INTEGRAL}} & { \citet{2009RAA.....9.1303J}} \\ { IGR J18027-2016} & { -286} & { 53776.82$\pm$0.07} & { \textsl{Swift}} & { \citet{2009RAA.....9.1303J}} \\ { IGR J18027-2016} & { -267} & { 53863.10$\pm$0.14} & { \textsl{INTEGRAL}} & { \citet{2015A&A...577A.130F}} \\ { IGR J18027-2016} & { -127} & { 54503.38$\pm$0.07} & { \textsl{Swift}} & { \citet{2009RAA.....9.1303J}} \\ \tableline { XTE J1855-026} & { -590} & { 51495.25$\pm$0.02} & { \textsl{RXTE}} & { \citet{2002ApJ...577..923C}} \\ { XTE J1855-026} & { -391} & { 52704.04$\pm$0.05} & { \textsl{INTEGRAL}} & { \citet{2015A&A...577A.130F}} \\ { XTE J1855-026} & { -176} & { 54009.97$\pm$0.05} & { \textsl{INTEGRAL}} & { \citet{2015A&A...577A.130F}} \\ { XTE J1855-026} & { -31} & { 54890.68$\pm$0.05} & { \textsl{INTEGRAL}} & { \citet{2015A&A...577A.130F}} \\ \enddata \tablecomments{{ Historical mid-eclipse times for IGR J16479-4514, IGR J18027-2016 and XTE J1855-026 found using \textsl{RXTE}, \textsl{Swift} BAT and \textsl{INTEGRAL}.}} \label{Historic O-C Table} \end{deluxetable} X-ray binaries that are eclipsing have an eclipse duration that is only dependent on the radius of the mass donor, the inclination angle of the system and the orbital separation of the components, provided that the orbit is circular ($e=0$). Using the observed orbital period and Kepler's third law, the duration can be written in terms of the sum of the donor star and compact object masses, which stipulates that the eclipse half-angle, $\Theta_{\rm e}$, can now be expressed in terms of the radius, inclination and masses of the components. In one set of calculations, we assume a 1.4\,$M_\sun$ compact object, which may be appropriate for an accreting neutron star \citep{1931ApJ....74...81C}. The region allowed by the measured eclipse half-angle for each binary in Mass-Radius space is shown in Section~\ref{Five Eclipsing HMXBs}. The inclination is constrained between edge-on orbits (left boundary of the dark shaded region) and close to face-on orbits (the right boundary of the light shaded region). We can attach additional constraints assuming that the mass donor underfills the Roche-lobe (right boundary of the dark shaded region), which is dependent on the mass ratio of the system and the orbital separation.
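These geometric constraints are simple enough to evaluate directly. The following minimal sketch (illustrative only; it assumes a circular orbit, and the values at the end are placeholders rather than fits from this work) computes the orbital separation from Kepler's third law, the eclipse half-angle implied by a trial donor radius and inclination, and the well-known Eggleton approximation to the Roche-lobe radius:
\begin{verbatim}
import math

G = 6.674e-11; M_SUN = 1.989e30; R_SUN = 6.957e8; DAY = 86400.0

def semimajor_axis(P_days, M1_msun, M2_msun):
    """Orbital separation from Kepler's third law (circular orbit) [m]."""
    P = P_days * DAY
    M = (M1_msun + M2_msun) * M_SUN
    return (G * M * P**2 / (4 * math.pi**2)) ** (1.0 / 3.0)

def eclipse_half_angle(R_rsun, i_deg, P_days, M1_msun, M2_msun):
    """Eclipse half-angle [deg]; returns 0 if no eclipse occurs."""
    a = semimajor_axis(P_days, M1_msun, M2_msun)
    R = R_rsun * R_SUN
    i = math.radians(i_deg)
    # sin^2(Theta_e) = ((R/a)^2 - cos^2 i) / sin^2 i
    s2 = ((R / a)**2 - math.cos(i)**2) / math.sin(i)**2
    return math.degrees(math.asin(math.sqrt(s2))) if s2 > 0 else 0.0

def roche_lobe_radius(q, a):
    """Eggleton (1983) approximation; q = M_donor / M_compact."""
    return a * 0.49 * q**(2/3) / (0.6 * q**(2/3) + math.log(1 + q**(1/3)))

# Placeholder example: a 20 M_sun, 19 R_sun donor with a 1.4 M_sun
# neutron star in a 4.57 d orbit, seen nearly edge-on.
a = semimajor_axis(4.57, 20.0, 1.4)
print(eclipse_half_angle(19.0, 85.0, 4.57, 20.0, 1.4),
      roche_lobe_radius(20.0 / 1.4, a) / R_SUN)
\end{verbatim}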
To calculate the eclipse half-angle and the Roche-lobe radius, we used Equation 7 in { \citet{1984ARA&A..22..537J}, also used by \citet{1996ApJ...459..259R},} and Equation 2 in \citet{1983ApJ...268..368E}, respectively. Further constraints on the parameters of the donor star are imposed with pulse-timing techniques (dashed red lines in Figures~\ref{J18027 Mass Radius Plot} and~\ref{J1855 Mass Radius Plot}). For the systems where pulse-timing results were not available, we additionally calculated the minimum inclination angle of the system, $i_{\rm min}$, that is consistent with the measured eclipse half-angle (see Table~\ref{Step and Ramp Model}). When the { semi-amplitudes of the} radial velocities of both the compact object and the mass donor are known (e.g. IGR J18027-2016, { XTE J1855-026}), the mass ratio between the compact object and mass donor can be calculated { \citep[Equation 6,][]{1984ARA&A..22..537J}}. This means that in addition to the radius and mass of the donor star, the mass of the compact object can be constrained. The mass of the donor star can be written in terms of the { semi-amplitude of the} radial velocity of the compact object, the orbital period, Newton's gravitational constant, the inclination angle of the system and the mass ratio { \citep{1984ARA&A..22..537J}}. Likewise, the compact object mass can be written in terms of the { semi-amplitude of the} radial velocity of the donor star, the orbital period, Newton's gravitational constant, the inclination angle of the system and the mass ratio { \citep{1984ARA&A..22..537J}}. To calculate the masses of both the donor star and the compact object, we used Equations 2 and 3 in { \citet{1999MNRAS.307..357A}, also used by \citet{1983adsx.conf...13R}}. For consistency, we compare our derived constraints on the masses and radii of the donor stars with those expected for the previously proposed spectral types. For the systems where pulse-timing results were not available, we calculate the predicted eclipse half-angles as a function of inclination angle using the mass and radius for the derived spectral types (see Section~\ref{Five Eclipsing HMXBs}). Generally, we used results from \citet{2006ima..book.....C} for main-sequence, giant and supergiant luminosity classes. These are represented by the red, green and blue dashed lines in mass-radius space for each system (see Section~\ref{Five Eclipsing HMXBs}). For O-type supergiants, we also use Tables 3 and 6 in \citet{2005A&A...436.1049M} to compare our results (see blue dotted lines in Figures~\ref{J16393 Mass Radius Plot},~\ref{J16418 Mass Radius Plot} and~\ref{J16479 Mass Radius Plot}). The constraints for B-type supergiants are additionally compared with Tables 3 and 6 in \citet{2007A&A...463.1093L} (blue crosses in Figures~\ref{J18027 Mass Radius Plot} and~\ref{J1855 Mass Radius Plot}). \section{Five Eclipsing HMXBs} \label{Five Eclipsing HMXBs} \subsection{IGR J16393-4643 (=AX J16390.4-4642)} \label{IGR J16393-4643 Results} IGR J16393-4643 is an HMXB first discovered and listed as AX J16390.4-4642 in the \textsl{ASCA} Faint Source Catalog \citep{2001ApJS..134...77S} and was later detected with \textsl{INTEGRAL} \citep{2006A&A...447.1027B}. The average flux in the 20--40\,keV band was found to be 5.1$\times$10$^{-11}$\,erg cm$^{-2}$ s$^{-1}$, and intensity variations were found to exceed a factor of 20 \citep{2006A&A...447.1027B}.
In the 2--10\,keV energy band, the unabsorbed flux was found to be 9.2$\times$10$^{-11}$\,erg cm$^{-2}$ s$^{-1}$ \citep{2006A&A...447.1027B}. A proposed mass donor 2MASS J16390535-4642137 was found in the \textsl{XMM-Newton} error circle, which is thought to be an OB supergiant \citep{2006A&A...447.1027B}. However, a precise position of the donor star obtained with \textsl{Chandra} shows this candidate to be positionally inconsistent with the X-ray source \citep{2012ApJ...751..113B}. Using the \textsl{Spitzer} Galactic Legacy Infrared Mid-Plane Survey (GLIMPSE), \citet{2012ApJ...751..113B} proposed that the counterpart must be a distant reddened B-type main-sequence star. Using \textsl{INTEGRAL} and \textsl{XMM-Newton}, \citet{2006A&A...447.1027B} found a 912.0$\pm$0.1\,s modulation, which was interpreted as the neutron star rotation period. \citet{2015MNRAS.446.4148I} recently refined this to 908.79$\pm$0.01\,s using \textsl{Suzaku}. A $\sim$3.7\,day orbital period was suggested using a pulse timing analysis, although orbital periods of $\sim$50.2 and $\sim$8.1\,days were not completely ruled out \citep{2006ApJ...649..373T}. While various possible orbital solutions and accretion mechanisms have been proposed, orbital periods of 4.2368$\pm$0.0007 and 4.2371$\pm$0.0007\,days were clearly found from data from \textsl{Swift}-BAT and \textsl{RXTE} PCA, respectively \citep{2010ATel.2570....1C}. This was refined to 4.2386$\pm$0.0003\,d \citep{2013ApJ...778...45C}, also using BAT data. \citet{2015MNRAS.446.4148I} recently derived an orbital period of $\sim$366150\,s (4.24\,d) with BAT, which is also consistent with the result from \citet{2013ApJ...778...45C}. The position in Corbet's diagram shows that IGR J16393-4643 is an SGXB \citep{2013ApJ...778...45C}. \citet{2013ApJ...778...45C} identified the presence of a possible superorbital period of $\sim$15\,days, although with low significance. \begin{figure}[ht] \centerline{\includegraphics[angle=-90,width=3in]{4-22-15_BAT_proper-folded-lightcurves_IGRJ16393-4643.ps}} \figcaption[4-22-15_BAT_proper-folded-lightcurves_IGRJ16393-4643.ps]{ \textsl{Swift}-BAT light curve of IGR J16393-4643 { in the 15--50\,keV band} folded on the orbital period (top) using 20 bins. T0 is defined at BMJD\,55074.99, corresponding to mid-eclipse. A detailed folded light curve with 80 bins (bottom) is fit with { both a symmetric ``step and ramp" function (green) and an asymmetric ``step and ramp" function (red)}, which model the eclipse. The symmetric ``step and ramp" function was shifted accordingly. \label{J16393 Folded Half Angle} } \end{figure} The BAT light curves folded on the orbital period revealed a sharp dip, which was interpreted as an eclipse \citep{2013ApJ...778...45C,2015MNRAS.446.4148I}. With \textsl{Swift} BAT, \citet{2015MNRAS.446.4148I} constrained the eclipse half angle to be $\sim$17$\degr$, corresponding to a duration of $\sim$0.75\,d ($\sim$65.1\,ks). Using the relationship between eclipse duration and stellar radius, along with the definition of the Roche-lobe from \citet{1984avis.book.....B}, \citet{2015MNRAS.446.4148I} calculated the allowed range of orbital inclinations of the system. Assuming a star with spectral type O9 I \citep{2012ApJ...751..113B}, the orbital inclination was constrained to 39--57$\degr$ \citep{2015MNRAS.446.4148I}. A main-sequence B-type star yields orbital inclinations between 60--77$\degr$ \citep{2012ApJ...751..113B,2015MNRAS.446.4148I}.
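Inclination constraints of this kind follow from inverting the eclipse geometry: for an assumed donor mass and radius and a measured half-angle, the implied inclination is obtained directly. A minimal sketch (illustrative only; it assumes a circular orbit, and the numbers at the end are placeholders rather than results of this work):
\begin{verbatim}
import math

G = 6.674e-11; M_SUN = 1.989e30; R_SUN = 6.957e8; DAY = 86400.0

def inclination_from_half_angle(theta_e_deg, R_rsun, P_days,
                                M_donor_msun, M_ns_msun=1.4):
    """Inclination [deg] implied by a measured eclipse half-angle,
    inverting R^2/a^2 = 1 - sin^2(i) cos^2(Theta_e) for i."""
    P = P_days * DAY
    a = (G * (M_donor_msun + M_ns_msun) * M_SUN * P**2
         / (4 * math.pi**2)) ** (1.0 / 3.0)
    R = R_rsun * R_SUN
    s2 = (1 - (R / a)**2) / math.cos(math.radians(theta_e_deg))**2
    if s2 > 1:
        return float('nan')  # star too small for this half-angle
    return math.degrees(math.asin(math.sqrt(s2)))

# Placeholder: a B0 V-like donor (17.5 M_sun, 8.4 R_sun) with a
# half-angle of ~15 deg and a 4.238 d orbit.
print(inclination_from_half_angle(15.0, 8.4, 4.238, 17.5))
\end{verbatim}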
We derive a { 4.2378$\pm$0.0004\,d} orbital period for IGR J16393-4643 using a DFT, which is consistent with the results from \citet{2013ApJ...778...45C}. Using an $O-C$ analysis (see Section~\ref{Eclipse Modeling}), this is further refined to { 4.23810$\pm$0.00007\,d}. We note that we obtain a bad fit for the mid-eclipse time between MJD\,56079--56745. As a result, we only use data between MJD\,53416--56078 in our $O-C$ analysis (see { Figures~\ref{O-C Residuals}--\ref{O-C Historic}}). { Using the quadratic orbital change function (see Equation~\ref{Orbital Change Function}), we find the orbital period derivative to be -5$\pm$4$\times$10$^{-7}$\,d d$^{-1}$, which is consistent with zero.} The duration of the observed eclipse was calculated to be { 31$^{+6}_{-5}$\,ks} { (0.36$^{+0.07}_{-0.06}$\,d)}, yielding an eclipse half-angle of 15$^{+3}_{-2}$$\degr$ (see Table~\ref{Step and Ramp Model}). We find these to be consistent with the result from \citet{2015MNRAS.446.4148I}. The source flux does not reach 0\,counts cm$^{-2}$ s$^{-1}$ in the folded light curves during eclipse (see Figure~\ref{J16393 Folded Half Angle}). We interpret this dip as an eclipse since the feature is persistent over many years of data. The rapid ingress and egress requires obscuration by clearly defined boundaries that are suggestive of an object such as the mass donor in the system { \citep[e.g.][]{2014ApJ...793...77C}}. We discuss the nature of the non-zero flux during eclipse in Section~\ref{What is the nature of the non-zero eclipse flux in IGR J16393-4643?}. We calculate the predicted eclipse half-angle $\Theta_{\rm e}$ as a function of the inclination angle of the system (see Figure~\ref{J16393 Eclipse Half Angle}). The calculation assumes a neutron star mass of 1.4\,$M_\sun$, and the primary stellar masses and radii given in Table~\ref{J16393 Primary Parameters}. We calculate the minimum inclination angle of the system, $i_{\rm min}$, that is consistent with the measured eclipse half-angle (see Table~\ref{J16393 Primary Parameters}). We find that stars with spectral types B0 V, B0-5 III and B0 I satisfy the constraint imposed by the eclipse half-angle (see Table~\ref{J16393 Primary Parameters}). We note that while a B5 III star satisfies the constraint imposed by the minimum value of the eclipse half-angle under the assumption that the neutron star is 1.4\,$M_\sun$, this spectral type does not satisfy the eclipse half-angle for a more massive neutron star. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{4-21-2015_IGRJ16393-4643_weighted_eclipse_param.eps}} \figcaption[4-21-2015_IGRJ16393-4643_weighted_eclipse_param.eps]{ The black curves show the predicted eclipse half angle of IGR J16393-4643 as a function of inclination angle for stars with the indicated spectral types. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. We assume a neutron star mass of 1.4\,$M_\sun$ (top) and of mass 1.9\,$M_\sun$ (bottom) and typical masses and radii for the assumed companion spectral type (see Table~\ref{J16393 Primary Parameters}). The blue vertical dashed lines indicate the lower limit of the inclination angle. Inclinations to the left of these correspond to stars that overfill the Roche-lobe.
\label{J16393 Eclipse Half Angle} } \end{figure} \begin{deluxetable}{ccccccccc} \tablecolumns{9} \tablewidth{0pc} \tablecaption{Physical Parameters for Previously Proposed Mass Donors for IGR J16393-4643} \tablehead{ \colhead{Spectral Type} & \colhead{$M/M_\sun$} & \colhead{$q$$^a$} & \colhead{$R/R_\sun$} & \colhead{$R_{\rm L}$$/R_\sun$$^b$} & \colhead{$i_{\rm min}$$\degr$$^c$}} \startdata \textsl{B0 III} & \textsl{20} & 0.070 & \textsl{13} & 18.5 & 68 \\ \textsl{B5 III} & \textsl{7} & 0.200 & \textsl{6.3} & 11.7 & 79 \\ \textsl{B0 V} & \textsl{17.5} & 0.080 & \textsl{8.4} & 17.5 & 79 \\ \textsl{B0 I} & \textsl{25} & 0.056 & \textsl{25$^d$} & 20.4$^d$ & 41 \\ \enddata \tablecomments{The values in italics are obtained from \citet{2006ima..book.....C}. \\* $^a$ The mass ratio, $q$, is defined as $M_{\rm x}/M_{\rm c}$, where $M_{\rm x}$ is the mass of the compact object and $M_{\rm c}$ is that of the donor star. \\* $^b$ The Roche-lobe radius, $R_{\rm L}$, as defined in \citet{1983ApJ...268..368E}, assuming $M_{\rm NS}$ is 1.4\,$M_\sun$. \\* $^c$ The minimum inclination angle of the system that is consistent with the measured eclipse half-angle. \\* $^d$ A B0 I classification significantly overfills the Roche-lobe and is therefore excluded from our analysis. \\* } \label{J16393 Primary Parameters} \end{deluxetable} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april22-2015_IGRJ16393-4643_mass-constraints.eps}} \figcaption[april22-2015_IGRJ16393-4643_mass-constraints.eps]{ Log-log plot of stellar mass as a function of stellar radius for IGR J16393-4643. The dark shaded region indicates the allowed spectral types that satisfy the constraints imposed by both the eclipse duration and the Roche-lobe size. The light shaded region indicates spectral types that satisfy only the observed eclipse duration. Stellar masses and radii are reported in Table~\ref{J16393 Primary Parameters}. The red, green and blue lines indicate interpolations for main-sequence, giant and supergiant luminosity classes, respectively. The dashed, dotted and dash-dotted lines indicate stellar masses and radii derived from \citet{2006ima..book.....C}, \citet{2005A&A...436.1049M} and \citet{2000asqu.book.....A}, respectively. \label{J16393 Mass Radius Plot} } \end{figure} Since the X-ray luminosity is lower than what would be expected for Roche-lobe overflow \citep{2004RMxAC..21..128K}, we can attach an additional constraint by assuming that the mass donor underfills the Roche-lobe radius (see Figure~\ref{J16393 Mass Radius Plot}). We note that while the derived mass and radius for the spectral type B0 I from \citet{2006ima..book.....C} satisfy the eclipse half-angle, the assumed radius would be larger than the Roche-lobe radius \citep{1983ApJ...268..368E}. Therefore, a B0 I spectral type must be excluded (see Table~\ref{J16393 Primary Parameters}). \subsection{IGR J16418-4532} IGR J16418-4532 is a candidate Supergiant Fast X-ray Transient (SFXT) first discovered with the \textsl{INTEGRAL} satellite by \citet{2004ATel..224....1T} at a flux of 3$\times$10$^{-11}$\,erg cm$^{-2}$ s$^{-1}$ in the 20--40\,keV band. The near-infrared spectral energy distribution of the most probable Two Micron All Sky Survey (2MASS) counterpart was measured with the 3.5\,m New Technology Telescope (NTT) at La Silla Observatory; from it, \citet{2008A&A...484..801R} found a spectral type of O8.5 I.
\citet{2013A&A...560A.108C} proposed a spectral type of BN0.5 Ia based on features in the near-infrared spectrum such as Br$(7-4)$ and the emission and absorption of neutral helium. Using \textsl{XMM-Newton}, \citet{2006A&A...453..133W} found a 1246$\pm$100\,s modulation, which was interpreted as a neutron star rotation period. This was later refined to 1212$\pm$6\,s \citep{2012MNRAS.420..554S}, also using \textsl{XMM-Newton}. Recently, \citet{2013MNRAS.433..528D} further refined the rotation period to 1209.1$\pm$0.4\,s with \textsl{XMM-Newton}. A $\sim$3.73\,d orbital period was found using data from the \textsl{Swift} BAT and \textsl{RXTE} ASM instruments, where $P_{\rm orb}$ was reported as 3.753$\pm$0.004\,d and 3.7389$\pm$0.0004\,d, respectively \citep{2006ATel..779....1C}. Using an extended ASM dataset, \citet{2011ApJS..196....6L} found a 3.73886$\pm$0.00028\,d period, which is consistent with the earlier result from \citet{2006ATel..779....1C}. \citet{2013ApJ...778...45C} further refined this to 3.73886$\pm$0.00014\,d using BAT. A $\sim$14.7\,d modulation was found using BAT and \textsl{INTEGRAL}, which was interpreted as a superorbital period \citep{2013ApJ...778...45C, 2013ATel.5131....1D}. The ASM and BAT light curves folded on the orbital period revealed a sharp dip with near-zero mean flux, which was interpreted as a total eclipse \citep{2006ATel..779....1C}. Subsequent observations of the eclipse were made with the \textsl{Swift} X-ray Telescope (XRT) \citep{2012MNRAS.419.2695R} and \textsl{INTEGRAL} \citep{2013MNRAS.433..528D}. With \textsl{Swift} BAT, \citet{2012MNRAS.419.2695R} constrained the eclipse duration to be a fraction 0.17$\pm$0.05 of the orbital period, corresponding to 0.6$\pm$0.2\,d (55$\pm$16\,ks). The duration of the eclipse was found to be $\sim$0.75\,d in the archival data set from \textsl{INTEGRAL}/IBIS, covering the time period MJD\,52650--55469 \citep{2013MNRAS.433..528D}. While the estimate in \citet{2013MNRAS.433..528D} is significantly larger than the constraints from \citet{2012MNRAS.419.2695R}, a lower limit of $\sim$0.583\,d was found in a combined study with \textsl{INTEGRAL}/IBIS and \textsl{XMM-Newton} \citep{2013MNRAS.433..528D}. \begin{figure}[ht] \centerline{\includegraphics[angle=-90,width=3in]{april22-2015_BAT_proper-folded-lightcurves_IGRJ16418-4532.ps}} \figcaption[april22-2015_BAT_proper-folded-lightcurves_IGRJ16418-4532.ps]{ \textsl{Swift}-BAT light curve of IGR J16418-4532 in the 15--50\,keV band folded on the orbital period (top) using 20 bins. T0 is defined at BMJD\,55087.721, corresponding to mid-eclipse. A detailed folded light curve with 80 bins (bottom) is fit with both a symmetric ``step and ramp'' function (green) and an asymmetric ``step and ramp'' function (red), which model the eclipse. The symmetric ``step and ramp'' function was shifted accordingly. \label{J16418 Folded Half Angle} } \end{figure} We derive a 3.73863$\pm$0.00015\,d orbital period for IGR J16418-4532 using the fundamental peak in the power spectrum, while the first harmonic yields 3.73882$\pm$0.00011\,d. Using an $O-C$ analysis (see Figures~\ref{O-C Residuals}--\ref{O-C Historic}), we refine this to 3.73881$\pm$0.00002\,d.
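For reference, the quadratic orbital change function used for the period-derivative estimates in this section (Equation~\ref{Orbital Change Function}) is assumed here to take the standard quadratic-ephemeris form
\[
T_{n} \simeq T_{0} + n\,P_{\rm orb} + \frac{1}{2}\,\dot{P}_{\rm orb}\,P_{\rm orb}\,n^{2},
\]
where $T_{n}$ is the predicted $n$-th mid-eclipse time, $T_{0}$ the reference mid-eclipse epoch, and $\dot{P}_{\rm orb}$ the dimensionless orbital period derivative quoted in d\,d$^{-1}$; the exact definition is the one given with Equation~\ref{Orbital Change Function}.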
Using the quadratic orbital change function (see Equation~\ref{Orbital Change Function}), we find the orbital period derivative to be (0.7$\pm$1.0)$\times$10$^{-7}$\,d d$^{-1}$, which is consistent with zero. Folding the light curve on our refined orbital period (see Figure~\ref{J16418 Folded Half Angle}), we calculate the duration of the observed eclipse to be 57$\pm$1\,ks (0.66$\pm$0.01\,d). This yields an eclipse half-angle of 31.7$^{+0.7}_{-0.8}$$\degr$ (see Table~\ref{Asymmetric Step and Ramp Model}). We note that we find the eclipse half-angle to be 31.5$\pm$0.6$\degr$ assuming a symmetric step-and-ramp function. We find the eclipse properties to be consistent with the results from \citet{2012MNRAS.419.2695R} and \citet{2013MNRAS.433..528D}. Under the assumption that the mass and radius of the proposed mass donor are 31.54\,$M_\sun$ and 21.41\,$R_\sun$ \citep{2005A&A...436.1049M}, the duration of the observed eclipse is consistent with this donor, and we find the orbital inclination to be 60--63$\degr$ for an O8.5 I spectral type (see Figures~\ref{J16418 Eclipse Half Angle} and~\ref{J16418 Mass Radius Plot}). \begin{figure}[ht] \centerline{\includegraphics[width=3in]{3-30-2015_IGRJ16418-4532_weighted_eclipse_param.eps}} \figcaption[3-30-2015_IGRJ16418-4532_weighted_eclipse_param.eps]{ The black curves show the predicted eclipse half angle of IGR J16418-4532 as a function of inclination angle for stars with the indicated spectral types. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. We assume a neutron star mass of 1.4\,$M_\sun$ (top) and of 1.9\,$M_\sun$ (bottom) and typical masses and radii for the assumed companion spectral type (see Table~\ref{J16418 Primary Parameters}). The blue vertical dashed lines indicate the lower limit of the inclination angle. Inclinations to the left of these correspond to stars that overfill the Roche-lobe. \label{J16418 Eclipse Half Angle} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{march30-2015_IGRJ16418-4532_mass-constraints.eps}} \figcaption[march30-2015_IGRJ16418-4532_mass-constraints.eps]{ Log-log plot of stellar mass as a function of stellar radius for IGR J16418-4532. The dark shaded region indicates the allowed spectral types that satisfy the constraints imposed by both the eclipse duration and the Roche-lobe size. The light shaded region indicates spectral types that satisfy only the observed eclipse duration. Stellar masses and radii are reported in Table~\ref{J16418 Primary Parameters}. The dashed, dotted and dash-dotted lines indicate stellar masses and radii derived from \citet{2006ima..book.....C}, \citet{2005A&A...436.1049M} and \citet{2000asqu.book.....A}, respectively.
\label{J16418 Mass Radius Plot} } \end{figure} \begin{deluxetable}{ccccccccc} \tablecolumns{9} \tablewidth{0pc} \tablecaption{Physical Parameters for Previously Proposed Mass Donors for IGR J16418-4532} \tablehead{ \colhead{Spectral Type} & \colhead{$M/M_\sun$} & \colhead{$q$$^a$} & \colhead{$R/R_\sun$} & \colhead{$R_{\rm L}$$/R_\sun$$^b$} & \colhead{$i$$\degr$}} \startdata \textsl{O8.5 I} & \textsl{31.54} & 0.044 & \textsl{21.41$^c$} & 20.71$^c$ & 61--63 \\ \textsl{O8.5 I} & \textsl{33.90} & 0.041 & \textsl{22.20$^c$} & 21.35$^c$ & 60--62 \\ \textsl{O9 I} & \textsl{29.63} & 0.047 & \textsl{21.76$^c$} & 20.17$^c$ & 58--60 \\ \textsl{O9 I} & \textsl{31.95} & 0.044 & \textsl{22.60$^c$} & 20.82$^c$ & 57--59 \\ \enddata \tablecomments{The values in italics are obtained from \citet{2005A&A...436.1049M}. \\* $^a$ The mass ratio, $q$, is defined as $M_{\rm x}/M_{\rm c}$, where $M_{\rm x}$ is the mass of the compact object and $M_{\rm c}$ is that of the donor star. \\* $^b$ The Roche-lobe radius, $R_{\rm L}$, as defined in \citet{1983ApJ...268..368E}, assuming $M_{\rm NS}$ is 1.4\,$M_\sun$. \\* $^c$ These spectral types significantly overfill the Roche-lobe and are therefore excluded from our analysis. } \label{J16418 Primary Parameters} \end{deluxetable} Since the X-ray luminosity is lower than what would be expected for Roche-lobe overflow \citep{2004RMxAC..21..128K}, we can attach an additional constraint by assuming that the mass donor underfills the Roche-lobe radius (see Figure~\ref{J16418 Mass Radius Plot}). We note that while the derived masses and radii for the spectral types from \citet{2005A&A...436.1049M} satisfy the eclipse half-angle, the assumed radii would be larger than the Roche-lobe radius \citep{1983ApJ...268..368E}. Therefore, an O8.5 I spectral type must be excluded. \subsection{IGR J16479-4514} IGR J16479-4514 is an intermediate SFXT, which has been proposed to host either an O8.5 I \citep{2008A&A...484..783C,2008A&A...484..801R} or an O9.5 Iab \citep{2008A&A...486..911N} mass donor. The source was first discovered with the \textsl{INTEGRAL} satellite in 2003 August \citep{2003ATel..176....1M}; the fluxes in the 18--25\,keV and 25--50\,keV energy bands were found to be $\sim$12\,mCrab and $\sim$8\,mCrab, respectively. The flux was later shown to increase by a factor of $\sim$2 on 2003 August 10 \citep{2003ATel..176....1M}. Using \textsl{Swift} BAT data, \citet{2009MNRAS.397L..11J} found the presence of a 3.319$\pm$0.001\,day modulation, which was interpreted as the orbital period. A 3.3193$\pm$0.0005\,day modulation was independently found by \citet{2009MNRAS.399.2021R}, also using BAT. \citet{2013ApJ...778...45C} found the orbital period to be 3.3199$\pm$0.0005\,d, which is consistent with the results from \citet{2009MNRAS.397L..11J} and \citet{2009MNRAS.399.2021R}. The presence of an 11.880$\pm$0.002\,d superorbital period was found by \citet{2013ApJ...778...45C} using BAT. \citet{2013ATel.5131....1D} reported an 11.891$\pm$0.002\,d superorbital period using \textsl{INTEGRAL}, confirming the result. No pulse period has been identified. The BAT light curves folded on the orbital period revealed a sharp dip, which was interpreted as an eclipse with a proposed duration of $\sim$52\,ks \citep{2009MNRAS.397L..11J}. This confirmed an earlier \textsl{XMM-Newton} observation in which a decay from a higher to a lower flux state was interpreted as the ingress of an eclipse \citep{2009AIPC.1126..319B}.
A 2012 \textsl{Suzaku} observation covered $\sim$80 percent of the orbital cycle of IGR J16479-4514, and the temporal and spectral properties were analyzed during eclipse and out of eclipse \citep{2013MNRAS.429.2763S}. Since the ingress of the eclipse was not covered in the \textsl{Suzaku} observation, the exact duration of the eclipse could only be constrained to 46--143\,ks (0.53--1.66\,d) \citep{2013MNRAS.429.2763S}. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april29-2015_BAT_proper-folded-lightcurves_IGRJ16479-4514.ps}} \figcaption[april29-2015_BAT_proper-folded-lightcurves_IGRJ16479-4514.ps]{ \textsl{Swift}-BAT light curve of IGR J16479-4514 in the 15--50\,keV band folded on the orbital period (top) using 20 bins. T0 is defined at BMJD\,55081.57, corresponding to mid-eclipse. A detailed folded light curve with 80 bins (bottom) is fit with both a symmetric ``step and ramp'' function (green) and an asymmetric ``step and ramp'' function (red), which model the eclipse. The symmetric ``step and ramp'' function was shifted accordingly. \label{J16479 Folded Half Angle} } \end{figure} Using a DFT, we derive the orbital period of IGR J16479-4514 to be 3.31998$\pm$0.00014\,d. We refine this to 3.31961$\pm$0.00004\,d using an $O-C$ analysis (see Figures~\ref{O-C Residuals}--\ref{O-C Historic}) and fold the light curve on our refined orbital period to calculate the eclipse half-angle (see Figure~\ref{J16479 Folded Half Angle}). Using the quadratic orbital change function (see Equation~\ref{Orbital Change Function}), we find the orbital period derivative to be (3$\pm$2)$\times$10$^{-7}$\,d d$^{-1}$, which is consistent with zero. We calculate the duration of the observed eclipse to be 52$\pm$3\,ks (0.60$\pm$0.03\,d), which is consistent with the results from \citet{2009MNRAS.397L..11J}. This yields an eclipse half-angle of 31.9$^{+0.9}_{-1.3}$$\degr$ (see Table~\ref{Step and Ramp Model}). Using values from \citet{2005A&A...436.1049M} for the masses and radii of the proposed spectral types of the mass donor, the duration of the observed eclipse is consistent with the proposed mass donor (see Figure~\ref{J16479 Eclipse Half Angle}), and we find the orbital inclination to be 54--58$\degr$ or 47--51$\degr$ for an O8.5 I or O9.5 Iab spectral type, respectively (see Table~\ref{J16479 Primary Parameters}). While the previously proposed spectral types satisfy the eclipse half-angle, we note that the radii of the proposed spectral types are larger than the Roche-lobe radius \citep{1983ApJ...268..368E}. Therefore, the previously proposed O8.5 I or O9.5 Iab spectral types must be excluded (see Figure~\ref{J16479 Mass Radius Plot}). \citet{2013MNRAS.429.2763S} proposed a mass donor with mass $\sim$35\,$M_\sun$ and radius $\sim$20\,$R_\sun$, which could satisfy the constraints imposed by the Roche-lobe radius. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{4-29-2015_IGRJ16479-4514_corrected_eclipse_param.eps}} \figcaption[4-29-2015_IGRJ16479-4514_corrected_eclipse_param.eps]{ The black curves show the predicted eclipse half angle of IGR J16479-4514 as a function of inclination angle for stars with the indicated spectral types. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. We assume a neutron star mass of 1.4\,$M_\sun$ (top) and of 1.9\,$M_\sun$ (bottom) and typical masses and radii for the assumed companion spectral type (see Table~\ref{J16479 Primary Parameters}).
The blue vertical dashed lines indicate the lower limit of the inclination angle. Inclinations to the left of these correspond to stars that overfill the Roche-lobe. \label{J16479 Eclipse Half Angle} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april29-2015_IGRJ16479-4514_mass-constraints.eps}} \figcaption[april29-2015_IGRJ16479-4514_mass-constraints.eps]{ Log-log plot of stellar mass as a function of stellar radius for IGR J16479-4514. The light shaded region indicates the spectral types that satisfy only the observed eclipse duration. The dark shaded region indicates the spectral types for which both the eclipse duration and Roche-lobe radius constraints are satisfied. Stellar masses and radii are reported in Table~\ref{J16479 Primary Parameters}. The dashed, dotted and dash-dotted lines indicate stellar masses and radii derived from \citet{2006ima..book.....C}, \citet{2005A&A...436.1049M} and \citet{2000asqu.book.....A}, respectively. \label{J16479 Mass Radius Plot} } \end{figure} \begin{deluxetable}{ccccccccc} \tablecolumns{9} \tablewidth{0pc} \tablecaption{Physical Parameters for Previously Proposed Mass Donors for IGR J16479-4514} \tablehead{ \colhead{Spectral Type} & \colhead{$M/M_\sun$} & \colhead{$q$$^a$} & \colhead{$R/R_\sun$} & \colhead{$R_{\rm L}$$/R_\sun$$^b$} & \colhead{$i$$\degr$}} \startdata \textsl{O8.5 I} & \textsl{31.54} & 0.044 & \textsl{21.41$^c$} & 19.13$^c$ & 55--58 \\ \textsl{O8.5 I} & \textsl{33.90} & 0.041 & \textsl{22.20$^c$} & 19.73$^c$ & 54--56 \\ \textsl{O9.5 I} & \textsl{27.83} & 0.050 & \textsl{22.11$^c$} & 18.14$^c$ & 48--51 \\ \textsl{O9.5 I} & \textsl{30.41} & 0.046 & \textsl{23.11$^c$} & 18.84$^c$ & 47--49 \\ \enddata \tablecomments{The values in italics are obtained from \citet{2005A&A...436.1049M}. \\* $^a$ The mass ratio, $q$, is defined as $M_{\rm x}/M_{\rm c}$, where $M_{\rm x}$ is the mass of the compact object and $M_{\rm c}$ is that of the donor star. \\* $^b$ The Roche-lobe radius, $R_{\rm L}$, as defined in \citet{1983ApJ...268..368E}, assuming $M_{\rm NS}$ is 1.4\,$M_\sun$. \\* $^c$ These spectral types significantly overfill the Roche-lobe and are therefore excluded from our analysis. } \label{J16479 Primary Parameters} \end{deluxetable} \subsection{IGR J18027-2016 (=SAX J1802.7-2017)} \label{IGR J18027-2016 Results} IGR J18027-2016 (=SAX J1802.7-2017) is an SGXB, which has been proposed to host either a B1 Ib \citep{2010A&A...510A..61T} or a B0-B1 I \citep{2011A&A...532A.124M} mass donor. The source was first detected with \textsl{BeppoSAX} in 2001 September \citep{2003ApJ...596L..63A}, with an average flux in the 0.1--10\,keV energy band of 3.6$\times$10$^{-11}$\,erg cm$^{-2}$ s$^{-1}$. Pulse-timing analysis suggested a $\sim$4.6\,day orbital period \citep{2003ApJ...596L..63A}, which was later refined to 4.5696$\pm$0.0009\,days using \textsl{INTEGRAL} \citep{2005A&A...439..255H}. Combining the mid-eclipse times derived in \citet{2003ApJ...596L..63A} and \citet{2005A&A...439..255H} with later observations with \textsl{Swift} BAT and \textsl{INTEGRAL} in an $O-C$ analysis, the orbital period was further refined to 4.5693$\pm$0.0004\,d \citep{2009RAA.....9.1303J}. Fitting a quadratic model to the derived mid-eclipse times, \citet{2009RAA.....9.1303J} found a period derivative of (3.9$\pm$1.2)$\times$10$^{-7}$\,d\,d$^{-1}$.
Using \textsl{INTEGRAL}, \citet{2015A&A...577A.130F} recently refined the orbital period and period derivative to 4.5697$\pm$0.0001\,d and (2.1$\pm$3.6)$\times$10$^{-7}$\,d\,d$^{-1}$, respectively. \citet{2003ApJ...596L..63A} found the presence of a $\sim$139.6\,s modulation using \textsl{BeppoSAX}, which they interpreted as the neutron star rotation period. \citet{2005A&A...439..255H} found the pulse period using \textsl{XMM-Newton} to be 139.61$\pm$0.04\,s, which is consistent with the earlier result from \citet{2003ApJ...596L..63A}. No evidence for superorbital modulation was found \citep{2013ApJ...778...45C}. \begin{figure}[ht] \centerline{\includegraphics[angle=-90,width=3in]{4-22-2015_BAT_proper-folded-lightcurves_IGRJ18027-2016.ps}} \figcaption[4-22-2015_BAT_proper-folded-lightcurves_IGRJ18027-2016.ps]{ \textsl{Swift}-BAT light curve of IGR J18027-2016 in the 15--50\,keV band folded on the orbital period (top) using 20 bins. T0 is defined at BMJD\,55083.82$\pm$0.01, corresponding to mid-eclipse. A detailed folded light curve with 80 bins (bottom) is fit with both a symmetric ``step and ramp'' function (green) and an asymmetric ``step and ramp'' function (red), which model the eclipse. The symmetric ``step and ramp'' function was shifted accordingly. \label{J18027 Folded Half Angle} } \end{figure} \citet{2003ApJ...596L..63A} derived the epoch of the neutron star superior conjunction to occur during the first 50\,ks of an observation in which IGR J18027-2016 was undetected. Since the epoch of superior conjunction corresponds to the time at which mid-eclipse is expected to occur, \citet{2003ApJ...596L..63A} suggested the presence of a possible eclipse with a half-duration of 0.47$\pm$0.10\,days. \citet{2005A&A...439..255H} and \citet{2009RAA.....9.1303J} later confirmed this result, finding eclipse half-angles of 0.61$\pm$0.08\,radians (34.9$\pm$4.6$\degr$) and $\sim$0.604\,radians ($\sim$34.6$\degr$), respectively. \citet{2015A&A...577A.130F} recently refined the eclipse half-angle to 31$\pm$2$\degr$ using \textsl{INTEGRAL}. Pulse-timing and radial velocity curves have helped place constraints on the physical properties of both the donor star and the compact object. Using a pulse-timing analysis with \textsl{BeppoSAX} and \textsl{XMM-Newton}, the projected semi-major axis of the neutron star was found to be 68$\pm$1\,lt-s \citep{2005A&A...439..255H}. Assuming a 1.4\,$M_\sun$ neutron star, \citet{2005A&A...439..255H} constrained the mass and radius of the mass donor star to 18.8--29.3\,$M_\sun$ and 14.7--23.4\,$R_\sun$, respectively. Using \textsl{Swift} BAT and \textsl{INTEGRAL}, \citet{2009RAA.....9.1303J} further constrained the radius of the donor star to 16.4--24.7\,$R_\sun$. The mass donor was observed between 2010 May 26 and 2010 September 8 (MJD\,55342--55447) with the \textsl{European Southern Observatory Very Large Telescope} \citep[\textsl{ESO VLT},][]{2011A&A...532A.124M}. The semi-amplitude of the radial velocity of the mass donor, $K_{\rm O}$, was found to be 23.8$\pm$3.1\,km s$^{-1}$ \citep{2011A&A...532A.124M}. Since the projected semi-major axis of the neutron star can be expressed in terms of a radial velocity semi-amplitude, $K_{\rm X}$, the ratio between the masses of the compact object and the mass donor can be calculated according to Equation 2 in \citet{1999MNRAS.307..357A}. \citet{2011A&A...532A.124M} found the mass ratio, $q$, to be 0.07$\pm$0.01.
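For the circular orbit considered here, this amounts to the standard relations
\[
K_{\rm X} = \frac{2\pi\,a_{\rm x}\sin i}{P_{\rm orb}}, \qquad q \equiv \frac{M_{\rm x}}{M_{\rm c}} = \frac{K_{\rm O}}{K_{\rm X}},
\]
which we restate here only for reference (this is not a verbatim reproduction of Equation 2 of \citealt{1999MNRAS.307..357A}). With $a_{\rm x}\sin i$ = 68\,lt-s and $P_{\rm orb}$ $\simeq$ 4.57\,d, these give $K_{\rm X}$ $\simeq$ 324\,km s$^{-1}$ and hence $q$ $\simeq$ 0.073, consistent with the quoted value.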
Using the mass ratio and the eclipse half-angle measured in \citet{2005A&A...439..255H}, the mass and radius of the donor star were refined to values ranging from 18.6$\pm$0.8\,$M_\sun$ and 16.8$\pm$1.5\,$R_\sun$ for an edge-on orbit to 21.8$\pm$2.4\,$M_\sun$ and 19.8$\pm$0.7\,$R_\sun$ where the donor star fills the Roche-lobe \citep{2011A&A...532A.124M}. The mass of the compact object was constrained to be between 1.36$\pm$0.21 and 1.58$\pm$0.27\,$M_\sun$ in the two limits. The large error on the estimate of the eclipse half-angle from \citet{2005A&A...439..255H} contributes significantly to the uncertainties on these measurements. Using a similar analysis with \textsl{INTEGRAL}, \citet{2015A&A...577A.130F} constrained the mass of the neutron star to 1.6$\pm$0.3\,$M_\sun$. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{4-13-2015_IGRJ18027-2016_weighted_eclipse_param.eps}} \figcaption[4-13-2015_IGRJ18027-2016_weighted_eclipse_param.eps]{ The black curves show the predicted eclipse half angle of IGR J18027-2016 as a function of inclination angle for stars with the indicated spectral types. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. \label{J18027 Eclipse Half Angle} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{dec4-2014_IGRJ18027-2016_mass-constraints.eps}} \figcaption[dec4-2014_IGRJ18027-2016_mass-constraints.eps]{ Log-log plot of stellar mass as a function of stellar radius for IGR J18027-2016. The shaded region indicates the allowed spectral types that satisfy the observed eclipse duration and pulse-timing constraints. Stellar masses and radii are reported in Table~\ref{J18027 Primary Parameters}. Stars and crosses indicate the spectral types according to \citet{2006ima..book.....C} and \citet{2007A&A...463.1093L}, respectively. \label{J18027 Mass Radius Plot} } \end{figure} \begin{deluxetable}{cc} \tablecolumns{2} \tablewidth{0pc} \tablecaption{System Parameters for IGR J18027-2016} \tablehead{ \colhead{Parameter} & \colhead{Value}} \startdata $P_{\rm orb}$$^a$ & 4.56993$\pm$0.00003\,d \\ $P_{\rm pulse}$$^b$ & 139.61$\pm$0.04\,s \\ $a_{\rm x} \sin i$$^b$ & 68$\pm$1\,lt-s \\ $K_{\rm O}$$^c$ & 23.8$\pm$3.1\,km s$^{-1}$ \\ $T_{\rm mid}$ & BMJD\,55083.82$\pm$0.01 \\ $f(M)$$^b$ & 16$\pm$1\,$M_\sun$ \\ $q$$^c$ & 0.07$\pm$0.01 \\ $\Theta_{\rm e}$ & 34$\pm$2$\degr$ \\ \tableline $M_{\rm donor}$ & 18.6$\pm$0.9--19.4$\pm$0.9\,$M_\sun$ \\ $R_{\rm donor}$ & 17.4$\pm$0.9--19.5$^{+0.8}_{-0.7}$\,$R_\sun$ \\ $a$ & 31.4--33.2\,$R_\sun$ \\ $M_{\rm NS}$ & 1.37$\pm$0.19--1.43$\pm$0.20\,$M_\sun$ \\ $i$ & 73.3--90$\degr$ \\ \enddata \tablecomments{ $^a$ The orbital period is refined using an $O-C$ analysis. \\ $^b$ The pulse period, projected semi-major axis and mass function are given in \citet{2005A&A...439..255H}. \\* $^c$ The semi-amplitude of the radial velocity of the mass donor and the mass ratio are given in \citet{2011A&A...532A.124M}.} \label{J18027 Primary Parameters} \end{deluxetable} We derive a 4.57022$\pm$0.00013\,d orbital period for IGR J18027-2016 using a DFT, which is consistent with the results from \citet{2005A&A...439..255H}. Using an $O-C$ analysis (see Figures~\ref{O-C Residuals}--\ref{O-C Historic}), we refine this to 4.56993$\pm$0.00003\,d.
Using the quadratic orbital change function (see Equation~\ref{Orbital Change Function}), we find the orbital period derivative to be (0.2$\pm$1.1)$\times$10$^{-7}$\,d d$^{-1}$, which is consistent with zero. Folding the light curve on our refined orbital period (see Figure~\ref{J18027 Folded Half Angle}), we measure the duration of the observed eclipse to be 74$\pm$4\,ks (0.85$\pm$0.05\,d). We find the eclipse half-angle to be 34$\pm$2$\degr$ (see Table~\ref{Step and Ramp Model}). Since the X-ray luminosity of IGR J18027-2016 is modest, we again attach the constraint that the donor star underfills the Roche-lobe (see Figures~\ref{J18027 Eclipse Half Angle} and~\ref{J18027 Mass Radius Plot}). This constrains the mass and radius of the donor star as well as the mass of the neutron star (see Figure~\ref{J18027 Mass Radius Plot}). Using the eclipse half-angle and the expression for the mass donor when the mass ratio is known \citep[Eq. 4;][]{2011A&A...532A.124M}, we calculate the mass and radius of the donor star as well as the mass of the compact object. We find the mass and radius of the donor star to be 18.6$\pm$0.9\,$M_\sun$ and 17.4$\pm$0.9\,$R_\sun$ and 19.4$\pm$0.9\,$M_\sun$ and 19.5$^{+0.8}_{-0.7}$\,$R_\sun$ in the two limits (see Table~\ref{J18027 Primary Parameters}). In the allowed limits, we constrain the mass of the neutron star to between 1.37$\pm$0.19\,$M_\sun$ and 1.43$\pm$0.20\,$M_\sun$ (see Table~\ref{J18027 Primary Parameters}). While our results are in agreement with the calculations in \citet{2011A&A...532A.124M}, we note that the error estimate is only marginally improved in our analysis. The driving factor in the error estimate of the neutron star mass is the uncertainty of $\sim$13$\%$ on the semi-amplitude of the radial velocity of the donor star as reported in \citet{2011A&A...532A.124M}. \subsection{XTE J1855-026} \label{XTE J1855-026 Results} XTE J1855-026 is an SGXB discovered during \textsl{RXTE} scans along the Galactic plane \citep{1999ApJ...517..956C}. Through 11 scanning observations of the Scutum arm, the 2--10\,keV count rate of XTE J1855-026 varied from an upper limit of 10\,counts s$^{-1}$ to 136$\pm$15\,counts s$^{-1}$ \citep{1999ApJ...517..956C}. Using the \textsl{RXTE} Proportional Counter Array (PCA), \citet{1999ApJ...517..956C} found a 361.1$\pm$0.4\,s modulation, which was interpreted as the neutron star rotation period. \citet{2002ApJ...577..923C} later refined the pulse period to 360.741$\pm$0.002\,s with the PCA. An analysis using the \textsl{RXTE} All-Sky Monitor (ASM) revealed the presence of a 6.0724$\pm$0.0009\,d modulation, which is interpreted as the orbital period \citep{2002ApJ...577..923C}. Using an $O-C$ analysis with \textsl{INTEGRAL}, \citet{2015A&A...577A.130F} recently refined this to 6.07415$\pm$0.00008\,d. No significant orbital period derivative was found using the quadratic orbital change function \citep{2015A&A...577A.130F}. This places XTE J1855-026 in the wind-fed supergiant region of the Corbet diagram \citep{1986MNRAS.220.1047C}. \citet{2002ApJ...577..923C} constrained the eccentricity of XTE J1855-026 to $e\leq$0.04 using a pulse-timing analysis. The projected semi-major axis of the neutron star was found to be 82.8$\pm$0.8\,lt-s for a circular orbit and 80.5$\pm$1.4\,lt-s for the eccentric solution \citep{2002ApJ...577..923C}. In this section, we consider the scenario where the orbit is circular; constraints on the mass and radius of the donor star in the more complicated scenario of a modestly eccentric orbit are calculated in Section~\ref{Constraints on eccentricity}.
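For reference, the pulse-timing orbit determines the donor mass function in the usual way,
\[
f(M) = \frac{4\pi^{2}\,(a_{\rm x}\sin i)^{3}}{G\,P_{\rm orb}^{2}} = \frac{(M_{\rm c}\sin i)^{3}}{(M_{\rm x}+M_{\rm c})^{2}};
\]
with $a_{\rm x}\sin i$ = 82.8\,lt-s and $P_{\rm orb}$ $\simeq$ 6.074\,d this evaluates to $f(M)$ $\simeq$ 16.5\,$M_\sun$, the value quoted in Table~\ref{J1855 Primary Parameters}.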
Using optical and near-infrared spectroscopy obtained with the William Herschel Telescope (WHT), \citet{2008ATel.1876....1N} found the mass donor to be a supergiant with spectral type B0 Iaep. The light curves folded on the orbital period reveal a sharp dip, which was interpreted as an eclipse with a total phase duration of 0.198--0.262 \citep{2002ApJ...577..923C}. This phase duration corresponds to an eclipse half-angle of 36$\degr$$\leq$$\Theta_{\rm e}$$\leq$47$\degr$. The eclipse duration was found to be 93$\pm$3\,ks (1.08$\pm$0.03\,d) in the archival \textsl{INTEGRAL} data set \citep{2015A&A...577A.130F}, corresponding to an eclipse half-angle of 32$\pm$1$\degr$. This measurement is somewhat lower than the result from \citet{2002ApJ...577..923C}. Optical radial velocity curves recently obtained with the \textsl{Isaac Newton Telescope (INT)}, the \textsl{Liverpool Telescope (LT)} and the \textsl{WHT} help place additional constraints on the components of the system \citep{2015arXiv150301087G}. The semi-amplitude of the radial velocity of the donor star was found to be 26.8$\pm$8.2\,km s$^{-1}$. Expressing the projected semi-major axis of the neutron star as a radial velocity semi-amplitude, \citet{2015arXiv150301087G} found the ratio between the masses of the components of the system to be 0.09$\pm$0.03 and noted that a large eccentricity of $\sim$0.4--0.5 was found in the optical orbital solutions. This strongly contrasts with the X-ray orbital solution reported in \citet{2002ApJ...577..923C}, suggesting that caution must be taken in interpreting the optical orbital solutions. \citet{2015arXiv150301087G} refined the spectral type of the mass donor to a BN0.2 Ia supergiant and found the mass and radius of the donor star to be 13$^{+19}_{-7}$\,$M_\sun$ and 27$^{+21}_{-10}$\,$R_\sun$, respectively. \begin{figure}[ht] \centerline{\includegraphics[angle=-90,width=3in]{april22-2015_BAT_proper-folded-lightcurves_XTEJ1855-026.ps}} \figcaption[april22-2015_BAT_proper-folded-lightcurves_XTEJ1855-026.ps]{ \textsl{Swift}-BAT light curve of XTE J1855-026 in the 15--50\,keV band folded on the orbital period (top) using 20 bins. T0 is defined at MJD\,55079.0685, corresponding to mid-eclipse. A detailed folded light curve with 80 bins (bottom) is fit with both a symmetric ``step and ramp'' function (blue) and an asymmetric ``step and ramp'' function (red), which model the eclipse. \label{J1855 Folded Half Angle} } \end{figure} We derive a 6.07411$\pm$0.00014\,d orbital period for XTE J1855-026 using the fundamental peak in the power spectrum. Using an $O-C$ analysis (see Figures~\ref{O-C Residuals}--\ref{O-C Historic}), we refine this to 6.07413$\pm$0.00004\,d and fold the light curve on the refined orbital period to calculate the eclipse half-angle (see Figures~\ref{J1855 Folded Half Angle} and~\ref{J1855 Eclipse Half Angle}). We calculate the duration of the observed eclipse to be 98$\pm$2\,ks (1.13$\pm$0.02\,d), yielding an eclipse half-angle of 33.6$\pm$0.7$\degr$ (see Table~\ref{J1855 Primary Parameters}).
Our derived eclipse duration is thus somewhat less than the result from \citet{2002ApJ...577..923C} and is consistent with the measurement in \citet{2015A&A...577A.130F}. Using the quadratic orbital change function (see Equation~\ref{Orbital Change Function}), we find the orbital period derivative to be (0.0$\pm$0.5)$\times$10$^{-7}$\,d d$^{-1}$, which is consistent with zero. Since the upper limits of the stellar mass and radius are constrained by the orbital inclination where the Roche lobe is just filled, we again attach constraints on the mass and radius of the donor star as well as the mass of the compact object. We find the inclination where the donor star fills the Roche lobe to be 76.4$\degr$. The mass and radius of the donor star are found to be 19.6$\pm$1.1\,$M_\sun$ and 21.5$\pm$0.5\,$R_\sun$ and 20.2$\pm$1.2\,$M_\sun$ and 23.0$\pm$0.5\,$R_\sun$ in the two limits (see Table~\ref{J1855 Primary Parameters}). We find the mass of the neutron star to be between 1.77$\pm$0.55\,$M_\sun$ and 1.82$\pm$0.57\,$M_\sun$ (see Table~\ref{J1855 Primary Parameters}), where the driving factor in the large error estimate is the uncertainty of $\sim$31$\%$ on the radial velocity semi-amplitude of the donor star as reported in \citet{2015arXiv150301087G}. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april1-2015_XTEJ1855-026_weighted_eclipse_param.eps}} \figcaption[april1-2015_XTEJ1855-026_weighted_eclipse_param.eps]{ The black curves show the predicted eclipse half angle of XTE J1855-026 as a function of inclination angle for stars with the indicated spectral types. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. \label{J1855 Eclipse Half Angle} } \end{figure} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april23-2015_XTEJ1855-026_mass-constraints.eps}} \figcaption[april23-2015_XTEJ1855-026_mass-constraints.eps]{ Log-log plot of stellar mass as a function of stellar radius for XTE J1855-026. The shaded region indicates the allowed spectral types that satisfy the observed eclipse duration, pulse-timing and optical radial velocity constraints. Stellar masses and radii are reported in Table~\ref{J1855 Primary Parameters}. Stars and crosses indicate the spectral types according to \citet{2006ima..book.....C} and \citet{2007A&A...463.1093L}, respectively.
\label{J1855 Mass Radius Plot} } \end{figure} \begin{deluxetable}{cc} \tablecolumns{2} \tablewidth{0pc} \tablecaption{System Parameters for XTE J1855-026} \tablehead{ \colhead{Parameter} & \colhead{Value}} \startdata $P_{\rm orb}$$^a$ & 6.07413$\pm$0.00004\,d \\ $P_{\rm pulse}$$^b$ & 360.741$\pm$0.002\,s \\ $\dot{P}_{\rm pulse}$$^b$ & (1.5$\pm$3.6)$\times$10$^{-8}$\,s s$^{-1}$ \\ $a_{\rm x} \sin i$$^b$ & 82.8$\pm$0.8\,lt-s \\ $K_{\rm O}$$^c$ & 26.8$\pm$8.2\,km s$^{-1}$ \\ $T_{\rm mid}$ & MJD\,55079.07$\pm$0.01 \\ $f(M)$$^b$ & 16.5$\pm$0.5\,$M_\sun$ \\ $q$$^c$ & 0.09$\pm$0.03 \\ $\Theta_{\rm e}$ & 33.6$\pm$0.7$\degr$ \\ \tableline $M_{\rm donor}$ & 19.6$\pm$1.1--20.2$\pm$1.2\,$M_\sun$ \\ $R_{\rm donor}$ & 21.5$\pm$0.5--23.0$\pm$0.5\,$R_\sun$ \\ $a$ & 38.9--39.2\,$R_\sun$ \\ $M_{\rm NS}$ & 1.77$\pm$0.55--1.82$\pm$0.57\,$M_\sun$ \\ $i$ & 76.4--90$\degr$ \\ \enddata \tablecomments{ $^a$ The orbital period is refined using an $O-C$ analysis. \\ $^b$ The pulse period, derivative of the pulse period, projected semi-major axis and mass function are given in \citet{2002ApJ...577..923C}. \\* $^c$ The semi-amplitude of the radial velocity of the mass donor and the mass ratio are given in \citet{2015arXiv150301087G}.} \label{J1855 Primary Parameters} \end{deluxetable} \section{Discussion} We discuss our findings for IGR J16393-4643, IGR J16418-4532, IGR J16479-4514, IGR J18027-2016 and XTE J1855-026. The radii for the previously proposed spectral types in IGR J16418-4532 and IGR J16479-4514 would significantly overfill the Roche-lobe, which suggests an earlier spectral type. Below, we discuss in detail the nature of the mass donors in each system and the mechanisms that could result in the residual emission observed in IGR J16393-4643, and we comment on the nature of the eclipse profiles. \subsection{What is the nature of the mass donors in each system?} \subsubsection{IGR J16393-4643} Our results show that stars with spectral types B0 V and B0-5 III satisfy the constraints imposed by the eclipse half-angle as well as the Roche-lobe (see Section~\ref{IGR J16393-4643 Results}). While some supergiant stars such as a B0 I satisfy the eclipse half-angle, the required radius would be larger than the Roche-lobe. We calculated the Roche-lobe radius for stars of spectral type B0 I to be 18.5\,$R_\sun$. This is clearly smaller than the radii reported in \citet{2006ima..book.....C} and \citet{2000asqu.book.....A}, which are 25\,$R_\sun$ and 30\,$R_\sun$, respectively (see Table~\ref{J16393 Primary Parameters}). Since the radius of this proposed supergiant is too large to satisfy the constraint imposed by the Roche-lobe, we suggest that if the donor star is a supergiant then it must be of a slightly earlier spectral type. IGR J16393-4643 was observed in the near-infrared on 2004 July 9 (MJD\,53195.3) using the 3.5\,m New Technology Telescope (NTT) at La Silla Observatory \citep{2008A&A...484..783C}. While \citet{2008A&A...484..783C} conclude that the spectral type of the donor star is B IV-V based on the spectral features and spectral energy distribution (SED), we note that \citet{2008ATel.1450....1N} proposed the donor star to be a K or M supergiant using the same SED. Furthermore, observations with the \textsl{Chandra} observatory show that the previously proposed counterpart is positionally inconsistent with the X-ray source \citep{2012ApJ...751..113B}.
Using the GLIMPSE survey, \citet{2012ApJ...751..113B} proposed that the counterpart must be a distant reddened B-type star. \subsubsection{IGR J16418-4532} \label{IGR J16418-4532 Candidates} Our results show that while the derived masses and radii for the previously proposed spectral types satisfy the eclipse half-angle (see Table~\ref{J16418 Primary Parameters}), the radii would be larger than the Roche-lobe size \citep{1983ApJ...268..368E}. An O8 I star has a mass of 28\,$M_\sun$ according to \citet{2006ima..book.....C} and \citet{2000asqu.book.....A}. We find the maximum radius for a 28\,$M_\sun$ star to satisfy the constraint imposed by the Roche-lobe to be 18.2\,$R_\sun$. This is clearly smaller than the radii reported in \citet{2006ima..book.....C} and \citet{2000asqu.book.....A}, which are 22\,$R_\sun$ and 20\,$R_\sun$, respectively (see Table~\ref{J16418 Primary Parameters}). Since the radius for each proposed spectral type is too large to satisfy the constraint imposed by the Roche-lobe, it is our contention that the donor must be of an earlier spectral type. We find that spectral classes O7.5 I and earlier satisfy the constraint imposed by the Roche-lobe radius (see Table~\ref{IGR J16418-4532 Candidate Parameters}). The ratio between the radius of the donor star and that of the Roche-lobe, $\beta$, is found to exceed 0.9, which is consistent with other HMXBs that host supergiants \citep{1984ARA&A..22..537J}. Transitional Roche-lobe accretion has been proposed in IGR J16418-4532, where a fraction of the mass transfer is due to a focused wind \citep{2012MNRAS.420..554S}. A focused wind or accretion stream requires a mass donor that nearly fills the Roche-lobe. If this is the case, we would expect variations in the mass accretion rate that would be attributable to a focused wind or accretion stream \citep{1997ASPC..121..361B}. This mechanism would lead to large variability in the X-ray intensity and could be observed in folded light curves as well as hardness-intensity diagrams. Large intensity swings of order 100 were observed in both \textsl{Swift} and \textsl{XMM-Newton} observations of IGR J16418-4532 \citep{2012MNRAS.420..554S,2013ATel.5131....1D}. \begin{deluxetable}{cccccccccccc} \tablecolumns{12} \tablewidth{0pc} \tablecaption{Possible Parameters of Candidate Donor Stars for IGR J16418-4532} \tablehead{ \colhead{Spectral Type} & \colhead{$M/M_\sun$} & \colhead{$q$} & \colhead{$R/R_\sun$} & \colhead{$R_{\rm L}$$/R_\sun$} & \colhead{$R_{\rm L}$$/R$} & \colhead{$M_{\rm V}$} & \colhead{$(J-K)_{\rm 0}$$^a$} & \colhead{$E(J-K)$$^a$} & \colhead{$i$$\degr$$^b$} & \colhead{$d_{\rm sun}$$^c$} & \colhead{$d_{\rm sun}$$^d$} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(kpc)} & \colhead{(kpc)}} \startdata \textsl{O6 Ia} & 44.10 & 0.032 & 19.95 & 23.85 & 0.84 & \textsl{-6.38} & -0.21 & 2.60 & 77--81 & 11.9 & 11.9 \\ \textsl{O6.5 Ia} & 41.20 & 0.034 & 20.22 & 23.18 & 0.87 & \textsl{-6.38} & -0.21 & 2.60 & 73--76 & 12.0 & 12.1 \\ \textsl{O7 Ia} & 38.44 & 0.036 & 20.49 & 22.52 & 0.91 & \textsl{-6.38} & -0.21 & 2.60 & 70--73 & 12.0 & 12.3 \\ \textsl{O7.5 Ia} & 36.00 & 0.039 & 20.79 & 21.90 & 0.95 & \textsl{-6.38} & -0.21 & 2.60 & 67--69 & 12.0 & 12.4 \\ \textsl{O8 Ia} & 33.72 & 0.042 & 21.10 & 21.31 & 0.99 & \textsl{-6.38} & -0.21 & 2.60 & 64--66 & 11.9 & 12.6 \\ \enddata \tablecomments{Possible parameters of candidate donor stars.
\\* $^a$ The value for $(J-K)_{\rm 0}$ was calculated using $(J-H)_{\rm 0}$ and $(H-K)_{\rm 0}$ published in \citet{2006A&A...457..637M}. $E(J-K)$ is found by subtracting $(J-K)_{\rm 0}$ from the observed $J-K$. \\* $^b$ The range of inclination angles of the system consistent with the measured eclipse half-angle. \\* $^c$ The distance of the object from the Sun using the distance modulus. \\* $^d$ The distance of the object from the Sun using the radius to distance ratio derived from the spectral energy distribution. The radius to distance ratio is found to be 3.77$\times$10$^{-11}$ \citep{2008A&A...484..783C}. \\* } \label{IGR J16418-4532 Candidate Parameters} \end{deluxetable} \begin{figure} \centerline{\includegraphics[width=3in]{march30-2015_IGRJ16418-4532_candidate_eclipse_param.eps}} \figcaption[march30-2015_IGRJ16418-4532_candidate_eclipse_param.eps]{ The black curves show the predicted eclipse half angle as a function of inclination angle for stars with the candidate spectral types for IGR J16418-4532. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. We assume a neutron star mass of 1.4\,$M_\sun$ (top) and of 1.9\,$M_\sun$ (bottom) and typical masses and radii for the assumed companion spectral type (see Table~\ref{IGR J16479-4514 Candidate Parameters}). The blue vertical dashed lines indicate the lower limit of the inclination angle. Inclinations to the left of these correspond to stars that overfill the Roche-lobe. \label{J16418 Candidate Half Angle} } \end{figure} Near-infrared spectral features previously led to a spectral classification of either a late O-type supergiant \citep{2008A&A...484..783C} or a BN0.5 Ia \citep{2013A&A...560A.108C}. We note that the presumed radii of both spectral types overfill the Roche-lobe, which suggests that the proposed spectral classifications are incorrect. The spectral type of the mass donor places an additional constraint on the source distance. Under the assumption that the K-band magnitude ($m_{\rm K}$) and the V-band extinction ($A_{\rm V}$) are 11.48 and 14.5\,mag for an O8.5 I classification, \citet{2008A&A...484..801R} found the distance of the source to be $\sim$13\,kpc. Converting $A_{\rm V}$ to $A_{\rm K}$ using Table 3 in \citet{1985ApJ...288..618R}, we confirm the calculation of the distance of IGR J16418-4532 in \citet{2008A&A...484..801R} using the distance modulus. The distances of IGR J16418-4532 assuming the aforementioned spectral types are reported in Table~\ref{IGR J16418-4532 Candidate Parameters} using the values for $M_{\rm V}$ obtained from \citet{2006MNRAS.371..185W} and \citet{2006A&A...457..637M}. We also use the radius to distance ratio from the near-infrared spectral energy distribution (SED) measurements reported in \citet{2008A&A...484..801R} together with our eclipse measurements to estimate the distance of IGR J16418-4532. We find the distance to be between 11.9--12.6\,kpc, which is slightly smaller than that reported in \citet{2008A&A...484..801R} for an O8 I star. IGR J16418-4532 is a heavily absorbed SFXT where the observed $N_{\rm H}$, measured to be between 3.9$^{+1.2}_{-0.9}$$\times$10$^{22}$\,atoms cm$^{-2}$ and (7$\pm$2)$\times$10$^{22}$\,atoms cm$^{-2}$ \citep{2012MNRAS.419.2695R}, exceeds the values reported by the Leiden/Argentine/Bonn survey \citep{2005A&A...440..775K} and in the review by \citet{1990ARA&A..28..215D}, which are 1.59$\times$10$^{22}$ and 1.88$\times$10$^{22}$\,atoms cm$^{-2}$, respectively.
Since the measured $N_{\rm H}$ was found to be in excess of the Galactic H I value, determining the interstellar fraction of $N_{\rm H}$ is problematic, and the extinction adopted by \citet{2008A&A...484..801R} cannot be independently verified and may be subject to systematic error. We compare the observed value of the $J-K$ color of 2.39\footnote{\url{http://www.iasfbo.inaf.it/\~masetti/IGR/sources/16418.html}} with the intrinsic $(J-K)_{\rm 0}$ for the proposed mass donors (see Table~\ref{IGR J16418-4532 Candidate Parameters}). Calculating the difference between the observed $J-K$ and the intrinsic $(J-K)_{\rm 0}$, we find the reddening values $E(J-K)$ for each proposed spectral type of the mass donor (see Table~\ref{IGR J16418-4532 Candidate Parameters}). We calculate the reddening in the $B-V$ band, $E(B-V)$, using Equation 1 in \citet{2009MNRAS.400.2050G} and the extinction in the V-band ($A_{\rm V}$) \citep{2008A&A...484..801R}. Converting $E(B-V)$ to $E(J-K)$ using Table 3 in \citet{1985ApJ...288..618R}, we find $E(J-K)$ to be 2.45. While this is slightly lower than what would be expected for late O supergiant spectral types (see Table~\ref{IGR J16418-4532 Candidate Parameters}), the systematic uncertainties described above prevent an exact comparison. These results show that stars with spectral type O7.5 I and earlier satisfy the constraints imposed by both the duration of the eclipse and the Roche-lobe (see Figure~\ref{J16418 Candidate Half Angle}). Since the measured $N_{\rm H}$ is largely in excess of the interstellar values measured by \citet{2005A&A...440..775K} and \citet{1990ARA&A..28..215D}, determining what fraction of $N_{\rm H}$ is interstellar in origin is problematic. We find our derived distances to be consistent with those determined by \citet{2008A&A...484..801R}. Given our measurements, spectral types near O7.5 I are reasonable. Constraining the spectral type in this way solves the first part of a three-part problem; the remaining pieces are pulse-timing measurements in the X-ray band and radial velocity measurements in the near-infrared. \subsubsection{IGR J16479-4514} Our results show that while the expected masses and radii for the previously proposed spectral types for IGR J16479-4514 satisfy the eclipse half-angle (see Table~\ref{J16479 Primary Parameters}), the implied radii would be larger than the Roche-lobe radius \citep{1983ApJ...268..368E}. This is similar to the situation observed in IGR J16418-4532, which we describe in Section~\ref{IGR J16418-4532 Candidates}. We calculate the Roche-lobe radius for stars of spectral types O8.5 I and O9.5 Iab to be 19.1\,$R_\sun$ and 18.1\,$R_\sun$, respectively (see Table~\ref{J16479 Primary Parameters}). Since the radius for each proposed spectral type is too large to satisfy the constraint imposed by the Roche-lobe (see Table~\ref{J16479 Primary Parameters}), we suggest that the donor must be of a slightly earlier spectral type. We find that spectral classes O7 I and earlier satisfy the Roche-lobe constraint (see Table~\ref{IGR J16479-4514 Candidate Parameters}). The ratio between the radius of the donor star and that of the Roche-lobe, $\beta$, is found to exceed 0.9, which is consistent with other HMXBs that host supergiants \citep{1984ARA&A..22..537J}. Transitional Roche-lobe accretion has been proposed in IGR J16479-4514 \citep{2013MNRAS.429.2763S}, which requires a mass donor that nearly fills the Roche-lobe.
\citet{2013MNRAS.429.2763S} found phase-locked flares in their observation of IGR J16479-4514 and attribute these flares, which are spaced 0.2 apart in phase, to large-scale structures in the wind. \begin{deluxetable}{cccccccccccc} \tablecolumns{12} \tablewidth{0pc} \tablecaption{Possible Parameters of Candidate Donor Stars for IGR J16479-4514} \tablehead{ \colhead{Spectral Type} & \colhead{$M/M_\sun$} & \colhead{$q$} & \colhead{$R/R_\sun$} & \colhead{$R_{\rm L}$$/R_\sun$} & \colhead{$R_{\rm L}$$/R$} & \colhead{$M_{\rm V}$} & \colhead{$(J-K)_{\rm 0}$$^a$} & \colhead{$E(J-K)$$^a$} & \colhead{$i$$\degr$$^b$} & \colhead{$d_{\rm sun}$$^c$} & \colhead{$d_{\rm sun}$$^d$} \\ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{(kpc)} & \colhead{(kpc)}} \startdata \textsl{O6 Ia} & 44.10 & 0.032 & 19.95 & 22.03 & 0.91 & \textsl{-6.38} & -0.21 & 3.48 & 69--75 & 4.44 & 4.50 \\ \textsl{O6.5 Ia} & 41.20 & 0.034 & 20.22 & 21.42 & 0.94 & \textsl{-6.38} & -0.21 & 3.48 & 66--71 & 4.46 & 4.56 \\ \textsl{O7 Ia} & 38.44 & 0.036 & 20.49 & 20.80 & 0.99 & \textsl{-6.38} & -0.21 & 3.48 & 63--68 & 4.46 & 4.62 \\ \textsl{O7.5 Ia} & 36.00 & 0.039 & 20.79 & 20.23 & 1.03 & \textsl{-6.38} & -0.21 & 3.48 & 60--64 & 4.48 & 4.69 \\ \textsl{O8 Ia} & 33.72 & 0.042 & 21.10 & 19.68 & 1.07 & \textsl{-6.36} & -0.21 & 3.48 & 58--61 & 4.44 & 4.81 \\ \enddata \tablecomments{Possible parameters of candidate donor stars. \\* $^a$ The value for $(J-K)_{\rm 0}$ was calculated using $(J-H)_{\rm 0}$ and $(H-K)_{\rm 0}$ published in \citet{2006A&A...457..637M}. $E(J-K)$ is found by subtracting $(J-K)_{\rm 0}$ from the observed $J-K$. \\* $^b$ The range of inclination angles of the system consistent with the measured eclipse half-angle. \\* $^c$ The distance of the object from the Sun using the distance modulus. \\* $^d$ The distance of the object from the Sun using the radius to distance ratio derived from the spectral energy distribution. The radius to distance ratio is found to be 1$\times$10$^{-10}$ \citep{2008A&A...484..783C}. \\* } \label{IGR J16479-4514 Candidate Parameters} \end{deluxetable} \begin{figure}[ht] \centerline{\includegraphics[width=3in]{april29-2015_IGR_J16479-4514_candidate_eclipse_param.eps}} \figcaption[april29-2015_IGR_J16479-4514_candidate_eclipse_param.eps]{ The black curves show the predicted eclipse half angle as a function of inclination angle for stars with the candidate spectral types for IGR J16479-4514. The red and black dashed lines indicate the eclipse half angle and estimated error as measured by \textsl{Swift} BAT. We assume a neutron star mass of 1.4\,$M_\sun$ (top) and of 1.9\,$M_\sun$ (bottom) and typical masses and radii for the assumed companion spectral type (see Table~\ref{IGR J16479-4514 Candidate Parameters}). The blue vertical dashed lines indicate the lower limit of the inclination angle. Inclinations to the left of these correspond to stars that overfill the Roche-lobe. \label{J16479 Candidate Half Angle} } \end{figure} Using near-infrared spectral features, \citet{2008A&A...484..783C} previously determined the spectral classification of the donor star to be a late O-type supergiant. The donor spectral type places an additional constraint on the distance of IGR J16479-4514.
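As elsewhere in this section, the distance estimates below follow from the extinction-corrected distance modulus,
\[
5\log_{10}\!\left(\frac{d}{10\,{\rm pc}}\right) = m_{\rm K} - A_{\rm K} - M_{\rm K}, \qquad M_{\rm K} = M_{\rm V} - (V-K)_{\rm 0},
\]
which we write out here for reference. As an illustrative check, adopting $(V-K)_{\rm 0} \approx -0.9$ for late O supergiants (an assumed representative value; the tabulated distances use the intrinsic colors of \citealt{2006A&A...457..637M}) together with $M_{\rm V}$ = $-$6.38 and the values of $m_{\rm K}$ = 9.79 and $A_{\rm K}$ = 2.07 derived below gives $d \approx 4.4$\,kpc, in line with Table~\ref{IGR J16479-4514 Candidate Parameters}.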
The K-band magnitude ($m_{\rm K}$) and the extinction in the V-band ($A_{\rm V}$) are found to be 9.79 and 18.5 \citep{2008A&A...484..801R}. Additionally, the $R_*/D_*$ ratio was found to be 1$\times$10$^{-10}$ \citep{2008A&A...484..783C}. Using this information, the minimum distance of IGR J16479-4514 was found to be $\sim$4.9\,kpc \citep{2008A&A...484..783C}. Converting $A_{\rm V}$ to $A_{\rm K}$ using Table 3 in \citet{1985ApJ...288..618R}, we find $A_{\rm K}$ to be 2.07 and confirm the calculation of the distance of IGR J16479-4514 using the distance modulus. The distances of IGR J16479-4514 assuming the aforementioned spectral types are reported in Table~\ref{IGR J16479-4514 Candidate Parameters} using the values for $M_{\rm V}$ obtained from \citet{2006MNRAS.371..185W} and \citet{2006A&A...457..637M}. We find the distances for stars of spectral type O7 I and earlier to be between 4.4--4.6\,kpc, which are reasonably consistent with the measurements in \citet{2008A&A...484..801R}. IGR J16479-4514 is a heavily absorbed SFXT where the $N_{\rm H}$ was measured to be (9.5$\pm$0.3)$\times$10$^{22}$\,atoms cm$^{-2}$ \citep{2013MNRAS.429.2763S}. This is an order of magnitude larger than the Galactic H I values reported by the Leiden/Argentine/Bonn survey \citep{2005A&A...440..775K} and in the review by \citet{1990ARA&A..28..215D}, which are 1.87$\times$10$^{22}$ and 2.14$\times$10$^{22}$\,atoms cm$^{-2}$, respectively. Since the measured $N_{\rm H}$ was found to be in excess of the Galactic H I, determining the interstellar fraction of $N_{\rm H}$ is difficult. Therefore, the extinction adopted by \citet{2008A&A...484..801R} cannot be independently verified and may be subject to systematic error. We calculate the reddening values $E(J-K)$ for each proposed spectral type of the mass donor (see Table~\ref{IGR J16479-4514 Candidate Parameters}) using the observed value of the $J-K$ color of 3.27\footnote{\url{http://www.iasfbo.inaf.it/\~masetti/IGR/sources/16479.html}} and the intrinsic $(J-K)_{\rm 0}$ for the proposed mass donors (see Table~\ref{IGR J16479-4514 Candidate Parameters}). To check for consistency with late O supergiant stars, we compare this with the reddening in the $B-V$ band derived using Equation 1 in \citet{2009MNRAS.400.2050G} and the extinction in the V-band ($A_{\rm V}$) \citep{2008A&A...484..801R}. Converting $E(B-V)$ to $E(J-K)$ using Table 3 in \citet{1985ApJ...288..618R}, we find $E(J-K)$ to be 3.12. While this is slightly lower than what would be expected for late O supergiant spectral types, the systematic uncertainties described above prevent an exact comparison. Stars with spectral type O7 I and earlier satisfy the eclipse duration and Roche-lobe constraints (see Figure~\ref{J16479 Candidate Half Angle}). Determining the interstellar fraction of $N_{\rm H}$ is difficult since the measured $N_{\rm H}$ is greatly in excess of interstellar values. We find the distances of the proposed counterparts to be consistent with those determined by \citet{2008A&A...484..801R}. Since no pulsation period has been identified, the next step to constrain the donor star would be radial velocity measurements in the near-infrared.
\subsubsection{IGR J18027-2016} Our results show that the mass and radius of the donor star in IGR J18027-2016 can be constrained to be between 18.6$\pm$0.9\,$M_\sun$ and 17.4$\pm$0.9\,$R_\sun$ and 19.4$\pm$0.9\,$M_\sun$ and 19.5$^{+0.8}_{-0.7}$\,$R_\sun$ for edge-on orbits and for inclinations where the Roche-lobe is completely filled, respectively. We find the inclination where the Roche-lobe is filled to be 73.3$\degr$, which is consistent with the earlier results from \citet{2011A&A...532A.124M}. Since the semi-amplitudes of the radial velocities of both the donor star and the compact object are known, we find that the mass of the neutron star can also be constrained. We calculate the mass of the neutron star to be between 1.37$\pm$0.19\,$M_\sun$ and 1.43$\pm$0.20\,$M_\sun$ for our lower and upper limits (see Section~\ref{IGR J18027-2016 Results}). Since the radius of the donor star is constrained, we can also estimate the distance and the optical and near-infrared magnitudes of IGR J18027-2016. Using SED measurements, \citet{2008A&A...484..783C} calculated the radius to distance ratio to be (4$\pm$1)$\times$10$^{-11}$, where the uncertainties are at the 90$\%$ confidence level. At the 90$\%$ confidence level, we find the eclipse half-angle to be 34$_{-3}^{+4}$$\degr$ and the radius of the donor star to be constrained to between 17$^{+2}_{-1}$\,$R_\sun$ and 19$\pm$1\,$R_\sun$ in the two limits described in Section~\ref{IGR J18027-2016 Results}. Combining our results for the radius of the donor star with the radius to distance ratio \citep[Table 6;][]{2008A&A...484..783C}, we find that the distance of IGR J18027-2016 can be constrained to 11$\pm$2\,kpc and 12$\pm$2\,kpc in the two limits. Using the distance modulus \citep[e.g.][]{2008A&A...486..911N}, the absolute magnitude of IGR J18027-2016 can be calculated. The apparent magnitude in the R-band, $R$, and the extinction in the V-band, $A_{\rm V}$, were found to be 16.9 and 8.5 \citep{2008A&A...482..113M}. We find the absolute magnitude in the R-band, $M_{\rm R}$, to be $\sim$$-$5 in both limits, which is what would be expected for a B-type supergiant \citep{2006A&A...457..637M,2006MNRAS.371..185W}. Our results are consistent with the previously proposed B1 Ib \citep{2010A&A...510A..61T} or B0-B1 I \citep{2011A&A...532A.124M} spectral types in IGR J18027-2016. We constrain the mass and radius of the donor star to be between 18.6--19.4\,$M_\sun$ and 17.4--19.5\,$R_\sun$. We also constrain the mass of the neutron star to be 1.18--1.63\,$M_\sun$, which marginally improves on the 1.15--1.85\,$M_\sun$ range found by \citet{2011A&A...532A.124M}. \subsubsection{XTE J1855-026} We find that the mass and radius of the donor star in XTE J1855-026 are constrained to between 19.6$\pm$1.1\,$M_\sun$ and 21.5$\pm$0.5\,$R_\sun$ for an edge-on orbit and 20.2$\pm$1.2\,$M_\sun$ and 23.0$\pm$0.5\,$R_\sun$ where the Roche-lobe is just filled. The inclination where the Roche-lobe is filled is found to be 76.4$\degr$. We find the derived masses and radii to be consistent with those reported in \citet{2006ima..book.....C} and \citet{2000asqu.book.....A} for stars with spectral type B0 I.
Since the semi-amplitudes of the radial velocities of both the donor star and the compact object are known, we find that the mass of the neutron star can be constrained to be between 1.77$\pm$0.55\,$M_\sun$ and 1.82$\pm$0.57\,$M_\sun$ (see Section~\ref{XTE J1855-026 Results}). We note that the large error estimates in the mass of the neutron star are likely attributable to substantial uncertainties in the estimate of $K_{\rm o}$ as reported in \citet{2015arXiv150301087G}, likely caused by emission-line contamination and/or changes in the stellar wind. Based on optical and near-infrared spectra together with the analysis reported in \citet{2002ATel..102....1V}, the spectral type of the donor star was previously determined to be B0 Iaep \citep{2008ATel.1876....1N}. Based on the ratio of the equivalent widths of Si IV to Si III, which is a diagnostic for supergiant spectral types \citep{1971ApJS...23..257W}, the spectral type of the mass donor was recently refined to BN0.2 Ia by \citet{2015arXiv150301087G}. Using SED measurements, \citet{2013ApJ...764..185C} calculated the radius and distance to be 26.9\,$R_\sun$ and 10.8$\pm$1.0\,kpc, which places XTE J1855-026 in the Scutum arm region. Combining the properties of the newly derived spectral type with the distance modulus, \citet{2015arXiv150301087G} recently calculated the nominal distance of XTE J1855-026 to be 10$^{+7}_{-4}$\,kpc. Based on these results, the radius to distance ratio can be calculated to be (5.6$\pm$0.5)$\times$10$^{-11}$. Combining our results for the radius of the donor star with the calculated radius to distance ratio, we find the distance of XTE J1855-026 can be constrained to 8.6$\pm$0.8\,kpc and 9.2$\pm$0.9\,kpc in the two limits. \citet{1999ApJ...517..956C} measured the $N_{\rm H}$ in XTE J1855-026 to be 14.7$\pm$0.6$\times$10$^{22}$\,atoms cm$^{-2}$, which exceeds the values reported by the Leiden/Argentine/Bonn survey \citep{2005A&A...440..775K} and in the review by \citet{1990ARA&A..28..215D}. In a \textsl{Swift} XRT observation, the $N_{\rm H}$ was found to be 4.1$\pm$0.5$\times$10$^{22}$\,atoms cm$^{-2}$ \citep{2008ATel.1875....1R}. These are significantly larger than the measurements of the interstellar $N_{\rm H}$, which are 6.62$\times$10$^{21}$\,atoms cm$^{-2}$ \citep{2005A&A...440..775K} and 7.35$\times$10$^{21}$\,atoms cm$^{-2}$ \citep{1990ARA&A..28..215D}. Since the measured $N_{\rm H}$ was found to be in excess of the Galactic H I, the conversion between the interstellar $N_{\rm H}$ and the extinction $A_{\rm V}$ is problematic and the value of 5.8$\pm$0.9 in \citet{2013ApJ...764..185C} cannot be verified without allowing for systematic error. \subsection{What is the nature of the non-zero eclipse flux in IGR J16393-4643?} \label{What is the nature of the non-zero eclipse flux in IGR J16393-4643?} The source flux during eclipse in IGR J16393-4643 does not reach 0\,counts s$^{-1}$ in the folded light curves (see Figure~\ref{J16393 Folded Half Angle}). We find the ratio of the flux during eclipse to the flux outside eclipse to be 54$\pm$5$\%$. This is significantly larger than what is observed in the other XRBs in our study (see Table~\ref{Step and Ramp Model}). We first discuss the possible scenario where we observe a partial eclipse in IGR J16393-4643. In our model for the eclipse (see Section~\ref{Eclipse Modeling}), we assume the compact object to be a point source \citep{1984ARA&A..22..537J}.
While this is a valid assumption since the radius of the compact object is much smaller than that of the donor star, our approximation does not consider the extended X-ray emission region. In this case, the constraints on the mass and radius of the donor star must take into account the size of the extended emission region. Since a significant residual flux is observed, it is likely that we observe a partial eclipse. We also consider the possibility that the residual emission observed in IGR J16393-4643 is attributed to a dust-scattering halo, similar to what is observed in some other HMXBs \citep[Cen X-3; Vela X-1; OAO 1657-415,][]{1991MNRAS.251...76D,1994ApJ...436L...5W,2006MNRAS.367.1147A}. While residual emission has been attributed to dust scattering in some other HMXBs, the residual emission found in IGR J16393-4643 in the BAT folded light curves (15--50\,keV) is seen at much higher levels (see Table~\ref{Step and Ramp Model}). A dust-scattering halo is predominantly a soft X-ray phenomenon. While a significant fraction of the out-of-eclipse flux may be in a dust-scattering halo \citep{1995A&A...293..889P}, we expect a smaller fraction at the energies resolved by BAT \citep[e.g.][and references therein]{2014ApJ...793...77C}. Therefore, we conclude that a dust-scattering halo cannot be the sole mechanism responsible for the residual emission seen in the folded light curves. Finally, we discuss the possibility that Compton scattering and reprocessing in a region of dense gas could account for the residual flux in eclipse. \citet{2006A&A...447.1027B} found that a Comptonization model (\texttt{comptt} in XSPEC) with an electron temperature of 4.4$\pm$0.3\,keV and optical depth of 9$\pm$1 provides a good fit to the average spectrum of IGR J16393-4643. Additionally, Fe K$\alpha$ and Fe K$\beta$ lines were found at 6.4\,keV and 7.1\,keV, respectively, where the ratio of the iron intensities is seen to be consistent with photoionization \citep{2006A&A...447.1027B,1993A&AS...97..443K,2015MNRAS.446.4148I}. The equivalent widths (EQW) in the \textsl{XMM-Newton} observation were found to be 60$\pm$30\,eV and an upper limit of 120\,eV for the Fe K$\alpha$ and Fe K$\beta$ lines, respectively \citep{2006A&A...447.1027B}. In a recent \textsl{Suzaku} observation, the EQW of the Fe K$\alpha$ and Fe K$\beta$ lines were found to be 46$^{+7}_{-6}$\,eV and an upper limit of 30\,eV \citep{2015MNRAS.446.4148I}. While the mechanism responsible for the Fe K$\alpha$ and Fe K$\beta$ lines is likely to be fluorescence of cold matter, the equivalent widths point to a likely origin in a spherical distribution of dense gas \citep{2006A&A...447.1027B}. Therefore, Compton scattering and reprocessing might be the dominant contributor to the residual emission observed in the BAT folded light curves. It is likely that a combination of the mechanisms described above accounts for the residual emission found in the folded light curves of IGR J16393-4643 (see Section~\ref{IGR J16393-4643 Results}). Since the count rate during eclipse is significantly larger than what is observed in most other eclipsing HMXBs, it is likely that only a small fraction comes from a dust-scattering halo.
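As a rough illustration (an order-of-magnitude sketch only, not a fit to the data): if the residual flux were produced by single Thomson scattering in an extended envelope not blocked by the donor star, the in-eclipse to out-of-eclipse flux ratio would be at most $f\simeq 1-e^{-\tau_{\rm es}}$, so the observed $f\approx0.54$ would require an electron-scattering optical depth of $\tau_{\rm es}\approx0.8$ through the envelope.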
\subsection{What mechanism is responsible for asymmetries in the eclipse profile?} \label{Constraints on eccentricity} The eclipse profiles show the presence of asymmetries (see Tables~\ref{Asymmetric Step and Ramp Model} and~\ref{Historic Step and Ramp Model}), as previously noted by \citet{2015A&A...577A.130F} in the cases of IGR J18027-2016 and XTE J1855-026 and by \citet{2009MNRAS.397L..11J} in the case of IGR J16479-4514. These asymmetries in the ingress and egress durations are suggestive of the presence of complex structures in the wind such as accretion or photoionization wakes. We first discuss the possible case that the asymmetry in the eclipse profiles can be attributed to accretion wakes. In an HMXB driven entirely by a spherical wind, material is only accreted in a cylindrical region where the kinetic energy is less than the gravitational potential energy of the compact object. The radius of the accretion cylinder is the Bondi-Hoyle accretion radius \citep[Equation 1,][]{1996A&A...311..793F}. Perturbed material forms an ``accretion wake" that typically trails the orbit of the neutron star and results in large intrinsic column densities \citep{1990ApJ...356..591B}. Prior to eclipse, the progressively increasing $N_{\rm H}$ partially obscures the X-ray emission, resulting in longer ingress durations compared to the duration of egress. Since the accretion wake is located beyond the compact object during egress, no apparent increase in the intrinsic $N_{\rm H}$ is observed. The observed ingress durations are somewhat longer than those of egress, which is consistent with the presence of an accretion wake \citep{1990ApJ...356..591B}. The count rate prior to ingress is also somewhat smaller than that observed after egress, providing additional evidence for accretion wakes. Hardness ratios or measurements of $N_{\rm H}$ folded on the orbital period could be used to confirm the presence of accretion wakes. We also consider the possibility that photoionization wakes could explain the asymmetric eclipse profiles. The eclipse profiles of IGR J18027-2016 and XTE J1855-026 are compared to those seen in eclipsing systems where asymmetric density enhancements are observed on large spatial scales \citep[e.g. Vela X-1;][]{1996A&A...311..793F}. In systems where the X-ray luminosity is sufficiently high, a switch-off of the radiative driving force could lead to a reduced wind velocity and enhanced wind density \citep{1996A&A...311..793F}. This enhanced X-ray scattering region trails the neutron star and results in ingress durations that are significantly larger than those observed at egress \citep{1996A&A...311..793F}. The eclipse profiles of IGR J18027-2016 and XTE J1855-026 differ from those expected from a photoionization wake; the ingress duration in Vela X-1 was seen to be $\phi=$0.11 \citep{1996A&A...311..793F}. We additionally discuss how energy dependence in the asymmetric eclipse profiles can arise. The high $N_{\rm H}$, on the order of 10$^{23}$\,atoms cm$^{-2}$, implies the presence of a strong circumstellar wind \citep{2005AIPC..797..402K}, and the X-ray absorption due to this will lead to sharper ingress and egress transitions compared to those seen at lower energies. In a \textsl{Suzaku} observation of IGR J16479-4514 that covers part of the eclipse, \citet{2013MNRAS.429.2763S} found the egress transition to be broader at soft energies. In their review, \citet{2015A&A...577A.130F} found asymmetries to be slightly enhanced at lower energies compared to higher energies.
While investigating the energy dependence of the eclipse transitions is beyond the scope of the present paper, such a study in obscured SGXBs will be difficult due to the reduced count rate at low energies that results from the large intrinsic absorption present in these systems \citep{2015A&A...577A.130F}. Finally, we consider the possibility that the asymmetric eclipse profiles could be attributed to a small to modest eccentricity. Since we considered objects with relatively short orbital periods, we expect the eccentricity of the systems to be near zero \citep{1977A&A....57..383Z,2014MNRAS.440.1626M}. The eccentricities of IGR J18027-2016 and XTE J1855-026 were both noted to be small to modest, with $e$ found to be less than 0.2 and 0.04, respectively \citep{2003ApJ...596L..63A,2002ApJ...577..923C}. In the cases of IGR J16393-4643, IGR J16418-4532 and IGR J16479-4514, where no pulse-timing or radial-velocity methods are available to determine an orbital solution, we constrain the maximum allowed eccentricity to that where the radius of the donor star completely fills the Roche-lobe at periastron \citep[][and references therein]{2013MNRAS.434.2182G}. These were all found to be near zero (see Figure~\ref{Non Radial-Velocitiy Eccentricity Constraint}). Since the eccentricities are found to be small to modest, we do not expect these to result in sizeable asymmetries in the eclipse profile. Additionally, if an eccentric orbit were responsible for the asymmetries in the eclipse profile, apsidal advance should also become apparent. While apsidal advance would be difficult to detect in the $\sim$9\,yr of \textsl{Swift} data, we consider it unlikely that the asymmetries can be solely attributed to small to modest eccentricities. Furthermore, accurate measurements of apsidal advance will depend on multiple pulse-timing measurements of these systems, which are not yet available. \begin{figure}[ht] \centerline{\includegraphics[width=3in]{may4-2015_No-RV_eccentric-test.eps}} \figcaption[may4-2015_No-RV_eccentric-test.eps]{ A sample plot of the Roche-lobe radius of the donor stars in IGR J16393-4643 (top), IGR J16418-4532 (middle) and IGR J16479-4514 (bottom) vs. orbital phase for eccentricities ranging from 0.0 to 0.1 in steps of 0.02. The horizontal red line represents the radius of the donor star under the assumption that the donor is an O9 I for IGR J16393-4643 and an O7 I for both IGR J16418-4532 and IGR J16479-4514 \citep{2005A&A...436.1049M}. \label{Non Radial-Velocitiy Eccentricity Constraint} } \end{figure} \section{Conclusion} Eclipsing X-ray binaries provide an opportunity to constrain the physical parameters of the donor star as well as the compact object. To determine the eclipse half-angle in our survey, we modeled the eclipses using both symmetric and asymmetric step-and-ramp functions. The luminosity of each system is less than expected for Roche-lobe overflow \citep{2004RMxAC..21..128K}, which allows us to impose the constraint that the mass donor underfills the Roche-lobe. Since IGR J18027-2016 and XTE J1855-026 are the only ``double-lined binaries" in our sample, we calculate the parameters of the other systems assuming the neutron stars to be at the Chandrasekhar limit, 1.4\,$M_\sun$. We also calculate the parameters of the other systems assuming a more massive neutron star of 1.9\,$M_\sun$. Our results show that stars with spectral type B III satisfy both constraints imposed by the eclipse duration and Roche-lobe for IGR J16393-4643.
Assuming the estimates for the mass and radius of a B5 III star reported in \citet{2006ima..book.....C}, we find the mass and radius of the donor star to exceed 7\,$M_\sun$ and 6.3\,$R_\sun$. B I stars were found to overfill the Roche-lobe. The source emission in IGR J16393-4643 does not reach 0\,counts s$^{-1}$ in the folded light curves, where the ratio of the flux in eclipse to that outside eclipse was found to be 54$\pm$5$\%$ (see Tables~\ref{Step and Ramp Model}--\ref{Historic Step and Ramp Model}). Compton scattering and reprocessing in a dense region of gas could possibly account for the X-ray emission region not obscured by the donor star. Our results show that the previously proposed O8.5 I and BN0.5 Ia spectral types for the mass donor in IGR J16418-4532 must be excluded. While these spectral types satisfy the eclipse half-angle constraint, the Roche-lobe is significantly overfilled. Stars with spectral type O7.5 I or earlier are consistent with both the eclipse half-angle and the Roche-lobe. In this case, we find the mass and radius of the donor star to exceed 36.00\,$M_\sun$ and 20.79\,$R_\sun$, assuming the estimates for the mass and radius of an O7.5 I star reported in \citet{2005A&A...436.1049M}. We find the minimum inclination angle of the system to be 67$\degr$. The distance measurements of IGR J16418-4532 are consistent with the previously determined distance (see Table~\ref{IGR J16418-4532 Candidate Parameters}); however, determining the interstellar fraction of $N_{\rm H}$ was found to be problematic. The ingress and egress durations in the folded light curves were found to be asymmetric, with the ingress duration longer than the egress duration. This is likely attributed to the presence of an accretion wake or a focused stream, as noted in \citet{2012MNRAS.420..554S} and \citet{2013ATel.5131....1D}. The previously proposed O8.5 Ia and O9.5 Iab spectral classifications of the mass donor in IGR J16479-4514 must be excluded because the Roche-lobe is significantly overfilled. However, we found that stars with spectral type O7 I and earlier satisfy both constraints imposed by the eclipse duration and Roche-lobe. Assuming the estimates for the mass and radius of an O7 I star reported in \citet{2005A&A...436.1049M}, the mass and radius of the donor star are found to exceed 38.44\,$M_\sun$ and 20.49\,$R_\sun$, respectively. We find the minimum inclination angle of the system to be 63$\degr$. The distance measurements remain unchanged from earlier measurements (see Table~\ref{IGR J16479-4514 Candidate Parameters}); however, the interstellar fraction of $N_{\rm H}$ remains undetermined. We find the ingress and egress durations to be symmetric within the error bars. The ratio between the radius of the donor star and the Roche-lobe was found to exceed 0.9, which shows the possibility of transitional Roche-lobe overflow. The mass and radius of the donor star in IGR J18027-2016 were constrained to 18.6$\pm$0.9\,$M_\sun$ and 17.4$\pm$0.9\,$R_\sun$ and to 19.4$\pm$0.9\,$M_\sun$ and 19.5$^{+0.8}_{-0.7}$\,$R_\sun$ in the two limits. We find the inclination angle where the donor star just fills the Roche-lobe to be 73.3$\degr$. We also find the distance of IGR J18027-2016 to be 11$\pm$2\,kpc and 12$\pm$2\,kpc in the allowed limits. In the allowed limits, we constrained the mass of the neutron star to be between 1.37$\pm$0.19\,$M_\sun$ and 1.43$\pm$0.20\,$M_\sun$.
The folded light curve shows complicated and asymmetric ingress and egress durations, which can be explained by the presence of accretion wakes. Our results show the mass and radius of the donor star in XTE J1855-026 to be constrained to 19.6$\pm$1.1\,$M_\sun$ and 21.5$\pm$0.5\,$R_\sun$ at edge-on orbits and 20.2$\pm$1.2\,$M_\sun$ and 23.0$\pm$0.5\,$R_\sun$ where the Roche-lobe is just filled. We find the inclination angle where the donor star just fills the Roche-lobe to be 76.4$\degr$. In the allowed limits, we find the distance of XTE J1855-026 can be constrained to 8.6$\pm$0.8\,kpc and 9.2$\pm$0.9\,kpc. We find the mass of the neutron star to be constrained between 1.77$\pm$0.55\,$M_\sun$ and 1.82$\pm$0.57\,$M_\sun$. Complicated and asymmetric ingress and egress durations were seen in the folded light curve, which suggests the presence of complex structure in the wind. To further constrain the physical parameters of the donor star and the compact object in all these systems, additional observations are required. Constraining the mass of the neutron star will help constrain the neutron-star equation of state. Since the pulse periods of IGR J16393-4643 and IGR J16418-4532 have been accurately measured, the study of these systems would benefit from both pulse-timing analysis and radial-velocity curves in the near-infrared. A radial-velocity curve in the optical or near-infrared would provide additional constraints for IGR J16479-4514, where a pulse period has yet to be identified. \acknowledgements We thank the anonymous referee for useful comments and acknowledge support from NASA grant 14-ADAP14-0167.
\section{Introduction} When an electron current flows perpendicular to a magnetic field through a conducting medium, the charges are forced to deviate to one side, creating an imbalance which results in a measurable electric potential conveying important information about the material. A device based on this so-called Hall effect has been studied in detail by Ausserlechner \cite{Auss}, who found that its operating features are summed up in the Hall geometry factor \[G(\lambda_f,\lambda_p)=\frac{1}{{\bf K'}\left(\frac{1-p}{1+p}\right){\bf K}\left(\frac{1-f}{1+f}\right)}\int_0^1\frac{\int _0^x\frac{\D y}{\sqrt{1-\left(\frac{1-p}{1+p}\right)^2(1-y^2)}\sqrt{1-y^2}}}{\sqrt{1-x^2}\sqrt{1-\left[1-\left(\frac{1-f}{1+f}\right)^2\right](1-x^2)}}\D x.\]Here $p$ and $f$ are related to the input and output resistances by $\lambda_f=2{\bf K}(f)/{\bf K'}(f)$ and $\lambda_p={\bf K'}(p)/[2{\bf K}(p)]$, with the complete elliptic integral of the first kind being defined by\[ \mathbf K(\sqrt{t}):=\int_0^{\pi/2}\frac{\D\theta}{\sqrt{1-t\sin^2\theta}}\equiv \mathbf K'(\sqrt{1-t}). \]Due to the symmetry of the device, $G(\lambda_f,\lambda_p)/\sqrt{\lambda_f\lambda_p}$ must be unchanged under the substitution $(\lambda_f,\lambda_p)\rightarrow(2/\lambda_f,2/\lambda_p).$ This can be recast as the remarkable identity that $$\int_0^{\pi}\frac{\D x}{\sqrt{1-p\cos x}}\int_0^x\frac{\D y}{\sqrt{1+q\cos y}}$$ is invariant under $(p,q)\rightarrow(\sqrt{1-p^2},\sqrt{1-q^2})$, which we aim to prove in this note. \section{A Double Integral Identity} \begin{theorem}\label{thm:pq_recip}For parameters $p,q\in(0,1)$, define correspondingly $p'=\sqrt{1-p^2},q'=\sqrt{1-q^2}$; then we have an integral identity $A(p,q)=A(p',q')$, where\begin{align} A(p,q):={}&\int_{0}^\pi\D x\int_0^x\D y\frac{1}{\sqrt{1-\smash[b]{p}\cos x}\sqrt{1+\smash[b]{q\cos y}}}\notag\\={}&\frac{4}{\sqrt{(1-\smash[b]p)(1+\smash[b]{q})}}\int_0^{\pi/2}\frac{\D\theta}{\sqrt{1+\frac{2p}{1-p}\sin^2\theta}}\int_0^\theta\frac{\D\phi}{\sqrt{1-\frac{2q}{1+q}\sin^2\phi}}. \end{align}\end{theorem} Before proving the functional equation stated in the theorem above, we need to convert double integrals like $A(p,q)$ into single integrals of products of elliptic integrals and elementary functions, as described in the lemma below.\begin{lemma}\label{lm:K_int_repn}For $0<\beta<\alpha<1$, the following identity holds:\footnote{The constraint $0<\beta<\alpha<1$ is needed in the derivation of \eqref{eq:K_int_sum}, the validity of which extends to $ \alpha=2p/(p-1)<0,\beta=2q/(1+q)\in(0,1)$, by virtue of analytic continuation. }\begin{align} &\int_0^{\pi/2}\frac{\D\theta}{\sqrt{1-\smash[b]{\alpha}\sin^2\theta}}\int_0^\theta\frac{\D\phi}{\sqrt{1-\beta\sin^2\smash[b]{\phi}}}\notag\\={}&\frac{1}{\pi}\int_0^\beta\frac{\mathbf K(\sqrt{1-\smash[b]{\beta}})\mathbf K(\sqrt{t})}{\sqrt{1-t}+\sqrt{1-\alpha}}\frac{\D t}{\sqrt{1-t}}+\frac{1}{\pi}\int_\beta^1\frac{\mathbf K(\sqrt{\smash[b]{\beta}})\mathbf K(\sqrt{1-t})}{\sqrt{1-t}+\sqrt{1-\alpha}}\frac{\D t}{\sqrt{1-t}},\label{eq:K_int_sum} \end{align}where the integrations are carried out along straight line-segments joining the end points. \end{lemma} \begin{proof}In what follows, we write $\mathbb Y_\lambda(X):=\sqrt{X(1-X)(1-\lambda X)} $ for $X\in(0,1) $ and $\lambda\in(0,1)$, with the square root taking positive values.
It is clear that the complete elliptic integral $\mathbf K(\sqrt{\lambda}),\lambda\in(0,1)$ satisfies \begin{align} \mathbf K(\sqrt{\lambda})=\frac{1}{2}\int_0^1\frac{\D X}{\mathbb Y_\lambda(X)}.\label{eq:K_int_repn} \end{align} For $0<\beta<\alpha<1$, we have an addition formula of Legendre type \cite[][Eq.~2.3.26]{AGF_PartII} \begin{align} \frac{\pi}{\mathbb Y_{\alpha}( U)}\int^1_{ U}\frac{\D u}{\mathbb Y_{\beta}(u)}={}&\int_{0}^1 \frac{2\alpha\mathbf K(\sqrt{1-\smash[b]{\beta}})}{1-\alpha UV} \frac{V\D V}{\mathbb Y_{\alpha}(V)}+\int_{0}^1 \frac{2\alpha\mathbf K(\sqrt{\smash[b]{\beta}})}{1-(1-\alpha U)V} \frac{V\D V}{\mathbb Y_{1-\alpha}(V)}\notag\\{}&-\int^{1}_{\frac{1-\alpha}{1-\beta}}\frac{\D X}{\mathbb Y_{1-\beta}(X)}\int_{\frac{1-(1-\beta)X}{\alpha}}^1\frac{\D V}{\mathbb Y_{\alpha}(V)}\frac{\alpha V}{1-\alpha UV}.\label{eq:3rd_kind_add_comb_prep} \end{align} Integrating over $U\in(0,1)$, we obtain\begin{align}& \pi\int_0^1\frac{\D U}{\mathbb Y_{\alpha}( U)}\int^1_{ U}\frac{\D u}{\mathbb Y_{\beta}(u)}=4\pi\mathbf K(\sqrt{\smash[b]{\vphantom\beta\alpha}})\mathbf K(\sqrt{\smash[b]{\beta}})-\pi\int_0^1\frac{\D U}{\mathbb Y_{\alpha}( U)}\int^U_{ 0}\frac{\D u}{\mathbb Y_{\beta}(u)}\notag\\={}&-2\mathbf K(\sqrt{1-\smash[b]{\beta}})\int_{0}^1 \frac{\log(1-\alpha V)\D V}{\mathbb Y_{\alpha}(V)}+2\mathbf K(\sqrt{\smash[b]{\beta}})\int_{0}^1 \frac{\log\frac{1-(1-\alpha) V}{1-V}\D V}{\mathbb Y_{1-\alpha}(V)}\notag\\{}&+\int^{1}_{\frac{1-\alpha}{1-\beta}}\frac{\D X}{\mathbb Y_{1-\beta}(X)}\int_{\frac{1-(1-\beta)X}{\alpha}}^1\frac{\D V}{\mathbb Y_{\alpha}(V)}\log(1-\alpha V).\label{eq:Apq_add_form} \end{align}Here, the first two single integrals over $V$ can be evaluated in closed form \cite[][Eqs.~2.2.3 and 2.2.4]{AGF_PartII}:\begin{align} \int_{0}^1 \frac{\log(1-\alpha V)\D V}{\mathbb Y_{\alpha}(V)}={}& \mathbf K(\sqrt{\alpha})\log(1-\alpha),\label{eq:logdn_int} \\\int_{0}^1 \frac{\log\frac{1-(1-\alpha) V}{1-V}\D V}{\mathbb Y_{1-\alpha}(V)}={}&\pi \mathbf K(\sqrt{\alpha})+ \mathbf K(\sqrt{1-\alpha})\log(1-\alpha),\end{align}while the last double integral satisfies \cite[cf.][Eq.~2.3.2]{AGF_PartII}\begin{align} &\int^{1}_{\frac{1-\alpha}{1-\beta}}\frac{\D X}{\mathbb Y_{1-\beta}(X)}\int_{\frac{1-(1-\beta)X}{\alpha}}^1\frac{\D V}{\mathbb Y_{\alpha}(V)}\log(1-\alpha V)\notag\\={}&\frac{2\mathbf K(\sqrt{1-\smash[b]{\beta}})}{\pi}\int_0^1\frac{(1-\beta U)\D U}{\mathbb Y_\beta(U)}\int_0^1\frac{\D W}{\sqrt{W(1-W)}}\frac{\log(1-\alpha W-\beta(1-W))}{1-[\alpha W+\beta(1-W)]U}\notag\\{}&-\frac{2\mathbf K(\sqrt{\smash[b]{\beta}})}{\pi}\int_0^1\frac{[1-(1-\beta)U]\D U}{\mathbb Y_{1-\beta}(U)}\int_0^1\frac{\D W}{\sqrt{W(1-W)}}\frac{\log(1-\alpha W-\beta(1-W))}{1-[1-\alpha W-\beta(1-W)]U}. 
\end{align} Substituting $W=(1-\beta U)V/(1-\beta UV)$ such that \begin{align} \frac{W}{1-W}=\frac{(1-\beta U)V}{1-V}, \end{align}we obtain\begin{align}&\int _0^1\frac{(1-\beta U)\D U}{\mathbb Y_\beta(U)}\int_0^1\frac{\D W}{\sqrt{W(1-W)}}\frac{\log(1-\alpha W-\beta(1-W))}{1-[\alpha W+\beta(1-W)]U}\notag\\={}&\int _0^1\frac{\D U}{ \sqrt{U(1-U)}}\int_0^1\frac{\D V}{\sqrt{V(1-V)}}\frac{\log\left( 1-\alpha+\frac{(\alpha-\beta)(1-V)}{1-\beta U V} \right)}{1-\alpha U V}, \end{align}where\begin{align}& \frac{\log\left( 1-\alpha+\frac{(\alpha-\beta)(1-V)}{1-\beta U V} \right)-\log(1-\alpha V)}{1-\alpha U V}\notag\\={}&\int_0^\beta\left[ \frac{1}{1-tUV}- \frac{1-\alpha}{(1-t)(1-V)+(1-\alpha)(1-tU)V}\right]\frac{\D t}{t-\alpha} \end{align}allows us to integrate over $V$ and $U$ in succession on the right-hand side, leading to \begin{align}&\int _0^1\frac{(1-\beta U)\D U}{\mathbb Y_\beta(U)}\int_0^1\frac{\D W}{\sqrt{W(1-W)}}\frac{\log(1-\alpha W-\beta(1-W))}{1-[\alpha W+\beta(1-W)]U}\notag\\={}&2\pi\left[\int_0^\beta\frac{\mathbf K(\sqrt{t})}{t-\alpha}\left( 1-\sqrt{\frac{1-\alpha}{1-t}} \right)\D t+\frac{\mathbf K(\sqrt{\alpha})}{2}\log(1-\alpha)\right]. \end{align}Here, in the last step, we have evaluated \begin{align}&\int _0^1\frac{\D U}{ \sqrt{U(1-U)}}\int_0^1\frac{\D V}{\sqrt{V(1-V)}}\frac{\log(1-\alpha V)}{1-\alpha U V}\notag\\={}&\pi\int_0^1\frac{\log(1-\alpha V)\D V}{\mathbb Y_\alpha(V)}=\pi \mathbf K(\sqrt{\alpha})\log(1-\alpha) \end{align}with the aid of \eqref{eq:logdn_int}. Likewise, starting with a variable substitution $W=[1-(1-\beta) U]V/[1-(1-\beta )UV]$ such that \begin{align} \frac{W}{1-W}=\frac{[1-(1-\beta) U]V}{1-V}, \end{align} we may compute \begin{align}& \int_0^1\frac{[1-(1-\beta)U]\D U}{\mathbb Y_{1-\beta}(U)}\int_0^1\frac{\D W}{\sqrt{W(1-W)}}\frac{\log(1-\alpha W-\beta(1-W))}{1-[1-\alpha W-\beta(1-W)]U}\notag\\={}&2\pi\left[ \int_1^\beta\frac{\mathbf K(\sqrt{1-t})}{t-\alpha}\left( 1-\sqrt{\frac{1-\alpha}{1-t}} \right)\D t-\frac{\pi\mathbf K(\sqrt{\alpha})}{2}+\frac{\mathbf K(\sqrt{1-\alpha})}{2}\log(1-\alpha) \right]. \end{align}Thus, the claimed identity is verified.\end{proof} Exploiting the integral identity in the lemma above, together with some modular transformations of elliptic integrals, we will prove Theorem~\ref{thm:pq_recip}. \begin{proof}[Proof of Theorem~\ref{thm:pq_recip}]We recall that the Legendre function of the first kind of degree $-1/4$ is defined by \begin{align}& P_{-1/4}(1-2t):={_2}F_1\left( \left.\begin{array}{c} \frac{1}{4},\frac{3}{4} \\ 1 \\ \end{array}\right| t\right)\notag\\={}&\frac{1}{\sqrt{2}\pi}\int_0^{1}\left[ \frac{u(1-tu)}{1-u} \right]^{-1/4}\frac{\D u}{1-u},\quad t\in\mathbb C\smallsetminus[1,+\infty). \end{align}The following relations between $P_{-1/4}$ and the complete elliptic integral $\mathbf K$ are recorded in Ramanujan's notebook \cite[][Chap.~33, Theorems 9.1 and 9.2]{RN5}:\begin{align} \mathbf K\left( \sqrt{\frac{2q}{1+q}} \right)={}&\frac{\pi}{2}\sqrt{1+\smash[b]{q}}P_{-1/4}(1-2q^2),\label{eq:quarter1}\\\mathbf K\left( \sqrt{\frac{1-q}{1+q}} \right)={}&\frac{\pi}{2}\sqrt{\frac{1+\smash[b]{q}}{2}}P_{-1/4}(2q^2-1),\label{eq:quarter2} \end{align} which are provable by standard transformations of the respective hypergeometric functions, provided that $q\in(0,1)$.
With the information listed in the last paragraph, we see that \begin{align} A(p,q)={}&\int_0^{2q/(1+q)}\frac{\sqrt{2}P_{-1/4}(2q^{2}-1)\mathbf K(\sqrt{t})}{\sqrt{1-t}\sqrt{1-\smash[b]{p}}+\sqrt{1+\smash[b]{p}}}\frac{\D t}{\sqrt{1-t}}\notag\\{}&+\int_{2q/(1+q)}^1\frac{{2}P_{-1/4}(1-2q^{2})\mathbf K(\sqrt{1-t})}{\sqrt{1-t}\sqrt{1-\smash[b]{p}}+\sqrt{1+\smash[b]{p}}}\frac{\D t}{\sqrt{1-t}}. \end{align} On the one hand, with $t=4\sqrt{s}/(1+\sqrt{s})^2$ and Landen's transformation \cite[][item~163.02]{ByrdFriedman} \begin{align} \mathbf K(\sqrt{s})={}&\frac{1}{1+\sqrt{s}}\mathbf K\left( \frac{2\sqrt[4]{s}}{1+\sqrt{s}} \right),\quad 0<s<1,\label{eq:Landen_2} \end{align}we have\begin{align}&\int_0^{2q/(1+q)}\frac{\mathbf K(\sqrt{t})}{\sqrt{1-t}\sqrt{1-\smash[b]{p}}+\sqrt{1+\smash[b]{p}}}\frac{\D t}{\sqrt{1-t}}\notag\\={}&2\int_{0}^{(1-\sqrt{1-\smash[b]{q}^2})/(1+\sqrt{1-\smash[b]{q}^2})}\frac{\mathbf K(\sqrt{s})}{(1-\sqrt{s})\sqrt{1-\smash[b]{p}}+(1+\sqrt{s})\sqrt{1+\smash[b]{p}}}\frac{\D s}{\sqrt{s}}. \end{align}On the other hand, it is clear from a substitution $t=1-s$ that \begin{align}& \int_{2q/(1+q)}^1\frac{\mathbf K(\sqrt{1-t})}{\sqrt{1-t}\sqrt{1-\smash[b]{p}}+\sqrt{1+\smash[b]{p}}}\frac{\D t}{\sqrt{1-t}}\notag\\={}&\int_{0}^{(1-q)/(1+q)}\frac{\mathbf K(\sqrt{s})}{\sqrt{s}\sqrt{1-\smash[b]{p}}+\sqrt{1+\smash[b]{p}}}\frac{\D s}{\sqrt{s}}\notag\\={}&\int_0^{(1-q)/(1+q)}\frac{\sqrt{2}\mathbf K(\sqrt{s})}{(1-\sqrt{s})\sqrt{1-\sqrt{1-\smash[b]{p}^2}}+(1+\sqrt{s})\sqrt{1+\smash[b]{\sqrt{1-\smash[b]{p}^2}}}}\frac{\D s}{\sqrt{s}}. \end{align}Here, the last equality results from a pair of elementary identities for $p\in(0,1)$: \begin{align} \sqrt{\frac{1+\sqrt{1-\smash[b]{p}^{2}}}{2}}\pm \sqrt{\frac{1-\sqrt{1-\smash[b]{p}^{2}}}{2}}=\sqrt{1\pm \smash[b]{p}}, \end{align}which are readily verified by squaring both sides. Therefore, with $p'=\sqrt{1-p^2},q'=\sqrt{1-q^2} $, we have \begin{align} A(p,q)={}&\int_{0}^{(1-q')/(1+q')}\frac{2\sqrt{2}P_{-1/4}(1-2q'^2)\mathbf K(\sqrt{s})}{(1-\sqrt{s})\sqrt{1-\smash[b]{p}}+(1+\sqrt{s})\sqrt{1+\smash[b]{p}}}\frac{\D s}{\sqrt{s}}\notag\\{}&+\int_{0}^{(1-q)/(1+q)}\frac{2\sqrt{2}P_{-1/4}(1-2q^2)\mathbf K(\sqrt{s})}{(1-\sqrt{s})\sqrt{1-\smash[b]{p'}}+(1+\sqrt{s})\sqrt{1+\smash[b]{p'}}}\frac{\D s}{\sqrt{s}},\label{eq:Apq_Ap'q'} \end{align}which is evidently equal to $A(p',q')$. \end{proof} \noindent {\bf Acknowledgement} M.L.G. thanks Udo Ausserlechner (Infineon Technologies) and Michael Milgram (Geometrics Unlimited) for insightful correspondence. Financial support of MINECO (Project MTM2014-57129-C2-1-P) and Junta de Castilla y Leon (UIC 011) is acknowledged.
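As an independent numerical sanity check on Theorem~\ref{thm:pq_recip} (illustrative only, and not part of the proof), the double integral $A(p,q)$ can be evaluated directly; a minimal sketch using \texttt{scipy}:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def A(p, q):
    # inner integral: int_0^x dy / sqrt(1 + q cos y)
    inner = lambda x: quad(lambda y: 1.0/np.sqrt(1.0 + q*np.cos(y)), 0.0, x)[0]
    # outer integral over x in (0, pi)
    return quad(lambda x: inner(x)/np.sqrt(1.0 - p*np.cos(x)), 0.0, np.pi)[0]

p, q = 0.6, 0.3
print(A(p, q), A(np.sqrt(1.0 - p**2), np.sqrt(1.0 - q**2)))
# the two printed values agree to quadrature accuracy
\end{verbatim}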
\section*{Abstract} Sit-to-stand transitions are an important part of activities of daily living and play a key role in functional mobility in humans. The sit-to-stand movement is often affected in older adults due to frailty and in patients with motor impairments such as Parkinson's disease, leading to falls. Studying the kinematics of sit-to-stand transitions can provide insight into assessment, monitoring and the development of rehabilitation strategies for the affected populations. We propose a three-segment body model for estimating sit-to-stand kinematics using only two wearable inertial sensors, placed on the shank and back. Reducing the number of sensors to two instead of one per body segment facilitates monitoring and classifying movements over extended periods, making the system more comfortable to wear while reducing the power requirements of the sensors. We applied this model to 10 younger healthy adults (YH), 12 older healthy adults (OH) and 12 people with Parkinson's disease (PwP). We achieved this by incorporating a novel unsupervised-learning-based sit-to-stand classification technique into the model-based reconstruction of angular kinematics using an extended Kalman filter. Our proposed model showed that it was possible to successfully estimate thigh kinematics despite not measuring the thigh motion with an inertial sensor. We classified sit-to-stand transitions, sitting and standing states with accuracies of 98.67\%, 94.20\% and 91.41\% for YH, OH and PwP respectively. We have proposed a novel integrated approach of modelling and classification for estimating the body kinematics during sit-to-stand motion and successfully applied it to the YH, OH and PwP groups. \section*{Introduction} Kinematic modelling of human body motion gives an insight into specific movements, which can be used for studying human gait and posture, assessing the quality of movements, monitoring and diagnostic purposes, and developing rehabilitation strategies. The reviews by Yang \textit{et al.} (2010)~\cite{Yang2010} and Fong \textit{et al.} (2010)~\cite{Fong2010TheReview} suggest that several studies have shown initial results for monitoring and rehabilitation of people with motor functional impairments by examining different categories of motion, including static postures such as sitting, standing and lying down; cyclic dynamic activities such as walking, running and stair climbing; as well as transitions such as sit-to-stand and stand-to-sit to move between static and dynamic activities. Out of these motions, investigating the kinematics of sit-to-stand transitions is important because of their significance in functional mobility~\cite{Janssen2002DeterminantsReview}. Sit-to-stand transitions have been studied widely in children~\cite{Guarrera-Bowlby2004FormAdults}, adults~\cite{VanLummel2013} and older adults~\cite{Najafi2002MeasurementElderly,Ganea2011} for assessing their mobility in activities of daily living~\cite{Ganea2012}. Sit-to-stand transitions are an important part of activities of daily living, with an estimated frequency of $60\pm22$ transitions per day for healthy adults~\cite{Dall2010}. Studying these transitions is also beneficial for clinical monitoring of patients with motor disorders such as Parkinson's disease~\cite{Zijlstra2012,Hubble2015, Rodrguez-Martn2014}, predicting falls~\cite{Aziz2014,Stack2016,Doheny2011} and frailty~\cite{Galan-mercant2014,Ganea2007} in older adults. Hence, there is increasing research interest in investigating the biomechanics and kinematics of these postural transitions.
Sit-to-stand is a good representative transition that is easy to record in a controlled environment and is also primarily a planar transition; hence, in this study, we model sit-to-stand kinematics and classify these transitions. The kinematics of human motions can be estimated using inertial sensors, such as accelerometers and gyroscopes, which provide a reliable, cost-effective and wearable alternative to motion capture systems for detecting posture and movements~\cite{Lugade2014, Fortune2014ValidityVelocities, Millor2014}. Inertial sensors have been used to identify sit-to-stand transitions~\cite{VanLummel2013, Rodriguez-Martin2012} and also to extract biomechanical information~\cite{Music2008}. Often, parameters such as transition duration, angular and linear velocities, trunk tilt range, spectral edge frequencies and entropy values are used to evaluate the functional performance of sit-to-stand and stand-to-sit transitions~\cite{Millor2014a}. Sit-to-stand transitions can be identified using single or multiple inertial sensors positioned at various locations such as the waist, hip or lower back~\cite{VanLummel2013, Bidargaddi2007WaveletAccelerometer, Regterschot2014SensitivityAdults} and chest~\cite{Godfrey2014AAccelerometer, Jovanov2013}. Various classification schemes such as wavelet methods~\cite{Bidargaddi2007WaveletAccelerometer, Lockhart2013} and Support Vector Machines (SVM)~\cite{Rodrguez-Martn2014} have been used to identify sit-to-stand and stand-to-sit from the inertial sensors. Most of the previous studies focus on classification of sit-to-stand transitions and very few model their kinematics. A theoretical model of sit-to-stand has been proposed by Musi\'{c} \textit{et al.} (2008)~\cite{Music2008}. Assessing a movement by modelling its kinematics is important for diagnosing and determining the severity of motor impairment, devising rehabilitation strategies, and monitoring a patient's progress and the outcomes of the intervention~\cite{Baker2006GaitRehabilitation}. Activity classification, on the other hand, enables recognition of different movements~\cite{Avci2010}, which is useful in developing assistive technologies. Combining modelling and classification can help in pinpointing the problem areas in the movement as well as assessing the change in motion kinematics in the affected population. However, to our knowledge, there are no methods where modelling of kinematics and classification of sit-to-stand transitions are explored via inter-dependent algorithms. Sit-to-stand kinematics are typically modelled by placing one sensor per segment~\cite{Music2008, Boonstra2006TheGyroscopes,Roetenberg2009XsensSensors,Zhou2008UseTracking}; additionally, multiple force sensors are also used~\cite{Music2008}. In this study, we show that the body kinematics can be modelled using only two wearable inertial sensors, instead of placing sensors on all segments of the body or using the more traditional five-sensor configuration with sensors on the two legs, two hands and waist~\cite{Stack2016}. To our knowledge, this has not been evaluated on lower limb activities.
In this study, we expand upon this two-segment upper limb model~\cite{Villeneuve2017ReconstructionHealthcare} to develop a three-segment model for estimating sit-to-stand transition kinematics by including a classification-based modelling approach using only two inertial sensors. Unlike our previous work, in this study we not only integrate classification of sit-to-stand transitions into the modelling process, but also use fewer sensors than body segments being modelled. Additionally, we demonstrate the generalisability of our novel integrated kinematics estimation approach using three different participant groups with varied ages and motor abilities, including young healthy adults, older healthy adults and people with Parkinson's disease. The aims of this work are: \begin{enumerate} \item To design an integrated approach for monitoring and classification of sit-to-stand transitions and validate this model by comparing the results with motion capture reference data. \item To apply this method to estimate the three-segment sit-to-stand angular kinematics of older healthy participants and people with Parkinson's using only two inertial sensors. This is appropriate for people with motor-related physiological conditions to better understand their condition and the affected motion kinematics. \end{enumerate} We have chosen these groups to represent the problems that people tend to develop later in life. We demonstrate this using a novel method of modelling human motion kinematics using only two inertial sensors, each with a triaxial accelerometer and a triaxial gyroscope, so as to understand sit-to-stand movement in healthy individuals and individuals with motor impairments. We apply this model to accurately estimate the angular kinematics and classify sit-to-stand and stand-to-sit movements with a three-segment body model consisting of the shank, thigh and back. We have reduced the number of sensors to make the system more comfortable to wear and to facilitate measurements over longer durations, while reducing the energy requirements and sensor setup time. \section*{Parametric modelling and estimation of angular kinematics} The sit-to-stand transition angular kinematics for the three segments (the shank, thigh and back) were modelled using the measurements from two inertial sensors placed on the shank and back. This was achieved in two stages: 1) modelling the relationship between the limb kinematics and sensor measurements; 2) parameter estimation using the model. The parameter estimation was done in two further stages. First, the shank and back kinematics were estimated directly from the inertial measurements. Second, as there was no sensor on the thigh, the corresponding kinematics were reconstructed using the previous outcome. A combined classification-based approach was used to estimate the thigh kinematics. \subsection*{Estimation of the angular kinematics for the shank and the back} \subsubsection*{Kinematic model for the shank and the back} We estimated the kinematics during sit-to-stand using a 2-dimensional three-segment model of the body in the sagittal plane. We chose a 2-dimensional model because the sit-to-stand motion occurs mainly in the sagittal plane, which hence contains the maximum information about the motion. Most of the studies in the literature investigating the kinematics of the three body segments also assumed that the movement was restricted to the sagittal plane~\cite{Fong2010TheReview}.
This 2-dimensional model is sufficient for our purposes to study the kinematics of sit-to-stand transitions and classify them in all three participant groups. The third dimension might not provide additional information about sit-to-stand, especially in the less dynamic older healthy and people with Parkinson's groups. This is discussed further in the Discussion section. The first, second and third segments represent the shank (S), thigh (T) and back (B) respectively, as shown in Fig~\ref{fig1}. Two inertial sensors, each with a triaxial accelerometer and a triaxial gyroscope, were placed on the shank and back. The inertial sensors were placed at a distance $L_S$ from the ankle on the shank and at a distance $L_B$ from the hip on the back. Angular kinematics including position $\theta_i$, velocity $\omega_i$ and acceleration $\alpha_i$, $i\in \{ S,T,B\}$ for each of the three segments were to be determined. The $\theta_i$, $\omega_i$ and $\alpha_i$ are functions of time, where $\theta_S,\theta_T\in[0,\pi/2)$ and $\theta_B\in[-\pi/2,\pi/2)$. \begin{figure}[!h] \centering \includegraphics {Fig1} \caption{{\bf Leg and trunk three-segment 2-dimensional model in the sagittal plane.} $\theta_S$ is the angle for the shank, $\theta_T$ is the angle for the thigh and $\theta_B$ is the angle for the back. Green squares on the shank and back segments represent the inertial sensors. $L_S$ and $L_B$ denote the distance of the sensor placement on the shank from the ankle and the back from the hip respectively. } \label{fig1} \end{figure} For estimation of the shank and back angular kinematics, the translation of the ankle and hip joints is neglected and the same model is applied to both segments. Such an approximation is straightforward for the ankle, as the foot is on the ground during sit-to-stand transitions and the shank pivots around the ankle; it is also reasonable for the hip and necessary for the kinematic estimation from only one sensor. Vectors are written in bold; the superscript of a vector refers to the coordinate frame in which it is written, e.g. $^0\textbf{g}$; if omitted, the vector is written in the reference frame $\{0\}$; matrices are written in capital letters. Rotation matrices from frame $j$ to frame $k$ are written as ${^j}{R_k}$. The sensor is located at $^1\textbf{d}_i = \left[0, L_i, 0\right]\in\mathbb{R}^3,~{i\in\{ S,B\}}$ in the local frame, and $^0\textbf{d}_i={^0}{R_1} ^1\textbf{d}_i $ in the reference frame. The linear acceleration $\textbf{a}_i\in\mathbb{R}^3,~{i\in\{ S,B\}}$ of the sensor, written in the reference frame, is given by Eq~(\ref{eq:linear-to-angular-model})~\cite{Craig2005IntroductionControl}, where $\boldsymbol{\mathbf{\omega}}_i\in\mathbb{R}^3$ is the angular velocity and $\boldsymbol{\dot{\mathbf{\omega}}}_i\in\mathbb{R}^3$ is the angular acceleration.
\begin{eqnarray}\label{eq:linear-to-angular-model} ^0{\textbf{a}}_i = {^0}{\boldsymbol{\dot{\mathbf{\omega}}}}_i \times {^0}{\textbf{d}}_i + {^0}{\boldsymbol{\mathbf{\omega}}}_{i} \times ({^0}{\boldsymbol{\mathbf{\omega}}}_i \times {^0}{\textbf{d}}_i) \end{eqnarray} Hence, the linear accelerations measured by the accelerometers on the shank and back are modelled by Eq~(\ref{eq:shank_model}) \begin{eqnarray}\label{eq:shank_model} ^1{\textbf{a}}_i ={^1}{R}_0 (^0{\textbf{a}}_i+{^0}\textbf{g}) \end{eqnarray} \noindent where $^1{R}_0=\begin{bmatrix} \cos\!\left({\theta}_{i}\right) & \sin\!\left({\theta}_{i}\right) & 0\\ -\sin\!\left({\theta}_{i}\right) & \cos\!\left({\theta}_{i}\right) & 0\\ 0 & 0 & 1 \end{bmatrix}$ is the rotation transformation between frame 0 (world) and frame 1 (sensor), and the gravity vector in the reference frame is $^0\textbf{g}= \begin{bmatrix} 0 \\ g \\ 0 \end{bmatrix}$, pointing upwards. Thus, for the shank and the back, we get (\ref{eq:shank_model2})~\cite{Villeneuve2017ReconstructionHealthcare}. \begin{eqnarray}\label{eq:shank_model2} ^1{\textbf{a}}_i=\left(\begin{array}{c} g\, \sin\!\left(\mathrm{\theta_i}\right) - L_i\, \mathrm{\alpha_i}\\ g\, \cos\!\left(\mathrm{\theta_i}\right) - L_i\, {\mathrm{\omega_i}}^2 \end{array}\right), ~{\textbf{a}_i \in\mathbb{R}^3},~{i\in\{ S,B\}} \end{eqnarray} In order to estimate the angle of the back relative to the reference frame, one can apply the shank model if the acceleration of the hip is neglected. \subsubsection*{Extended Kalman filter to estimate the angular kinematics}\label{section:EKF} Expanding on our previous work~\cite{Villeneuve2017ReconstructionHealthcare} for estimating upper limb kinematics, we use an extended Kalman filter (EKF) for obtaining the angle $\theta_i$, the angular velocity $\omega_i$ and the angular acceleration $\alpha_i, ~{i\in\{ S,B\}}$ for the shank and back independently. The state vector for the EKF is given by $ \boldsymbol{x}_t = [ \theta_i, \omega_i , \alpha_i]^T $, where $ \boldsymbol{x}_t$ is a function of time. The transition matrix $F$, which propagates the state from one sample to the next, is given by Eq~(\ref{eq:EKF-Fmatrix})~\cite{Todorov2007ProbabilisticData}. \begin{eqnarray}\label{eq:EKF-Fmatrix} F=\begin{bmatrix} 1 & \Delta T & \frac{\Delta T^2}{2}\\ 0 & 1 & \Delta T\\ 0 & 0 & 1 \end{bmatrix} \end{eqnarray} \noindent where $\Delta T$ is the sampling period (in this case 0.02\,s). The process model for a single link for the EKF is given by Eq~(\ref{eq:EKF-process model}). \begin{eqnarray}\label{eq:EKF-process model} \boldsymbol{x}_t = F \boldsymbol{x}_{t-1} + \boldsymbol{v}_{t-1} \end{eqnarray} \noindent where $\boldsymbol{v}_t \sim \mathcal{N}(0, Q)$ is the process noise, a centred Gaussian noise with covariance matrix $Q$, chosen as $Q = \begin{bmatrix} (\Delta T^2)^2 & 0 & 0\\ 0 & (0.1\Delta T)^2 & 0\\ 0 & 0 & (0.04)^2 \end{bmatrix} $. We want to estimate $\theta_i$, $\omega_i$ and $\alpha_i$ from the measurements observed from one accelerometer and one gyroscope. Using the relationship between the linear accelerations $a_x$ (x-axis) and $a_y$ (y-axis) obtained from the accelerometer and the angular kinematics given in Eq~(\ref{eq:shank_model2}), together with the angular velocity measurement obtained directly from the z-axis of the gyroscope, $gyr_z$, we can establish the relation given in Eq~(\ref{eq:measurement-kinematics relation}) for each time point $t$.
\begin{eqnarray}\label{eq:measurement-kinematics relation} \underbrace{ \left(\begin{array}{c} a_{x,i}\\ a_{y,i}\\ gyr_{z,i} \end{array}\right)}_{\boldsymbol{z}_t} = \underbrace{ \left(\begin{array}{c} g\, \sin\!\left(\mathrm{\theta_i}\right) - L\, \mathrm{\alpha_i}\\ g\, \cos\!\left(\mathrm{\theta_i}\right) - L\, {\mathrm{\omega_i}}^2\\ \omega \end{array}\right) }_{H(\boldsymbol{x}_t)} \end{eqnarray} \noindent where $~{i\in\{ S,B\}}$. The measurements obtained from the inertial sensors are noisy and hence the measurement model for the EKF is given by Eq~(\ref{eq:EKF-measurement-model}). \begin{eqnarray}\label{eq:EKF-measurement-model} \boldsymbol{z}_t = H (\boldsymbol{x}_{t}) + \boldsymbol{w}_t \end{eqnarray} \noindent where $\boldsymbol{w}_t \sim \mathcal{N}(0, R)$ and $ R = \begin{bmatrix} (\frac{g}{10})^2 & 0 & 0\\0 & (\frac{g}{10})^2 & 0\\ 0 & 0 & (0.005)^2 \end{bmatrix} $ is the covariance matrix of the Gaussian measurement noise resulting from the accelerometer and gyroscope. The exact values of $Q$ and $R$ were fine-tuned manually offline, a common approach for determining these parameters of Kalman filters~\cite{Welch1995AnFilter}. The process model given in (\ref{eq:EKF-process model}) is used in the prediction step of the EKF and the measurement model given in (\ref{eq:EKF-measurement-model}) is used in the update step of the EKF. This EKF model is used independently for obtaining the shank kinematics $\theta_S$, $\omega_S$ and $\alpha_S$ and the back kinematics $\theta_B$, $\omega_B$ and $\alpha_B$ using the measurements from the inertial sensors placed on the shank and the back respectively; a minimal numerical sketch of this filter is given below. \subsection*{Estimation of the angular kinematics for the thigh} The thigh movement is not measured using an inertial sensor and hence its angular kinematics could not be estimated directly. We therefore used a classification-based approach, using the kinematics from the shank and back to identify four classes: sitting, standing, sit-to-stand and stand-to-sit. The thigh angular kinematics were estimated for each of the classes separately because we observed from the reference data that the kinematics of each class could be modelled by a different function. A two-tiered classification scheme was used. The first classifier distinguished between a stationary state (sitting and standing) and a transition state (sit-to-stand and stand-to-sit). The second classifier classified sit-to-stand and stand-to-sit movements, and based on that, a probabilistic approach was used to determine the sitting or standing state. The analysis pipeline of the estimation of the kinematics of the shank, back and thigh is given in Fig~\ref{fig:processing}. \begin{figure}[!h] \centering \includegraphics{Fig2} \caption{{\bf Processing steps for estimation of Shank, Back and Thigh kinematics.} (A) Shank kinematics estimation process using the model and EKF. (B) Back kinematics estimation process using the model and EKF. (C) Thigh kinematics estimation process integrating the results of (A) and (B) with the two-tiered classification scheme to segment and identify sit-to-stand (SiSt), stand-to-sit (StSi), sitting and standing states.} \label{fig:processing} \end{figure} \subsubsection*{Classification 1 - automatic segmentation and identification of stationary and transition states} The first classifier automatically segmented the data containing multiple sit-to-stand movements into individual stationary and transition states from the shank and back angular kinematics.
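Before detailing the classifiers, the per-segment EKF described above can be summarised in code. The following is a minimal sketch (illustrative only; variable names are ours), using $\Delta T=0.02$\,s, $g=9.81$\,m\,s$^{-2}$ and the $Q$ and $R$ matrices quoted in the text; one such filter is run independently for the shank and for the back:
\begin{verbatim}
import numpy as np

dt, g = 0.02, 9.81
F = np.array([[1.0, dt,  dt**2/2],
              [0.0, 1.0, dt     ],
              [0.0, 0.0, 1.0    ]])
Q = np.diag([(dt**2)**2, (0.1*dt)**2, 0.04**2])
R = np.diag([(g/10)**2, (g/10)**2, 0.005**2])

def h(x, L):
    # measurement model: predicted [a_x, a_y, gyr_z]
    th, om, al = x
    return np.array([g*np.sin(th) - L*al,
                     g*np.cos(th) - L*om**2,
                     om])

def H_jac(x, L):
    # Jacobian of h with respect to the state [theta, omega, alpha]
    th, om, _ = x
    return np.array([[ g*np.cos(th),  0.0,     -L ],
                     [-g*np.sin(th), -2*L*om,  0.0],
                     [ 0.0,           1.0,     0.0]])

def ekf_step(x, P, z, L):
    # prediction step (process model)
    x = F @ x
    P = F @ P @ F.T + Q
    # update step (measurement model)
    Hk = H_jac(x, L)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x = x + K @ (z - h(x, L))
    P = (np.eye(3) - K @ Hk) @ P
    return x, P
\end{verbatim}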
A one-dimensional feature vector was created by, first, multiplying together the angular kinematics $\theta_i$ and $\omega_i$, $i\in \{ S,B\}$ for the shank and back; second, taking the absolute value of the feature vector; and third, smoothing it with a moving average filter $y_t=1/n \sum_{i=0}^{n-1} x_{t+i}$ with n=5 to eliminate trivial peaks and avoid spurious misclassification. This feature vector had values close to zero during the stationary states and higher values during transition states. Multiplying $\theta_i$ and $\omega_i$ together to obtain this feature vector made the values in the stationary state even smaller and enhanced the difference between stationary and transition states. A threshold for classification was determined automatically from the right edge of the first bin in the histogram of the peaks of the feature vector. The bin width for the histogram was obtained using Scott's rule~\cite{Scott2010ScottsRule}. The histogram was computed using the MATLAB (The MathWorks, Inc., Natick, Massachusetts, US) function \textit{histogram()}. The first bin of the histogram captured feature values of stationary states, which were close to zero; hence the right edge of the first bin computed using Scott's rule gave the threshold value. A binary linear classification was performed using this threshold. Spurious misclassifications were identified and corrected automatically by finding one or two samples that had a different class from their neighbouring samples. Based on the classification results, a series of sit-to-stand movements was segmented into stationary and transition states. Fig~\ref{fig:classification} shows the two-tier classification scheme. \begin{figure}[!h] \centering \includegraphics{Fig3} \caption{{\bf Classification scheme for sit, stand, sit-to-stand and stand-to-sit.} State diagram representing the classification scheme for sit, stand, sit-to-stand and stand-to-sit. The vertical dashed line represents the threshold for Classifier 1 to classify the stationary state and the transition state. The horizontal dashed line represents the classification boundary for Classifier 2 to classify the transition state into sit-to-stand and stand-to-sit.} \label{fig:classification} \end{figure} \subsubsection*{Classification 2 - classifying sit-to-stand, stand-to-sit, sitting and standing states using unsupervised learning} Once the stationary and transition state segments were identified, the second classification was performed on the transition segments to classify sit-to-stand and stand-to-sit states. We employed unsupervised learning using k-means clustering~\cite{Arthur2007} to automatically classify sit-to-stand and stand-to-sit states. Four-dimensional features were used for k-means clustering. The four features obtained for each transition segment were as follows: the slope of the linear regression of the segment $\omega_S$ (capturing the amount of increase or decrease in $\omega_S$ in the selected transition segment, which differs between sit-to-stand and stand-to-sit), the slope of the linear regression of the segment $\omega_B$, the difference between the start and end points of the segment $\theta_S$ and the difference between the start and end points of the segment $\theta_B$. After classification, the labels were assigned to the two clusters post hoc based on their values. The classification was performed on each participant independently with a limited number of trials.
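A minimal sketch of the two-tier classification described above (illustrative only; we use NumPy's Scott-rule binning and scikit-learn's k-means in place of the MATLAB equivalents, read the tier-1 feature as the product of $\theta$ and $\omega$ for both segments, and threshold the smoothed feature directly rather than its peaks):
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def tier1_is_transition(th_S, om_S, th_B, om_B, n=5):
    # feature: |theta*omega| for shank and back, smoothed (moving average)
    feat = np.abs(th_S*om_S*th_B*om_B)
    feat = np.convolve(feat, np.ones(n)/n, mode='same')
    # threshold: right edge of the first histogram bin (Scott's rule)
    edges = np.histogram_bin_edges(feat, bins='scott')
    return feat > edges[1]          # True = transition, False = stationary

def tier2_labels(segments):
    # four features per transition segment, then 2-cluster k-means
    X = []
    for th_S, om_S, th_B, om_B in segments:
        t = np.arange(len(om_S))
        X.append([np.polyfit(t, om_S, 1)[0],   # slope of omega_S
                  np.polyfit(t, om_B, 1)[0],   # slope of omega_B
                  th_S[-1] - th_S[0],          # change in theta_S
                  th_B[-1] - th_B[0]])         # change in theta_B
    return KMeans(n_clusters=2, n_init=10).fit_predict(np.array(X))
\end{verbatim}
Cluster labels are then mapped to sit-to-stand and stand-to-sit post hoc from the feature values, as described above.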
The advantage of using unsupervised learning was that only a single trial from an individual participant, with as few as two sit-to-stand transitions, could be used to classify both states correctly. The sit-to-stand and stand-to-sit states in two participants with Parkinson's who managed to perform only two sit-to-stand transitions were also classified correctly using unsupervised learning. In contrast, supervised learning requires several examples of sit-to-stand transitions to train the classifier. Based on whether the previous transition segment was sit-to-stand or stand-to-sit, the stationary state segment was classified as sitting or standing as follows: \begin{itemize} \item If the previous transition segment was sit-to-stand, then the probability of the standing state in the current stationary segment was set to 1 and hence the segment was classified as standing. \item If the previous transition segment was stand-to-sit, then the probability of the sitting state in the current stationary segment was set to 1 and hence the segment was classified as sitting. \end{itemize} We based the probability of identifying stationary states on the class of the previous transition state because the angular velocity and angular acceleration are both zero during sitting and standing, and the angles of the shank and back vary according to the individual's posture. Fig~\ref{fig:processing}C shows the classification scheme used for estimating the thigh kinematics. \subsubsection*{Estimation of the thigh angular kinematics}\label{section:thigh_kinematics} Based on the classification of each segment, we used a model similar to a single neuron of an artificial neural network with an appropriate activation function to estimate the thigh kinematics for each state (see Fig~\ref{fig:thigh_model}). We assumed that when a person is standing, the thigh angle $\theta_T$ is $0^o$, and when a person is sitting on a chair, $\theta_T$ is $90^o$. We confirmed this by observing the distribution of the thigh angles while sitting and standing from the reference data collected from the young healthy participants, as detailed in Section~\ref{section:experimental_protocol}. The angular velocity $\omega_T$ and angular acceleration $\alpha_T$ were zero during the stationary states. \begin{figure}[!h] \centering \includegraphics[scale = 0.4]{thigh_model} \caption{{\bf Model to estimate thigh angle.} A model based on an artificial neuron with a sigmoid activation function to estimate the stand-to-sit thigh angle where, $t$ is the input time segment of the stand-to-sit transition, weight $w$ determines the speed of transition, bias $b$ determines the centre of transition, $x$ is the classifier 2 output where $x = 0$ for stand-to-sit and $x = 1$ for sit-to-stand transition, gain $G$ scales the output of the activation function between $0^o$ and $90^o$ and the thigh angle $\theta_T$ is the output. } \label{fig:thigh_model} \end{figure} To estimate the stand-to-sit transition angle $\theta_T$, we used a model similar to an artificial neural network with a single neuron with a sigmoid activation function (Eq~(\ref{eq:sigmoid-StSi})) for regression (see Fig~\ref{fig:thigh_model}). This approach is generalisable and enables more complex transitions to be modelled, but is sufficient for the experiments we describe in this study. The model parameters $w$ and $b$ determined the speed of transition and the centre of the sigmoid curve (the midpoint of the transition segment), respectively.
The input to the model, $t$, is the time window of the transition segment. The value of $w$, indicating the speed of the transition, was estimated, in the range between 0 and 1, by minimising the root mean squared error between the estimated angle and the ground truth reference angle of the thigh. The optimum estimated value of $w$ was found to be 0.135. The additional model parameter $x$ is the classification result from Classifier 2, where $x = 0$ for stand-to-sit and $x = 1$ for sit-to-stand transitions. We chose the activation function in Eq~(\ref{eq:sigmoid}) to obtain a smooth transition of the angle for sit-to-stand by assuming that the angles for sit-to-stand and stand-to-sit are symmetrical for the thigh; this was also confirmed by observing the angular velocities in the ground truth reference data, which were indistinguishable. \begin{eqnarray}\label{eq:sigmoid} \theta_T = F_{SiSt}(t,x) = \frac{e^{x(-w(t-b))}} {1 + e^{-w(t-b)}} \end{eqnarray} For the stand-to-sit transition, the thigh angle is modelled by Eq~(\ref{eq:sigmoid-StSi}) with $x = 0$ in Eq~(\ref{eq:sigmoid}). \begin{eqnarray}\label{eq:sigmoid-StSi} \theta_T = F_{SiSt}(t, 0) = \frac{1} {1 + e^{-w(t-b)}} \end{eqnarray} For the sit-to-stand transition, the thigh angle is modelled by Eq~(\ref{eq:sigmoid-SiSt}) with $x = 1$ in Eq~(\ref{eq:sigmoid}). \begin{eqnarray}\label{eq:sigmoid-SiSt} \theta_T = F_{SiSt}(t, 1) = 1 - \frac{1} {1 + e^{-w(t-b)}} \end{eqnarray} The sigmoid function in Eq~(\ref{eq:sigmoid}) is differentiable, and thus $\omega_T = \frac{d \theta_T}{dt}$ and $\alpha_T = \frac{d^2 \theta_T}{dt^2}$ are well defined for both sit-to-stand and stand-to-sit. This single-neuron model was sufficient to estimate the thigh angle; however, it can be extended to a more complex artificial neural network with any bounded, continuous, differentiable activation function to perform the regression. \section*{Study Design} \subsection*{Participants} Participants from three groups were recruited to take part in this study, which was done in two stages. In the first stage, 10 younger healthy adults (YH) ($37.4 \pm 9.9$ years (mean~$\pm$~SD), 4 female) participated in the study conducted at the University of Reading. All participants were over the age of 18 and in good physical health without musculoskeletal or neurological conditions. Ethical approval for this study was obtained from the ethics committee of the University of Reading. In the second stage, 12 older healthy adults (OH) ($74 \pm 9.1$ years, 11 female) and 12 people with Parkinson's disease (PwP) ($74.3 \pm 7.4$ years, 6 female) participated in the study conducted at the University of Southampton. Out of the 12 PwP participants, eight had a score of 3 on the Hoehn and Yahr (H\&Y) scale, one had a score of 2.5, one a score of 2, and two a score of 1.5. All the participants in these two groups were over the age of 60, were able to walk independently unaided, and reported themselves able to perform transfers, walking and activities in standing three times over a period of approximately one hour. People with Parkinson's disease had a diagnosis made by a specialist at least 12 months prior to the study. The data from 4 OH and 10 PwP participants was collected in their homes, and the data from the rest of the participants was collected in the laboratory. The ethical approval for this study was obtained from the ethics committee of the University of Southampton.
The participants in all three groups gave written informed consent prior to participation. \subsection*{Equipment} \subsubsection*{Wearable sensors} Wearable sensors, custom designed at the University of Reading, were used to collect the movement data in this study. Each wearable sensor consisted of a triaxial accelerometer and a triaxial gyroscope. The data were stored on the internal SD card. The sensors sampled at a nominal rate of 50 Hz with a range of $\pm4$ g, providing an actual sampling rate of $49.985\pm 0.016$ Hz. The bandwidth at the nominal sampling rate was 21 Hz, with a noise density of 0.14 $mg/\sqrt[]{Hz}$. All sensor data were resampled using video recording as an external time base. Further details of the sensors can be found in~\cite{Villeneuve2017ReconstructionHealthcare}. The wearable sensors were attached to the shank and the back as shown in Fig~\ref{fig:coda}. \begin{figure}[!h] \centering \includegraphics{Fig4} \caption{{\bf Full inertial wearable sensors and Codamotion marker setup.} (A) Shank and thigh inertial sensors and Codamotion marker positions (B) Back inertial sensor and Codamotion marker positions. The long hollow arrows show the position of the inertial sensors placed on the shank and back. The solid short arrows show the position of Codamotion markers on the leg and the back.} \label{fig:coda} \end{figure} \subsubsection*{Motion capture} To provide the ground truth data for validating the results of the parametric estimation models, we collected motion capture data using the Codamotion 3D Motion Analysis System (Codamotion, Rothley, UK) for the young healthy (YH) participant group. The Codamotion active markers were used to track the motions of the body during data collection. The ODIN (Codamotion, Rothley, UK) software was then used to extract the body segment angular displacement, velocity and acceleration. The Codamotion markers were placed on the leg and back as shown in Fig~\ref{fig:coda}. Motion capture data was not collected for the OH and PwP groups because, for several participants, recording was done at home, since these groups had difficulty travelling. The kinematics estimation model, rigorously validated using motion capture data from the YH group, was then applied to the OH and PwP groups. \subsection*{Experimental protocol}\label{section:experimental_protocol} Prior to wearable sensor data collection, the distance in meters of the inertial sensor on the shank from the ankle $(L_S)$ and the distance of the inertial sensor on the back from the hip $(L_B)$ were recorded for input into the modelling algorithm for each participant. The wearable inertial sensors were attached to the right shank and to the middle of the lower back using elastic straps. The active Codamotion markers were placed on the shank and back close to the inertial sensors for YH. The clusters of Codamotion markers were also attached to the thigh, where there was no inertial sensor. The YH participants were then asked to perform three sets of five sit-to-stand and stand-to-sit transitions, with rest in between the sets as required. The OH and PwP participants were asked to perform a single set of three sit-to-stand and stand-to-sit transitions. \subsection*{Data Processing} Data obtained from the inertial sensors on the shank and the back were synchronised using ELAN software~\cite{Wittenburg2006ELAN:Research} by tapping the sensors together at the beginning and at the end of the trial.
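The alignment itself relied on manual annotation in ELAN; purely as an illustration of the tap-based idea (and not the pipeline used in this study), a programmatic variant could locate the shared tap as the largest acceleration spike near the start of each recording:
\begin{verbatim}
import numpy as np

def tap_offset(acc_a, acc_b, fs=50.0, window_s=5.0):
    """Estimate the time offset (seconds) between two sensors from a
    shared tap event; acc_a, acc_b are (N, 3) accelerometer arrays.
    Hypothetical helper, shown only to illustrate the tap-based idea."""
    w = int(window_s * fs)
    mag_a = np.linalg.norm(acc_a[:w], axis=1)  # acceleration magnitude
    mag_b = np.linalg.norm(acc_b[:w], axis=1)
    return (np.argmax(mag_a) - np.argmax(mag_b)) / fs
\end{verbatim}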
The Codamotion data in the YH participants was synchronised with the inertial sensors by moving the right foot backwards and forwards at the beginning and at the end of the experiment. After aligning the data, the Codamotion data was subsampled to 50 Hz to match the sampling rate of the inertial sensors. Also, the Codamotion data was calibrated such that the angles of the shank, thigh and back were close to $0^o$ in the standing position. The measurements from the x-axis and the y-axis of the accelerometer and the z-axis of the gyroscope on the shank and the back (Fig~\ref{fig1}) were used as measurement input to the EKF model detailed in section~\ref{section:EKF} to obtain the shank and back angular kinematics $\theta_i$, $\omega_i$ and $\alpha_i,~{i\in\{ S,B\}}$. Using the angular kinematics of the shank and back, the classification was performed to obtain segments of data belonging to the standing, sitting, sit-to-stand and stand-to-sit states. For estimating the thigh kinematics $\theta_T$, $\omega_T$ and $\alpha_T$, the function in Eq~(\ref{eq:sigmoid}) was used, as detailed in the previous section, to model the thigh angular kinematics. Since there was no sensor on the thigh, the estimation of the kinematics was completely dependent on the estimated kinematics from the shank and the back. The estimated body kinematics were compared against the reference Codamotion data in the YH participants, and the model was then applied to the OH and PwP participants. All the analysis was completed using MATLAB. \section*{Results} \subsection*{Estimated angular kinematics in younger healthy adults (YH)} \subsubsection*{Comparison of the estimated angular kinematics with the reference data in younger healthy adults (YH)} We can observe from Fig~\ref{fig:estimated_kinematics} that the models for the shank, thigh and back estimated the angular kinematics accurately when compared to the reference data recorded with the Codamotion system. There is some difference in the back angles, as seen in Fig~\ref{fig:estimated_kinematics}B, because the Codamotion marker shifted in the seated position when the participant's back touched the back of the chair. We can observe from Fig~\ref{fig:estimated_kinematics}C that the thigh kinematics were estimated accurately using the proposed integrated approach of modelling and classification, despite the lack of an inertial sensor at this location. \begin{figure}[!h] \centering \includegraphics{Fig5} \caption{{\bf Comparison of estimated kinematics with Codamotion reference kinematics in young healthy participants.} An example of good angular kinematics estimation (left column) is shown for (A) Shank, (B) Back and (C) Thigh. The segmentation of sit-to-stand, stand-to-sit, sit and stand (no label on the top) is also shown with different background colours and labels at the top. An example of discrepancies between estimated kinematics and reference kinematics from Codamotion data (right column) is shown for (D) Shank, (E) Back and (F) Thigh.} \label{fig:estimated_kinematics} \end{figure} Even though the estimated kinematics of the shank and back matched the kinematics obtained from the reference Codamotion data in most cases, we observed some offset between the two in some participants, as shown in Fig~\ref{fig:estimated_kinematics}D and E. For estimation of the thigh angle, it was assumed that the angle is $0^o$ while standing and $90^o$ while sitting. However, the sitting angle depends on the individual's sitting posture.
In Fig~\ref{fig:estimated_kinematics}F, the individual sat in a slightly different posture with the feet tucked under the chair, making the thigh angle less than $90^o$ during sitting. Overall, visual inspection of the plots of the angular kinematics for all the trials of all the YH participants showed that the estimated kinematics matched the reference kinematics obtained from the Codamotion sensor. The next section details the quantitative results of the comparison of the estimated and reference kinematics. \subsubsection*{Normalised root mean squared error between estimated and reference angular kinematics} The normalised root mean squared error (NRMSE) was calculated between the angular kinematics estimated using the model shown in Fig~\ref{fig:processing} and the reference angular kinematics obtained from motion capture, for evaluating the quality of the models used for estimation. The average NRMSEs over the three runs for all the YH participants are shown in Table~\ref{table1}. We observed that the NRMSEs are low, averaging about $10\%$ for angular velocity and angular acceleration. The NRMSEs for angular displacement are higher due to the offset between the inertial sensors and Codamotion markers on the shank. Thus, these NRMSE results confirm that our proposed model estimates the angular kinematics of the three segments of the body correctly. \begin{table*}[!ht] \begin{adjustwidth}{-1.25in}{0in} \centering \renewcommand{\arraystretch}{1.3} \caption{{\bf Normalised root mean squared error (NRMSE) for shank, back and thigh for all the young healthy (YH) participants}} \label{table1} \begin{tabular}{c|c c c|c c c|c c c} \hline \multicolumn{1}{c|}{} & \multicolumn{3}{c|}{\bfseries Shank NRMSE} & \multicolumn{3}{c|}{\bfseries Thigh NRMSE} & \multicolumn{3}{c}{\bfseries Back NRMSE}\\ \hline \bfseries \thead{Participant\\YH} & \bfseries Angle & \bfseries \thead{Angular \\Velocity} & \bfseries \thead{Angular \\Acc.} & \bfseries Angle & \bfseries \thead{Angular \\Velocity} & \bfseries \thead{Angular\\ Acc.} & \bfseries Angle & \bfseries \thead{Angular \\Velocity} & \bfseries \thead{Angular \\Acc.}\\ \hline 1 & 0.1401 & 0.0465 & 0.0617 & 0.0929 & 0.1018 & 0.1745 & 0.2553 & 0.0907 & 0.0932\\ 2 & 0.1386 & 0.0615 & 0.0520 & 0.1373 & 0.0877 & 0.1012 & 0.1705 & 0.0549 & 0.0942\\ 3 & 0.3683 & 0.0821 & 0.1366 & 0.0953 & 0.0914 & 0.1202 & 0.1197 & 0.0585 & 0.1385\\ 4 & 0.3537 & 0.0585 & 0.1348 & 0.0718 & 0.0722 & 0.1239 & 0.1075 & 0.0542 & 0.1049\\ 5 & 0.3464 & 0.1277 & 0.1597 & 0.1013 & 0.0745 & 0.0917 & 0.2090 & 0.0962 & 0.1750\\ 6 & 0.2963 & 0.0848 & 0.0446 & 0.2457 & 0.1310 & 0.1409 & 0.1549 & 0.0760 & 0.0696\\ 7 & 0.3163 & 0.0746 & 0.1113 & 0.1157 & 0.1304 & 0.1928 & 0.1348 & 0.0513 & 0.0902\\ 8 & 0.2363 & 0.0439 & 0.0940 & 0.1156 & 0.0963 & 0.1272 & 0.1160 & 0.0624 & 0.1376\\ 9 & 0.3191 & 0.0559 & 0.1229 & 0.1504 & 0.1299 & 0.1477 & 0.1476 & 0.0694 & 0.1501\\ 10 & 0.4788 & 0.0830 & 0.0916 & 0.1906 & 0.1160 & 0.1245 & 0.2009 & 0.0621 & 0.0948\\ \hline\hline \bfseries Average & \bfseries0.2994 & \bfseries0.0718 & \bfseries0.1009 & \bfseries0.1317 & \bfseries0.1032 & \bfseries0.1345 & \bfseries0.1616 & \bfseries0.0676 & \bfseries0.1148 \\ \hline \bfseries Std. Dev.
& 0.1004 & 0.0247 & 0.0390 & 0.0524 & 0.0226 & 0.0309 & 0.0476 & 0.0155 & 0.0333\\ \hline \end{tabular} \end{adjustwidth} \end{table*} \subsubsection*{Bland-Altman analysis} The Bland-Altman plots~\cite{Giavarina2015} for comparing the reference angular kinematics obtained from Codamotion and the angular kinematics estimated by our proposed method in all the YH participants are shown in Fig~\ref{fig:bland_altman}. The Bland-Altman plots show that there is agreement between the reference and estimated kinematics: the mean difference between the two (the solid horizontal line) is close to zero, and the majority of the points lie within $\pm 2$ standard deviations (the dotted horizontal lines). There is a small discrepancy between the reference and estimated shank kinematics, since their mean difference is slightly larger than zero. The individual participants have Bland-Altman plots similar to one another and to Fig~\ref{fig:bland_altman}, which pools all the participants. \begin{figure}[!h] \centering \includegraphics[scale=0.58]{Fig6} \caption{{\bf Bland-Altman plots.} The Bland-Altman plots showing comparison between the reference Codamotion angular kinematics and estimated angular kinematics using the proposed integrated approach of modelling and classification for the shank, thigh and back. The x-axis shows the mean of the two measures and the y-axis shows the difference between the two measures. The solid horizontal line represents the mean difference between the reference and estimated kinematics and the dotted horizontal lines show the $\pm 2$ standard deviation boundaries. A Bland-Altman plot typically looks for points to be within $\pm 2$ standard deviations of the mean difference; the title of each sub-figure reports this percentage.} \label{fig:bland_altman} \end{figure} Thus, using NRMSE and Bland-Altman analysis, we validated the wearable inertial sensors against the reference data from young individuals, and our proposed angular kinematics estimation model was found to be stable in this context. \subsection*{Estimated angular kinematics in older healthy adults (OH) and people with Parkinson's (PwP)} We applied the integrated modelling and classification method to the OH and PwP participant groups to estimate their angular kinematics during sit-to-stand transitions. An example of the estimated angular kinematics for OH and PwP is given in Fig~\ref{fig:estimated_kinematics_old}A-C and D-F respectively. The proposed method was able to reliably classify the sit-to-stand and stand-to-sit transitions in both the OH and PwP groups and thus estimate their angular kinematics. We observed that the angular kinematics in these two groups were not as smooth as those in the YH group. Fluctuations were visually observed, especially in the angular velocity and angular acceleration (Fig~\ref{fig:estimated_kinematics_old}A, B, D and E), which could be an indication of overall instability during the sit-to-stand movements or of tremor in the PwP group. In many participants, this instability was also observed in the stationary states. The PwP group showed more instability than the OH group upon visual inspection, as seen in Fig~\ref{fig:estimated_kinematics_old}D and E.
We observed that in many participants in these two groups, there was a brief pause during the sit-to-stand and stand-to-sit transitions, which may indicate that older adults perform these transitional movements more statically, by keeping their accelerations low and bringing their velocity to zero halfway through the transition. This is an interesting finding, giving insight into the strategies used by the different groups for performing sit-to-stand transitions, and it will require further investigation. \begin{figure}[!h] \centering \includegraphics{Fig7} \caption{{\bf Estimated angular kinematics in older healthy adults (OH) and people with Parkinson's (PwP).} An example of estimated angular kinematics in OH participants (left column) is shown for (A) Shank, (B) Back and (C) Thigh. The segmentation of sit-to-stand, stand-to-sit, sit and stand (no label on the top) is also shown with different background colours and labels at the top. An example of estimated angular kinematics in PwP participants (right column) is shown for (D) Shank, (E) Back, and (F) Thigh.} \label{fig:estimated_kinematics_old} \end{figure} \subsection*{Comparison of results in younger healthy adults (YH), older healthy adults (OH) and people with Parkinson's (PwP)} \subsubsection*{Classification accuracies} The two-tiered classification approach successfully segmented and classified sit-to-stand motion into sitting, standing, sit-to-stand and stand-to-sit stages in all three participant groups with high accuracy. The best classification accuracies were obtained in the YH group. The overall classification accuracies across all four states were 98.67\%, 94.20\% and 91.41\% for YH, OH and PwP respectively. There were no false positives in the stationary and transition state classifications. Misclassifications occurred when the sitting and standing postures were identical and hence indistinguishable, which happened when participants sat upright; in such cases, the sensor data was also identical across the sit-to-stand transitions. \subsubsection*{Timings for sit-to-stand and stand-to-sit transitions} The average time taken for sit-to-stand was $1.44\pm 0.36$ s, $1.80\pm0.54$ s and $2.29\pm1.44$ s in YH, OH and PwP respectively. The average time taken for stand-to-sit was $1.55\pm0.33$ s, $1.77\pm0.65$ s and $2.18\pm0.84$ s for YH, OH and PwP respectively. The YH group took the least amount of time to perform transitions and approximately equal amounts of time for sit-to-stand and stand-to-sit. The OH group performed the transitions more slowly than the YH group and took approximately the same time for both transitions. The PwP group took more time to perform sit-to-stand transitions than the other two groups. The timings for the different YH participants are consistent, with a small standard deviation compared to the other groups. The variability in the timings to perform the transitions gradually increases from the YH to the OH to the PwP group. This is shown in the box plot in Fig~\ref{fig:timings}. Comparing the timings of the three groups during sit-to-stand and stand-to-sit independently, using the Mann-Whitney U test with Bonferroni correction for multiple comparisons between the three groups, revealed that the sit-to-stand timings of YH and PwP were significantly different $(p<0.001)$ and those of OH and PwP were also significantly different $(p<0.05)$. During stand-to-sit, the timings of YH and OH were significantly different $(p<0.001)$, as were the timings of YH and PwP $(p<0.001)$. \begin{figure}[h!]
\centering \includegraphics{Fig8} \caption{{\bf Timings of sit-to-stand and stand-to-sit in younger healthy (YH) adults, older healthy (OH) adults and people with Parkinson's (PwP).} The time taken by participants to perform sit-to-stand and stand-to-sit from all the three YH, OH and PwP groups. The black triangle shows the mean time. Statistically significant differences (Mann-Whitney U test with Bonferroni correction for multiple tests) in the timings among the three groups for sit-to-stand and stand-to-sit are shown by the stars indicating the p-values (one star indicates $p<0.05$, two stars indicate $p<0.01$ and three stars indicate $p<0.001$). } \label{fig:timings} \end{figure} \subsubsection*{Variability in the posture of sitting and standing} The box plots in Fig~\ref{fig:posture_variability}A and B show the posture variability in the YH, OH and PwP groups during sitting and standing respectively. The variability in the average angles of the shank and the back differs across the three groups. Overall, the back angles show higher variability. The variability is higher in the OH and PwP groups, suggesting that older adults adopt variable postures during sitting and standing depending on their physical condition and on shifts in their centre of gravity needed to maintain balance. The sitting position shows more postural variability (see Fig~\ref{fig:posture_variability}A) because of the differences in sitting styles, such as hunching, leaning back on the chair, tucking the feet under the chair or sitting very upright. However, this variability is not present in the YH group during standing, in contrast to the OH and PwP groups. The Mann-Whitney U test with Bonferroni correction for multiple comparisons showed that the shank and back angles during sitting and standing were significantly different between YH and PwP $(p<0.01)$ and also between the other groups in some cases, as shown in Fig~\ref{fig:posture_variability}. This shows that age and physical condition affect posture, which can be detected from the estimated angular kinematics. \begin{figure}[!h] \centering \includegraphics{Fig9} \caption{{\bf Variability in the posture during sitting and standing in the shank and back in younger healthy (YH) adults, older healthy (OH) adults and people with Parkinson's (PwP).} (A) The average angles of the shank and back in YH, OH and PwP participants during sitting. The black triangle shows the mean angle. (B) The average angles of the shank and back in YH, OH and PwP participants during standing. Statistically significant differences (Mann-Whitney U test with Bonferroni correction for multiple tests) in the angles of the shank and back among the three groups during sitting and standing are shown by the stars indicating the p-values (one star indicates $p<0.05$, two stars indicate $p<0.01$ and three stars indicate $p<0.001$).} \label{fig:posture_variability} \end{figure} \subsubsection*{Differences in sit-to-stand and stand-to-sit transitions in younger healthy adults (YH), older healthy adults (OH) and people with Parkinson's (PwP)} Fig~\ref{fig:grand_avg}A-D and E-H show the grand average sit-to-stand and stand-to-sit angular velocities and angles respectively for the shank and the back. The velocities are highest in YH and lowest in PwP for the shank and the back during both sit-to-stand and stand-to-sit (see Fig~\ref{fig:grand_avg}A-D). This suggests that the OH and PwP groups perform statically stable transitions in order to maintain their balance.
The YH group has greater shank angles during sit-to-stand transitions than the other groups (see Fig~\ref{fig:grand_avg}E and F). The shank angles are very small in the PwP group (Fig~\ref{fig:grand_avg}E-H). Thus, angular kinematics can inform us about the differences in sit-to-stand transitions in the three groups. \begin{figure}[!h] \centering \includegraphics[scale=0.85]{Fig10} \caption{{\bf Grand average shank and back velocity and angles during sit-to-stand and stand-to-sit in younger healthy (YH) adults, older healthy (OH) adults and people with Parkinson's (PwP).} (A) Grand average shank velocities in the three participant groups during sit-to-stand. (B) Shank velocities during stand-to-sit. (C) Back velocities during sit-to-stand. (D) Back velocities during stand-to-sit. (E) Shank angles during sit-to-stand. (F) Shank angles during stand-to-sit. (G) Back angles during sit-to-stand. (H) Back angles during stand-to-sit.} \label{fig:grand_avg} \end{figure} \section*{Discussion} Expanding on our previous work to estimate two-segment upper limb kinematics using one inertial sensor on each limb segment~\cite{Villeneuve2017ReconstructionHealthcare}, in this study we have developed a three-segment body model integrated with a classifier to estimate the angular kinematics and classify sit-to-stand motion using only two inertial sensors placed on the shank and back. This provides a low-cost solution to model the sit-to-stand activities which are crucial for human mobility and are often affected by old age and motor impairment. This approach combines both kinematic analysis and movement classification, and thus could be employed for monitoring the quality of movement as well as assessing general movement patterns, and may therefore be of value in clinical decision making. We have demonstrated that our approach of combining the model and the classification for estimation of kinematics is robust and stable, as demonstrated by its successful application across all three participant groups, comprising younger healthy adults, older healthy adults and people with Parkinson's. Our approach can be generalised across populations for modelling sit-to-stand kinematics. We have chosen a simplified 2-dimensional model because the sit-to-stand transitions occur predominantly in the sagittal plane~\cite{Music2008}. The sagittal movement assumption is common for estimating kinematics, since finding 3-dimensional kinematics poses difficulty when using body-fixed inertial sensors~\cite{Fong2010TheReview, Dejnabadi2006EstimationSensors}. Further work is needed to include out-of-plane information; however, adding an extra dimension might not give more information about sit-to-stand motion, although it may be helpful when considering combined sit-to-stand and turning movements. The 3-dimensional model is likely to be of particular relevance when considering movement dynamics in the younger healthy participants. However, we are developing this model to study the OH and PwP groups, which are less dynamic; hence their sit-to-stand transitions are largely restricted to the sagittal plane, for which our 2-dimensional model is sufficient. The EKF-based model successfully estimated the kinematics of the shank and back, as confirmed by comparing the outcome to the Codamotion reference data (Fig~\ref{fig:estimated_kinematics}A and B); a generic sketch of the underlying recursion follows.
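For reference, the generic EKF predict/update recursion underlying such an estimator is sketched below (in Python; the specific state-transition and measurement models, their Jacobians and the noise covariances are those of section~\ref{section:EKF} and are passed in here as opaque arguments, so this is a schematic rather than the study's implementation):
\begin{verbatim}
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic extended-Kalman-filter step (sketch).
    x, P : prior state estimate and covariance;  z : new measurement;
    f, h : process and measurement functions;  F, H : their Jacobians
    evaluated at the relevant estimates;  Q, R : process and
    measurement noise covariances."""
    # Predict: propagate the state and inflate uncertainty by Q
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    # Update: weigh the innovation by the Kalman gain
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
\end{verbatim}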
The EKF models the measurement noise and the process noise to produce accurate estimates, unlike approaches that obtain kinematics directly from accelerometer measurements, which require explicit de-noising (owing to the noise and drifts in the sensors) as well as differentiation and integration of the output. Our model is insensitive to the distances of the sensors $L_i, {i\in\{ S,B\}}$ from the ankle and hip (see Fig~\ref{fig1}). This shows that the model is robust across people with different anatomical measurements and across sensor placements on the body segment. In our model, we have ignored the translation and acceleration of the hip during sit-to-stand transitions. This introduces a bias in the data. However, by comparing the results of the YH participants with the reference Codamotion data (Fig~\ref{fig:estimated_kinematics}, Fig~\ref{fig:bland_altman} and Table~\ref{table1}), we observe that the bias is small enough to justify this assumption. The bias will be higher when the accelerations are high; consequently, since the OH and particularly the PwP groups have lower accelerations, the bias will also be low in these groups and can be disregarded. The model was validated on the YH participants and then applied to the OH and PwP participants. We were not able to do 3D motion capture with the Codamotion system in the OH and PwP groups, primarily because the data was collected in people's homes. This was seen as appropriate since these participant groups had difficulty travelling. We aimed to collect the data with minimal disruption, and given the problems of setting up the Codamotion system in the home environment, we intentionally omitted this stream of data. However, the validation of the wearable sensors using motion capture data collected from the YH group gave a strong indication that our kinematics estimation method using the proposed model works, as seen from the small NRMSE (Table~\ref{table1}) and the Bland-Altman plots (Fig~\ref{fig:bland_altman}); hence further validation for the OH and PwP groups was not required. We minimised the number of sensors required to estimate the kinematics of the three-segment model. We chose to place sensors on the shank and back because only this combination can model three-segment kinematics by estimating the thigh kinematics. If two consecutive segments were chosen, the third could not be detected. It was also easier to place sensors on the shank and the back than on the thigh, because of larger muscle movements in the thigh during sit-to-stand and discomfort when sitting with a sensor on the thigh. We have dealt with the difficult problem of estimating thigh kinematics effectively without placing an inertial sensor at this location. Estimating thigh kinematics from this missing data is challenging because it is an ill-posed problem: the thigh angle has infinitely many solutions in the range between $0^o$ and $90^o$ for the sit-to-stand activity. To deal with this, we have incorporated a classification-based approach where we identify the current state (sit, stand, sit-to-stand or stand-to-sit) and apply a different model to each individual state. Thus, we uniquely combine two challenges: classification of the different stages in sit-to-stand movement and obtaining angular kinematics for the three-segment body model.
Even though the estimated thigh kinematics are accurate, with a small average error of 13\% for the angle, angular velocity and angular acceleration of the shank, thigh and back in comparison to the reference motion capture data in YH (Table~\ref{table1}), we have based the estimation on the observational assumptions that when the person is seated on the chair, the angle of the thigh is $90^o$, and when the person is standing, the thigh angle is $0^o$. This depends strongly on the posture of an individual while sitting or standing, as observed in Fig~\ref{fig:estimated_kinematics}F, where the sitting angle for the thigh is slightly less than $90^o$. Since sitting and standing postures differ from person to person, this approach might not yield accurate results in cases of postural defects, specifically in the OH and PwP groups. We have modelled the transitions between $0^o$ and $90^o$ for sit-to-stand and stand-to-sit by using a single neural network node with the sigmoid activation function in Eq~(\ref{eq:sigmoid}), with an input from the sit-to-stand transition classifier. This sigmoid function was chosen because it is continuous and differentiable, leading to smooth transitions between the sitting and standing states. The assumption of symmetrical sit-to-stand and stand-to-sit transitions allowed us to estimate the parameter $w$ only once. This assumption of symmetry might not hold for the OH and PwP groups. This model can be generalised and extended with a more complex artificial neural network for regression with any other suitable continuously differentiable activation functions. We have achieved high classification accuracies of 98.67\%, 94.20\% and 91.41\% for YH, OH and PwP respectively, using a two-tiered classification with k-means clustering. This unsupervised learning allowed us to classify movements with high accuracy on individual participants, despite large inter-participant variability, using as few as two repetitions of movements. This could be useful in clinical settings, where collecting large amounts of movement data and training machine learning models is not feasible due to time limitations. The OH and PwP groups showed higher variability in the average time taken to perform sit-to-stand transitions (Fig~\ref{fig:timings}) and also in the angles of the shank and especially the back while sitting and standing (Fig~\ref{fig:posture_variability}). These variabilities can be attributed to the differences in mobility levels in the OH group and the varying effect of Parkinson's on mobility in the PwP group, which may also affect posture. The YH group has a consistent posture during standing, while the OH and PwP groups show larger variability in their average back angle (Fig~\ref{fig:posture_variability}B) because of instability during standing. The PwP group showed very low accelerations and low velocities (see Fig~\ref{fig:grand_avg}), performing statically stable movements by pausing in the middle of the sit-to-stand and stand-to-sit transitions. Thus, the angular kinematics of the three-segment body model in the sagittal plane provide insights into the differences in sit-to-stand transitions between groups varying in age and functional mobility. The ability to stand up from sitting indicates balance control and functional lower limb strength. The inability to stand up from sitting, or unsteadiness when completing the task, suggests that the person is more likely to have restricted mobility and to be at risk of falling.
Hence it is important to be able to assess sit-to-stand performance, as it can allow for the identification of persons at risk. Our proposed method allows detailed assessment of sit-to-stand motion by modelling all three segments of the body involved in this motion, while minimising the number of sensors needed for sit-to-stand tests. Our proposed method can hence be used as a tool alongside other methodologies for the assessment of sit-to-stand transitions. Our novel combined approach facilitates comprehensive study of sit-to-stand movement in varying demographics: it not only models movements, providing their continuous kinematics, but also segments and classifies individual movements and computes the time taken for individual transfers. Our proposed method is not restricted to sit-to-stand movement and can be extended to broader applications, for example in sports science, to study a range of different motions. Finally, our proposed method will allow long-term (multi-day to multi-week) movement studies to be conducted, since these sensors are unobtrusive and easy to wear. \section*{Conclusion} In this paper, we have proposed a novel integrated approach for estimating body kinematics during sit-to-stand transitions using only two wearable inertial sensors, each with a triaxial accelerometer and a triaxial gyroscope. This provides an inexpensive and portable way of estimating human motion, as opposed to expensive optical motion capture systems requiring elaborate setup, or placing a sensor on each segment of the body. The two wearable sensors are comfortable for prolonged use and require low power to operate. A robust three-segment body kinematic model is formed, based on a limb kinematic model and parameter estimation using an EKF. We have tested this model on the three groups of young healthy adults, older healthy adults and people with Parkinson's disease. We have solved the two challenges of modelling and classification of sit-to-stand and stand-to-sit movements by incorporating a classifier into the estimation of the body kinematics. Our model not only estimates the kinematics of the shank and the back accurately, but also the kinematics of the thigh, which is an ill-posed problem as there is no inertial sensor at this location. Thus, our model effectively deals with the missing data and at the same time segments and classifies the sit-to-stand, stand-to-sit, standing and sitting states robustly using unsupervised learning, with an accuracy of 98.67\%, 94.20\% and 91.41\% for YH, OH and PwP respectively. The estimated kinematics are similar to the ground truth kinematics obtained from the commercial Codamotion system, as validated in the YH participants. \section*{Acknowledgments} The authors would like to thank all the participants in this study who helped in recording the data and enabled this research.
\section*{Introduction} Inspired by the recent paper \cite{Totaro} of Totaro, we investigate the relationship between ampleness of restrictions of line bundles to general complete intersections and the vanishing properties of higher cohomology groups. This train of thought will eventually lead us to a generalization of Fujita's vanishing theorem to big line bundles. Vanishing theorems have played a central role in algebraic geometry during the last fifty years. Results of this sort due to Serre, Kodaira, Kawamata--Viehweg among others are fundamental building blocks of complex geometry, and are indispensable to the successes of minimal model theory. Classically, vanishing theorems apply to ample or big and nef line bundles. However, there has been a recent shift of attention towards big line bundles, which, although they possess less positivity, still turn out to share many of the good properties of ample ones (see \cite{AIL} and the references therein). It has been known for some time that big line bundles behave in a cohomologically positive way in degrees roughly above the dimension of the stable base locus. An easy asymptotic version of this appeared in \cite[Proposition 2.15]{ACF}, while Matsumura \cite[Theorem 1.6]{Mats} gave a partial generalization of the Kawamata--Viehweg vanishing theorem along these lines. In this work we will present similar results guaranteeing the vanishing of cohomology groups of high degree under various partial positivity conditions. We work over the complex numbers; $X$ is an irreducible projective variety of dimension $n$, and divisors are meant to be Cartier, unless otherwise mentioned. Following in the footsteps of Andreotti--Grauert~\cite{AG} and Demailly--Peternell--Schneider~\cite{DPS}, Totaro establishes a very satisfactory theory of line bundles with vanishing cohomology above a certain degree. The concept has various characterizations, all falling under the heading of $q$-ampleness. What is of interest to us is the following version, which goes by the name `naive $q$-ampleness' in \cite{Totaro}. We call a line bundle $L$ (naively) $q$-ample on $X$ for a natural number $q$, if for every coherent sheaf $\ensuremath{{\mathcal F}}$ on $X$ there exists an integer $m_0$ depending on $\ensuremath{{\mathcal F}}$ such that \[ \HH{i}{X}{\ensuremath{{\mathcal F}}\otimes \ensuremath{\mathcal O}_X(mL)} \ensuremath{\,=\,} 0 \ \ \ \text{for all $i>q$ and $m\geq m_0$.} \] It is immediate from the definition that $0$-ampleness coincides with ampleness, while it is proved in \cite[Theorem 10.1]{Totaro} that a divisor is $(n-1)$-ample exactly if it does not lie in the negative of the pseudo-effective cone in the N\'eron--Severi space. The notion of $q$-ampleness shares many important properties of traditional ampleness; for example, it is open both in families and in the N\'eron--Severi space. In general, the behaviour of $q$-ample divisors remains mysterious. Our motivation comes from the connection to geometric invariants describing partial positivity, in particular, to amplitude on restrictions to general complete intersection subvarieties. The main results of this note are vanishing theorems valid for not necessarily ample --- oftentimes not even big --- divisors. They all follow the same principle: positivity of restrictions of line bundles results in partial vanishing of higher cohomology groups. Our first statement of note is a uniform variant of Serre vanishing.
\begin{theoremA}(Theorem~\ref{thm:A}) Let $X$ be an irreducible projective variety, $L$ a Cartier divisor, $A_1,\dots,A_q$ very ample Cartier divisors on $X$ such that $L|_{E_1\cap\dots\cap E_q}$ is ample for general $E_j\in |A_j|$, $1\leq j\leq q$. Then for any coherent sheaf $\ensuremath{{\mathcal F}}$ on $X$ there exists an integer $m(L,A_1,\dots,A_q,\ensuremath{{\mathcal F}})$ such that \[ \HHv{i}{X}{\ensuremath{{\mathcal F}}\otimes \ensuremath{\mathcal O}_X(mL+\sum_{j=1}^{q}k_jA_j)} \ensuremath{\,=\,} 0 \] for all $i>q$, $m\geq m(L,A_1,\dots,A_q,\ensuremath{{\mathcal F}})$ and $k_1,\dots,k_q\geq 0$. \end{theoremA} In particular, setting $k_1=\dots=k_q=0$ in the Theorem we obtain that $L$ is $q$-ample. This way we recover a slightly weaker version of \cite[Theorem 3.4]{DPS}. In a few remarks we then relate Theorem A to invariants expressing partial positivity and the inner structure of various cones of divisors in the N\'eron--Severi space. Here again we go along similar lines to \cite{DPS}; it turns out that sacrificing a certain amount of generality buys drastically simplified proofs. Next, we treat the case of vanishing for adjoint divisors. First, a variant of the theorem of Kawamata and Viehweg. \begin{theoremB}(Theorem~\ref{thm:B}) Let $X$ be a smooth projective variety, $L$ a divisor, $A$ a very ample divisor on $X$. If $L|_{E_1\cap\dots\cap E_k}$ is big and nef for a general choice of $E_1,\dots,E_k\in |A|$, then \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L)} \ensuremath{\,=\,} 0 \ \ \ \text{for $i>k$.} \] \end{theoremB} The conditions of Matsumura hold true under our assumptions, hence we recover \cite[Theorem 1.6]{Mats}. Building on the above result, we arrive at our main achievement, a generalization of Fujita's theorem \cite{Fuj} to big divisors. \begin{theoremC}(Theorem~\ref{thm:Fujita}) Let $X$ be a complex projective scheme, $L$ a Cartier divisor, $\ensuremath{{\mathcal F}}$ a coherent sheaf on $X$. Then there exists a positive integer $m_0(L,\ensuremath{{\mathcal F}})$ such that \[ \HH{i}{X}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} 0 \] for all $i>\dim \Bplus (L)$, $m\geq m_0(L,\ensuremath{{\mathcal F}})$, and all nef divisors $D$ on $X$. \end{theoremC} Here $\Bplus(L)$ denotes the augmented base locus of $L$; this can be defined as the stable base locus of the $\ensuremath{\mathbb Q}$-Cartier divisor $L-A$ for any sufficiently small ample class $A$. A few words about the organization of the paper. Section 1 is devoted to Theorem A, and a discussion of invariants measuring partial positivity. Theorem B is treated in Section 2, while Section 3 is given over to a short treatment of base loci on schemes. The proof of Theorem C takes up the last section. \subsection*{Acknowledgements} Helpful discussions with Brian Conrad, Tommaso de Fernex, Lawrence Ein, Daniel Greb, Stefan Kebekus, Rob Lazarsfeld, Vlad Lazi\'c, Sebastian Neumann, Mihnea Popa, Tomasz Szemberg, and Burt Totaro were much appreciated. \section{Ampleness on restrictions and cones in the N\'eron--Severi space} In this section we prove a Fujita--Serre type vanishing statement and consider an application to cone structures in $N^1(X)_\ensuremath{\mathbb R}$. This is where one can see most clearly the yoga of obtaining partial vanishing of higher cohomology groups by forcing ampleness on restrictions. 
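Before turning to the theorem, we illustrate the notion on the simplest non-trivial example; the computation below is standard (a routine K\"unneth argument) and is included only for orientation. \begin{eg} Let $X=\P^1\times\P^1$ and let $L$ be a divisor of type $(a,-b)$ with $a,b>0$. By the K\"unneth formula \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(ma,-mb)} \ensuremath{\,=\,} \bigoplus_{j+k=i}\HH{j}{\P^1}{\ensuremath{\mathcal O}_{\P^1}(ma)}\otimes\HH{k}{\P^1}{\ensuremath{\mathcal O}_{\P^1}(-mb)}\ , \] which vanishes in degrees $0$ and $2$, and is non-zero in degree one as soon as $mb\geq 2$; in particular $L$ is not ample. On the other hand, for a very ample divisor $A$ of type $(c,d)$ and a general curve $E\in |A|$ one has $\deg L|_E \ensuremath{\,=\,} ad-bc$, so $L|_E$ is ample whenever $ad>bc$; choosing $d\gg c$ thus exhibits $L$ as a divisor to which the theorem below applies with $q=1$, whence $L$ is $1$-ample. \end{eg}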
\begin{thm}\label{thm:A} Let $X$ be an irreducible projective variety, $L$ a Cartier divisor, $A_1,\dots,A_q$ very ample Cartier divisors on $X$ such that $L|_{E_1\cap\dots\cap E_q}$ is ample for general $E_j\in |A_j|$. Then for any coherent sheaf $\ensuremath{{\mathcal F}}$ on $X$ there exists an integer $m(L,A_1,\dots,A_q,\ensuremath{{\mathcal F}})$ such that \[ \HHv{i}{X}{\ensuremath{{\mathcal F}}\otimes \ensuremath{\mathcal O}_X(mL+\sum_{j=1}^{q}k_jA_j)} \ensuremath{\,=\,} 0 \] for all $i>q$, $m\geq m(L,A_1,\dots,A_q,\ensuremath{{\mathcal F}})$ and $k_1,\dots,k_q\geq 0$. In particular, $L$ is $q$-ample. \end{thm} \begin{proof} Let $E_j\in |A_j|$ ($1\leq j\leq q$) be a sequence of general divisors. In particular we assume all possible intersections to be irreducible. Consider the set of standard exact sequences \begin{eqnarray}\label{eqn:ses} 0 & \to & \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_j}(mL+\sum_{l=1}^{q}{k_lA_l}) \to \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_j}(mL+\sum_{l=1}^{q}{k_lA_l}+A_{j+1}) \\ &\to & \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{j+1}}(mL+\sum_{l=1}^{q}{k_lA_l}+A_{j+1}) \to 0 \nonumber \end{eqnarray} for all $0\leq j\leq q-1$, all $m$, and all $k_1,\dots,k_q\geq 0$. Here $Y_j\ensuremath{ \stackrel{\textrm{def}}{=}} E_1\cap\dots\cap E_j$ for all $1\leq j\leq q$; for the sake of completeness set $Y_0\ensuremath{ \stackrel{\textrm{def}}{=}} X$. Consider the last one of these. Fujita's vanishing theorem on $Y_q=E_1\cap\dots\cap E_q$ applied to the ample divisor $L|_{Y_q}$ gives that \[ \HHv{i}{Y_q}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_q}(mL+\sum_{l=1}^{q}{k_lA_l})} \ensuremath{\,=\,} 0 \] for all $i\geq 1$, $m\geq m(\ensuremath{{\mathcal F}},L,A_1,\dots,A_q,Y_q)$ and all $k_1,\dots,k_q\geq 0$. This implies that the groups on the sides of the exact sequence \begin{eqnarray*} && \HHv{i-1}{Y_q}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_q}(mL+\sum_{l=1}^{q}{k_lA_l}+A_q)} \lra \HHv{i}{Y_{q-1}}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{q-1}}(mL+\sum_{l=1}^{q}{k_lA_l})} \\ && \lra \HHv{i}{Y_{q-1}}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{q-1}}(mL+\sum_{l=1}^{q}{k_lA_l}+A_q)} \lra \HHv{i}{Y_q}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_q}(mL+\sum_{l=1}^{q}{k_lA_l}+A_q)} \end{eqnarray*} vanish for $i\geq 2$, $m\geq m(\ensuremath{{\mathcal F}},L,A_1,\dots,A_q,Y_q)$ and $k_1,\dots,k_q\geq 0$. Consequently, \begin{eqnarray*} \HHv{i}{Y_{q-1}}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{q-1}}(mL+\sum_{l=1}^{q}{k_lA_l})} & \ensuremath{\,=\,} & \HHv{i}{Y_{q-1}}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{q-1}}(mL+\sum_{l=1}^{q}{k_lA_l}+kA_q)} \end{eqnarray*} for all $i\geq 2$, $m\geq m(\ensuremath{{\mathcal F}},L,A_1,\dots,A_q,Y_q)$, $k_1,\dots,k_q\geq 0$, and all $k\geq 0$ (iterating the previous exact sequence). Then \[ \HHv{i}{Y_{q-1}}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_{q-1}}(mL+\sum_{l=1}^{q}{k_lA_l}+kA_q)} \ensuremath{\,=\,} 0 \] follows for all $k\geq 0$: by Serre vanishing applied to the ample divisor $A_q|_{Y_{q-1}}$ it holds for $k\gg 0$, hence for all $k\geq 0$ by the displayed equality. By the semicontinuity theorem and the general choice of the $E_j$'s we can drop the dependence on $Y_q$.
Working backwards along the cohomology sequences associated to the sequences~\eqref{eqn:ses}, we obtain by descending induction on $j$ that \[ \HHv{i}{Y_j}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{Y_j}(mL+\sum_{l=1}^{q}{k_lA_l})} \ensuremath{\,=\,} 0 \] for $i> q-j$, $m\gg 0$, and all $k_1,\dots,k_q\geq 0$. This gives the required result when $j=0$. \end{proof} \begin{rmk} We point out that the proof works under the less restrictive assumption that $A_i$ is ample, globally generated, and not composed of a pencil for all $1\leq i\leq q$. \end{rmk} The next step is to connect up with $q$-ampleness. Partial positivity was studied in the form of uniform $q$-ampleness in \cite{DPS}, where among many other achievements it was established that uniform $q$-ampleness respects numerical equivalence of Cartier divisors. For the sake of completeness we briefly recall Totaro's main result on $q$-ample line bundles. \begin{thm}\cite[Theorem 8.1]{Totaro} Let $X$ be a projective scheme over a field of characteristic zero, $A$ a very ample divisor on $X$, $0\leq q\leq n=\dim X$ an integer. Then there exists a natural number $m_0$ such that for all Cartier divisors $L$ on $X$ the following properties are equivalent. \begin{enumerate} \item There exists a natural number $n_0$ such that $\HH{i}{X}{\ensuremath{\mathcal O}_X(n_0L-jA)}=0$ for all $i>q$ and $1\leq j\leq m_0$. \item ($L$ is naively $q$-ample) For every coherent sheaf $\ensuremath{{\mathcal F}}$ on $X$ there exists an integer $m(L,\ensuremath{{\mathcal F}})$ such that $\HH{i}{X}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL)}=0$ for all $i>q$ and $m\geq m(L,\ensuremath{{\mathcal F}})$. \item ($L$ is uniformly $q$-ample) There exists a constant $\lambda>0$ such that for all $i>q$, $j>0$ and $\tfrac{m}{j}\geq \lambda$ the cohomology groups $\HH{i}{X}{\ensuremath{\mathcal O}_X(mL-jA)}$ vanish. \end{enumerate} \end{thm} The first consequence of Theorem~\ref{thm:A} is the following claim, which was also proved by Demailly--Peternell--Schneider under the more general assumption that $L$ is $(n-q)$-flag ample (see \cite[Definition 3.1]{DPS}). Their proof, however, requires considerably more effort. \begin{cor} With notation as above, if $L|_{E_1\cap\dots\cap E_q}$ is ample for general $E_j\in |A_j|$, then $L$ is $q$-ample. \end{cor} The above vanishing result provides a birational variant for the higher asymptotic cohomology $\hhat{i}{X}{L}$ of $L$. We recall that \[ \hhat{i}{X}{L} \ensuremath{ \stackrel{\textrm{def}}{=}} \limsup_{m} \frac{\hh{i}{X}{\ensuremath{\mathcal O}_X(mL)}}{m^n/n!}\ ; \] note that $\widehat{h}^0(X,L)=\vol{X}{L}$ holds by definition. For properties of higher asymptotic cohomology the reader is referred to \cite{dFKL,ACF}, or Demailly's paper \cite{Dem} in the analytic setting. \begin{cor}\label{cor:asy} Let $X$ be an irreducible projective variety, $L$ a Cartier divisor on $X$. Assume that there exists a proper birational morphism $\pi:Y\to X$, a natural number $q$, and very ample divisors $A_1,\dots ,A_q$ on $Y$ such that $\pi^*L|_{E_1\cap\dots\cap E_q}$ is ample for general elements $E_i\in |A_i|$, for all $1\leq i\leq q$. Then \[ \hhat{i}{X}{L} \ensuremath{\,=\,} 0 \ \ \text{ for $i>q$.} \] \end{cor} \begin{proof} As a consequence of Theorem~\ref{thm:A}, one has $\HH{i}{Y}{\pi^*\ensuremath{\mathcal O}_X(mL)}=0$ for all $i>q$ and $m\gg 0$. This gives $\hhat{i}{Y}{\pi^*L}=0$ for all $i>q$.
By the birational invariance of asymptotic cohomology \cite[Corollary 2.10]{ACF}, \[ \hhat{i}{X}{L} \ensuremath{\,=\,} \hhat{i}{Y}{\pi^*L} \ensuremath{\,=\,} 0 \ \ \text{for all $i>q$.} \] \end{proof} We move on to building a connection to the interior structure of the N\'eron--Severi space. For a Cartier divisor $L$, \[ q(L) \ensuremath{ \stackrel{\textrm{def}}{=}} \min\st{q\in\ensuremath{\mathbb N}\,|\, \text{$L$ is $q$-ample}} \] is an interesting numerical invariant, which was probably first defined in \cite[Definition 1.1]{DPS}. To put it into perspective, let us briefly recall some other ways of expressing partial ampleness associated to big divisors (these were discussed in an earlier version of \cite{dFKL}). \begin{align*} a(L;A_1,\dots,A_n) \ &\ensuremath{ \stackrel{\textrm{def}}{=}} \ \min \big\{ k \mid\text{$L|_{E_1\cap\dots\cap E_k}$ is ample for very general $E_i \in |A_i|$}\big \}, \\ b(L) \ &\ensuremath{ \stackrel{\textrm{def}}{=}} \ \dim \Bplus(L), \\ c(L) \ &\ensuremath{ \stackrel{\textrm{def}}{=}} \ \max \big \{ i \mid \text{$\^h^i$ is not identically zero in any neighborhood of $[L]\in N^1(X)_\ensuremath{\mathbb R}$} \big \}, \end{align*} where $A_1,\dots,A_n$ are very ample divisors on $X$. The minimum of all $a(L;A_1,\dots,A_n)$ (over all sequences of very ample divisors of length $n=\dim X$) is closely related to $\sigma_+(L)$ defined in \cite{DPS}. The quantities $q(L)$, $a(L;A_1,\dots,A_n)$, $b(L)$, and $c(L)$ depend only on the numerical equivalence class of $L$, and make good sense for $\ensuremath{\mathbb Q}$-divisors as well. They all express how far a given divisor is from being ample, with smaller numbers corresponding to more positivity. \begin{cor}\label{cor:ineq} With notation as above, \[ c(L) \ensuremath{\,\leq\,} \ q(L) \ensuremath{\,\leq\,} a(L;A_1,\dots,A_n) \ensuremath{\,\leq\,} b(L) \] for all sequences of very ample divisors $A_1,\dots,A_n$. \end{cor} \begin{proof} The first inequality comes from observing the definition of $\^h^i$ and the openness of $q$-ampleness in the N\'eron--Severi space. The second one is \cite[3.4]{DPS}; at the same time, it is immediate from Theorem~\ref{thm:A}. The inequality $a(L;A_1,\dots,A_n) \le b(L)$ comes from the observation that the restriction of a Cartier divisor to a general very ample divisor strictly reduces $\dim \Bplus(L)$, and the fact that a divisor with empty augmented base locus is ample. \end{proof} As was noticed on \cite[p.~167]{DPS}, one does not have equality $q(L) \ensuremath{\,=\,} a(L;A_1,\dots,A_n)$ in general. Here we present another simple example (borrowed again from an early version of \cite{dFKL}) exhibiting this property. More precisely, we give an example of a divisor $L$ that is $1$-ample, and a very ample divisor $A$ such that $L|_E$ is not ample for general $E\in|A|$. \begin{eg}\label{eg:c(L)<a(L,A)<b(L)} Let $X = \ensuremath{\mathbb F}_1 \times \P^1$, where $\ensuremath{\mathbb F}_1$ is the blow-up of $\P^2$ at a point, and denote by $p : X \to \ensuremath{\mathbb F}_1$ and $q : X \to \P^1$ the two projections. Let $E \subset \ensuremath{\mathbb F}_1$ be the exceptional curve of the blow-up, $F \subset \ensuremath{\mathbb F}_1$ be a fiber of the ruling, and let $H \subset \P^1$ be a point. We consider the divisors $$ L = p^*(\lambda E + F) + q^*H \ \text{ and } \ A = p^*(E + \mu F) + q^*H \quad \text{for some} \quad \lambda, \mu \in\ensuremath{\mathbb Z}_{\ge 2}. $$ Note that $A$ is very ample and $L$ is big.
The stable base locus of $L$ coincides with its augmented base locus and is equal to $B \ensuremath{ \stackrel{\textrm{def}}{=}} p^{-1}(E)$. In particular $b(L) = 2$. On the other hand, the K\"unneth formula for asymptotic cohomology (see \cite[Remark~2.14]{ACF}), and the fact that $L$ is not ample imply that $c(L) = 1$. Fix a general element $Y \in |A|$ cutting out a smooth divisor $D$ on $B$. Note that $\ensuremath{\mathcal O}_B(D) \cong \ensuremath{\mathcal O}_{\P^1\times\P^1}(\mu-1,1)$ via the isomorphism $B = E \times \P^1 \cong \P^1 \times \P^1$. Therefore, since $D$ is smooth, it must be irreducible; moreover, $p$ induces an isomorphism $D \cong \P^1$. We observe that the base locus of $L|_Y$ is contained in the restriction of the base locus of $L$, hence in $D$, and that $\ensuremath{\mathcal O}_D(L|_Y) \cong \ensuremath{\mathcal O}_{\P^1}(\mu - \lambda)$. We conclude that $$ a(L,A) \ = \ \begin{cases} 2 = b(L) &\text{if $\lambda \ge \mu$}, \\ 1 = c(L) &\text{if $\lambda < \mu$}. \end{cases} $$ For $\lambda <\mu$, we have $1=c(L)\leq q(L)\leq a(L,A)=1$, therefore $q(L)=1$. On the other hand, there exist very ample divisors $A'$ on $X$ such that $L|_{Y'}$ is not ample for a general $Y'\in|A'|$. \end{eg} Totaro asks in \cite[Question 12.1]{Totaro} whether $c(L)=q(L)$ always holds. As a result of the discussion so far, we can see that $q(L)$ and $c(L)$ behave much the same way in relation to $a(L;A_1,\dots,A_n)$, which furnishes some evidence that the answer to Totaro's question is affirmative. We give a reformulation of these concepts in terms of cones of divisors. This leads to a connection with movable curves as defined in \cite{BDPP}. In what follows $A_1,\dots,A_q$ will without exception denote a sequence of very ample divisors. \begin{defi} For a natural number $q$ and very ample divisors $A_1,\dots, A_q$ set \[ \ensuremath{{\mathcal C}}_{A_1,\dots,A_q} \ensuremath{ \stackrel{\textrm{def}}{=}} \st{\alpha\in N^1(X)_\ensuremath{\mathbb R} \,|\,\,\, \alpha|_{E_1\cap\dots\cap E_q} \text{ ample for $E_i\in |A_i|$ general }}\ . \] In addition, let $\Amp_q(X)$ denote the open (but not necessarily convex) cone of $q$-ample divisor classes. \end{defi} It is immediate that $\ensuremath{{\mathcal C}}_{A_1,\dots,A_q}\subseteq N^1(X)_\ensuremath{\mathbb R}$ is a convex cone; it is also open by \cite[Section 9]{Totaro}. For simplicity we set $\ensuremath{{\mathcal C}}_{\emptyset}=\Amp(X)$. \begin{rmk} In the important special case $q=n-1$ a general codimension $n-1$ complete intersection is an irreducible curve, and one has \[ \ensuremath{{\mathcal C}}_{A_1,\dots,A_{n-1}} \ensuremath{\,=\,} \st{\alpha\in N^1(X)_\ensuremath{\mathbb R}\,|\, (\alpha\cdot A_1\cdot\dots\cdot A_{n-1})>0}\ . \] \end{rmk} Corollary~\ref{cor:ineq} can then be rephrased in the following way. \begin{cor}\label{cor:containment} \[ \bigcup_{A_1,\dots,A_q} \ensuremath{{\mathcal C}}_{A_1,\dots,A_q} \ensuremath{\,\subseteq\,} \Amp_q(X) \] \end{cor} \begin{rmk} The question arises naturally whether the two sides of Corollary~\ref{cor:containment} are equal in general. This is quickly seen to be true on surfaces. By \cite[Theorem 9.1]{Totaro}, $L$ is $1$-ample if and only if $-L$ is not pseudoeffective, which is equivalent to the existence of a very ample divisor $A$ for which $(-L\cdot A)<0$. The latter holds precisely when $L|_E$ is ample for $E\in |A|$ general. In general, one obstruction that is easy to foresee is the existence of movable curves on $X$ that do not lie in the closed cone spanned by complete intersection curves.
\end{rmk} \begin{prop}\label{prop:equ} Let $X$ be an irreducible projective variety of dimension $n$. Then \[ \bigcup_{A_1,\dots,A_{n-1}} \ensuremath{{\mathcal C}}_{A_1,\dots,A_{n-1}} \ensuremath{\,=\,} \Amp_{n-1}(X) \] exactly if every movable curve is the limit of elements of the convex cone spanned by complete intersection curves coming from very ample divisors. \end{prop} \begin{proof} For a Cartier divisor $L$ on $X$, $L|_{E_1\cap\dots\cap E_{n-1}}$ is ample if and only if $(L\cdot A_1\cdot\dots\cdot A_{n-1})>0$; that is, $L\in\ensuremath{{\mathcal C}}_{A_1,\dots,A_{n-1}}$ if and only if $(L\cdot A_1\cdot\dots\cdot A_{n-1})>0$. Consequently, $L\in \cup_{A_1,\dots,A_{n-1}} \ensuremath{{\mathcal C}}_{A_1,\dots,A_{n-1}}$ holds exactly if there exists a complete intersection curve coming from very ample divisors intersected by $L$ positively. By \cite[Theorem 9.1]{Totaro}, a line bundle $L$ is $(n-1)$-ample precisely if $-L$ is not pseudo-effective. The equality of the Proposition is equivalent via \cite{BDPP} to the property that a divisor intersecting every complete intersection curve of very ample divisors positively necessarily intersects every movable curve positively. This happens precisely if the cone spanned by complete intersection curves on $X$ is dense in the cone of moving curves. \end{proof} \begin{eg} In \cite[Example 3.2.4]{Neu} Neumann constructs a smooth projective threefold $X$ on which the cone spanned by complete intersection curves is not dense in the movable cone. The space he constructs is a double blow-up of $\ensuremath{\mathbb P}^3$: first one blows up a line in $\ensuremath{\mathbb P}^3$, then a point on the exceptional divisor. The work \cite{Neu} gives all the details. By Proposition~\ref{prop:equ} \[ \bigcup_{A_1,\dots,A_{n-1}} \ensuremath{{\mathcal C}}_{A_1,\dots,A_{n-1}} \,\subsetneq\, \Amp_{n-1}(X)\ . \] \end{eg} \begin{question} Let $X$ be an irreducible projective variety. Under what condition does \[ \bigcup_{A_1,\dots,A_q} \ensuremath{{\mathcal C}}_{A_1,\dots,A_q} \ensuremath{\,=\,} \Amp_q(X) \] hold for all $0\leq q\leq n-1$? \end{question} \section{A Kawamata--Viehweg type vanishing for non-pseudo-effective divisors} Independently of the discussion so far, we show that the ideas leading to Theorem~\ref{thm:A} also provide a partial vanishing theorem for adjoint divisors $K_X+L$ where $L$ is not necessarily pseudo-effective. It has been common knowledge that cohomology groups of big line bundles tend to vanish in degrees roughly above the dimension of the stable base locus (see \cite[Proposition 2.15]{ACF} for an early example). Matsumura in \cite[Theorem 1.6]{Mats} proved that \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L)} \ensuremath{\,=\,} 0 \ \ \text{for $i> \dim B_{-}(L)$} \] for a big line bundle $L$. In Theorem~\ref{thm:B} we present a variant which works without the bigness assumption, and in addition provides vanishing in a wider range of degrees thanks to the fact that $a(L,A)$ can be strictly smaller than the dimension of the stable base locus (see Example~\ref{eg:c(L)<a(L,A)<b(L)}). The proof of Theorem~\ref{thm:B} requires a certain resolution defined in \cite[Section 4]{ACF}. Let $D$ be an arbitrary integral Cartier divisor and $A$ a very ample Cartier divisor on an irreducible projective variety $X$ of dimension $n$.
Upon choosing general elements $E_1,\dots ,E_r\in |A|$, one obtains an exact sequence \begin{eqnarray}\label{eqn:res} && 0 \lra \ensuremath{\mathcal O}_X(D) \lra \ensuremath{\mathcal O}_X(D+rA) \lra \bigoplus_{i=1}^{r}\ensuremath{\mathcal O}_{E_i}(D+rA) \lra \\ && \lra \bigoplus_{1\leq i_1<i_2\leq r}\ensuremath{\mathcal O}_{E_{i_1}\cap E_{i_2}}(D+rA) \lra \dots\lra \bigoplus_{1\leq i_1<i_2<\dots<i_n\leq r}\ensuremath{\mathcal O}_{E_{i_1}\cap\dots\cap E_{i_n}}(D+rA) \lra 0\ . \nonumber \end{eqnarray} Given a coherent sheaf $\ensuremath{{\mathcal F}}$, one can also assume that the sequence \eqnref{eqn:res} remains exact after tensoring by $\ensuremath{{\mathcal F}}$, thanks to the general position of the effective divisors $E_i$. Although strictly speaking it would not be necessary, it helps in the book-keeping process to chop up the above resolution into short exact sequences \begin{eqnarray*} && 0 \to \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(D) \to \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(D+rA) \to \ensuremath{{\mathcal C}}_1 \to 0 \\ && 0 \to \ensuremath{{\mathcal C}}_1 \to \bigoplus_{i=1}^{r}\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{E_i}(D+rA) \to \ensuremath{{\mathcal C}}_2 \to 0 \\ && \vdots \\ && 0 \to \ensuremath{{\mathcal C}}_{n-1} \to \bigoplus_{1\leq i_1<i_2<\dots<i_{n-1}\leq r}\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{E_{i_1}\cap\dots\cap E_{i_{n-1}}}(D+rA) \\ && \to \bigoplus_{1\leq i_1<i_2<\dots<i_n\leq r}\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_{E_{i_1}\cap\dots\cap E_{i_n}}(D+rA) \to 0\ . \end{eqnarray*} \begin{thm}\label{thm:B} Let $X$ be a smooth projective variety, $L$ a divisor, $A$ a very ample divisor on $X$. If $L|_{E_1\cap\dots\cap E_q}$ is big and nef for a general choice of $E_1,\dots,E_q\in |A|$, then \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L)} \ensuremath{\,=\,} 0 \ \ \ \text{for $i>q$.} \] \end{thm} \begin{proof} We will prove the statement by induction on the codimension of the complete intersections we restrict to; the case $q=0$ is the Kawamata--Viehweg vanishing theorem. Let $E_1,\dots,E_q\in |A|$ be elements such that the intersection of any combination of them is smooth of the expected dimension, and irreducible when it has positive dimension. As the $E_i$'s are assumed to be general, this can clearly be done via the base-point free Bertini theorem, which works for $\dim X\geq 2$. In the remaining cases (when $\dim X\leq 1$) the statement of the theorem is immediate. Consider the exact sequence \eqnref{eqn:res} with $D=K_X+L+mA$ and $r=q$. First we show that it suffices to verify \[ \HH{i}{X}{\ensuremath{{\mathcal C}}_1^{(m)}} \ensuremath{\,=\,} 0 \ \ \ \text{for all $m\geq 0$ and $i>q-1$}, \] where the upper index of $\ensuremath{{\mathcal C}}$ is used to emphasize the explicit dependence on $m$. Grant this for the moment, and let us see how it helps us prove the statement of the theorem. Take the following part of the long exact sequence associated to the first piece above \begin{eqnarray*} \HH{i-1}{X}{\ensuremath{{\mathcal C}}_1^{(m)}} & \to & \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L+mA)} \\ && \to \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L+(m+q)A)} \to \HH{i}{X}{\ensuremath{{\mathcal C}}_1^{(m)}}\ .
\end{eqnarray*} By assumption the cohomology groups on the two sides vanish for all $m$ whenever $i>q$, hence \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L+mA)} \simeq \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L+(m+q)A)} \ \ \ \text{for all $m\geq 0$ and $i>q$.} \] These groups are zero however for $m$ sufficiently large by Serre vanishing, hence \[ \HH{i}{X}{\ensuremath{\mathcal O}_X(K_X+L)} \ensuremath{\,=\,} 0 \ \ \ \text{ for all $i>q$,} \] as we wanted. As for the vanishing of the cohomology groups $\HH{i}{X}{\ensuremath{{\mathcal C}}_1^{(m)}}$ for $m\geq 0$ and $i>q-1$, it is quickly checked inductively. Observe that for all $1\leq j\leq q$ we have \[ (K_X+L+(m+q)A)|_{E_1\cap\dots\cap E_j} \ensuremath{\,=\,} K_{E_1\cap\dots\cap E_j} + (L+(m+(q-j))A)|_{E_1\cap\dots\cap E_j} \] by adjunction, and $(L+(m+(q-j))A)|_{E_1\cap\dots\cap E_j}$ becomes ample when restricted to the intersection with $E_{j+1}\cap\dots\cap E_q$. Induction gives \[ \HH{i}{E_1\cap\dots\cap E_j}{(K_X+L+(m+q)A)|_{E_1\cap\dots\cap E_j}} \ensuremath{\,=\,} 0 \ \ \ \text{for all $m\geq 0$ and $i>q-j$}\ . \] By chasing through the appropriate long exact sequences, we arrive at \[ \HH{i}{X}{\ensuremath{{\mathcal C}}_j^{(m)}} \ensuremath{\,=\,} 0 \ \ \ \text{for all $m\geq 0$ and $i>q-j$.} \] For $j=1$ this is the required vanishing. \end{proof} \begin{cor}\label{cor:birational vanishing} Let $X$ be an irreducible projective variety, $L$ a line bundle, $A$ a very ample line bundle on $X$, $q \geq 0$ such that $L|_{E_1\cap\dots\cap E_q}$ is big and nef for $E_i\in |A|$ general ($1\leq i\leq q$). If $\pi:Y\to X$ is a proper birational morphism from a smooth variety, $B$ a nef divisor on $Y$, then \[ \HH{i}{Y}{\ensuremath{\mathcal O}_Y(K_Y+\pi^*L+B)} \ensuremath{\,=\,} 0 \] for all $i>q$. \end{cor} \begin{proof} This is in fact a corollary of the proof of Theorem~\ref{thm:B}. We point out the necessary modifications. Assuming $\dim X\geq 2$, Lemma~\ref{lem:pullback} makes sure that we can consider restrictions to intersections of general elements of $|\pi^*A|$ just as we did in the proof of Theorem~\ref{thm:B}; we also obtain that the generic restriction $\pi^*L|_{E_1'\cap\dots\cap E_q'}$ is still big and nef for $E_i'\in |\pi^*A|$ general. Next, run the proof on $Y$, with $D=K_Y+\pi^*L+B+m\pi^*A$, and $r=q$. The task that remains is to show that the cohomology groups \[ \HH{i}{Y}{\ensuremath{\mathcal O}_Y(K_Y+\pi^*L+B+m\pi^*A)} \simeq \HH{i}{Y}{\ensuremath{\mathcal O}_Y(K_Y+\pi^*L+B+(m+q)\pi^*A)} \] vanish for all $m\geq 0$ and $i>q$. By the given isomorphisms, it suffices to prove this for $m\gg 0$. Serre vanishing no longer applies, since $\pi^*A$ is only big and nef; luckily we can use the classical Kawamata--Viehweg theorem to our advantage. Namely, observe that \[ \pi^*L+B+m\pi^*A \ensuremath{\,=\,} \pi^*(L+m_0A)+B+(m-m_0)\pi^*A \] for all integers $m, m_0$ with $m\geq m_0$. If $m_0$ is suitably large then $L+m_0A$ itself is ample, therefore $\pi^*(L+m_0A)+B+(m-m_0)\pi^*A$ is big and nef, and the required vanishing follows. \end{proof} \begin{lem}\label{lem:pullback} Let $\pi:Y\to X$ be a proper birational morphism of irreducible projective varieties of dimension $n\geq 2$, $L$ a Cartier divisor, $A$ a very ample Cartier divisor on $X$. If $L|_{E_1\cap\dots\cap E_k}$ is big and nef for some $k\geq 1$ and general elements $E_1,\dots,E_k\in |A|$, then $(\pi^*L)|_{E_1'\cap\dots\cap E_k'}$ is big and nef for the same integer $k$, and $E_1',\dots, E_k'$ general elements from $|\pi^*A|$.
\end{lem} \begin{proof} As $\pi^*A$ is big and globally generated and $\dim Y\geq 2$, a general element $E'\in |\pi^*A|$ maps to a general element of $|A|$ by the base-point free Bertini theorem. Moreover, by the same token, the intersection $E_1'\cap\dots\cap E_k'$ of general elements of $|\pi^*A|$ is irreducible, and $\pi|_{E_1'\cap\dots\cap E_k'}$ is a proper birational morphism onto its image, which is the intersection of $k$ general elements of $|A|$, say $E_1\cap\dots\cap E_k$ (we tacitly assume that $E_i'$ maps to $E_i$ under $\pi$). Consequently, $\pi^*(L|_{E_1\cap\dots\cap E_k})$ is a big and nef divisor on $E_1'\cap\dots\cap E_k'$. However, \[ \pi^*(L|_{E_1\cap\dots\cap E_k}) \ensuremath{\,=\,} (\pi^*L)|_{E_1'\cap\dots\cap E_k'}\ , \] hence the latter is big and nef as we wanted. \end{proof} \section{Base loci on schemes} Here we treat base loci of line bundles on arbitrary schemes. Although a large part of what we do is straightforward, the topic has not been investigated much so far, and there is no suitable reference available. In the course of this section $X$ is an arbitrary scheme unless otherwise mentioned. It is customary (see \cite[Section 1.1.B]{PAG} for example) to define the base ideal sheaf of a Cartier divisor $L$ on a complete algebraic scheme to be \[ \mathfrak{b}(L) \ensuremath{ \stackrel{\textrm{def}}{=}} \im \left( \HH{0}{X}{\ensuremath{\mathcal O}_X(L)} \otimes_\ensuremath{\mathbb C} \ensuremath{\mathcal O}_X(-L) \stackrel{\text{eval}_L}{\lra} \ensuremath{\mathcal O}_X \right)\ . \] One then sets $\Bs(L)$ to be the closed subscheme of $X$ given by $\mathfrak{b}(L)$, and \[ \B(L) \ensuremath{ \stackrel{\textrm{def}}{=}} \bigcap_{m=1}^{\infty} \Bs(mL)_{\text{red}} \ensuremath{\,\subseteq\,} X \] as a closed subset. We can nevertheless define the base locus of an invertible sheaf in full generality. \begin{defi}\label{defi:general} Let $X$ be a scheme, $\ensuremath{{\mathcal L}}$ an invertible sheaf on $X$. Let $\ensuremath{{\mathcal F}}_\ensuremath{{\mathcal L}}$ denote the quasi-coherent subsheaf of $\ensuremath{{\mathcal L}}$ generated by $\HH{0}{X}{\ensuremath{{\mathcal L}}}$. With this notation set \[ \mathfrak{b}(\ensuremath{{\mathcal L}}) \ensuremath{ \stackrel{\textrm{def}}{=}} \ann_{\ensuremath{\mathcal O}_X} (\ensuremath{{\mathcal L}}/\ensuremath{{\mathcal F}}_\ensuremath{{\mathcal L}})\ , \] define $\Bs(\ensuremath{{\mathcal L}})$ to be the closed subscheme corresponding to $\mathfrak{b}(\ensuremath{{\mathcal L}})$, and let \[ \B(\ensuremath{{\mathcal L}}) \ensuremath{ \stackrel{\textrm{def}}{=}} \bigcap_{m=1}^{\infty} \Bs(\ensuremath{{\mathcal L}}^{\otimes m})_{\text{red}} \ensuremath{\,\subseteq\,} X \] as a closed subset of the topological space associated to $X$. \end{defi} It is immediate that we recover the usual definition in the case where $X$ is complete and algebraic. \begin{lem}\label{lem:bsl} Let $X,Y$ be schemes, $f:Y\to X$ a map of schemes, $\ensuremath{{\mathcal L}}$ an invertible sheaf on $X$. Then \[ \mathfrak{b} (\ensuremath{{\mathcal L}}) \cdot \ensuremath{\mathcal O}_Y \ensuremath{\,\subseteq\,} \mathfrak{b} (f^*\ensuremath{{\mathcal L}})\ . \] In particular, if $Y\subseteq X$ is a closed subscheme, then $\Bs(\ensuremath{{\mathcal L}}|_Y) \subseteq \Bs(\ensuremath{{\mathcal L}})\cap Y$.
\end{lem} \begin{proof}\footnote{I would like to thank Brian Conrad for simplifying a previous argument considerably and pointing out the right degree of generality here and in Definition~\ref{defi:general}.} Observe that to any quasi-coherent subsheaf $\ensuremath{{\mathcal F}}$ of an invertible sheaf $\ensuremath{{\mathcal L}}$ one can associate a (quasi-coherent) sheaf of ideals \[ \ensuremath{{\mathcal I}}(\ensuremath{{\mathcal F}}) \ensuremath{ \stackrel{\textrm{def}}{=}} \ann_{\ensuremath{\mathcal O}_X} \ensuremath{{\mathcal L}}/\ensuremath{{\mathcal F}} \ . \] The definition implies that $\ensuremath{{\mathcal I}}(\ensuremath{{\mathcal F}})\subseteq \ensuremath{{\mathcal I}}(\ensuremath{{\mathcal F}}')$ whenever $\ensuremath{{\mathcal F}}\subseteq\ensuremath{{\mathcal F}}'$. Considering the map $\HH{0}{X}{\ensuremath{{\mathcal L}}}\to\HH{0}{Y}{f^*\ensuremath{{\mathcal L}}}$ obtained by pulling back sections, one observes that the map \[ f^*(\ensuremath{{\mathcal F}}_\ensuremath{{\mathcal L}}) \lra f^*\ensuremath{{\mathcal L}} \] factors through the sheaf of modules $\ensuremath{{\mathcal F}}_{f^*\ensuremath{{\mathcal L}}}$. Consequently, \[ \mathfrak{b}(\ensuremath{{\mathcal L}})\cdot\ensuremath{\mathcal O}_Y \ensuremath{\,=\,} \ensuremath{{\mathcal I}}(\im f^*(\ensuremath{{\mathcal F}}_\ensuremath{{\mathcal L}})) \ensuremath{\,\subseteq\,} \ensuremath{{\mathcal I}}(\ensuremath{{\mathcal F}}_{f^*\ensuremath{{\mathcal L}}}) \ensuremath{\,=\,} \mathfrak{b}(f^*\ensuremath{{\mathcal L}})\ \] as we wanted. \end{proof} \begin{cor}\label{cor:bsl} In the situation of Lemma~\ref{lem:bsl} one has \[ \dim \B(\ensuremath{{\mathcal L}}|_Y) \ensuremath{\,\leq\,} \dim \B(\ensuremath{{\mathcal L}})\ , \] which immediately extends to the case of $\ensuremath{\mathbb Q}$-Cartier divisors. \end{cor} \begin{rmk} We point out that the proofs of both \cite[Example 1.1.9]{PAG} and \cite[Proposition 2.1.21]{PAG} go through unchanged when $X$ is an arbitrary scheme. This holds since both proofs depend only on the property \[ \mathfrak{b}(\ensuremath{{\mathcal L}}^{\otimes m}) \cdot \mathfrak{b}(\ensuremath{{\mathcal L}}^{\otimes k}) \ensuremath{\,\subseteq\,} \mathfrak{b}(\ensuremath{{\mathcal L}}^{\otimes(m+k)}) \ \text{for all $m,k\geq 1$}\ , \] which follows from the fact that one can multiply global sections. Therefore we obtain that $\B(\ensuremath{{\mathcal L}})$ is the unique minimal member of the collection of closed subsets $\st{\Bs(\ensuremath{{\mathcal L}}^{\otimes m})_{\text{red}}\,|\, m\geq 1}$. Moreover, just as in the reduced and irreducible case there exists $m_0\in \ensuremath{\mathbb N}$ with the property that $\B(\ensuremath{{\mathcal L}}) = \Bs(\ensuremath{{\mathcal L}}^{\otimes pm_0})_{\text{red}}$ for all natural numbers $p$. As a consequence, $\B(\ensuremath{{\mathcal L}}) = \B(\ensuremath{{\mathcal L}}^{\otimes m})$ for all positive integers $m$, and we are allowed to define the stable base locus for $\ensuremath{\mathbb Q}$-Cartier divisors by taking the stable base locus of a Cartier multiple. \end{rmk} Following \cite[Remark 1.3]{AIBL}, we define the augmented base locus of a $\ensuremath{\mathbb Q}$-Cartier divisor $L$ via \[ \Bplus (L) \ensuremath{ \stackrel{\textrm{def}}{=}} \bigcap_{A} \B(L-A) \] where $A$ runs through all ample $\ensuremath{\mathbb Q}$-Cartier divisors. \begin{rmk}\label{rmk:augm} Let us assume that $X$ is complete and algebraic over $\ensuremath{\mathbb C}$. 
Arguing as in the proof of \cite[Proposition 1.5]{AIBL} we can see that for a given $\ensuremath{\mathbb Q}$-divisor $L$ one can always find an $\epsilon>0$ such that \[ \Bplus (L) \ensuremath{\,=\,} \B(L-A) \] for any ample $\ensuremath{\mathbb Q}$-divisor with $\| A \| <\epsilon$ (with respect to an arbitrary norm on the N\'eron--Severi space). Exploiting Corollary~\ref{cor:bsl}, this implies that \[ \dim \Bplus(L|_Y) \ensuremath{\,\leq\,} \dim \Bplus(L) \] for any closed subscheme $Y$ in $X$. \end{rmk} The main contribution of this section is a generalization of the fact that any coherent sheaf becomes globally generated after twisting with a high enough multiple of an ample line bundle. \begin{prop}\label{prop:resolution} Let $X$ be an irreducible projective variety, $L$ a Cartier divisor on $X$. Then $\Bplus(L)$ is the smallest subset of $X$ such that for all coherent sheaves $\ensuremath{{\mathcal F}}$ on $X$ there exists a possibly infinite sequence of sheaves of the form \[ \dots \to \bigoplus_{j=1}^{r_k} \ensuremath{\mathcal O}_X(-m_kL) \to\dots\to \bigoplus_{j=1}^{r_1} \ensuremath{\mathcal O}_X(-m_1L) \to \ensuremath{{\mathcal F}}\ , \] which is exact off $\Bplus(L)$. \end{prop} \begin{proof} If $L$ is a non-big divisor, then $\Bplus(L)=X$, and the statement is obviously true. Hence we can assume without loss of generality that $L$ is big. First we prove the following claim: let $\ensuremath{{\mathcal F}}$ be an arbitrary coherent sheaf on $X$; then there exist positive integers $r$, $m$, and a map of sheaves \begin{equation}\label{eqn:first} \bigoplus_{i=1}^{r} \ensuremath{\mathcal O}_X(-mL) \stackrel{\phi}{\lra} \ensuremath{{\mathcal F}}\ , \end{equation} which is surjective away from $\Bplus (L)$. Fix an arbitrary ample divisor $A$ on $X$. The sheaves $\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(m'A)$ are globally generated for $m'$ sufficiently large. According to \cite[Proposition 1.5]{AIBL}, $\Bplus(L)=\B (L-\epsilon A)$ for any $\epsilon>0$ small enough. Pick such an $\epsilon$, set $L'\ensuremath{ \stackrel{\textrm{def}}{=}} L-\epsilon A$ and let $m\gg 0$ be a positive integer such that $m'\ensuremath{ \stackrel{\textrm{def}}{=}} m\epsilon$ is an integer, and \[ \Bs(mL') \ensuremath{\,=\,} \B (mL') \ensuremath{\,=\,} \Bplus (L)\ . \] By picking $m$ large enough, we can in addition assume that $\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(m'A)$ is globally generated. As a consequence, \[ \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(m'A)\otimes\ensuremath{\mathcal O}_X(mL') \simeq \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(m'A+mL') \] is globally generated away from $\Bs(mL')=\Bplus (L)$. On the other hand \[ mL'+m'A \ensuremath{\,=\,} m(L-\epsilon A)+(m\epsilon)A \ensuremath{\,=\,} mL\ , \] hence we have found $m\gg 0$ such that $\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL)$ is globally generated away from $\Bplus (L)$. Thanks to the map \[ \HH{0}{X}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(m'A)}\otimes \HH{0}{X}{\ensuremath{\mathcal O}_X(mL')} \lra \HH{0}{X}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL)} \] one can find a finite set of sections giving rise to a map of sheaves \[ \bigoplus_{i=1}^{r}\ensuremath{\mathcal O}_X \lra \ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL) \] surjective away from $\Bplus (L)$. Tensoring by $\ensuremath{\mathcal O}_X(-mL)$ gives the map in \eqnref{eqn:first}.
Next we will prove that $\Bplus(L)$ satisfies the property described in the Proposition. Let $\ensuremath{{\mathcal G}}$ be the kernel of the map $\bigoplus_{i=1}^{r_1} \ensuremath{\mathcal O}_X(-m_1L) \stackrel{\phi_1}{\lra} \ensuremath{{\mathcal F}}$ coming from \eqnref{eqn:first}. Applying \eqnref{eqn:first} to $\ensuremath{{\mathcal G}}$ we obtain a map \[ \bigoplus_{i=1}^{r_2} \ensuremath{\mathcal O}_X(-m_2L) \stackrel{\phi_2}{\lra} \ensuremath{{\mathcal G}} \] surjective off $\Bplus (L)$, hence a two-term sequence \[ \bigoplus_{i=1}^{r_2} \ensuremath{\mathcal O}_X(-m_2L) \to \bigoplus_{i=1}^{r_1} \ensuremath{\mathcal O}_X(-m_1L) \to \ensuremath{{\mathcal F}} \] exact away from the closed subset $\Bplus (L)$. Continuing in this fashion we arrive at a possibly infinite sequence of the required type. Last, if $x\in \Bplus(L)$ then for all $\epsilon\in\ensuremath{\mathbb Q}^{>0}$ and all $m\geq 1$ such that $m\epsilon\in\ensuremath{\mathbb Z}$, all global sections of $\ensuremath{\mathcal O}_X(m(L-\epsilon A))$ vanish at $x$. By taking $\ensuremath{{\mathcal F}}\ensuremath{ \stackrel{\textrm{def}}{=}} \ensuremath{\mathcal O}_X(-A)$, \[ \ensuremath{{\mathcal F}}\otimes \ensuremath{\mathcal O}_X(mL) \ensuremath{\,=\,} \ensuremath{\mathcal O}_X(mL-A) \] will then have all global sections vanishing at $x\in X$. Therefore $\Bplus(L)$ is indeed the smallest subset of $X$ with the required property. \end{proof} \section{Generalized Fujita vanishing} We prove a generalization of Fujita's vanishing theorem (see \cite{Fuj} or \cite[Theorem 1.4.35]{PAG}) following his original line of thought. The main technical tools we will rely on are Proposition~\ref{prop:resolution} and Corollary~\ref{cor:birational vanishing}. \begin{thm}\label{thm:Fujita} Let $X$ be a complex projective scheme, $L$ a Cartier divisor, $\ensuremath{{\mathcal F}}$ a coherent sheaf on $X$. Then there exists a positive integer $m_0(L,\ensuremath{{\mathcal F}})$ such that \[ \HH{i}{X}{\ensuremath{{\mathcal F}}\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} 0 \] for all $i>\dim \Bplus (L)$, $m\geq m_0(L,\ensuremath{{\mathcal F}})$, and all nef divisors $D$ on $X$. \end{thm} First we reduce the theorem to the case of irreducible projective varieties. \begin{lem}\label{lem:reduction} With notation as above, if Theorem~\ref{thm:Fujita} holds when $X$ is reduced and irreducible, then it is true in general. \end{lem} \begin{proof} The proof of the fact that a Cartier divisor on a complete scheme is ample if and only if it is so when restricted to any irreducible component of the underlying reduced scheme (\cite[1.2.16]{PAG}) works here as well with a few modifications. For simplicity we will give the whole proof. First we show that Theorem~\ref{thm:Fujita} holds for $X$ provided it is true on $X_\textrm{red}$. To this end, let $X_{\textrm{red}}$ be $X$ with the reduced induced subscheme structure, $\ensuremath{{\mathcal N}}$ the nilradical of $\ensuremath{\mathcal O}_X$. Let now $\ensuremath{{\mathcal G}}$ be a coherent sheaf on $X$.
We will work with the filtration \[ \ensuremath{{\mathcal G}} \ensuremath{\,\supset\,} \ensuremath{{\mathcal N}}\cdot\ensuremath{{\mathcal G}}\ensuremath{\,\supset\,}\dots\ensuremath{\,\supset\,}\ensuremath{{\mathcal N}}^r\cdot \ensuremath{{\mathcal G}} \ensuremath{\,=\,} 0\ , \] which gives rise to a collection of short exact sequences \[ \ses{\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}}}{\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}}{\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}/\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}}}\ . \] The quotient sheaves $\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}/\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}}$ are coherent $\ensuremath{\mathcal O}_{\xred}$-modules, therefore the assumption that Theorem~\ref{thm:Fujita} holds on reduced schemes implies \begin{eqnarray*} && \HH{i}{X}{(\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}/\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}})\otimes \ensuremath{\mathcal O}_X(mL+D)} \\ && \ensuremath{\,=\,} \HH{i}{\xred}{\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}/\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}}\otimes\ensuremath{\mathcal O}_{\xred}(mL+D)} \ensuremath{\,=\,} 0 \end{eqnarray*} for all $i>\dim \Bplus (L|_{X_{\text{red}}})$, $m\gg 0$, and all nef divisors $D$ on $X$. Applying the above vanishing to the long exact sequence associated to \begin{eqnarray*} && 0 \to {(\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \to {(\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \to \\ && \to {(\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}}/\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \to 0 \end{eqnarray*} we see that for $m\gg 0$ and $j\geq 0$ \[ \HH{i}{X}{(\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} \HH{i}{X}{(\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \] whenever $i> \dim \Bplus (L|_{X_{\text{red}}})+1$, and \[ \HH{i}{X}{(\ensuremath{{\mathcal N}}^{j+1}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \twoheadrightarrow \HH{i}{X}{(\ensuremath{{\mathcal N}}^j\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \] for $i=\dim \Bplus (L|_{X_{\text{red}}})+1$. According to Remark~\ref{rmk:augm}, $\dim \Bplus (L|_{X_{\text{red}}}) \leq\dim \Bplus (L)$, therefore the same vanishing and surjectivity results hold whenever $i>\dim \Bplus (L)+1$ and $i=\dim \Bplus (L)+1$, respectively. Descending induction on $j$ gives the required vanishing statement. From now on we can and will assume without loss of generality that $X$ is reduced. Write \[ X \ensuremath{\,=\,} X_1\cup\dots\cup X_r \] as the union of its irreducible components. In the short exact sequence \[ \ses{\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}}}{\ensuremath{{\mathcal G}}}{\ensuremath{{\mathcal G}}/\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}}} \] (where $\ensuremath{\mathcal I}$ denotes the ideal sheaf of the component $X_1$ in $X$) the left term is supported on $X_2\cup\dots\cup X_r$, while the right term is supported in $X_1$.
Induction on the number of irreducible components then tells us that \begin{eqnarray*} && \HH{i}{X}{(\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} \HH{i}{X_2\cup\dots\cup X_r}{(\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_{X_2\cup\dots\cup X_r}(mL+D)} = 0 \end{eqnarray*} and \[ \HH{i}{X}{(\ensuremath{{\mathcal G}}/\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} \HH{i}{X_1}{(\ensuremath{{\mathcal G}}/\ensuremath{\mathcal I}\cdot\ensuremath{{\mathcal G}})\otimes\ensuremath{\mathcal O}_{X_1}(mL+D)} \ensuremath{\,=\,} 0 \] for $i>\max\st{ \dim \Bplus (L|_{X_1}),\dim \Bplus (L|_{X_2\cup\dots\cup X_r})}$, $m\gg 0$, and all nef divisors $D$ on $X$. From the associated cohomology long exact sequence we obtain that \[ \HH{i}{X}{\ensuremath{{\mathcal G}}\otimes\ensuremath{\mathcal O}_X(mL+D)} \ensuremath{\,=\,} 0 \] for all $i>\max\st{ \dim \Bplus (L|_{X_1}),\dim \Bplus (L|_{X_2\cup\dots\cup X_r})}$, $m\gg 0$, and all nef divisors $D$ on $X$. We can conclude the proof by \[ \dim \Bplus (L) \ensuremath{\,\geq\,} \max\st{\dim \Bplus (L|_{X_1}),\dim \Bplus (L|_{X_2\cup\dots\cup X_r})} \ . \] \end{proof} \begin{proof}(of Theorem~\ref{thm:Fujita}) By Lemma~\ref{lem:reduction}, we can assume that $X$ is reduced and irreducible. By induction on dimension we can also assume that the result is known for all sheaves supported on a proper subscheme of $X$. If $\dim \Bplus(L)\geq n$, then the Theorem holds for dimension reasons, hence we can assume that $\dim \Bplus (L) < n$, equivalently, that $L$ is big. According to Proposition~\ref{prop:resolution} the sheaf $\ensuremath{{\mathcal F}}$ possesses a possibly infinite ``resolution'' \[ \cdots \lra \oplus\ensuremath{\mathcal O}_X(a_1L) \lra \oplus\ensuremath{\mathcal O}_X(a_0L) \lra \ensuremath{{\mathcal F}}\lra 0 \] whose cohomology sheaves are supported on $\Bplus (L)$. Therefore \cite[Proposition B.1.2]{PAG} and \cite[Remark B.1.4]{PAG} imply that it suffices to find one integer $a\in \ensuremath{\mathbb Z}$ for which the theorem holds for $\ensuremath{\mathcal O}_X(aL)$. Let $\mu:\tilde{X}\to X$ be a resolution of singularities, and set $\ensuremath{{\mathcal K}}_X\ensuremath{ \stackrel{\textrm{def}}{=}} \mu_*\ensuremath{\mathcal O}_{\tilde{X}}(K_{\tilde{X}})$. The divisor $L$ is big, therefore $\ensuremath{\mathcal O}_{\tilde{X}}(\mu^*(aL)-K_{\tilde{X}})$ will have a section for $a$ sufficiently large. This shows that there exists an injection \[ u: \ensuremath{{\mathcal K}}_X \hookrightarrow \ensuremath{\mathcal O}_X(aL) \] of coherent sheaves on $X$. We end up having reduced the theorem to the case when $\ensuremath{{\mathcal F}}=\ensuremath{{\mathcal K}}_X$. By Grauert--Riemenschneider vanishing we have \[ R^j\mu_*\ensuremath{\mathcal O}_{\tilde{X}}(K_{\tilde{X}}) \ensuremath{\,=\,} 0 \ \ \text{for all $j>0$.} \] Therefore \[ \HH{i}{X}{\ensuremath{{\mathcal K}}_X\otimes\ensuremath{\mathcal O}_X(aL+D)} \ensuremath{\,=\,} \HH{i}{\tilde{X}}{\ensuremath{\mathcal O}_{\tilde{X}}(K_{\tilde{X}}+\mu^*(aL+D))} \] for all $i$. Consider the cohomology group on the right-hand side. By Corollary~\ref{cor:ineq}, the restriction $L|_{E_1\cap\dots\cap E_q}$ is ample for $q \geq \dim\Bplus (L)$. Then Corollary~\ref{cor:birational vanishing} (applied with $q=\dim\Bplus(L)$) implies \[ \HH{i}{\tilde{X}}{\ensuremath{\mathcal O}_{\tilde{X}}(K_{\tilde{X}}+\mu^*(aL)+\mu^*D)} \ensuremath{\,=\,} 0 \] for all $i>\dim \Bplus (L)$.
\end{proof}
\section{Introduction} Artistic painting has achieved significant progress during recent years thanks to the emergence of hundreds of GAN variants \cite{gan_survey1_DBLP:journals/corr/abs-2006-05132, gan_survey2}. However, adversarial training has been reported to be notoriously unstable and can lead to mode collapse. To escape from adversarial training and inspired by non-equilibrium thermodynamics, diffusion probabilistic models \cite{DPM2015_DBLP:journals/corr/Sohl-DicksteinW15}, such as the noise-conditional score network (NCSN) \cite{ScoreMatching_NEURIPS2019_3001ef25}, denoising diffusion probabilistic models (DDPM) \cite{DDPM_DBLP:journals/corr/abs-2006-11239}, and stable diffusion models in latent spaces \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, have achieved GAN-level sample quality without adversarial training. These diffusion models are appealing for their rather flexible model architectures, exact log-likelihood computation, and inverse problem solving without re-training models. There are two Markov chain style processes in a typical diffusion model. The first process is a \emph{forward diffusion process} which adds multiple-scale random noise to a given data sample ``step by step'' or ``in jumps'' until the disturbed sample slips into a predefined isotropic Gaussian distribution. This process does not include trainable parameters. The second process is a \emph{reverse diffusion process} which generates a data sample from the target distribution out of pure noise, guided by (user-input) pre-given conditions. A parameterized deep learning model is required in this reverse process. Intuitively speaking, the forward diffusion process can be recognized as ``directional blasting of a building'' $\textbf{x}_0$ into ``ruins with dust'' $\textbf{x}_T$. The learning algorithm is a \emph{reverse engineering} which learns how to (re-)construct a building (expressed by $p_\theta(\textbf{x}_{t-1}|\textbf{x}_t)$ with a parameter set $\theta$ and $t\in\{1, ..., T\}$) from each step of the \emph{inverse} directional blasting (expressed by $q(\textbf{x}_{t-1}|\textbf{x}_{t}, \textbf{x}_0)$) of each given building sample $\textbf{x}_0$. In one step of this reverse engineering, $\textbf{x}_{t-1}$ represents ``one complete wall'' in a building and $\textbf{x}_t$ represents the ``concrete and sand'' that can be used to construct the complete wall $\textbf{x}_{t-1}$ in a reconstruction process, or can be obtained from the complete wall $\textbf{x}_{t-1}$ in a forward ``blasting'' process. The reconstruction process is learned from the blasting process with targets such as noise prediction in DDPM \cite{DDPM_DBLP:journals/corr/abs-2006-11239} or score prediction using a score matching strategy in NCSN \cite{ScoreMatching_NEURIPS2019_3001ef25}. We follow a recent impressive work on high-resolution image synthesis with LDMs conditioned on given textual or visual inputs\footnote{\url{https://github.com/CompVis/stable-diffusion}} \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}. There are several proposals in this LDM framework. The first is applying the encoder part of a pretrained autoencoder to project images into a low-dimensional latent space, in which the diffusion/reconstruction processes are then performed. Training diffusion models on such a low-dimensional representation space allows us to reach a near-optimal point between reducing computational complexity and preserving details, boosting the visual fidelity of the constructed images.
The second is a cross-attention-enhanced \cite{transformer_NIPS2017_3f5ee243} U-Net framework \cite{Unet_DBLP:journals/corr/RonnebergerFB15} in the diffusion model, where general conditioning inputs such as text or bounding boxes are taken as \emph{memory} (i.e., keys and values in the cross-attention layers) for the query (latent representations of the images to be generated) to retrieve information from. Finally, the decoder module of the autoencoder is applied to recover the high-resolution target image. We aim at improving the \emph{creativity} of image synthesis, or painting, using conditional LDMs. It is relatively difficult to precisely define the concept of creativity since it is subjective and influenced by culture, history, and region. The colors, styles, and objects included in a painting reflect rich emotions across numerous topics. For example, when we are given a textual condition, ``a painting of a virus monster playing guitar'', we can recognize noun entities such as ``virus monster'' and ``guitar'' and a verbal action ``playing''. What are the emotions involved in this textual hint? Happiness, surprise, and fun should be the major emotions. The painting requires less imagination since we had better include these entities with a determined action. However, there are challenges for the models to draw paintings for rather high-level topics such as ``urbanization of China'' or ``Asian morning''. These textual hints should be enriched and extended with concrete objects and actions to tell a story in a painting or in a series of paintings. Extensions to ``urbanization of China'' include ``originally a collection of fishing villages, Shenzhen rapidly grew to be one of the largest cities in China'', ``a train runs on the snow-capped mountains of the Qinghai-Tibet Plateau'', and ``left-behind children running in wheat-field''. Given an initial textual hint, we leverage Wikipedia and large-scale pretrained language models to execute this extension. In addition, we retrain existing checkpoints with the WikiArt paintings dataset\footnote{\url{https://www.wikiart.org/} and can be downloaded from \url{https://archive.org/download/wikiart-dataset/wikiart.tar.gz}}, which is a collection of 81,444 fine-art paintings from 1,119 artists, ranging from the fifteenth century to modern times. This dataset contains 27 different styles (e.g., \emph{Minimalism}, \emph{Symbolism}, \emph{Realism}) and 45 different genres. To the best of our knowledge, it is currently the largest digital art dataset publicly available for research use. This dataset was used to train an ArtGAN \cite{artgan_DBLP:journals/corr/TanCAT17}, where conditions such as categorical label information were used for artwork synthesis. In this paper, we embed the textual information of artists, years, styles, and genres as additional conditions to the LDM. In this way, we can explicitly invite Vincent van Gogh or Rembrandt to help us draw artworks on modern topics such as ``urbanization of China''. This paper is organized as follows. In Section \ref{sec:background}, we briefly review the background knowledge required for understanding the stable diffusion models \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}.
In particular, we describe the two processes defined in DDPM \cite{DDPM_DBLP:journals/corr/abs-2006-11239}, the autoencoder framework and the loss functions used in it \cite{taming_DBLP:journals/corr/abs-2012-09841}, the cross-attention enhanced U-Net which acts as the backbone of the diffusion model, and pseudo numerical methods integrated with DDIMs for fast sampling. In Section \ref{sec:text_extend}, we describe our proposal of extending users' prompts by pretrained language models and existing knowledge resources. In Section \ref{sec:retrain_wikiart}, we show detailed information on the WikiArt dataset and our retraining pipeline. We describe the experiments in Section \ref{sec:experiments} and finally conclude in Section \ref{sec:conclusion}. \section{Background}\label{sec:background} Diffusion models have been successfully used in image generation \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, text-to-speech synthesis \cite{grad_tts_DBLP:journals/corr/abs-2105-06337, diff_tts_https://doi.org/10.48550/arxiv.2104.01409}, singing synthesis and conversion \cite{sing_conv_diff_https://doi.org/10.48550/arxiv.2105.13871, learn2sing_https://doi.org/10.48550/arxiv.2203.16408}, music generation \cite{music_diffusion_DBLP:journals/corr/abs-2103-16091}, and medical anomaly detection in healthcare \cite{medical_diff_https://doi.org/10.48550/arxiv.2203.04306}. Surveys can be found in \cite{survey_vision_diffusion_https://doi.org/10.48550/arxiv.2209.04747, survey_generative_diffusion_https://doi.org/10.48550/arxiv.2209.02646, diff_beida_survey_https://doi.org/10.48550/arxiv.2209.00796}. We limit our discussion to text-to-image generation by leveraging the LDMs \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752} and existing checkpoints\footnote{\url{https://huggingface.co/CompVis/stable-diffusion-v-1-4-original}}. We briefly review the core processes and training objectives of DDPMs \cite{DDPM_DBLP:journals/corr/abs-2006-11239} that are used in LDMs. In addition, autoencoders enhanced with a KL-divergence penalty, the cross-attention embedded U-Net \cite{Unet_DBLP:journals/corr/RonnebergerFB15,transformer_NIPS2017_3f5ee243}, CLIP pretrained language models \cite{clip_DBLP:journals/corr/abs-2103-00020}, and sampling algorithms such as those used in denoising diffusion implicit models (DDIMs) \cite{ddim_DBLP:journals/corr/abs-2010-02502} and pseudo numerical methods \cite{pseudo_https://doi.org/10.48550/arxiv.2202.09778} will be briefly reviewed. \subsection{DDPM} \begin{figure}[t] \centering \includegraphics[width=7.5cm]{figures_coling2022_caiworkshop2.crop.pdf} \caption{The Markov chain of the forward diffusion (backward reconstruction) process, generating a sample by step-by-step adding (removing) noise. Image adapted from \cite{DDPM_DBLP:journals/corr/abs-2006-11239}.} \label{fig:ddpm_two_processes} \end{figure} Given a data point $\textbf{x}_0$ sampled from a real data distribution $q(\textbf{x})$ ($\textbf{x}_0 \sim q(\textbf{x})$), \newcite{DDPM_DBLP:journals/corr/abs-2006-11239} define a \emph{forward diffusion process} in which a small amount of Gaussian noise is added to the sample $\textbf{x}_0$ in $T$ steps to obtain a sequence of noisy samples $\textbf{x}_0, ..., \textbf{x}_T$. A predefined (hyper-parameter) variance schedule $\{ \beta_t \in (0, 1) \}_{t=1}^T$ controls the step sizes: \begin{align} q(\textbf{x}_t | \textbf{x}_{t-1}) & = \mathcal{N}(\textbf{x}_t; \sqrt{1-\beta_t}\textbf{x}_{t-1}, \beta_t \textbf{I}); \\ q(\textbf{x}_{1:T} | \textbf{x}_0) & := \prod_{t=1}^Tq(\textbf{x}_t | \textbf{x}_{t-1}).
\label{eq:q_distribution} \end{align} As $T \rightarrow \infty$, $\textbf{x}_T$ approaches an isotropic Gaussian distribution. Note that there are no trainable parameters in this forward diffusion process. Let $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{i=1}^t \alpha_i$; then we can express an arbitrary step $t$'s diffused sample $\textbf{x}_t$ by the initial data sample $\textbf{x}_0$: \begin{equation} \textbf{x}_t = \sqrt{\bar{\alpha}_t} \textbf{x}_0 + \sqrt{1 - \bar{\alpha}_t} \bm{\epsilon}_t. \label{eq:xt_x0_relation} \end{equation} Here, the noise $\bm{\epsilon}_t \sim \mathcal{N}(0, \textbf{I})$ shares the same shape as $\textbf{x}_0$ and $\textbf{x}_t$. In order to reconstruct from a Gaussian noise input $\textbf{x}_T \sim \mathcal{N}(0, \textbf{I})$, we need to learn a model $p_\theta$ to approximate the conditional probabilities to run the \emph{reverse diffusion process}: \begin{align} p_\theta(\textbf{x}_{t-1} | \textbf{x}_{t}) & = \mathcal{N}(\textbf{x}_{t-1}; \bm{\mu}_\theta(\textbf{x}_t, t), \bm{\Sigma}_\theta(\textbf{x}_t, t)); \\ p_\theta(\textbf{x}_{0:T}) & := p(\textbf{x}_T)\prod_{t=1}^Tp_\theta(\textbf{x}_{t-1} | \textbf{x}_{t}). \label{eq:p_theta_distribution} \end{align} Note that the reverse conditional probability is tractable by first applying Bayes' rule to three Gaussian distributions and then completing the ``quadratic component'' in the $\text{exp}(\cdot)$ function: \begin{align} q(\textbf{x}_{t-1} | \textbf{x}_{t}, \textbf{x}_0) & = \mathcal{N}(\textbf{x}_{t-1}; \tilde{\bm{\mu}}_t(\textbf{x}_t, \textbf{x}_0), \tilde{\beta}_t\textbf{I}) \\ & = q(\textbf{x}_{t} | \textbf{x}_{t-1}, \textbf{x}_0)\frac{q(\textbf{x}_{t-1} | \textbf{x}_0)}{q(\textbf{x}_{t} | \textbf{x}_0)} \\ & \propto \text{exp}(-\frac{1}{2\tilde{\beta}_t}(\textbf{x}_{t-1} - \tilde{\bm \mu}_t)^2). \end{align} Here, the variance $\tilde{\beta}_t$ is a scalar and the mean $\tilde{\bm \mu}_t$ depends on $\textbf{x}_t$ and the noise $\bm{\epsilon}_t$: \begin{align} \tilde{\beta}_t & = \frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\beta_t; \\ \tilde{\bm \mu}_t & = \frac{1}{\sqrt{\alpha_t}}(\textbf{x}_t - \frac{1-{\alpha}_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\bm{\epsilon}_t). \end{align} Intuitively, $q(\textbf{x}_{t-1} | \textbf{x}_{t}, \textbf{x}_0)$ acts as a \emph{reference} to learn $p_\theta(\textbf{x}_{t-1} | \textbf{x}_{t})$. We can use the variational lower bound (VLB) to optimize the negative log-likelihood: \begin{multline} -\text{log}p_\theta(\textbf{x}_0) \leq -\text{log}p_{\theta}(\textbf{x}_0) + \\ D_{\text{KL}}(q(\textbf{x}_{1:T}|\textbf{x}_0) \parallel p_\theta(\textbf{x}_{1:T}|\textbf{x}_0)). \end{multline} Using the definitions of $q(\textbf{x}_{1:T}|\textbf{x}_0)$ in Equation \ref{eq:q_distribution} and $p_\theta(\textbf{x}_{0:T})$ in Equation \ref{eq:p_theta_distribution}, a loss item $L_t$ ($1 \leq t \leq T-1$) is expressed by: \begin{align} L_t & = D_{\text{KL}}(q(\textbf{x}_{t}|\textbf{x}_{t+1}, \textbf{x}_0) \parallel p_\theta(\textbf{x}_{t}|\textbf{x}_{t+1})) \\ & = \mathbb{E}_{\textbf{x}_0, \bm{\epsilon}_t} \left [ \frac{1}{2\parallel \bm{\Sigma}_\theta(\textbf{x}_t, t)\parallel_2^2} \parallel \tilde{\bm{\mu}}_t - \bm{\mu}_\theta(\textbf{x}_t, t)\parallel^2 \right ].
\nonumber \end{align} We further reparameterize the Gaussian noise term so as to predict $\bm{\epsilon}_t$ from time step $t$'s input $\textbf{x}_t$, and use a simplified objective that ignores the weighting term: \begin{align} L_t^{\text{simple}} & = \mathbb{E}_{t \sim [1, T], \textbf{x}_0, \bm{\epsilon}_t} \left [ \parallel \bm{\epsilon}_t - \bm{\epsilon}_\theta(\textbf{x}_t, t) \parallel^2 \right ] \\ & = \mathbb{E}\left [ \parallel \bm{\epsilon}_t - \bm{\epsilon}_\theta(\sqrt{\bar{\alpha}_t}\textbf{x}_0+\sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_t, t) \parallel^2 \right ]. \nonumber \end{align} In \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, LDMs are proposed so that the diffusion processes are performed in a compressed latent space through a pretrained autoencoder $\mathcal{E}(\textbf{x}_0)$: \begin{align} L_t^{\text{LDM}} & = \mathbb{E}_{\textbf{z}_0=\mathcal{E}(\textbf{x}_0), \bm{\epsilon}_t, t}\left [ \parallel \bm{\epsilon}_t - \bm{\epsilon}_\theta(\textbf{z}_t, t) \parallel^2 \right ] \\ & = \mathbb{E}\left [ \parallel \bm{\epsilon}_t - \bm{\epsilon}_\theta(\sqrt{\bar{\alpha}_t}\textbf{z}_0+\sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_t, t) \parallel^2 \right ]. \nonumber \end{align} In order to perform condition-based image synthesis, a pre-given textual prompt (or another format such as a layout) $y$ is first encoded by a domain-specific encoder $\tau_\theta(y)$ and then sent to the model to predict $\bm{\epsilon}_\theta$: \begin{equation} L_t^{\text{LDM}} = \mathbb{E}_{\mathcal{E}(\textbf{x}_0), \bm{\epsilon}_t, t}\left [ \parallel \bm{\epsilon}_t - \bm{\epsilon}_\theta(\textbf{z}_t, t, \tau_\theta(y)) \parallel^2 \right ]. \label{eq:ldm} \end{equation} Here, $\tau_\theta(y)$ acts as memory (keys and values) in the cross-attention mechanism \cite{transformer_NIPS2017_3f5ee243} and can be jointly trained together with $\bm{\epsilon}_\theta$'s U-Net framework \cite{Unet_DBLP:journals/corr/RonnebergerFB15} from image-conditioning pairs. In the text-to-image generation task of \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, a 12-layer transformer with a hidden dimension of 768 is used\footnote{\url{https://huggingface.co/openai/clip-vit-large-patch14}} \cite{clip_DBLP:journals/corr/abs-2103-00020} to encode textual prompts. \subsection{Autoencoder with KL-divergence} The autoencoder is pretrained \cite{taming_DBLP:journals/corr/abs-2012-09841} beforehand and used directly for encoding the original data sample into the latent space and for decoding the reconstructed $\textbf{z}_0$ back to the original size of $\textbf{x}_0$. In order to combine the effectiveness of the inductive bias of CNNs with the expressivity of transformers, both the encoder ($\mathcal{E}$) and the decoder (or generator, $\mathcal{G}$) parts of the autoencoder use resnet blocks and self-attention blocks. Adversarial learning is used to train this vector quantised GAN framework with a combination of several losses: (1) a reconstruction loss: \begin{equation} \parallel \textbf{x} - \mathcal{G} (\textbf{q}(\mathcal{E}(\textbf{x})))\parallel^2, \end{equation} where $\textbf{q}(\cdot)$ is element-wise quantization; (2) a KL loss on the diagonal Gaussian distribution constructed from $\textbf{q}(\mathcal{E}(\textbf{x}))$ = [$\bm{\mu}$; $\text{log}\bm{\sigma}^2$]: \begin{equation} \sum_{c, h, w}(\bm{\mu}^2 + \bm{\sigma}^2 - 1 - \text{log}\bm{\sigma}^2)/2 \end{equation} where $c$ is the channel number, $h$ the height, and $w$ the width.
The output tensor $\textbf{q}(\mathcal{E}(\textbf{x}))$ is separated into two parts (e.g., from a (6, 64, 64) tensor to two (3, 64, 64) shape tensors) for the mean and the log of the variance of the Gaussian distribution; (3) a GAN loss which includes the following component: \begin{equation} \text{log}\mathcal{D}(\textbf{x})+\text{log}(1-\mathcal{D}(\mathcal{G} (\textbf{q}(\mathcal{E}(\textbf{x}))))). \end{equation} Here, $\mathcal{D}$ stands for a patch-based discriminator that aims to differentiate between real and reconstructed images. An adaptive weight is used to combine these losses, and more details can be found in \cite{taming_DBLP:journals/corr/abs-2012-09841}. \subsection{U-Net with Cross Attention} In \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, a U-Net with a multi-head cross attention mechanism \cite{transformer_NIPS2017_3f5ee243} is used to predict $\bm{\epsilon}_\theta$ with an MSE loss for training (Equation \ref{eq:ldm}). In a typical U-Net implementation, there are five blocks: a \emph{time embedding block} that embeds an input time step $t$; \emph{input/middle/output blocks} that perform convolutional and self-attention based representations of $\textbf{z}_t$ and their cross attentions with the conditional memory $\tau_\theta(y)$; and finally an \emph{out block} that projects the result tensor back to the shape of $\textbf{z}_t$. The \emph{input block} performs a down-sampling with a stack of ``resnet + spatial transformer'' modules (e.g., 12 modules going from a (channel, height, width) shape of (4, 64, 64) to (1280, 8, 8)). Then, the \emph{middle block} with ``resnet + transformer + resnet'' modules links the \emph{input} and \emph{output blocks} without changing the shape of the tensor. Next, the \emph{output block} performs an up-sampling with the same number of modules as the input block (e.g., 12 modules from shape (1280, 8, 8) to (320, 64, 64)). There are residual-style shortcut links here: each module's output is sent from the \emph{input block} to the \emph{output block} at the same level. The final \emph{out block} uses a 2D convolutional layer to project the hidden channel number (e.g., 320) back to the original channel number (e.g., 4). \subsection{DDIMs and Pseudo Numerical Methods} DDIMs \cite{ddim_DBLP:journals/corr/abs-2010-02502} generalize DDPMs via a class of non-Markovian diffusion processes that lead to the same training objective and give rise to implicit models that generate high-quality samples much faster. In the non-Markovian forward process, a real vector $\sigma \in \mathbb{R}_{\geq 0}^T$ is introduced to index a family of \emph{inference} distributions: \begin{align} q_\sigma(\textbf{x}_{1:T} | \textbf{x}_0) & := q_\sigma(\textbf{x}_{T} | \textbf{x}_0) \prod_{t=2}^Tq_\sigma(\textbf{x}_{t-1} | \textbf{x}_{t}, \textbf{x}_{0}); \nonumber \\ q_\sigma(\textbf{x}_{T} | \textbf{x}_0) & = \mathcal{N}(\sqrt{\bar{\alpha}_T}\textbf{x}_0, (1-\bar{\alpha}_T)\textbf{I}); \nonumber \\ q_\sigma(\textbf{x}_{t-1} | \textbf{x}_{t}, \textbf{x}_{0}) & = \mathcal{N}(\tilde{\bm{\mu}}(\textbf{x}_0, \textbf{x}_t, \sigma_t), \sigma_t^2\textbf{I}); \nonumber \\ \tilde{\bm{\mu}}(\textbf{x}_0, \textbf{x}_t, \sigma_t) & = \sqrt{\bar{\alpha}_{t-1}}\textbf{x}_0 + \nonumber \\ & \sqrt{1-\bar{\alpha}_{t-1} - \sigma_t^2}\frac{\textbf{x}_t - \sqrt{\bar{\alpha}_t}\textbf{x}_0}{\sqrt{1-\bar{\alpha}_t}}.
\nonumber \end{align} The mean function $\tilde{\bm{\mu}}(\textbf{x}_0, \textbf{x}_t, \sigma_t)$ is chosen to ensure that $q_\sigma(\textbf{x}_{t} | \textbf{x}_0) = \mathcal{N}(\sqrt{\bar{\alpha}_t}\textbf{x}_0, (1-\bar{\alpha}_t)\textbf{I})$ no longer depends on $\sigma$. In the generative process of DDIM, the \emph{denoised observation} $\textbf{x}_0$ is predicted from the pre-given $\textbf{x}_t$ (a reverse usage of Equation \ref{eq:xt_x0_relation}): \begin{equation} f_\theta(\textbf{x}_t, t) := (\textbf{x}_t - \sqrt{1-\bar{\alpha}_t}\bm{\epsilon}_\theta(\textbf{x}_t, t))/\sqrt{\bar{\alpha}_t}. \nonumber \end{equation} Then, a sample $\textbf{x}_{t-1}$ can be generated from $\textbf{x}_t$ via: \begin{align} \textbf{x}_{t-1} & =\sqrt{\bar{\alpha}_{t-1}}f_\theta(\textbf{x}_t, t) \nonumber \\ & + \sqrt{1-\bar{\alpha}_{t-1} - \sigma_t^2}\bm{\epsilon}_\theta(\textbf{x}_t, t) + \sigma_t\bm{\epsilon}_t. \end{align} When $\sigma_t=0$ for all $t$, the coefficient of $\bm{\epsilon}_t$ becomes zero and samples are generated from $\textbf{x}_T$ to $\textbf{x}_0$ with a fixed procedure. The function $\text{DDIM}(\cdot)$ is thus defined as: \begin{equation} \textbf{x}_{t-1}, f_\theta(\textbf{x}_t, t) = \text{DDIM}(\textbf{x}_t, \bm{\epsilon}_t,t). \label{eq:ddim_xt_to_xtm1} \end{equation} To accelerate the reconstruction process while keeping the sample quality, DDIMs (Equation \ref{eq:ddim_xt_to_xtm1}) are included in pseudo numerical methods \cite{pseudo_https://doi.org/10.48550/arxiv.2202.09778}, which treat DDPMs as solving differential equations on manifolds. In \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}'s code implementation\footnote{\url{https://github.com/CompVis/stable-diffusion/blob/main/ldm/models/diffusion/plms.py\#L218-L232}} (Algorithm \ref{alg:pndm_comb_ddim_plms}), a linear multi-step algorithm based on the Adams--Bashforth method\footnote{\url{https://en.wikipedia.org/wiki/Linear\_multistep\_method\#CITEREFHairerN\%C3\%B8rsettWanner1993}} is used. This pseudo numerical algorithm includes a gradient part (a 2nd-order pseudo improved Euler step and 2nd/3rd/4th-order Adams--Bashforth steps) and a transfer part (DDIM). Here, the discrete indices $t-1$ and $t+1$ stand for the next (e.g., from $T$ to $T-1$) and the former time steps, respectively.
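To make the transfer part concrete, the following is a minimal PyTorch-style sketch of the deterministic DDIM update of Equation \ref{eq:ddim_xt_to_xtm1}. It is an illustration under our own naming conventions: the function name \texttt{ddim\_step} and the precomputed tensor \texttt{alphas\_cumprod} of cumulative products $\bar{\alpha}_t$ are our choices and do not mirror the official implementation.

\begin{verbatim}
import torch

def ddim_step(x_t, eps, t, t_prev, alphas_cumprod, sigma_t=0.0):
    # bar(alpha) at the current and the previous (less noisy) step;
    # the boundary value is 1.0 once the chain is fully denoised.
    a_t = alphas_cumprod[t]
    a_prev = alphas_cumprod[t_prev] if t_prev >= 0 else torch.tensor(1.0)
    # Predicted denoised observation f_theta(x_t, t).
    pred_x0 = (x_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()
    # Direction term; sigma_t = 0 makes the sampler deterministic,
    # while a positive sigma_t restores stochasticity.
    dir_xt = (1.0 - a_prev - sigma_t ** 2).sqrt() * eps
    noise = sigma_t * torch.randn_like(x_t)
    return a_prev.sqrt() * pred_x0 + dir_xt + noise, pred_x0
\end{verbatim}

A full sampler iterates this step over a decreasing sub-sequence of time steps, feeding in the noise $\bm{\epsilon}_\theta(\textbf{x}_t, t)$ predicted by the U-Net as \texttt{eps}; Algorithm \ref{alg:pndm_comb_ddim_plms} below replaces the raw \texttt{eps} by the linear multi-step combination $e'_t$.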
\begin{algorithm}[t] \caption{Pseudo linear multi-step (PLMS) algorithm enhanced by DDIM}\label{alg:pndm_comb_ddim_plms} $\textbf{x}_T \sim \mathcal{N}(0, \textbf{I})$\; \For{$t=T, T-1, ..., 1$}{ $e_t = \bm{\epsilon}_\theta(\textbf{x}_t, t)$\; \uIf{t == T}{ \# pseudo improved Euler-2nd\; $\textbf{x}_{t-1}, f_\theta(\textbf{x}_{t},t) = \text{DDIM}(\textbf{x}_t, e_t, t)$\; $e_{t-1} = \bm{\epsilon}_\theta(\textbf{x}_{t-1}, t-1)$\; $e'_t = (e_t + e_{t-1})/2$\; } \uElseIf{t == T-1}{ \# PLMS-2nd (Adams-Bashforth) \; $e'_t = (3e_t - e_{t+1})/2$\; } \uElseIf{t == T-2}{ \# PLMS-3rd (Adams-Bashforth) \; $e'_t = (23e_t - 16e_{t+1} + 5e_{t+2})/12$\; } \Else{ \# PLMS-4th (Adams-Bashforth) \; $e'_t = (55e_t - 59e_{t+1} + 37e_{t+2}-9e_{t+3})/24$\; } $\textbf{x}_{t-1}, f_\theta(\textbf{x}_{t},t) = \text{DDIM}(\textbf{x}_t, e'_t, t)$\; } return $\textbf{x}_0$\; \end{algorithm} \section{Textual Condition Extension}\label{sec:text_extend} \begin{figure} \centering \includegraphics[width=7.5cm]{figures_coling2022_caiworkshop_figure2_my_framework.crop.pdf} \caption{The textual prompt extension pipeline: retrieval from Wikipedia and continuation generation by the T5/DialoGPT pretrained language models \cite{t5_DBLP:journals/corr/abs-1910-10683, dialogpt_DBLP:journals/corr/abs-1911-00536}.} \label{fig:text_extension_framework} \end{figure} We perform textual condition extension by leveraging Wikipedia as the knowledge base and large-scale pretrained language models as implicit knowledge graphs. The pipeline is depicted in Figure \ref{fig:text_extension_framework}. Given a textual prompt, we first match it with the title list of Wikipedia. At the same time, the input prompt is sent to (1) a pretrained language model, T5 \cite{t5_DBLP:journals/corr/abs-1910-10683}, to continue writing by taking the given prompt as a prefix hint, and to (2) a pretrained dialog model, DialoGPT\footnote{\url{https://github.com/microsoft/DialoGPT}} \cite{dialogpt_DBLP:journals/corr/abs-1911-00536}, that takes the input prompt as a ``query'' and consequently generates ``responses''. Wikipedia's titles and contents are used for matching the input prompt and T5/DialoGPT's outputs. We use BM25 \cite{bm25_robertson2009probabilistic} here to simplify the matching process. From the result document(s), we further compute sentence importance to rank their content fertility and their relationship with the initial prompt. We use the (English) text part of LAION-5B\footnote{\url{https://laion.ai/blog/laion-5b/}} and WikiArt to train a TF-IDF model and then use it to score the prompts in the result prompt list. With a higher score, we subjectively believe that the prompt can possibly yield better images. To score the ``relationship'' with the initial prompt $u$, we embed a pair of initial and result prompts by T5 and compute their cosine similarity. Thus, the importance of a result prompt $v$ is computed by: \begin{equation} w(v) = \text{TFIDF}(v) + \lambda_1 \text{Cos}(\text{T5}(u), \text{T5}(v)). \end{equation} Here, $\lambda_1$ stands for a hyper-parameter balancing the scales of the two scores. As mentioned before, we encourage the result prompts to include spatial and temporal information. We leverage a named entity recognizer\footnote{\url{https://github.com/kamalkraj/BERT-NER}} and regular expressions to recognize place/region names, addresses, times, and dates. The number of spatial and temporal entities, discounted by a hyper-parameter $\lambda_2$, is added to $w(v)$ for the final score of a prompt.
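For concreteness, the following is a minimal sketch of this scoring step. It assumes a callable \texttt{embed} that returns a (e.g., mean-pooled) T5 encoder vector and a callable \texttt{count\_entities} standing in for the NER tagger plus the regular expressions; these two names, as well as the use of scikit-learn's \texttt{TfidfVectorizer}, are our own illustrative choices rather than the exact implementation.

\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_extended_prompts(initial_prompt, candidates, corpus,
                          embed, count_entities, lam1=1.0, lam2=0.5):
    # TF-IDF model trained on the reference corpus
    # (the English text part of LAION-5B plus WikiArt captions).
    tfidf = TfidfVectorizer().fit(corpus)
    u = embed(initial_prompt)
    scored = []
    for v in candidates:
        # Content-fertility proxy: total TF-IDF mass of the candidate.
        tfidf_score = tfidf.transform([v]).sum()
        # Relationship with the initial prompt: cosine similarity.
        ev = embed(v)
        cos = float(np.dot(u, ev) /
                    (np.linalg.norm(u) * np.linalg.norm(ev) + 1e-8))
        # w(v) = TFIDF(v) + lam1 * Cos(T5(u), T5(v))
        #        + lam2 * #(spatial/temporal entities)
        scored.append((v, tfidf_score + lam1 * cos
                       + lam2 * count_entities(v)))
    return sorted(scored, key=lambda pair: -pair[1])
\end{verbatim}

The highest-ranked candidates are then used as the enriched textual conditions for the LDM.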
\section{Retraining with WikiArt}\label{sec:retrain_wikiart} \begin{figure*}[t] \centering \includegraphics[width=16cm]{figures_coling2022_caiworkshop_top30_artists.crop.pdf} \caption{Top-30 artists and their painting numbers in WikiArt.} \label{fig:top-30-artists} \end{figure*} Different artists have quite different numbers of paintings in the WikiArt dataset. The top-3 artists are Vincent van Gogh, Nicholas Roerich, and Pierre Auguste Renoir with 1,889, 1,860, and 1,400 paintings, respectively. The top-10, top-20, and top-30 artists share 14.18\%, 21.80\%, and 27.62\% of the samples, respectively. Figure \ref{fig:top-30-artists} shows the distribution of the number of paintings and their authors in WikiArt. We first retrain the CLIP text encoder with the same tokenizer, keeping the LDM fixed. This stage is expected to map the captions used in WikiArt to stable diffusion's latent space. Then, we fine-tune the text encoder and the LDM jointly. This stage is expected to help the LDM enrich its knowledge of artworks from different artists, in different styles and genres. \section{Experiments}\label{sec:experiments} We use a DGX-A100-80GB server with 8 NVIDIA A100-80GB GPU cards. The original code and settings of the stable diffusion model's checkpoint v1-4 are reused. During inference, a single GPU is used with ddim\_eta=1.0, ddim\_steps=200, height and width both set to 512, and scale set to 5.0. \subsection{Direct Comparison with Original LDMs} \begin{figure*} \centering \includegraphics[width=16cm]{figures_coling2022_caiworkshop_img1_compare_with_existingpaper.crop.pdf} \caption{Direct comparison with the same prompts used in \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752} yet different artists.} \label{fig:compare-top-30-artists} \end{figure*} Figure \ref{fig:compare-top-30-artists} directly compares the images generated by the original model and by the model retrained on WikiArt. We used the same prompts as described in \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}. For direct comparison, we also directly copy the first two rows from their original paper. We list four rows picked from the top-30 artists (Figure \ref{fig:top-30-artists}). The painting skills and styles of the artists are reflected. For example, in our first row, all ``drawn'' by Vincent van Gogh, it is relatively easy to distinguish the paintings from those of other artists: the starry sky appears often, and the Zombie painting tells a rich story of the author himself. When the ``street sign'' is given in the first column, the original paper's two results mainly focus on the photo-style signs themselves. Yet, for the artists, the nice background street views are also important parts of the final painting, such as the sky, the forest, the buildings, and the people with an orange umbrella. With these hints, we modestly draw a preliminary conclusion that our four paintings (rows 3 to 6) of the first column are more creative and include richer surrounding environments and humane information. Columns three, five, and six are drawn from prompts which include ``fake objects'' that do not frequently exist in the real world. The ``half mouse half octopus'' looks more like photos in the original paper (column 3, first two rows), while our images are closer to hand-drawn paintings. When drawing a ``chair that looks like an octopus'', all the rows in column six are close to artworks. The final column can be regarded as an industrial-design-oriented prompt.
\section{Experiments}\label{sec:experiments} We use a DGX-A100-80GB server with 8 NVIDIA A100-80GB GPU cards. The original code and settings of stable diffusion's checkpoint v1-4 are reused. During inference, a single GPU is used with ddim\_eta=1.0, ddim\_steps=200, height and width both set to 512, and the guidance scale set to 5.0. \subsection{Direct Comparison with Original LDMs} \begin{figure*} \centering \includegraphics[width=16cm]{figures_coling2022_caiworkshop_img1_compare_with_existingpaper.crop.pdf} \caption{Direct comparison with the same prompts used in \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}, yet different artists.} \label{fig:compare-top-30-artists} \end{figure*} Figure \ref{fig:compare-top-30-artists} directly compares the images generated by the original model with those generated by the model retrained on WikiArt. We used the same prompts as described in \cite{stablediffusion_DBLP:journals/corr/abs-2112-10752}. For direct comparison, we also directly copy the first two rows from their original paper. We list four rows picked from the top-30 artists (Figure \ref{fig:top-30-artists}). The painting skills and styles of the artists are reflected. For example, in our first row, all ``drawn'' by Vincent van Gogh, it is relatively easy to distinguish the images from those of other artists: the starry sky appears often, and the zombie painting tells a rich story about the artist himself. When the ``street sign'' prompt is given in the first column, the two results from the original paper mainly focus on the photo-style signs themselves. Yet, for the artists, the pleasant street views in the background are also important parts of the final painting, such as the sky, the forest, the buildings and the person with an orange umbrella. With these hints, we modestly draw the preliminary conclusion that our four paintings (rows 3 to 6) in the first column are more creative and include a richer surrounding environment and more humane information. Columns three, five and six are drawn from prompts that include ``fake objects'' which rarely exist in the real world. The ``half mouse half octopus'' looks more like a photo in the original paper (column 3, first two rows), while our images are closer to hand-drawn paintings. When drawing a ``chair that looks like an octopus'', all the rows in column six are close to artworks. The final column can be regarded as an industrial-design-oriented prompt. With the artists' styles and genres included, we can well imagine that when these paintings are printed on real-world T-shirts, people will show interest in further personalized customization and buy them. \subsection{Textual Condition Extension Results} \begin{figure*} \centering \includegraphics[width=16cm]{china_urban.crop.pdf} \caption{Four artists' artworks for the same prompt of ``a painting of urbanization of china''.} \label{fig:china_urban_4artists} \end{figure*} We use the earlier example of ``urbanization of China'' to show the results of textual condition extension. Figure \ref{fig:china_urban_4artists} shows four artworks by four famous artists: Vincent van Gogh, Nicholas Roerich, Pierre Auguste Renoir and Claude Monet. Interestingly, the major elements frequently used by these artists are also reflected here, for example the starry sky of Vincent van Gogh and the water and boats of Claude Monet. The major elements included in the four paintings are also interesting: combinations of traditional Chinese buildings and skyscrapers, combinations of individual houses and mountains, rather crowded endless buildings under a blurry sky, and boats in traditional Chinese style with very tall skyscrapers along the rivers. \begin{figure*} \centering \includegraphics[width=16cm]{china_urban_extend1_shenzhen.crop.pdf} \caption{Four artists' artworks for the same extended prompt of ``originally a collection of fishing villages, Shenzhen rapidly grew to be one of the largest cities in China''.} \label{fig:china_urban_4artists_shenzhen1} \end{figure*} Figure \ref{fig:china_urban_4artists_shenzhen1} shows the same four artists' artworks for an extended prompt related to Shenzhen, one of the most rapidly developing cities during the urbanization of China. With the extended prompts, the model can generate more expressive images. For Vincent van Gogh, there is a moon in the middle of the sky, with fishing boats in the near view and high buildings in the far view. The same elements of fishing boats and skyscrapers are included in the other three paintings. Interestingly, for Nicholas Roerich, even the skyscrapers are drawn in a traditional Chinese style. \begin{figure*} \centering \includegraphics[width=16cm]{china_urban_extend2_train.crop.pdf} \caption{Four artists' artworks for the same extended prompt of ``a train runs on the snow-capped mountains of the Qinghai-Tibet Plateau''.} \label{fig:china_urban_4artists_train2} \end{figure*} Figure \ref{fig:china_urban_4artists_train2} shows the same four artists' artworks for an extended prompt about a train running on snow-capped mountains, during the urbanization of China. With the extended prompts, again, the model generates more expressive images while keeping the character of each artist. The general styles and viewpoints of the four artists are reflected: now we have the mountain as the ``sky'' of Vincent van Gogh, and the ``sky and mountain'' in Claude Monet looks like a reversed river. \begin{figure*} \centering \includegraphics[width=16cm]{china_urban_extend3_children.crop.pdf} \caption{Four artists' artworks for the same extended prompt of ``left-behind children running in wheat-field''.} \label{fig:china_urban_4artists_children3} \end{figure*} Figure \ref{fig:china_urban_4artists_children3} shows the same four artists' artworks for an extended prompt about children running in wheat fields, during the urbanization of China.
With the extended prompts, again, the model generates more expressive images with rich emotional colours, such as blue skies, golden wheat fields, and joyfully running children. The general styles and viewpoints of the four artists are reflected, such as Vincent van Gogh's sky and the skirts of the two girls in Claude Monet's painting. Full images of the top-30 artists (Figure \ref{fig:top-30-artists}) for the initial prompt and the three extended prompts are shown in Figures \ref{fig:china_urban_30artists_urban_china}, \ref{fig:china_urban_30artists_shenzhen}, \ref{fig:china_urban_30artists_train} and \ref{fig:china_urban_30artists_children}, respectively. \begin{figure*} \centering \includegraphics[width=16cm]{30_artists_urban_china.crop.pdf} \caption{Top-30 artists' artworks for the same initial prompt of ``a painting of urbanization of china''.} \label{fig:china_urban_30artists_urban_china} \end{figure*} \begin{figure*} \centering \includegraphics[width=16cm]{30_artists_urban_china_extend_shenzhen.crop.pdf} \caption{Top-30 artists' artworks for the same extended prompt of ``originally a collection of fishing villages, Shenzhen rapidly grew to be one of the largest cities in China''.} \label{fig:china_urban_30artists_shenzhen} \end{figure*} \begin{figure*} \centering \includegraphics[width=16cm]{30_artists_urban_china_extend_train.crop.pdf} \caption{Top-30 artists' artworks for the same extended prompt of ``a train runs on the snow-capped mountains of the Qinghai-Tibet Plateau''.} \label{fig:china_urban_30artists_train} \end{figure*} \begin{figure*} \centering \includegraphics[width=16cm]{30_artists_urban_china_extend_children.crop.pdf} \caption{Top-30 artists' artworks for the same extended prompt of ``left-behind children running in wheat-field''.} \label{fig:china_urban_30artists_children} \end{figure*} \section{Conclusion}\label{sec:conclusion} In order to improve the creativity of LDMs, we have proposed two directions: extending the input prompts, and retraining the original model on the WikiArt dataset. We take the 1,000 artists of the last 400 years as the major source of both creativity and artistry. With these proposals, the resulting diffusion models can ``ask'' these famous artists to draw novel and expressive paintings of modern topics. We believe this is an interesting topic with real-world industrial design applications, such as clothing design, advertisement posters, and game character design. By drawing real-world topics with the help of hundreds to thousands of famous artists, it is reasonable to learn creativity and fertility through these artists' eyes.
\section{Introduction} \label{sec:Introduction} Since the pioneering work by~\cite{Stommel1961} on a conceptual model of the thermohaline circulation, the problem of the stability of the Atlantic Meridional Overturning Circulation (AMOC) has become one of the main issues in climate research. A collapse of the AMOC is often used to explain abrupt changes in past climate records. In recent years, a possible AMOC collapse in response to increased freshwater forcing in the northern North Atlantic, expected as a consequence of global warming, has been identified as a low--probability but high--risk future climate event \cite[]{Broecker1997,Clark2002,Alley2003}. An abrupt collapse of the AMOC, in response to a quasi--equilibrium increase in freshwater forcing in the North Atlantic, has been reported in different ocean and climate models of intermediate complexity (EMICs) \cite[]{Rahmstorf2005}. This implies a non--linear response of the ocean to the freshwater forcing, with a sudden collapse of the overturning above a threshold value of the freshwater forcing. The EMIC results are challenged by the model experiments of \cite{Yin2006} and by IPCC--AR4 general circulation model (GCM) results, as analysed in \cite{Schmittner2005}. In the latter, it is found that the AMOC strength decreases approximately linearly in response to a $CO_2$ increase according to the SRES--A1B scenario, and there is no collapse. It must be noted that simulations to detect possible multiple equilibria regimes of the AMOC have not been performed with these GCMs. The near--linear response to the gradual freshwater flux perturbation found in \cite{Schmittner2005} does not rule out the possibility of a sudden collapse under a stronger freshwater flux. However, from the GCM results it has been suggested that the existence of a multiple equilibria regime is an artifact of ocean--only models, and in particular of a poor (or absent) representation of ocean--atmosphere interactions. In an ocean--only model, the salt advection feedback is the central feedback affecting the stability of the AMOC\@. When an atmosphere is coupled to the ocean model, other feedbacks, due to the ocean--atmosphere interaction, become relevant. The effect of these feedbacks may eventually overcome the effect of the salt--advection feedback, and remove the multiple equilibria found in ocean--only models and EMICs. In some models, the response of the atmosphere to AMOC changes may indeed act to stabilise the present--day AMOC \cite[]{Vellinga2002,Stouffer2006}. In particular, the southward shift of the intertropical convergence zone would enhance the surface salinity of the Atlantic north of the equator, increasing the northward salinity transport by the northern hemispheric gyres~\cite[]{Krebs2007,Vellinga2002}. The decrease in the atmospheric temperature of the Northern Hemisphere (NH), as a consequence of the AMOC collapse, may also play a role~\cite[]{Stouffer2006}. Lower atmospheric temperatures would drive a stronger heat extraction from the ocean and, consequently, higher densities of surface waters. This effect may be more than compensated by the insulating effect of a NH ice cover extending further south~\cite[]{Vellinga2002}. The potential impact of changes in the wind--stress, in particular the zonal wind--stress, has recently been investigated in~\cite{Arzel2010}, but the magnitude of the changes induced by the wind--stress feedback remains unclear.
The question that must be answered is: ``Do the atmospheric feedbacks remove the multiple equilibria regime of the AMOC, as found in ocean--only models and EMICs?'' The first step in trying to answer this question is, in our view, to find a simple, but quantitative, description of these atmospheric feedbacks, extending that of box--model representations \cite[]{Nakamura1994}. Only when a quantitative description of the feedbacks is available is it possible to assess the impact of the ocean--atmosphere interaction on the stability properties of the AMOC. Studies to isolate the effect of the different feedbacks using a GCM are computationally expensive. Furthermore, the complexity of a full GCM can hinder the understanding of the relevant processes in the system. For these reasons, simpler atmospheric models are needed to provide dynamic boundary conditions to full ocean GCMs. Their design can benefit from the fact that the atmosphere, on the ocean time scales, can effectively be treated as a ``fast'' component that adjusts to the ocean anomalies. These coupled models are often referred to as ``hybrid coupled models'' (HCMs). Since the main known atmosphere--ocean coupled mode of variability is the El Ni\~no Southern Oscillation (ENSO), HCMs have been developed mainly to study this phenomenon, focusing on the interaction between wind and sea surface temperature ($SST$) in the tropical oceans. In this framework, the main atmosphere--ocean interaction to include in the model is the change in the zonal winds over the equatorial Pacific in response to $SST$ anomalies \cite[]{Cane1986}. \cite{Barnett1993} used a statistical model of the wind--stress based on an empirical orthogonal function decomposition of real data, coupled to a regional GCM of the equatorial Pacific. They found good forecasting skill for ENSO variability, and HCMs have been extensively used for ENSO forecasting since then \cite[]{Latif1998}. Singular value decomposition of observational data was used in \cite{Syu1995} to implement an anomaly model of the wind--stress for the equatorial Pacific. The HCM including this model has been used to investigate the role of ENSO--like feedbacks in seasonal variability. In \cite{Burgers2003}, linear regressions on the Ni\~no--3 and Ni\~no--4 indices are used in combination with a red noise term to study the importance of local wind feedbacks in the tropical Pacific. Singular value decomposition in combination with a stochastic term has been used also in~\cite{Storch2005}. In these studies, the wind--stress--$SST$ interaction is generally the main point of interest, but other feedbacks are active as well in the ocean--atmosphere system. Changes in wind speed affect evaporation and, as a consequence, surface temperature \cite[]{Neelin1987}. Also the freshwater flux is correlated with $SST$, through the triggering of convective events in the atmosphere \cite[]{Graham1987,Zhang1995}. Our aim here is to develop a global HCM that includes all the main atmosphere--ocean feedbacks relevant for the stability of the AMOC, in an approach that focuses on the quasi--steady state behaviour rather than on variability. As we want to follow an approach as general as possible, we regress all the surface fluxes pointwise on $SST$. Since the $SST$ variability has a typical extent ranging from the regional to the basin scale, the atmosphere--ocean interaction is roughly captured by this local approach.
In the HCM, two linear perturbation terms dependent on $SST$ are added to the climatology of the forcing fields of the ocean model. A term depending on the local $SST$ anomaly represents the atmosphere--ocean feedbacks that are acting in a statistical steady state. The large--scale changes in the surface fluxes due to the collapse of the AMOC cannot be described by these local regressions alone, but are included through a second linear term that depends on the anomalous strength of the overturning circulation itself, measured through the NH annual average $SST$ anomaly. Taken together, the local-- and large--scale terms give a simple representation of the atmospheric feedbacks which play a role in the stability of the AMOC. As a demonstration of concept, our regressions are based on the output of an EMIC (described in section \ref{sec:Model}). The linear atmospheric feedback representations are presented in section~\ref{sec:regressions}, with results in section~\ref{sec:Results}. The performance of the HCM is compared to that of the original EMIC in section~\ref{sec:test}. With both local and large--scale regression terms, the HCM captures the changes in atmospheric fluxes in response to AMOC changes. The advantages of the HCM over the EMIC are that (i) a more than tenfold decrease in computation time is achieved and (ii) it becomes possible to selectively investigate the effects of different physical processes on the stability of the AMOC. \section{The EMIC SPEEDO} \label{sec:Model} The HCM is constructed from data of the EMIC SPEEDO \cite[]{Severijns2009}, an intermediate complexity coupled atmosphere/land/ocean/sea--ice general circulation model. The choice for an EMIC is motivated by the fact that multi--thousand year runs are needed to construct the HCM, which is at the moment not feasible with a GCM. The atmospheric component of SPEEDO is a modified version of Speedy~\cite[]{Molteni2003,Kucharski2003,Bracco2004,Hazeleger2005,Breugem2007}, an atmospheric GCM with a horizontal spectral resolution of T30 on a horizontal Gaussian latitude--longitude grid (approximately $3^\circ$ resolution) and 8 vertical density levels. Simple parameterisations are included for large--scale condensation, convection, radiation, clouds and vertical diffusion. A simple land model is included, with three soil layers and up to two snow layers. The hydrological cycle is represented with the collection of precipitation in the main river basins and outflow into the ocean at specific positions. Freezing and melting of soil moisture is included. The ocean model component of SPEEDO is the CLIO model~\cite[]{Goosse1999}. It has approximately a $3^\circ \times 3^\circ$ resolution in the horizontal, with 20 vertical layers ranging in resolution from $10\;m$ at the surface to $750\;m$ at the bottom. The horizontal grid of the ocean model is curvilinear, and deviates from a latitude--longitude one in the North Atlantic and Arctic basins to avoid the singularity at the North Pole. A convective adjustment scheme, increasing the vertical diffusivity when the water column is unstably stratified, is used in the model. The LIM sea--ice model is included in CLIO~\cite[]{Graham1987}. A coupler provides the boundary conditions to the components, and performs the interpolations between the different ocean and atmosphere model grids in a conservative way.
Studies conducted both with an EMIC \cite[]{DeVries2005} and with a fully implicit ocean model \cite[]{Huisman2010} showed the fundamental role of the salinity budget at the southern boundary of the Atlantic ocean in determining the response of the AMOC to freshwater anomalies \cite[]{Rahmstorf1996}. The value of the net freshwater transport by the overturning circulation at $35^\circ\mathrm{S}$, hereafter denoted $M_{ov}$, is likely a control parameter that signals the coexistence of two stable equilibria of the AMOC. If $M_{ov}$ is positive, the AMOC imports freshwater into the Atlantic basin and only the present--day ``ON'' state of the overturning is stable. If $M_{ov}$ is negative, freshwater is exported out of the basin by the AMOC, and a second stable ``OFF'' state of the AMOC exists, with reversed or no overturning in the Atlantic ocean. In the equilibrium solution of SPEEDO, the Atlantic basin integrated net evaporation is overestimated both with respect to most other models and to the few available observations~\cite[]{Rahmstorf1996}. Furthermore, the zonal salinity gradient in the south Atlantic is reversed, with a maximum on the eastern side. The high evaporation over the basin, combined with the low freshwater import by the gyre due to the reversed zonal salinity profile, forces the overturning circulation to import freshwater ($M_{ov}=0.29 \; \mathrm{Sv}$) in order to close the budget. For these reasons, a small freshwater flux correction is needed in the model for the purpose of our study, since we are interested in the feedbacks connected with a permanent collapse of the AMOC. Following the example of \cite{DeVries2005}, a freshwater increase is applied over the eastern Atlantic, from the southern boundary to the latitude of the Gibraltar strait, summing up to $0.2 \; \mathrm{Sv}$. A dipole correction is applied over the southern gyre to reverse the zonal salinity profile, with a rate of $0.25 \; \mathrm{Sv}$\protect\footnote{The model used in~\cite{DeVries2005} shares the same ocean model component as SPEEDO, but uses ECBilt as the atmospheric model instead of Speedy. In their setup, the basin integrated net evaporation of the Atlantic ocean is underestimated, while the zonal salinity contrast in the southern Atlantic is overestimated. Therefore, their correction has a sign opposite to the one used here.}. All the corrections are performed as a virtual salt flux, keeping the global budget closed with an increased evaporation in the tropical Pacific and Indian oceans. As a consequence of these corrections, the net freshwater transport of the AMOC at the southern boundary of the Atlantic basin becomes negative ($M_{ov}=-0.069 \; \mathrm{Sv}$). As proposed in \cite{DeVries2005} and \cite{Huisman2010}, this situation may allow the coexistence of multiple equilibria of the AMOC under the same boundary conditions. Although the data necessary for the definition of the HCM come from only 300 years of simulation, several tens of thousands of years of integration were performed with the EMIC in the testing phase of the different freshwater corrections applied to reach the regime where the AMOC can permanently collapse (i.e., changing the freshwater correction and running to equilibrium, testing flux diagnostics, and testing whether the collapse of the AMOC is permanent), motivating the use of a fast EMIC.
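For reference, the $M_{ov}$ values quoted above can be diagnosed along the lines of the following minimal sketch, which follows the common definition of the freshwater transport carried by the zonally integrated baroclinic (overturning) flow, as used e.g. in \cite{DeVries2005}; the array layout and the reference salinity are illustrative assumptions.
\begin{verbatim}
import numpy as np

def m_ov(v, s, dx, dz, s0=35.0):
    """M_ov at a zonal section (e.g. 35S).
    v, s : (nz, nx) meridional velocity (m/s) and salinity (psu)
    dx   : (nx,) zonal cell widths (m); dz: (nz,) layer thicknesses (m)
    Returns M_ov in Sv; negative values mean freshwater export."""
    v_int = (v * dx).sum(axis=1)            # zonal integral per layer (m^2/s)
    v_star = v_int - v_int @ dz / dz.sum()  # remove net (barotropic) transport
    s_zm = (s * dx).sum(axis=1) / dx.sum()  # zonal-mean salinity per layer
    fw = (v_star * (s_zm - s0) * dz).sum()  # salinity-weighted transport (m^3/s)
    return -fw / s0 / 1e6
\end{verbatim}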
The surface boundary conditions for the ocean are computed from the atmospheric model as follows. Since the atmospheric boundary layer is represented by only one model layer, near--surface values of temperature ($T_{sa}$), wind ($\vec{U}_{sa}$, the arrow indicating a vector quantity) and specific humidity ($Q_{sa}$) are extrapolated from the values at the model's lowest full layers. Furthermore, an effective wind velocity is defined to include the effect of unresolved wind variability as $\left | V_0 \right | = \left ( \vec{U}_{sa} \cdot \vec{U}_{sa} + V_{gust}^2\right)^\frac{1}{2}$, where $V_{gust}$ is a model parameter. The ocean model provides the values of $SST$ through the coupler, from which the saturation specific humidity at the surface ($Q^{sat}_{sa}$) is also computed through the Clausius--Clapeyron equation. With these quantities, the surface boundary conditions for the ocean are computed. The sensible ($\Phi_{SQ}$) and latent heat ($\Phi_{LQ}$) fluxes into the ocean are obtained from the bulk formulas: \begin{equation}\label{eq:heat} \begin{split} \Phi_{SQ} &= \rho_{sa} c_p C_H \left | V_0 \right | \left(T_{sa} - SST \right ),\\ \Phi_{LQ} &= \rho_{sa} L_H C_H \left | V_0 \right | \min\left [\left(Q_{sa} - Q^{sat}_{sa} \right ),0\right], \end{split} \end{equation} where $\rho_{sa}$ is the surface air density, $c_p$ and $L_H$ are the specific heat of air and the latent heat of evaporation, respectively, and $C_H$ is a heat exchange coefficient, a model parameter depending on the stability properties of the boundary layer. The parameterisations of the radiative fluxes are more complex. For the short--wave ($\Phi_{SW}$) and long--wave ($\Phi_{LW}$) components, two and four frequency bands are used, respectively. The transmittance is computed for each band separately, taking into account air density, water content and cloud cover. The total non--solar heat flux ($\Phi_Q$) is just the sum of the different components: \begin{equation}\label{eq:heatall} \Phi_{Q} = \Phi_{SQ} + \Phi_{LQ} + \Phi_{LW}. \end{equation} Separate parameterisations are used for precipitation due to convection ($\Phi_{Pcv}$) and to large--scale condensation ($\Phi_{Pls}$). River runoff ($\Phi_R$) is provided by the land model. The net evaporation ($\Phi_E$) can then be computed as: \begin{equation}\label{eq:net_evap} \Phi_{E} = \Phi_{LQ}/L_H - \Phi_{Pls} - \Phi_{Pcv} - \Phi_{R}. \end{equation} The wind--stress vector is computed as: \begin{equation}\label{eq:wind_stress} \vec{\Phi_U} = \rho_{sa} C_D \left | V_0 \right | \vec{U}_{sa}, \end{equation} where $C_D$ is a drag coefficient.
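As an illustration, the turbulent fluxes of equation~(\ref{eq:heat}) can be evaluated numerically as in the following sketch; the constant values, including a constant $C_H$ (which in SPEEDO depends on the boundary--layer stability), are illustrative assumptions.
\begin{verbatim}
import numpy as np

RHO_SA = 1.2      # surface air density, kg/m^3
C_P    = 1004.0   # specific heat of air, J/(kg K)
L_H    = 2.5e6    # latent heat of evaporation, J/kg
C_H    = 1.3e-3   # heat exchange coefficient (stability dependent in SPEEDO)
V_GUST = 5.0      # gustiness parameter, m/s

def turbulent_heat_fluxes(u_sa, v_sa, t_sa, sst, q_sa, q_sat):
    """Sensible and latent heat fluxes into the ocean (W/m^2)."""
    v0 = np.sqrt(u_sa**2 + v_sa**2 + V_GUST**2)   # effective wind speed
    phi_sq = RHO_SA * C_P * C_H * v0 * (t_sa - sst)
    # min(..., 0) keeps evaporation only: condensation is not passed on
    phi_lq = RHO_SA * L_H * C_H * v0 * np.minimum(q_sa - q_sat, 0.0)
    return phi_sq, phi_lq
\end{verbatim}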
\section{Linear regressions} \label{sec:regressions} Our aim is to capture the changes in the atmospheric forcing connected with the changes in the ocean state, that is, the atmospheric response to a collapse of the AMOC. As motivated in the introduction, we assume that these atmospheric feedbacks can be expressed as functions of $SST$ alone. First, the feedbacks that keep the system in a statistical equilibrium state are always present, and are expressed in our case as a function of the local $SST$. They are extracted from a 200--year--long statistical steady state run (CLIM) of SPEEDO. The departure from the steady state arises during an externally forced AMOC collapse, in association with the large--scale $SST$ footprint of an AMOC decline. The feedbacks involved in the collapse are different from the ones acting at the steady state. To study the large--scale feedbacks, a 4000--year experiment was performed, starting from CLIM, with an additional $0.4 \;\mathrm{Sv}$ freshwater flux centred around southern Greenland during the first 1000 years; this run is referred to as PULSE. In the first hundred years of the experiment, the AMOC collapses and a shallow reverse overturning cell is established in the Atlantic basin. Since in this paper the focus is only on the impact of a complete and steady collapse of the AMOC, we only show the results obtained with the large freshwater anomaly mentioned above, which guarantees that the AMOC is brought to a steady reversed state. The maximum of the meridional overturning streamfunction during the first two hundred years of both the PULSE and CLIM runs is shown in figure~\ref{fig:streams} (bottom panel). After the first 1000 years of the experiment, the additional freshwater flux is switched off, and the model tends to an equilibrium state with no sign of recovery of deep water formation in the northern North Atlantic after 3000 years (top panel of figure~\ref{fig:streams}). Taken together, the feedbacks extracted from the CLIM and PULSE runs provide the representation of the changes of the atmospheric fluxes during a collapse of the AMOC. To provide the simplest description of the changes taking place at the ocean--atmosphere interface, the first order approximation is the addition of a linear perturbation term to the climatology of the surface atmosphere--ocean fluxes. In particular, we consider a linear regression on $SST$\@. This approach is clearly limited, but it is an approximation that gives a consistent representation of the large--scale feedbacks. The results can be successfully used as boundary conditions for the ocean--only model, as will be shown below. To force the ocean model, we need five surface fluxes: the non--solar heat flux (which includes long--wave radiation, latent and sensible heat fluxes), short--wave radiative heating, net evaporation, and the zonal and meridional wind--stresses. The incoming short--wave radiation is not regressed, and only its average seasonal cycle is retained, since its response to $SST$ is completely mediated through a cloud cover response that is not well represented in the Speedy model~\cite[]{Severijns2009}. Two linear models are used for regressing the data from CLIM and PULSE. The CLIM data is fitted with: \begin{equation}\label{eq:reg} \phi(i,j) - \overline{\phi(i,j)} = p_1(i,j)\cdot \left(SST(i,j)-\overline{SST(i,j)}\right), \end{equation} where $\phi \in \left \{\Phi_Q,\Phi_E,\vec{\Phi_U} \right\}$ is a particular surface flux field to be regressed, $p_1$ is the model parameter field to be fitted, $i$ ($j$) is the grid index in the east--west (north--south) direction and the overbar indicates a time average. Monthly data is used in the fit of the CLIM data to represent the seasonal cycle. Note that this formulation is a \emph{local} regression, by which we mean a regression between quantities that belong to the same grid cell of the model.
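A pointwise fit of equation~(\ref{eq:reg}) can be sketched as follows; for brevity, the seasonal cycle is ignored here (a single time mean is used instead of a monthly climatology), and \emph{scipy} replaces the R \emph{lm} function used in this work.
\begin{verbatim}
import numpy as np
from scipy.stats import linregress

def fit_local(flux, sst, alpha=0.05):
    """flux, sst: (time, ny, nx) monthly fields from the CLIM run.
    Returns the climatology and the local slope field p1."""
    flux_bar, sst_bar = flux.mean(axis=0), sst.mean(axis=0)
    p1 = np.zeros_like(flux_bar)
    ny, nx = flux_bar.shape
    for j in range(ny):
        for i in range(nx):
            res = linregress(sst[:, j, i] - sst_bar[j, i],
                             flux[:, j, i] - flux_bar[j, i])
            if res.pvalue < alpha:    # keep significant fits only
                p1[j, i] = res.slope  # otherwise p1 stays 0 (climatology)
    return flux_bar, p1
\end{verbatim}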
The natural variability signal caught by the regressions of equation~(\ref{eq:reg}) is removed from the PULSE data. Only the first 100 years of PULSE are used, since we are interested in the part of the response that can approximately be considered linear. The residual signal $\phi_r(i,j)$ can then be regressed with a second linear model: \begin{equation}\label{eq:reg_LS} \phi_r(i,j) = p_2(i,j) \cdot \left( \left < SST\right >_{NH} - \overline{\left < SST \right >_{NH}}\right ), \end{equation} where the symbol $\left < \;\;\; \right>_{NH}$ denotes the average over the NH. In this case the regressor is, for all grid cells, the yearly average $SST$ in the NH, a good indicator of the state of the AMOC~\cite[]{Stouffer2006}, as figure~\ref{fig:streams} suggests (bottom panel, dashed line). Yearly mean data is used for the fit of PULSE. It must be stressed that the last term of equation~(\ref{eq:reg_LS}) is the average NH $SST$ of the CLIM run, since we are interested in the deviation from the equilibrium state. Consequently, the intercept is set to zero, since the terms involving $p_2$ must have no effect when the climate is in a neighbourhood of CLIM. All the regressions are computed with the \emph{lm} (linear model) function provided in the R statistical software, version 2.8.0 \cite[]{Team2009}. The regressions are computed through a least squares technique, and we require statistical significance at the 95\% level, discarding all the fits with a \emph{p--value} (provided by \emph{lm} itself) higher than $0.05$. This amounts to discarding a fit if the probability of obtaining the same result from random data is higher than 5\%. When this occurs the fit is considered unsuccessful: only the climatological value of CLIM ($\overline{\phi(i,j)}$ in equation~(\ref{eq:reg})) is kept, and both $p_1(i,j)$ and $p_2(i,j)$ are set to zero. The output of the fitting procedure shows a very weak sensitivity to the chosen significance level. The same regression procedure was applied also to the output of the uncorrected original SPEEDO model. The results obtained from the two models, with or without freshwater flux corrections, are consistent on both qualitative and quantitative grounds. Partial exceptions are the Southern Ocean and the Labrador Sea, where the strength of the feedbacks is different. An analysis of these differences is beyond the scope of the present study, but they may be associated with changes in the sea--ice cover in the two models. We now give the formulation of the boundary conditions for the ocean--only model to be forced by our ``climatology with feedbacks''. The surface heat flux into the ocean is computed as a combination of the regressions and a restoring term to the climatology: \begin{equation}\label{eq:heat_forcing} \begin{split} \Phi_{Q}(i,j) =& \overline{\Phi_{Q}(i,j)} + p_1^{\Phi_{Q}}(i,j)\cdot \left(SST(i,j)-\overline{SST(i,j)}\right) \\ +& p_2^{\Phi_{Q}}(i,j) \cdot \left( \left < SST\right >_{NH} - \overline{\left < SST \right >_{NH}}\right ) \\ +& \overline{\Phi_{SW}(i,j)}\\ +& \frac{\rho_{sa} c_p \left | \overline{V_0(i,j)} \right |}{\tau} \cdot \left ( \overline{SST}(i,j)-SST(i,j) \right ), \end{split} \end{equation} where $p_1^{\Phi_Q}$ and $p_2^{\Phi_Q}$ are the local and large--scale regression parameters for the heat flux, $\rho_{sa}$ and $\overline{V_0(i,j)}$ are fixed climatological values, and the relaxation time $\tau$ is chosen to be 55 days for the ocean, consistent with the bulk formula of the coupled model, equation~(\ref{eq:heat}).
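In code, the assembly of equation~(\ref{eq:heat_forcing}) amounts to adding the two regression terms and the restoring term to the flux climatology, along the lines of the following hedged sketch (2--D \emph{numpy} fields, with the same illustrative constants as in the bulk--formula sketch above):
\begin{verbatim}
RHO_SA, C_P = 1.2, 1004.0   # illustrative, as in the earlier sketch

def heat_flux_forcing(sst, sst_clim, sst_nh, sst_nh_clim,
                      q_clim, sw_clim, p1_q, p2_q, v0_clim, tau):
    """Heat-flux boundary condition: climatology + local and large-scale
    regression terms + short-wave climatology + restoring term."""
    local = p1_q * (sst - sst_clim)
    large = p2_q * (sst_nh - sst_nh_clim)   # the <SST>_NH anomaly is a scalar
    restore = RHO_SA * C_P * v0_clim / tau * (sst_clim - sst)
    return q_clim + local + large + sw_clim + restore
\end{verbatim}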
The net evaporation flux is computed in three steps. First, the deviations from the climatological values, $\delta \Phi_E$, are computed at each grid cell: \begin{equation}\label{eq:evap_forcing} \begin{split} \delta \Phi_{E}(i,j) =& p_1^{\Phi_{E}}(i,j)\cdot \left(SST(i,j)-\overline{SST(i,j)}\right) \\ +& p_2^{\Phi_{E}}(i,j) \cdot \left( \left < SST\right >_{NH} - \overline{\left < SST \right >_{NH}}\right ), \end{split} \end{equation} where $p_1^{\Phi_{E}}$ and $p_2^{\Phi_{E}}$ are the regression parameters for the net evaporation flux. Then, the global integral of the deviations, $\Delta \Phi_E$, is computed on the model grid, and its area average is subtracted so that the budget imbalance is set to zero. The total freshwater flux then reads: \begin{equation}\label{eq:evap_total} \Phi_{E}(i,j) = \overline{\Phi_{E}(i,j)} + \delta \Phi_E(i,j) - \Delta \Phi_E/\Sigma, \end{equation} where $\Sigma$ is the ocean surface area. For the wind--stress vector, only the output of the regressions is used: \begin{equation}\label{eq:wind_forcing} \begin{split} \vec{\Phi_U}(i,j) &= \overline{\vec{\Phi_U}(i,j)} + \vec{p}_1^{\vec{\Phi_U}}(i,j)\cdot \left(SST(i,j)-\overline{SST(i,j)}\right) \\ &+ \vec{p}_2^{\vec{\Phi_U}}(i,j) \cdot \left( \left < SST\right >_{NH} - \overline{\left < SST \right >_{NH}}\right ), \end{split} \end{equation} where $\vec{p}_1^{\vec{\Phi_U}}(i,j)$ and $\vec{p}_2^{\vec{\Phi_U}}(i,j)$ are the vectors of the regression parameters of the local and large--scale regressions, respectively, for the two components of the wind--stress. Over sea--ice, a fixed climatology of air--ice fluxes is used. When sea--ice is present, the model weights the surface fluxes by the fractional open ocean area $(1-\varepsilon(i,j))$, where $\varepsilon(i,j)$ is the fractional sea--ice cover of the cell. The technique described returns the rate of change of a field with $SST$ or $\left < SST\right >_{NH}$ only in those areas where a linear regression is statistically significant. Furthermore, setting the regression parameters to zero still leaves a constant climatology that can be used as a boundary condition for the ocean model. We thus have complete control over which feedbacks act at the ocean--atmosphere interface, and we can selectively investigate their individual or collective effects.
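The budget closure of equations~(\ref{eq:evap_forcing})--(\ref{eq:evap_total}) can be sketched as follows, with \texttt{area} the grid--cell area field and \texttt{ocean\_mask} equal to one over the ocean; both names are illustrative.
\begin{verbatim}
import numpy as np

def net_evaporation_forcing(sst, sst_clim, sst_nh, sst_nh_clim,
                            e_clim, p1_e, p2_e, area, ocean_mask):
    """Net-evaporation boundary condition: regression anomalies plus a
    uniform correction that closes the global freshwater budget."""
    d_e = (p1_e * (sst - sst_clim)
           + p2_e * (sst_nh - sst_nh_clim)) * ocean_mask
    imbalance = (d_e * area).sum()     # global integral, Delta Phi_E
    sigma = (ocean_mask * area).sum()  # ocean surface area, Sigma
    return e_clim + d_e - ocean_mask * imbalance / sigma
\end{verbatim}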
\section{Results} \label{sec:Results} \subsection{Local regressions} \label{sec:localreg} The fitting procedure for the CLIM data is generally successful, and the results of the regressions on the CLIM data are reported in figures~\ref{fig:Climatology} and~\ref{fig:p1_reg}. In figure~\ref{fig:Climatology}, the average value of the regressed fields is reported ($\overline{\phi(i,j)}$ in equation~(\ref{eq:reg})). The total heat flux (including short--wave radiation) is shown in figure~\ref{fig:Climatology}. The net evaporation includes the river runoff. The values of the regression parameter $p_1$ are shown in figure~\ref{fig:p1_reg} for all the regressed fields. In both figures~\ref{fig:Climatology} and~\ref{fig:p1_reg} the values are weighted by the fractional free ocean surface of the cell to compensate for the effects of the average sea--ice cover. The effects of changes in the sea--ice cover are not included in the regressions, as the effect of sea--ice is taken into account by the CLIO model. As discussed below, the changes in sea--ice can strongly modify the feedbacks (compare figures~\ref{fig:p1_reg} and~\ref{fig:reg_effective}). For all the regressed fields, the contribution to the fluxes of the local regression terms can be important compared to the average value, in particular at the western boundaries and outside the equatorial and polar regions. This is clear when we consider the $SST$ variability on a daily basis; the standard deviation is well above $1^\circ \mathrm{C}$ everywhere in the subtropical and subpolar ocean, with peak values of about $7^\circ \mathrm{C}$ close to the NH western boundaries (not shown). The linear regressions only capture part of the natural variability of the CLIM fluxes, but the error is generally lower than $10\%$ of the original field over the major part of the ocean (not shown). Apart from the standard damping on $SST$ that also operates in ocean--only models driven by a prescribed atmosphere, the atmospheric control over the atmosphere--ocean heat flux counteracts this damping in many regions, in particular in the tropics and at high latitudes (positive values in figure~\ref{fig:p1_reg}~a). This means that the linear feedback for the heat flux does not damp the $SST$ anomalies. Relevant exceptions are the equatorial ocean, the central North Atlantic, the northern portion of the Southern Ocean and other smaller areas. It should be noted that in the polar areas the sea--ice cover determines the effective feedback in the heat flux, and often changes the sign of the feedback. The exact mechanism of this feedback is discussed in more detail in section~\ref{sec:largereg}. To investigate the origin of the pattern of the local heat feedback outside the polar regions, the same regression procedure was applied to each component of the heat flux separately, namely the sensible and latent heat fluxes and the long--wave radiation (not shown). The change in the latent heat release is the most important component of the heat flux change. The feedback of the sensible heat flux is slightly weaker in magnitude, and is positive with the only relevant exceptions of the North Atlantic and the equatorial ocean. The long--wave radiation feedback follows the same pattern, and is the weakest term. As first noted in~\cite{Frankignoul1985}, the sign of the heat flux feedback of equation~(\ref{eq:heat}) depends to first order only on the relative change of $T_{sa}$ and $SST$, if the wind is assumed constant. A positive feedback is possible only if the change in $T_{sa}$ is larger than the one in $SST$. This is almost always true in our model in the areas where the heat feedback is positive, as we find when $T_{sa}$ is regressed on $SST$ (not shown). A plausible explanation of this positive heat feedback, at least at low and mid latitudes, is given by the convection--evaporation feedback mechanism proposed by~\cite{Zhang1995}. There is a strong resemblance between the patterns of increased convective precipitation and those of weaker latent heat loss at higher $SST$. This suggests that, in the tropical and subtropical areas where a positive heat flux feedback is observed, a positive $SST$ anomaly is associated with an anomalous convergence of wet air that both contributes to the reduction of evaporation\protect{\footnote{The reduction of evaporation is mainly due to weaker winds.}} and enhances precipitation if convection is triggered. The regression of surface pressure on $SST$ also supports this hypothesis, since higher $SST$s correlate with lower surface pressure in the tropical and subtropical areas.
Regarding the net evaporation (figure~\ref{fig:p1_reg}~b), a weak increase is observed at higher $SST$ over most of the ocean. On the contrary, in most of the tropical areas the increase in convective events leading to increased precipitation dominates the freshwater feedback (basically, the blue areas of figure~\ref{fig:p1_reg}~b), as discussed above. In the case of the wind--stress, a decreased magnitude is observed in connection with higher $SST$ (compare figure~\ref{fig:p1_reg}~c and~d with the mean fields of figure~\ref{fig:Climatology}). The term $\left | V_0 \right |$ of equation~(\ref{eq:heat}) is regressed on the local $SST$, confirming that over most of the ocean at low and mid latitudes weaker than average winds are observed in association with higher than average $SST$s (not shown), implying a lower heat transfer through the interface. The correlation decreases moving poleward, and the mechanism involved is basically the wind--evaporation feedback~\cite[]{Neelin1987}, which connects higher evaporation (lower $SST$) with stronger winds. The fact that we do not observe stronger winds where an increase of convective precipitation is found is not surprising, since the parameterisation of convection does not affect the horizontal wind field~\cite[]{Molteni2003}. A positive correlation between wind speed and $SST$ is observed only in the western part of the subtropical gyre of the Southern Hemisphere (SH) of the Atlantic ocean, south of Greenland and in the Labrador Sea, in the northeastern part of the subpolar gyre of the Pacific ocean, and in some other smaller regions. Even though the negative wind feedback is thought to be dominant, some evidence for a positive feedback has been found for the Kuroshio extension area, in the northeastern Pacific~\cite[]{Wallace1990,Nonaka2003}. The best known wind--$SST$ feedback mechanism in which the wind response to $SST$ anomalies is central is the Bjerknes feedback in the equatorial Pacific, in connection with ENSO~\cite[]{Cane1986}. The fundamental coupled variability of the equatorial ocean--atmosphere system is that of a decrease of the western Pacific trade winds in response to a positive anomaly of $SST$ in the eastern equatorial Pacific. Even though the model has too low a resolution to exhibit a realistic ENSO~\cite[]{Severijns2009}, a weakening of the trade winds in the western and central equatorial ocean is captured by the linear regressions (figure~\ref{fig:p1_reg}~c) and is consistent with the anomaly patterns connected with ENSO~\cite[]{Deser1990}. The stronger convective precipitation detected in the western Pacific at higher $SST$s may be a sign of an anomalous convergence of the low level atmospheric circulation, again in agreement with what is shown by~\cite{Deser1990}. The dipole structure of the meridional wind feedback between the NH and the SH (figure~\ref{fig:p1_reg}~d) basically reflects the weaker dominant winds at higher $SST$. \subsection{Large--scale regressions} \label{sec:largereg} Moving to the results of the \emph{large--scale} regressions, it must be kept in mind in the interpretation of the results that the fit is performed only on the residuals of the \emph{local} regressions, not on the full data of PULSE, and that the fit is performed on a decreasing quantity, the NH average $SST$. The collapse of the AMOC causes a decrease in the NH average $SST$ of about $1.2^\circ \mathrm{C}$. A weaker change of opposite sign is observed over the Southern Ocean (approximately $0.4^\circ\mathrm{C}$).
This NH--SH temperature dipole is a robust feature of different models, and is connected with the lower northward heat transport in the Atlantic ocean, as already found in \cite{Stouffer2006}. The changes in the heat flux are mainly captured by the large--scale regression parameter alone. This can be inferred by comparing the large--scale heat flux parameter and the diagnosed changes in the flux from the coupled model, and is connected with the larger magnitude of the large--scale parameter. The main response of the heat flux after the overturning collapse, not considering changes in the sea--ice cover (figure~\ref{fig:p2_reg}~a), would be that of an increased heat extraction from the ocean in the NH ($9.9 \; W/(m^2 \;^\circ \mathrm{C})$ on average). When the effect of a changing sea--ice cover is included in the computation of the heat feedback (figure~\ref{fig:reg_effective}~b), its sign changes in the high latitudes of the NH ($-9.6 \; W/(m^2 \;^\circ \mathrm{C})$ on average in the NH), which means that the heat released to the atmosphere decreases. This result is in contrast with what the regression parameter $p_2$ suggests, but consistent with the sign of the effective regression parameter. The difference is explained below. The net heat flux, weighted by the ice--free area $(1-\varepsilon)$, can be written as: \begin{equation} \phi_Q=(1-\varepsilon)(\overline{\phi_{Q}} + \partial\phi_Q/\partial SST + \partial\phi_Q/\partial \left<SST\right>_{NH}). \end{equation} $p_2^{\phi_Q}$ is simply $\partial\phi_Q/\partial \left<SST\right>_{NH}$, while the effective parameter is: \begin{equation} \begin{split} p_{2,eff}^{\phi_Q} &=\partial(\phi_Q\cdot(1-\varepsilon))/\partial \left<SST\right>_{NH}\\ &=(1-\varepsilon)\partial\phi_Q/\partial \left<SST\right>_{NH} - \phi_Q \partial\varepsilon/\partial \left<SST\right>_{NH}\\ &=(1-\varepsilon)p_2^{\phi_Q} - \phi_Q \partial\varepsilon/\partial \left<SST\right>_{NH}. \end{split} \label{eq:effective} \end{equation} The second term on the right hand side of equation~(\ref{eq:effective}) describes the changes in the sea--ice cover in response to $SST$ changes. This term is larger than the first term over most of the northern North Atlantic. The sea--ice cover changes determine the sign change in the large--scale heat feedback term. A similar reasoning holds for the local feedback.
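A back--of--the--envelope evaluation of equation~(\ref{eq:effective}) illustrates the sign change; the numbers below are illustrative stand--ins, not diagnosed model values.
\begin{verbatim}
# A grid cell losing heat to the atmosphere (phi_q < 0 into the ocean),
# where ice cover grows as the NH-mean SST drops (deps_dsst < 0).
eps = 0.3          # fractional sea-ice cover
p2 = 9.9           # open-water response, W/(m^2 C)
phi_q = -200.0     # net heat flux into the ocean, W/m^2
deps_dsst = -0.1   # ice-fraction change per degree of NH-mean SST
p2_eff = (1 - eps) * p2 - phi_q * deps_dsst
print(p2_eff)      # ~ -13: the ice-cover term flips the feedback's sign
\end{verbatim}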
In general, the NH--SH heat flux dipole seen in figure~\ref{fig:p2_reg}~a is driven by the decrease of the NH near--surface temperature, which follows a pattern similar to that of $SST$ (figure~\ref{fig:sst_variance}), but with a stronger sensitivity to AMOC changes everywhere except for the southern mid latitudes. This amplification of the $SST$ signal, in particular in the atmosphere of the high latitudes of the NH, is a consequence of the appearance of sea--ice during winter. Without sea--ice changes, these differential variations in $SST$ and atmospheric temperature would tend to produce an increased upward heat flux in the NH (figure~\ref{fig:p2_reg}~a). This increased heat loss is more than counteracted by the decrease in the open ocean area; the increased ice cover effectively drives the cooling of the atmospheric temperatures above the North Atlantic. This can be seen from the changes in the heat flux diagnosed from the coupled model including the insulating effect of sea--ice (figure~\ref{fig:reg_effective}~c), and it is confirmed by the \emph{large--scale} regression parameter computed including the effect of sea--ice (figure~\ref{fig:reg_effective}~b). This ``effective'' regression parameter is the result of the same fitting procedure, applied in this case to the surface heat flux weighted by the actual sea--ice cover instead of the complete heat flux. The results for the local (large--scale) regression are those shown in figure~\ref{fig:reg_effective}~a (b). As a consequence, this regression parameter gives a better representation of the feedbacks that the ocean effectively senses (including the effect of sea--ice). Note that the HCM only uses $p_1$ and $p_2$, and not the effective response coefficients. The changes in sea--ice cover result from explicitly resolved ice dynamics and thermodynamics. At low and mid latitudes in the NH the changes are due to the reduced evaporation in response to lower $SST$ and, at low latitudes, to the lower wind speed. The changes in the surface long--wave radiation budget are smaller in magnitude, and amount to an increased net emission of long--wave radiation almost everywhere in the NH except for the GIN seas. This effect has been observed in other model experiments, and is connected with the reduced downward long--wave radiation flux overcompensating the decreased black body emission at lower $SST$s~\cite[]{Laurian2009}. The decrease in the downward long--wave flux is an effect of a drier atmosphere, and partly balances the reduced latent heat flux. These changes in the heat flux amount to a positive feedback on an AMOC anomaly when the effect of sea--ice is included, favouring a decrease of the surface density in the deep water formation areas of the North Atlantic in connection with a weaker overturning circulation. The patterns of the net evaporation change (figure~\ref{fig:p2_reg}~b) are consistent with the findings of~\cite{Stouffer2006} (their figure 9~e, with opposite sign). The AMOC collapse causes a reduction of the net evaporation over the tropical and subtropical NH and over the tropical SH, due to lower $SST$s (figure~\ref{fig:sst_variance}). In the few areas where an increase in evaporation is observed (basically the north equatorial oceans), this is due to stronger winds. At low latitudes, a significant change of the precipitation patterns also plays a role, with a dipole pattern centred around the equator, positive to the south. This southward shift of the intertropical convergence zone (ITCZ) produces the strongest precipitation increase over the Amazon river basin. This response of the Hadley cell is connected with the southward shift of the latitude of maximum heating, and has been observed consistently in different climate models~\cite[]{Stouffer2006,Krebs2007,Laurian2009} and in an idealised framework too~\cite[]{Drijfhout2010}. A similar, though weaker, pattern of precipitation change is observed in the Pacific and Indian oceans. The increased precipitation over the entire southern Atlantic more than compensates for the increased evaporation due to the higher $SST$ in this part of the basin. A slowdown of the hydrological cycle over Europe is detected as two negative peaks off the coast of France and in the North Sea. On a global scale, the regressions of the PULSE residuals yield an evaporation increase of $0.13 \; mm/(day \; ^\circ \mathrm{C})$. Therefore, our linear approach does not conserve the ocean water mass, and needs a budget closure correction when used as a boundary condition for the ocean, as implemented in equation~(\ref{eq:evap_total}). In the case of the wind--stress, the response of the atmosphere is somewhat less straightforward to understand, and it deserves a longer discussion.
Concerning the meridional wind--stress, the changes at low and mid latitudes are driven by the response of the zonally averaged temperature profile to the AMOC collapse. The equator to pole temperature difference increases by approximately $4^\circ \mathrm{C}$ in the NH. In the SH the opposite is true, with a smaller change. These changes are clearly mirrored in the zonally averaged wind--stress. A stronger southward wind blows over the ocean with a collapsed AMOC in the NH, up to $50^\circ \mathrm{N}$. The situation is similar in the SH, but with a weaker circulation down to $40^\circ \mathrm{S}$, following the opposite change in the zonally averaged temperature. The zonal winds over the Southern Ocean are also reduced. A more peculiar feature is observed in the North Atlantic. A pressure anomaly dipole develops between Greenland and the northeastern Atlantic, with a positive sign to the east, in connection with the differential cooling between these two regions (stronger cooling over the eastern Atlantic). This in turn determines an anomalous anticyclonic circulation centred north of Scotland, with impacts on both the meridional and the zonal wind--stress. Referring to our regressions, the changes due to the AMOC collapse in the tropical regions are already captured by the local regression parameter ($p_1$, figure~\ref{fig:p1_reg}~c and~d). This can be understood considering that the change in $SST$ due to the AMOC collapse (figure~\ref{fig:sst_variance}) is a dipole centred at the latitude of the southern tropic (at the equator in the Atlantic ocean), positive to the south of it, with an amplitude of a few degrees. In fact, the changes due to the overturning collapse are overestimated by the \emph{local} regressions, and $p_2$ (figure~\ref{fig:p2_reg}~c and~d) amounts to a correction opposite to $p_1$. The positive values of $p_2$ for the meridional wind--stress in the intertropical regions (figure~\ref{fig:p2_reg}~d) signal the southward shift of the ITCZ, that is, an anomalous southward wind with decreasing NH average $SST$, not represented by the local regressions. Also the anomalous anticyclonic circulation is reproduced in the large--scale regressions, by the dipoles over the northeastern Atlantic (positive to the south and to the east). The impact of the wind--stress feedbacks on the AMOC stability has been investigated in the recent paper by~\cite{Arzel2010}, where a simple zonally averaged atmospheric model was used. Even though it is quite difficult to compare their results with the results from a GCM like SPEEDO, the general picture is similar. The atmospheric circulation in the NH is strengthened, while the opposite is true for the SH. The magnitude of the changes in SPEEDO is close to their lowest estimates. \section{HCM test} \label{sec:test} The HCM consists of the ocean component of SPEEDO (i.e., CLIO) and the dynamic boundary conditions described in the previous section. It was tested by comparing its results with those of the original SPEEDO model. The first experiment (regCLIM) starts from the end state of the ocean of the CLIM run. The model is forced only by the local regressions (values of $p_2$ set to zero) for 3000 years. Next, all the large--scale regressions are also switched on, and the model runs for 2000 more years. Results of the regCLIM run are shown in figure~\ref{fig:model_drift}. In the top panel, the deviation from the CLIM mean value of the global average sea temperature (salinity) is reported in black (red).
The area shaded in grey on the left margin of figure~\ref{fig:model_drift} marks the (200--year) data of the CLIM run. The light blue area marks the first 3000 years of regCLIM of the ocean--only model, with only the local regressions active. To estimate the theoretical equilibrium state of the model, we fit the global average sea temperature and salinity from years 1201--5200 of regCLIM with the function: \begin{equation} f(t) = a_1 \mathrm{sin}\left (\frac{t+a_2}{a_3} \right ) \mathrm{exp}\left [ - \frac{t+a_2}{a_4} \right ] + B, \label{eq:fit_func} \end{equation} where $t$ is time, $a_1, \ldots, a_4$ are the fit parameters, and $B$ is a constant background that represents the state of the system at infinite time. The theoretical equilibrium state computed from this procedure is $0.31^\circ \mathrm{C}$ colder and $7.2 \cdot 10^{-4}\; psu$ fresher than the coupled CLIM run. Little drift, but a substantial reduction of the variability due to the restoring term, is observed in the global average $SST$ (figure~\ref{fig:model_drift}, black line in the bottom panel). The NH average $SST$ increases by $0.18^\circ \mathrm{C}$ (difference between the last 200 years of regCLIM and CLIM). The maximum of the AMOC is, at the end of regCLIM, approximately $1\;\mathrm{Sv}$ weaker than in the CLIM run (bottom panel of figure~\ref{fig:model_drift}, in red). The AMOC, as the bottom left panel of figure~\ref{fig:Psia} shows, is weaker and approximately $500\;\mathrm{m}$ shallower in the HCM. The freshwater transport by the AMOC at $30^\circ\mathrm{S}$ in the last 200 years of regCLIM (grey shaded area on the right of figure~\ref{fig:model_drift}) is $M_{ov} = -0.06\;\mathrm{Sv}$. To keep $M_{ov}<0$, the freshwater corrections described in section~\ref{sec:Model} are 50\% stronger than in the fully coupled model. To investigate the origin of the changes in the AMOC strength, we diagnose the surface density fluxes of the CLIM and regCLIM runs. The surface density flux $\Phi_\rho$ can be estimated using the formula~\cite[]{Gulev2003,Tziperman1986}: \begin{equation} \label{eq:rhoflux} \Phi_\rho = -\frac{\alpha}{c_p}\Phi_H + \rho_0 \beta \frac{\Phi_E \cdot SSS}{1-SSS \cdot 10^{-3}}, \end{equation} where $\alpha= - 1/\rho_0\left(\partial \rho / \partial T\right)$, $\beta=1/\rho_0\left(\partial \rho / \partial S\right)$, $\Phi_H$ is the total surface heat flux into the ocean ($\Phi_H = \Phi_Q+\Phi_{SW}$), $\rho_0$ is the reference water density, and $SSS$ is the surface salinity measured in ppt. The density flux into the ocean is shown in figure~\ref{fig:rhoflux}, in units of $10^{-6} \cdot kg/(m^2 \;s)$, for the CLIM run (top panel). The effect of the sea--ice cover is taken into account in the computation of the density flux, and the model grid (distorted over the North Atlantic and Arctic) is used to avoid interpolation errors. The difference between the fluxes from the regressions in the last 200 years of regCLIM and CLIM is reported in the bottom panel of figure~\ref{fig:rhoflux}. Even if the changes are generally small (note the different colour scales in the figure), when the difference is averaged over the GIN seas and the Arctic Mediterranean (taking as southern boundaries the Bering strait and the latitude of the southern tip of Greenland), we find that the density flux decreases by $2\cdot 10^{-8}\; kg/(m^2 \; s)$. This value represents a 10\% decrease of the average density flux over the same area, which nicely fits the relative change in the maximum overturning strength.
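Numerically, equation~(\ref{eq:rhoflux}) can be evaluated as in the following sketch; the constant values of $\alpha$, $\beta$, $\rho_0$ and of the specific heat (here that of sea water) are illustrative, whereas in practice they vary with the local state.
\begin{verbatim}
import numpy as np

RHO0  = 1025.0   # reference water density, kg/m^3
C_PW  = 4000.0   # specific heat of sea water, J/(kg K)
ALPHA = 2.0e-4   # thermal expansion coefficient, 1/K
BETA  = 7.6e-4   # haline contraction coefficient, 1/psu

def density_flux(phi_h, phi_e, sss):
    """phi_h: total heat flux into the ocean (W/m^2); phi_e: net
    evaporation (kg/(m^2 s)); sss: surface salinity (ppt).
    Returns the surface density flux in kg/(m^2 s)."""
    thermal = -ALPHA / C_PW * phi_h
    haline = RHO0 * BETA * phi_e * sss / (1.0 - sss * 1.0e-3)
    return thermal + haline
\end{verbatim}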
The HCM, which by construction does not include any high frequency stochastic component, shows a strong reduction of variability, but the low frequency variability of the system seems to be preserved. To show this, a multi taper method (MTM) analysis \cite[]{Ghil2002} was performed on the time series of the maximum of the overturning streamfunction of the Atlantic. The analysis is performed on the yearly data of CLIM (using a longer control run, 1000 years long) and the last 1000 years of regCLIM (figure~\ref{fig:spectra}). At the lower end of the spectrum, below approximately $0.02 \; \mathrm{year} ^{-1}$, energy is concentrated at similar frequencies in the two models. At higher frequencies, instead, the broad peaks found in the HCM between $0.02\;\mathrm{year}^{-1}$ and $0.09\;\mathrm{year}^{-1}$ are not present in the original coupled model, while the peaks found above $0.1\;\mathrm{year}^{-1}$ in CLIM are lost in the HCM. The first empirical orthogonal function of $SST$ computed from the HCM also resembles the one from CLIM only in the northwestern Atlantic. This approach is thus limited when the internal variability of the ocean is of interest, but in the present work the focus is only on the quasi--equilibrium response. Atmospheric noise and lagged correlations are probably needed to better represent and excite the modes of variability of the system. As a final test, a pulse experiment was performed in the HCM. In this test, hereafter regPULSE, we apply the same freshwater anomaly as in PULSE (see section~\ref{sec:regressions}), likewise increased by 50\%, consistent with the corrections already applied in regCLIM. The initial conditions for regPULSE are provided by the final state of regCLIM: year 5200 of figure~\ref{fig:model_drift}. In regPULSE, as in PULSE, the anomaly is applied for 1000 years, letting the model reach a new equilibrium afterwards. We focus our analysis on the response of the system during the first hundred years of the run, where the regressions are expected to be significant. The AMOC maximum for regPULSE is reported in figure~\ref{fig:streams_reg} as a dashed line. The response of the AMOC in regPULSE, when measured by this quantity, closely follows that in PULSE. The only substantial differences are the lower initial condition and the weaker variability of the regPULSE signal. The weaker variability is no surprise, since our regressions depend only on $SST$ and do not add any high frequency variability to the system. Looking at the entire overturning streamfunction of the Atlantic, the results are also encouraging. On the right hand side of figure~\ref{fig:Psia}, the overturning of the collapsed state that is established after the first 100 years of the pulse experiment is compared between PULSE and regPULSE. In the top right panel of figure~\ref{fig:Psia}, the streamfunction of years 101--110 of the PULSE run is shown as a reference. The difference between regPULSE and PULSE during the same years is reported below. The results of the HCM are in good agreement with those of SPEEDO, showing a reversed cell only slightly weaker than in PULSE. The largest differences are at the southern border of the Atlantic basin, likely in connection with the general underestimation of the density flux over the Southern Ocean (figure~\ref{fig:rhoflux}). As for the barotropic streamfunction during the pulse experiments, the only significant differences are found in the Southern Ocean (not shown).
Over the Pacific sector of the Southern Ocean, the underestimation of the barotropic streamfunction represents about 20\% of the transport predicted by PULSE. This discrepancy is probably connected with an overestimation, in the regressed forcing, of the decrease of the southern westerly winds in response to the collapse of the AMOC. \section{Summary and Conclusions}\label{sec:Conclusions} In this paper we described a new technique for developing a global HCM that includes a basic representation of the feedbacks due to the ocean--atmosphere interaction, relevant for the stability of the AMOC. The steady state feedbacks of the system were represented through linear regression terms depending on the local deviation of $SST$ from its mean value. The large--scale response of the atmosphere to an externally forced AMOC collapse is included with a regression on the NH average temperature. The results of the regressions give a quantitative representation of the changes in the surface fluxes that is consistent with other model experiments~\cite[]{Stouffer2006,Krebs2007,Laurian2009}. In particular, we can detect the changes in heat flux at the surface due to the cooling of the NH after an AMOC collapse. Significant changes are also observed in the freshwater flux, in connection with the response of the atmospheric general circulation to the changes in the equator to pole temperature profile, which determine the response of the winds as well. The boundary conditions computed in section~\ref{sec:regressions} were then successfully used as a dynamic forcing for an ocean--only model. Forcing the ocean with this ``minimal atmospheric model'' reduces the computation time by a factor of ten to twenty with respect to the original coupled model. The ocean model forced by the regressions, which together form the HCM, reaches a steady state close to that of the original coupled model. Furthermore, an experiment is performed where the AMOC is collapsed both in the fully coupled model and in the ocean forced by the regressions. The two results are in good agreement. This enables us to use the HCM further to investigate the impact of the atmospheric feedbacks on the stability of the AMOC. In particular, the formulation of the forcing shown in section~\ref{sec:regressions} enables us to selectively choose which fluxes are fixed to a climatological value, and which ones are computed dynamically as a function of $SST$. We can thus investigate the impact of each feedback separately on quantitative grounds, and we can aim at a deeper understanding of the main physical processes involved in the collapse and recovery of the AMOC. It is also important to analyse the response of the HCM to weaker freshwater anomalies. As the anomaly that forces the AMOC collapse is reduced, the atmospheric feedbacks are likely to play an increasingly dominant role. The model can obviously be extended in many ways. Using higher order (nonlinear) models in the data fit is unlikely to be worth the effort. The study of the role of atmospheric noise and of correlations lagged in space and time, and their inclusion in the HCM, may instead greatly improve the representation of the atmosphere--ocean interaction with respect to the variability of the AMOC. As a final remark, we want to stress that our technique for designing the HCM is general. We do not rely on any ad hoc assumption connected with the nature of the EMIC that was used for this work.
For this reason, this technique is potentially interesting for many other problems (apart from the stability of the AMOC) where a computationally efficient, simple representation of the ocean--atmosphere interaction is desired. For instance, instead of using data from the atmospheric component of SPEEDO, the ocean component could be coupled to a statistical atmosphere derived from a state--of--the--art coupled climate model or from reanalysis data, at least for the computation of the local regressions. \begin{acknowledgments} This work is funded by the Netherlands Organisation for Scientific Research. We acknowledge Camiel Severijns (KNMI) for his valuable technical support, and Matthijs den Toom (IMAU) for stimulating discussions. \end{acknowledgments}
\section{Introduction\label{s.Introduction}} The plant hormone auxin plays a fundamental role in plant development \citep{Reinhardt,Reinhardt2}, and its spatial distribution in plant tissues is critical for plant morphogenesis. Auxin accumulation is spatially localized in specific sets of cells, where it induces the emergence of new primordia \citep{Reinhardt}. A fundamental problem consists in understanding how such auxin maxima appear, and how they induce the regular patterns observed in plants (see e.g. \citet{Traas}). On the other hand, experiments show that phyllotaxis strongly depends on the physical properties of the plant, more precisely on elasticity \citep{Green,Dumais1,Dumais2}, and physical forces provide information for plant patterning \citep{Traas}. Basically, turgor pressure induces stress, which is related to the associated deformation or strain through Young's constants: see e.g. \citet{Boudaoud}, where these notions are explained in the context of plant growth. Experiments have shown that lowering the stiffness of cell walls in the meristem leads to the emergence of new primordia \citep{Hamant2}. However, the interaction between physical and biochemical control of phyllotaxis is still poorly understood. Recently, new biologically plausible mathematical models of auxin transport have been proposed \citep{Barbier,Heisler,Jonsson,Smith1}, each of them being able to reproduce some aspects of phyllotaxis in simulations. New mathematical models were also proposed for plant mechanics \citep{Mjolsness}, and for the interaction between mechanics and biochemistry \citep{Shipman,Newell}. In the latter, the authors use the model for the polar auxin flux proposed in \citet{Jonsson} to model the stress field in their mechanical model. It should be stressed that all these models are based on hypotheses that have not been verified experimentally; however, they provide new scenarios for understanding plant growth that can be tested experimentally. \begin{figure}[h!] \centering \includegraphics[height=5cm]{Fig1.jpg} \caption{\rm{Inflorescence shoot apical meristem of \textit{Arabidopsis thaliana}. Zones with high auxin concentration are highlighted by the yellow fluorescent signal of the auxin reporter DR5::YFP. The red signal highlights the cell walls, stained with propidium iodide. }\label{phyllo}} \end{figure} Auxin occurs in various plant tissues, where it is moved by polar cellular transport in various directions; this can explain developmental patterning phenomena such as vein formation, see e.g. \citet{Scarpella} or \citet{Bayer}. In the following, we consider the models in \citet{Jonsson} and \citet{Smith1}, based on polar auxin flux. Polar auxin flux results from the uneven accumulation of the auxin transport regulator PIN in cell membranes. An essential component is a positive feedback between auxin flux and PIN localization, resulting in the reinforcement of polar auxin transport along dedicated routes which develop into vascular tissues. We will not enter here into these considerations, but focus on simple models of transport processes (see e.g. the discussion in \citet{Jonsson} and \citet{Shipman}), where a quasi-equilibrium is assumed for PIN proteins. The molecules present in some cell $i$ may be transported to any neighbouring cell $j$, but they are preferentially transported to the neighbours with the highest auxin concentrations. Traditionally, models of patterning and morphogenesis have used reaction-diffusion theory.
Turing showed that, under some hypotheses, a combination of diffusion and a chemical reaction could give rise to the regular patterns observed in phyllotaxis \citep{Turing}. Interesting models are described in \citet{Meinhardt,Thornley}, which can, under some hypotheses, predict phyllotactic patterns. As stated previously, however, the auxin flux is strongly polarized, a phenomenon that cannot be described with reaction-diffusion models. The recent mathematical models given in \citet{Barbier,Jonsson,Smith1} are based on transport processes. Mathematically, mass transport processes are not well understood, and their study is a challenging problem. We propose here a mathematical study of the related dynamical systems. We focus on their critical points and analyse their geometrical structure and stability. Besides stable auxin peaks, the model generates intervening areas of auxin depletion, as is observed experimentally. These auxin depleted sites reflect an indirect repulsion mechanism, since auxin molecules diffusing through the tissue will be attracted to the peaks and diverted from the depleted areas. This idea of a repulsion or spacing mechanism was already considered a long time ago \citep{Hofmeister}. The auxin flux is present everywhere in the plant, so we describe the various plant cells as a connected graph $(\Lambda,E)$. The node set $\Lambda$ represents the cells and $E$ the set of edges. Any edge $e=(i\to j)$, $i$, $j\in\Lambda$, indicates that some auxin molecule can move from cell $i$ to cell $j$. This graph is undirected, and we write $i\sim j$ to denote that cells $i$ and $j$ are nearest neighbours, so that auxin can move from cell $i$ to cell $j$ at some rate $q_{ij}$. These transition rates are not well understood at the present time, and one must rely on simple models. They should capture the fact that an auxin molecule present in some cell $i$ has the tendency to move to a cell $j\sim i$ when the concentration $a_j$ of auxin molecules present in cell $j$ is high. The simplest model accounting for this idea is given by \citet{Jonsson} \begin{equation}\label{Rates} q_{ij} = \frac{a_j}{\kappa + \sum_{k\sim i}a_k}, \end{equation} for some positive constant $\kappa$, which is of Michaelis-Menten or Monod type. Let $L=\vert\Lambda\vert$ be the number of cells. In the model given in \citet{Jonsson} (see also \citet{Smith1,Sahlin}), $a_i(t)$, for $i=1,\cdots , L$, denotes the concentration or the number of auxin molecules in cell $i$ at time $t$, and is assumed to evolve according to the differential equations \begin{equation}\label{eq:jonsson} \frac{\mathrm{d}a_i}{\mathrm{dt}} = f_i(\vect{a}) = D \sum_{k \sim i} (a_k -a_i) + T\sum_{k \sim i} \Big ( a_k \underbrace{\frac{a_i}{\kappa + \sum_{j \sim k} a_j }}_{=q_{ki}(\vect{a})} - a_i \underbrace{\frac{a_k}{\kappa + \sum_{j \sim i} a_j }}_{=q_{ik}(\vect{a})} \Big), \end{equation} for $i=1, \ldots, L$. The term $a_i q_{ik}$ gives the mean number of auxin molecules moving from cell $i$ to cell $k$ per unit time, and $D \sum_{k \sim i} (a_k -a_i)$ is a diffusive part, usually assumed to be weak, with a small diffusion coefficient $D$. The second term corresponds to the mass transport process, which is known to be the main driver of the patterning process in plants.
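To make the dynamics (\ref{eq:jonsson}) concrete, the following minimal sketch (in Python; the cycle graph, the parameter values and the forward Euler scheme are illustrative choices of ours, not taken from \citet{Jonsson}) integrates the system for a small number of cells:

\begin{verbatim}
import numpy as np

def rhs(a, Gamma, D=0.0, T=1.0, kappa=1.0):
    # N_i = kappa + total auxin in the neighbours of cell i
    N = kappa + Gamma @ a
    # q[i, j] = a_j / N_i on the edges of the graph, 0 otherwise
    q = Gamma * (a[None, :] / N[:, None])
    transport = T * ((a * q.T).sum(axis=1) - a * q.sum(axis=1))
    diffusion = D * (Gamma @ a - Gamma.sum(axis=1) * a)
    return diffusion + transport

L = 12
Gamma = np.zeros((L, L))                  # adjacency matrix of a cycle
for i in range(L):
    Gamma[i, (i + 1) % L] = Gamma[i, (i - 1) % L] = 1.0

rng = np.random.default_rng(0)
a = 1.0 + 0.01 * rng.standard_normal(L)   # slightly perturbed uniform state
dt = 0.01
for _ in range(200_000):                  # forward Euler integration
    a += dt * rhs(a, Gamma)
print(a.round(3), a.sum())                # the total mass is conserved
\end{verbatim}

One can check directly that the total mass is conserved along the integration, and the long time state already hints at the concentration of auxin into isolated cells analysed below.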
One can add auxin production and degradation terms, but there is no clear biological evidence about where auxin is produced, and experiments show that it is not produced in the meristem, but imported from the leaves \citep{Reinhardt,Reinhardt3}. \subsection{Results} Direct quantitative measurements of the auxin distribution in plant tissues are very difficult due to the small size of the meristematic tissues at the time of patterning. Therefore, biologists rely on indirect markers based on auxin-regulated genes that encode fluorescent proteins. Figure \ref{phyllo} shows a typical output, where domains rich in auxin appear as regions of strong fluorescence. The pattern is quite noisy; this might be due either to the indirect nature of the experiments, or to the fact that the number of auxin molecules is not very high. Equation (\ref{eq:jonsson}) might model the limiting behavior of this random particle system when the number of molecules tends to infinity. We introduce such a particle system in Section \ref{Stochastic} and justify equations like (\ref{eq:jonsson}) using a law of large numbers. We then focus on the properties of (\ref{eq:jonsson}), like the non-negativity of the solutions (see Proposition \ref{Invariance}). This dynamical system can be written in the compact form $$\frac{\mathrm{d}\vect{a}}{\mathrm{dt}}=f(\vect{a}),$$ where $\vect{a}(t) = (a_i(t))_{1\le i\le L}$ is the vector of auxin concentrations. The related critical points are the vectors $\vect{a}^*$ satisfying $f(\vect{a}^*)=0$. They are the candidates for describing the equilibrium auxin concentrations. For example, $a_i =0$ means that there are (almost) no auxin molecules in cell $i$, while a subset of cells $J$ such that $a_j >0$ for $j\in J$ indicates a hot spot which might correspond to an auxin peak. The critical points play a fundamental role in the dynamics, and one can suspect that any solution $\vect{a}(t)$ of (\ref{eq:jonsson}) will approach such critical points when $t$ is large. Of course, this is false for general dynamical systems, but here the model is supposed to capture pieces of biological reality, and the robustness of the regular geometries observed in plants suggests that this might well be the case. Some of these critical points are repulsive or unstable, that is, the orbits or the solutions of (\ref{eq:jonsson}) will avoid them. On the contrary, some of them will be attractive. Given a critical point $\vect{a}^*$, a mathematical way of checking the stability or instability of $\vect{a}^*$ is to compute the Jacobian ${\rm d}f(\vect{a}^*)$ and to examine its spectrum, that is, the set of all eigenvalues of ${\rm d}f(\vect{a}^*)$. For example, $\vect{a}^*$ is unstable when there is an eigenvalue having a positive real part. \begin{definition} We say that a critical point $\vect{a}$ is stable when all the eigenvalues of the Jacobian evaluated at $\vect{a}$ have non-positive real parts. \end{definition} Section \ref{s.CriticalPoints} is concerned with the characterization of the set of critical points, mainly focusing on pure transport processes. For $D=0$, we first consider critical points $\vect{a}>0$, meaning that $a_i > 0$ for all $i$.
Corollary \ref{CriticalAdjacency} shows that such elements are precisely the positive solutions of the linear equation \begin{equation}\label{Linear} \Gamma \vect{a} = c \ {\bf 1},\ c\text{ constant}, \end{equation} where $\Gamma$ is the adjacency matrix of the graph $G$, with entries $\Gamma_{ij}\in \{0,1\}$ such that $\Gamma_{ij} = 1$ if and only if cells $i$ and $j$ are nearest neighbours, and ${\bf 1}$ is the vector having all components equal to 1. Next, we focus on critical points such that $a_i =0 $ for $i$ belonging to some subset $I\subset \Lambda = \{1,\cdots, L\}$. They correspond to auxin depleted cells. The graph decomposes into a product of sub-graphs $\gamma$, which are the connected components of the sub-graph of $G$ induced by the node set $J=\Lambda\setminus I$. We thus look for $\vect{a}$ having positive components $a_j > 0$ for $j\in J$, which should correspond in some sense to auxin peaks. We obtain the distribution of auxin in such components, denoted by $\vect{a}\vert_\gamma$, by solving the linear systems ${\Gamma_\gamma \vect{a}\vert_\gamma = c_\gamma {\bf 1}\vert_\gamma}$. A typical example of such configurations is given in Figure \ref{stable}, where the elements of $I$ are black and the various components $\gamma$ red. We then turn to the asymptotic behavior of the solutions of system (\ref{eq:jonsson}), and establish in Proposition \ref{ConvergenceAuxin} that every solution converges toward the set of critical points. Our technique is based on Lyapunov functions, that is, we look for a function $H(\vect{a})$ which should be decreasing along the orbits of (\ref{eq:jonsson}), like the energy in physics. We prove that, for pure transport processes with $D=0$, the function $$H(\vect{a})= -\kappa \langle{\bf 1},\vect{a}\rangle -\frac{1}{2}\langle\vect{a},\Gamma \vect{a}\rangle,$$ where $\langle\cdot,\cdot\rangle$ denotes the scalar product, satisfies $$\frac{{\rm d}H(\vect{a}(t))}{{\rm d}t}\le 0$$ for any solution of (\ref{eq:jonsson}). \citet{Newell} also considered the differential system (\ref{eq:jonsson}) by taking a spatially continuous limit, and showed that the limiting equation is a p.d.e. similar to the von Karman equations from nonlinear elasticity theory: $$\frac{\partial w}{\partial t} = \triangle^2 w + P \triangle w + \mathrm{const} \cdot w + \text{ nonlinear terms}.$$ The von Karman equations are of gradient type (see e.g. \citet{Shipman}), where the potential is given by the elastic energy. These energy functionals were then used in \citet{Newell} and \citet{Newell2} to provide a very interesting mechanical explanation, based on buckling, of the appearance of Fibonacci numbers in plant patterns. However, the limiting equations associated with the auxin flux are not of gradient type; see the discussion in \citet{Newell}. For the basic dynamical system (\ref{eq:jonsson}), our result shows that the system minimizes the energy $H$ without being of gradient type. Section \ref{SpecialClass} considers stability, and Proposition \ref{StabilityGeneral} shows that the Jacobian ${\rm d}f(\vect{a}) = (\partial f_i/\partial a_j)$ evaluated at such a critical point $\vect{a}$ is given by $${\rm d}f(\vect{a})=\frac{1}{N^2}d(\vect{a})\Gamma \Big(c\ {\rm id}-d(\vect{a})\Gamma\Big),$$ where $d(\vect{a})$ is the diagonal matrix with diagonal given by $\vect{a}$. This permits checking the stability of the critical points for various graphs. We present various results on graphs of interest for plant patterning questions, like the circle or the two-dimensional grid.
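As an aside, the linear system (\ref{Linear}) is easy to explore numerically. The short Python sketch below (our own illustration, with an arbitrary cycle length) computes a particular solution of $\Gamma \vect{a} = {\bf 1}$ on a cycle together with the dimension of the null space of $\Gamma$; for $L$ a multiple of 4 this dimension is 2, in agreement with the two-parameter family of critical points described below for cycles:

\begin{verbatim}
import numpy as np

L = 12                                    # cycle length, a multiple of 4
Gamma = np.zeros((L, L))
for i in range(L):
    Gamma[i, (i + 1) % L] = Gamma[i, (i - 1) % L] = 1.0

# Minimum-norm solution of Gamma a = 1 and the null space of Gamma:
a_part, *_ = np.linalg.lstsq(Gamma, np.ones(L), rcond=None)
s = np.linalg.svd(Gamma, compute_uv=False)
print("residual:", float(np.linalg.norm(Gamma @ a_part - 1.0)))
print("null space dimension:", int((s < 1e-10).sum()))  # 2 when 4 | L
\end{verbatim}

Any solution of (\ref{Linear}) is then of the form $c\,\vect{a}_{\rm part}+\vect{v}$ with $\vect{v}$ in the null space; the critical points are the ones with positive entries and the prescribed total mass.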
As stated previously, the positive solutions $\vect{a}\vert_\gamma$ to the linear system $\Gamma_\gamma \vect{a}\vert_\gamma = c_\gamma {\bf 1}\vert_\gamma$ provide the restrictions of the critical points to the connected components $\gamma$. We give a particularly simple condition on the sub-graph $\gamma$ of $G$ induced by the set $J=\Lambda\setminus I$ ensuring the instability of $\vect{a}\vert_\gamma$. Let ${\cal N}_i$, $i\in \Lambda$, be the neighbourhood of $i$, that is the set of nodes $j$ such that $j\ne i$ and $j\sim i$. The configuration $\vect{a}\vert_\gamma$ is unstable when the sub-graph $\gamma$ contains a path on four nodes, of the form $$i_0 \to i_1 \to i_2 \to i_3,$$ such that $$i_1 \in {\cal N}_{i_0},\ i_2 \not\in {\cal N}_{i_0} \text{ and } i_3\not\in {\cal N}_{i_0}.$$ \begin{figure}[h!] \centering \includegraphics[width=6cm]{Fig2.png} \caption{ Example of components $\gamma$ of the two-dimensional grid that can potentially yield stable configurations, see Corollary \ref{Unstable}. \label{subgraph}} \end{figure} For example, if $G$ is a two-dimensional grid, any stable configuration is composed of patches of the basic building blocks given in Figure \ref{subgraph}. These patterns are, however, not geometrically regular in general; see Figure \ref{stable}. The more involved model of \citet{Smith1}, which uses PIN proteins in a direct way (here we assume a quasi-equilibrium, see \citet{Jonsson}), produces more regular patterns in simulations. In that setting, the transition rates are taken to be exponential functions of the concentrations. Hence, a strong selection based on rates of the form $\exp( b a_i)$, $b>0$, instead of the linear function $a_i$, seems to regularize the critical points. Of course, it might be interesting to justify such a choice biologically. We also argue in what follows that the critical configurations produced by the auxin flux might be more regular when coupled to periodic potentials. \begin{figure}[h!] \centering \includegraphics[height=5cm]{Fig3} \caption{\rm A potentially stable configuration when the graph $G$ is a rectangular grid, for the pure transport process. The black circles correspond to the values $a_i =0$, $i\in I$ (the set of auxin depleted cells), while the red circles are such that $a_i > 0$, corresponding to auxin peaks. One can construct the set of all stable configurations by playing with the building blocks given by the square, the star, and the various parts of the star. This shows that the dynamical system (\ref{eq:jonsson}) does not necessarily produce regular patterns. We can however give examples where such configurations are unstable, see Section \ref{StabilityGrid}.\label{stable}} \end{figure} It might well be that the auxin flux self-organizes into regular patterns when coupled to mechanical forces, for example, as already stated in the Introduction; see \citet{Newell}. In the same spirit, we introduce a simple model coupling the auxin flux to a potential $\phi$, which might model deformations, curvature or effects related to the meristem elasticity. We provide an example of the form \begin{equation}\label{Pot} \frac{{\rm d}a_i(t)}{{\rm d}t}=f_i(\vect{a})+\sum_{j\sim i}(a_j \phi_i -a_i \phi_j), \end{equation} $i=1,\ldots,L$. If the potential itself has some regularities, as is the case in the specific model given in \citet{Newell}, the auxin flux will exhibit much more regular patterns, see e.g. Figure \ref{Fig4}.
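The coupled system (\ref{Pot}) can be simulated along the same lines. The sketch below (again an illustration of ours; the torus size, the time step and the potential amplitude are arbitrary) reuses the function \texttt{rhs} of the earlier sketch and adds the potential term, with the sinusoidal $\phi$ used in Figure \ref{Fig4}:

\begin{verbatim}
import numpy as np
# Reuses rhs() from the earlier sketch (here with D = 0, T = 1).

A, B = 16, 16                               # torus dimensions
L = A * B
idx = lambda x, y: (x % A) * B + (y % B)    # periodic grid indexing
Gamma = np.zeros((L, L))
for x in range(A):
    for y in range(B):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            Gamma[idx(x, y), idx(x + dx, y + dy)] = 1.0

xs, ys = np.meshgrid(np.arange(A), np.arange(B), indexing="ij")
phi = (np.sin(4 * np.pi * xs / A) * np.sin(4 * np.pi * ys / B)).ravel()

a, dt = np.ones(L), 0.01                    # flat initial state
for _ in range(100_000):
    coupling = phi * (Gamma @ a) - a * (Gamma @ phi)  # potential term of (Pot)
    a += dt * (rhs(a, Gamma) + coupling)
print(a.reshape(A, B).round(2))
\end{verbatim}

Note that the coupling term $\sum_{j\sim i}(a_j \phi_i -a_i \phi_j)=\phi_i(\Gamma\vect{a})_i - a_i(\Gamma\phi)_i$ conserves the total mass, so the same consistency check as before applies.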
\begin{figure} \begin{minipage}{0.5\textwidth} \begin{center} \includegraphics[height=4cm]{Fig4a.jpg} \vspace{3mm} (a) $\vect{a}(t)$ for $t\approx 0$ \end{center} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{center} \includegraphics[height=4cm]{Fig4b.jpg} \vspace{3mm} (b) $\vect{a}(t)$ for large $t$. \end{center} \end{minipage} \caption{Simulation of the orbits of the differential equation (\ref{Pot}) with $T=1, D=0$ and a potential $\phi(x,y)= \sin(4\pi x/A)\sin(4\pi y/B)$ on a torus, where $x=1,\cdots,A$ and $y=1,\cdots,B$. The initial state is flat. (b) shows the state $\vect{a}(t)$ for large $t$: one sees regularly spaced auxin peaks, which are isolated in a background of auxin depleted cells. The potential and transport terms thus drift the process toward more regular patterns, while the transport process creates domains of auxin depletion. \label{Fig4}} \end{figure} Finally, the model provides an interesting conclusion: for most graphs, stable configurations are composed of building blocks isolated in a sea of auxin depleted cells. This might be the basis for repulsion between primordia: auxin molecules will tend not to move toward these depleted zones, leading to indirect repulsion. The idea of such a repulsive force appeared a long time ago in the work of \citet{Hofmeister}. Many authors have used this hypothesis to develop very interesting mathematical models, all leading to phyllotactic patterns observed in nature, like Fibonacci numbers, the Golden Angle or helical lattices; see \citet{Adler,Atela,Douady,Kunz,Levitov}. \section{A stochastic model of auxin transport \label{Stochastic}} We consider a stochastic process related to the differential equation (\ref{eq:jonsson}), describing the random numbers of auxin molecules $\eta_t(i)\in\mathbb{N}$ present in cell $i$ at time $t$, $i=1,\cdots, L$. The state space of this stochastic process is denoted by $\Omega_L = \mathbb{N}^{\Lambda}$, where $\Lambda$ is the set of $L$ cells (the nodes of the graph). Looking at equation (\ref{eq:jonsson}), we define the transitions by supposing that any auxin molecule present in cell $i$ at time $t$ can be transported to a neighbouring cell $j$ at a rate $\bar q_{ij}(\eta)$ of the form \begin{equation}\label{StochasticRates} \bar q_{ij}(\eta)=\frac{\eta (j)}{\bar\kappa +\sum_{k\sim i}\eta (k)}, \end{equation} when $\eta (i)\ge 1$. This defines a Markov process with state space $\Omega_L$, describing the stochastic moves of the various auxin molecules. Let $M$ denote the total number of molecules. It turns out that the ordinary differential equation (\ref{eq:jonsson}) describes the large $M$ limit of the stochastic process (weak noise limit). The random particle system is then described as a gaussian process $X_M(t)\approx \eta_t /M$ in $\mathbb{R}^L$, drifted by the solution $\vect{a}(t)$ of (\ref{eq:jonsson}), for some covariance function. This approximation is mathematically rigorous if the constants $\kappa$ and $\bar\kappa$ are related in such a way that $\bar\kappa = M \kappa$; the limiting behavior of the rescaled number of auxin molecules is then such that $\eta_t (i)/M \approx a_i(t)$, where $\vect{a}(t)$ solves (\ref{eq:jonsson}), with $\sum_{i\in\Lambda}a_i(t)\equiv 1$. Such stochastic particle systems are known as {\it density dependent population processes}; the above limit has been treated in detail in \citet{Ethier} and corresponds to a law of large numbers. Notice that different kinds of limits can also be considered.
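For readers who wish to experiment with the particle system itself, the following sketch (our own illustration; the number of cells, the number of molecules and the time horizon are arbitrary) simulates the Markov process with rates (\ref{StochasticRates}) using the standard Gillespie algorithm, with $\bar\kappa = M\kappa$ as above:

\begin{verbatim}
import numpy as np

def gillespie_step(eta, Gamma, kappa_bar, rng):
    # Total rate of i -> j moves: eta[i] * eta[j] / (kappa_bar +
    # sum of eta over the neighbours of i), cf. (StochasticRates).
    N = kappa_bar + Gamma @ eta
    rates = Gamma * (eta[:, None] * eta[None, :]) / N[:, None]
    total = rates.sum()
    if total == 0.0:
        return np.inf                       # absorbing state
    k = rng.choice(rates.size, p=(rates / total).ravel())
    i, j = np.unravel_index(k, rates.shape)
    eta[i] -= 1                             # one molecule jumps i -> j
    eta[j] += 1
    return rng.exponential(1.0 / total)     # waiting time of the jump

L, M, kappa = 12, 600, 1.0
Gamma = np.zeros((L, L))                    # cycle graph, as before
for i in range(L):
    Gamma[i, (i + 1) % L] = Gamma[i, (i - 1) % L] = 1.0
rng = np.random.default_rng(1)
eta = rng.multinomial(M, np.ones(L) / L).astype(float)
t = 0.0
while t < 50.0:
    t += gillespie_step(eta, Gamma, kappa_bar=M * kappa, rng=rng)
print(eta / M)        # to be compared with a solution a(t) of the o.d.e.
\end{verbatim}

Comparing $\eta_t/M$ with a numerical solution of (\ref{eq:jonsson}) illustrates the law of large numbers and the size of the $1/\sqrt{M}$ fluctuations.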
Stochastic mass transport processes of this type have also appeared in physics, where they are known as {\it generalized zero range processes}, see e.g. \citet{Evans,Godreche,Grosskinsky,Kipnis}. In this setting, hydrodynamical limits are considered, where both $M$ and $L$ tend simultaneously to $\infty$ in such a way that $M=\rho L$, for a fixed density $\rho$. Simulations show the appearance of condensates when $\rho$ is larger than a critical threshold $\rho_c$, which might represent auxin peaks in some way. Mathematically, the theory of condensation is not developed at the present time for these general processes, so we focus here on the weak noise limit. \\ The gaussian approximation of $\eta_t /M$ is defined as follows: for $i=1,\cdots, L$, consider the unit vectors $e_i$ with $e_{i}(j)=0$ when $j\ne i$ and $e_i(i)=1$. Let ${\rm d} f$ be the Jacobian ${\rm d} f = (\partial f_i/\partial a_j)_{i,j=1,\cdots,L}$. For simplicity, we illustrate the transition rates for cells arranged along a circle: the rate functions are given by functions $\beta_l(\vect{a})$, $l\in \mathbb{Z}^L$, satisfying \begin{align*} \beta_{e_{i+1}-e_i}(\vect{a}) &= D a_i +T\frac{a_{i+1}a_i}{\kappa +a_{i-1}+a_{i+1}} \ ; &\mathrm{for} \; i= 1, \ldots, L, \\ \beta_{e_{i-1}-e_i}(\vect{a}) &= D a_i +T\frac{a_{i-1}a_i}{\kappa + a_{i-1}+a_{i+1}} \ ; &\mathrm{for} \; i= 1, \ldots, L,\\ \beta_{l}(\vect{a}) &=0 & \mathrm{for} \; l \neq e_{i-1}-e_{i},\ e_{i+1}-e_{i}. \end{align*} For example, $e_{i+1}-e_i$ means that an auxin molecule of cell $i$ has been transported to cell $i+1$. For arbitrary graphs, the definitions of the rates $\beta_l$ are similar. With these notations, we can define the matrix $$G(\vect{a})=\sum_{l\in\mathbb{Z}^L}\beta_l(\vect{a})l l^*,$$ which will be an essential element of the covariance matrix associated with the gaussian approximation. Consider the following matrix valued differential equation $$\frac{\partial \phi(t,s)}{\partial t}={\rm d} f(\vect{a}(t))\phi(t,s),\ \ \phi(s,s)={\rm id}.$$ Then, for large $M$, one gets that (see e.g. \citep{Ethier}) $$ \frac{\eta_t}{M}\approx \vect{a}(t) + \frac{1}{\sqrt{M}}V_t,$$ where $V_t$ is a gaussian process of mean $\phi(t,0)V(0)$ and of covariance function $${\rm Cov}(V(t),V(r)) = \int_0^{\min\{t,r\}}\phi(t,s)G(\vect{a}(s))\phi(r,s)^* {\rm d}s.$$ \section{Basic properties of the auxin flux\label{s.BasicProperties} } \begin{proposition}\label{Invariance} Every solution $\vect{a}$ of (\ref{eq:jonsson}) starting in $\mathbb{R}^L_{\geq 0}$ remains non-negative, and is conservative, that is, $$\forall t \in \mathbb{R}_{\geq 0}, \; \sum_{i=1}^L a_i(t) =\sum_{i=1}^L a_i(0) = \rho L.$$ Moreover, the system (\ref{eq:jonsson}) admits a unique solution defined over $ [0,+\infty)$. If $a_i(0)>0$, then $a_i(t) > 0$, $\forall t > 0$. For pure transport processes with $D=0$, $a_i(0)=0 \Rightarrow a_i(t)\equiv 0,\ \forall t > 0$. \end{proposition} The proof of Proposition \ref{Invariance} is given in Section \ref{Appendix}. Let us rewrite the system \eqref{eq:jonsson}, for $1\leq i\leq L$, as \begin{equation}\label{Diff-ai} \dot a_i=D\sum_{k\sim i}a_k+T\sum_{k\sim i}\Big(\frac{a_k}{\kappa+\sum_{j\sim k}a_j}-\frac{a_k}{\kappa+\sum_{j\sim i}a_j}-\frac{D}{T}\Big)a_i, \end{equation} with the initial condition $\vect{a}(0)\in \mathbb{R}_+^L$. \begin{proposition} If the graph is connected and $D>0$, the only critical point of \eqref{Diff-ai} in $\mathbb{R}_+^L$ admitting zero components is the origin.
\end{proposition} \begin{proof} Let $a_i=0$, where $a_i$ is the $i$-th component of a critical point $\vect{a} \in \mathbb{R}_+^L$ of \eqref{Diff-ai}. Clearly, \eqref{Diff-ai} entails $\sum_{k\sim i}a_k=0$, and, by the non-negativity of each term, $a_k=0$ for all $k\sim i$. Since the graph is connected, we deduce that $a_k=0$ for all $1\leq k \leq L$. \end{proof} \begin{proposition} Let us assume that the graph is connected and $D>0$. If $\sum_{k=1}^L a_k(0)>0$, then for all $i\in\{1,...,L\}$, we have $\underline \lim_{t\rightarrow +\infty}a_i(t)>0.$ \end{proposition} To prove the previous proposition, we will use the following result, see \citet{Gabriel}. \begin{proposition} Let $f:\mathbb{R}_+\rightarrow \mathbb{R}$ be twice differentiable and bounded together with $\ddot f$. If, as $n\rightarrow +\infty$, $t_n\uparrow +\infty$ and $f(t_n)\rightarrow \underline \lim_{t\rightarrow +\infty} f(t)$ (or $f(t_n)\rightarrow \overline \lim_{t\rightarrow +\infty} f(t)$), then $\dot f(t_n)\rightarrow 0$. \end{proposition} \begin{remark} \begin{enumerate} \item[(1)] The boundedness of $f$ and $\ddot f$ implies that of $\dot f$. \item[(2)] The assumptions in the preceding proposition can be weakened without changing the proof essentially: ``$f:\mathbb{R}_+\rightarrow \mathbb{R}$ be twice differentiable and bounded together with $\ddot f$'' can be replaced by ``$f:\mathbb{R}_+\rightarrow \mathbb{R}$ is bounded and differentiable and $\dot f$ is uniformly continuous''. \end{enumerate} \end{remark} \begin{proof} If $a_k(0)=0$ for all $1\leq k\leq L$, then the unique solution is identically zero. Otherwise, $\sum_{k=1}^L a_k(0)>0$. Let us suppose that for some $i\in\{1,...,L\}$, $$\underline \lim_{t\rightarrow +\infty}a_i(t)=0.$$ Let us introduce the notation $\underline a_i=\underline\lim_{t\rightarrow +\infty}a_i(t)$. Since $a_i(t)$ is bounded together with its second derivative, the preceding proposition applies, and for any sequence $t_n\uparrow +\infty$ such that $a_i(t_n)\rightarrow \underline a_i$, we have $\dot a_i(t_n)\rightarrow 0$ as $n\rightarrow +\infty$. Since every $a_k(t_n)$ appearing in the right-hand side of the equation for $\dot a_i(t_n)$ is bounded and $a_i(t_n)\rightarrow 0$, we conclude that $\lim_{n\rightarrow +\infty}D\sum_{k\sim i}a_k(t_n)=0$. The non-negativity of each $a_k(t_n)$ entails $\lim_{n\rightarrow +\infty}a_k(t_n)=0=\underline a_k$ for every ${k\sim i}$. According to the above proposition, $\lim_{n\rightarrow +\infty}\dot a_k(t_n)=0$ for every ${k\sim i}$, and since the graph is connected, repeating the same argument provides $\lim_{n\rightarrow +\infty} a_j(t_n)=0$ (and $\lim_{n\rightarrow +\infty}\dot a_j(t_n)=0$) for every $j\in\{1,...,L\}$. Thus $0=\lim_{n\rightarrow +\infty}\sum_{1\leq j\leq L}a_j(t_n)=\sum_{k=1}^L a_k(0)>0$, a contradiction. \\ \\ As a consequence, for $D>0$, it is impossible to have $\lim_{t\rightarrow +\infty}a_i(t)=0$, and thus none of the compartments can become empty asymptotically. \end{proof} \section{Tools from Markov Chain theory\label{s.Tools} } We will use notions from Markov chain theory, and hence consider generators $Q:\ \Lambda \times \Lambda \longrightarrow \mathbb{R}$, $Q =\{q_{ij},\ i,\ j\in\Lambda\}$, such that $$q_{ij}\ge 0,\ \text{ for } i\ne j \text{ and } q_{ii}=-\sum_{j\ne i}q_{ij}.$$ For example, the auxin flux described by (\ref{eq:jonsson}) implicitly contains a generator $Q(D,T,\vect{a})$ given by \begin{equation}\label{LaplaceBasic} \begin{cases} q_{ij}(D,T,\vect{a})= D + T q_{ij}(\vect{a}),& i \sim j, \\ q_{ij}(D,T,\vect{a})=0, & i \nsim j,\ i \neq j,
\\ q_{ii}(D,T,\vect{a})= - \sum_{j\neq i} q_{ij}(D,T,\vect{a}). \end{cases} \end{equation} where we set $$q_{ij}(\vect{a})= \frac{a_j}{\kappa + \sum_{k \sim i} a_k }.$$ $Q$ is {\bf irreducible} when for any pair of nodes $(i,j)$, there is a path $i_0 =i\to i_1\to i_2\to \cdots \to i_k =j$ such that $q_{i_n i_{n+1}}>0$, $n=0,\cdots, k-1$. When $Q$ is irreducible, one can prove that there is a unique {\bf invariant probability measure} $\pi$ satisfying $\pi^* Q =0$. An irreducible transition kernel $Q$ of invariant probability measure $\pi$ is said to be {\bf reversible} when $$\pi_i q_{ij}\equiv \pi_j q_{ji},\ \forall i\ne j.$$ \section{Characterization of the critical points\label{s.CriticalPoints} } We can write (\ref{eq:jonsson}) in the more compact form $$\frac{\mathrm{d}a_i}{\mathrm{dt}} = f_i(\vect{a})=\sum_{j\sim i} ( a_j q_{ji}(D,T,\vect{a})-a_i q_{ij}(D,T,\vect{a})), \ \ \frac{\mathrm{d}\vect{a}}{\mathrm{dt}} = f(\vect{a})= \vect{a}^{\ast}Q(D,T,\vect{a}).$$ Our first aim is to look for the critical points of the above dynamical system, that is, to find the elements $\vect{a}\in\mathbb{R}^L$ solving the equations $f(\vect{a})=0$, which can be rewritten as $\vect{a}^{\ast}Q(D,T,\vect{a})=0$. Hence, any solution to $f(\vect{a})=0$ is an invariant measure associated with the transition function $Q(D,T,\vect{a})$. We will use the following facts: \begin{itemize} \item{} When $D>0$, the generator $Q(D,T,\vect{a})$ is irreducible. \item{} For pure transport processes where $D=0$ and $T>0$, $Q(0,T,\vect{a})$ is irreducible if and only if $a_i >0$, $\forall i$. \end{itemize} In the irreducible case, let $\pi(\vect{a})$ denote the associated positive invariant probability measure. We thus look for $\vect{a} > 0$ such that \begin{equation}\label{FundamentalEquation} \frac{\vect{a}}{\sum_{i\in\Lambda}a_i} =\pi(\vect{a}). \end{equation} \subsection{The irreducible case} \subsubsection{Pure transport processes} If $Q(0,T,\vect{a})$ is reversible, the equation $f(\vect{a})=0$ is equivalent to the set of detailed balance equations \begin{equation}\label{LocalEquation} a_i q_{ij}(0,T,\vect{a})\equiv a_j q_{ji}(0,T,\vect{a}),\ \ i\ne j. \end{equation} In what follows, we will use the functions \begin{equation}\label{N} N_k =N_k(\vect{a}) = \kappa + \sum_{j\sim k}a_j. \end{equation} \begin{lemma}\label{ReversibilityPureTransport} Let $G$ be a connected graph. Assume that $D=0$ and $T>0$. Then $Q(0,T,\vect{a})$ is reversible $\forall \vect{a}>0$, with invariant probability measure given by \begin{equation}\label{InvariantMeasureTransport} \pi (\vect{a}) =\Big(\frac{a_i N_i}{Z(\vect{a})}\Big)_{i\in\Lambda}, \end{equation} where $$Z(\vect{a})=\sum_{i\in\Lambda}a_i N_i =\kappa \sum_{i\in\Lambda}a_i + \sum_{i \in \Lambda} \sum_{j\sim i}a_i a_j.$$ In this case, $\vect{a}>0$ is a critical point with $f(\vect{a})=0$ if and only if $N_i(\vect{a})$ does not depend on $i$, with \begin{equation}\label{NConstant} N_i(\vect{a})\equiv \frac{Z(\vect{a})}{\sum_{i\in\Lambda }a_i}=\kappa + \frac{ \sum_{i \in \Lambda} \sum_{j\sim i}a_i a_j}{\sum_{i\in\Lambda}a_i}. \end{equation} \end{lemma} \begin{remark}\label{Reinforced1} The transition rates $q_{ij}(\vect{a})$ are similar to the rates associated with a family of Markov chains used in the study of vertex-reinforced random walks, see \citet{Benaim1,Benaim2} and \citet{Pemantle}, and Lemma \ref{ReversibilityPureTransport} is an adaptation of these results.
Interestingly, such vertex-reinforced random walks are approximated by deterministic dynamical systems called {\it replicator dynamics}, of the form $$\frac{{\rm d}a_i}{{\rm d}t}= a_i (N_i'(\vect{a})-H'(\vect{a})),$$ where $N_i'(\vect{a})=N_i(\vect{a})-\kappa$ and $H'(\vect{a})=\sum_{i\in\Lambda}a_i N_i'$. In this setting, the function $H'$ plays the role of a Lyapunov function. We will also find a similar Lyapunov function, see Section \ref{s.Convergence}. \end{remark} \begin{proof} Assume, without loss of generality, that $T=1$. First notice that \begin{eqnarray*} \sum_{j\sim i}\pi(\vect{a})_j q_{ji}(0,T,\vect{a})&=&\sum_{j\sim i}\frac{a_j N_j}{Z(\vect{a})}\frac{a_i}{N_j}\\ &=&\frac{1}{Z(\vect{a})}\sum_{j\sim i}a_i a_j = \frac{a_i}{Z(\vect{a})}\sum_{j\sim i}a_j = \frac{a_i (N_i-\kappa)}{Z(\vect{a})}. \end{eqnarray*} The identity $$\pi(\vect{a})_i q_{ii}(0,T,\vect{a})=-\frac{a_i N_i}{Z(\vect{a})}\sum_{j\sim i}\frac{a_j }{N_i}=-\frac{a_i (N_i-\kappa)}{Z(\vect{a})}$$ shows that $$\sum_{j\sim i}\pi(\vect{a})_j q_{ji}(0,T,\vect{a})+\pi(\vect{a})_i q_{ii}(0,T,\vect{a})=0,$$ so that $\pi(\vect{a})$ is an invariant probability measure for $Q(0,T,\vect{a})$. Moreover, $\vect{a}>0$ is a critical point with $f(\vect{a})=0$ if and only if $\frac{\vect{a}}{\sum_{i \in \Lambda}a_i}$ is an invariant measure for $Q(0,T,\vect{a})$. Because of the uniqueness of the invariant measure, we obtain $$N_i(\vect{a})\equiv \frac{Z(\vect{a})}{\sum_{i\in\Lambda }a_i}.$$ \end{proof} Let $\Gamma$ be the adjacency matrix of the graph $G=(\Lambda,E)$, that is, the matrix with entries given by $\Gamma_{ij}= 1$ when $i\ne j$ and $i\sim j$, and $\Gamma_{ij}=0$ otherwise. We summarize the above results in the following \begin{corollary}[Pure Transport Processes]\label{CriticalAdjacency} Assume that $D=0$ and $T>0$ (no diffusion), and consider only positive $\vect{a}>0$. Then, \begin{equation}\label{SolAdjacency} f(\vect{a})=0 \hbox{ if and only if }\Gamma \vect{a} = c(\vect{a}) {\rm\bf 1},\ {\rm\bf 1}=(1,\cdots,1)^*, \end{equation} where \begin{equation}\label{C} c(\vect{a})=\frac{ \sum_{i \in \Lambda} \sum_{j \sim i}a_i a_j}{\sum_{i\in\Lambda} a_i} =\frac{\langle\vect{a},\Gamma \vect{a}\rangle}{\langle\vect{a},{\bf 1}\rangle}. \end{equation} \end{corollary} \begin{remark} Let $c$ be a constant, and let $\vect{a}$ (if it exists) be such that $\Gamma \vect{a} = c {\bf 1}$ and $\vect{a}\geq 0$. Then $\vect{a}$ is a critical point and $c$ is given by (\ref{C}). \end{remark} \begin{example}[The one-dimensional cycle]\label{CriticalCircle} Assume that the $L$ cells are arranged on a cycle. The pure transport process ($D=0$) is reversible, so that the critical points $\vect{a}>0$ of the dynamical system (\ref{eq:jonsson}) are solutions of the linear system (\ref{SolAdjacency}). We illustrate some results given in Section \ref{Circle}. When $L> 4$ is a multiple of 4, the set of critical points $\vect{a}\in \mathbb{R}^L$ forms a two dimensional sub-manifold $M_c$ of $\mathbb{R}^L$ given, when $\rho = 1/L$, by $$ M_c =\{(a_1,a_2,-a_1+2\rho,-a_2+2\rho,a_1,a_2,-a_1+2\rho,-a_2+2\rho,\cdots );\ a_k \in (0,2\rho),\ k=1, 2\}. $$ When $L>4 $ is not a multiple of 4, $M_c$ reduces to the uniform configuration $M_c =\{(\rho,\rho,\cdots,\rho)\}$. We will see that the uniform configuration is always unstable, and that the other critical points are unstable when $\vect{a}>0$. However, the boundary points are all stable.
\end{example} \subsubsection{General transport processes} \begin{lemma}\label{CharacterizationIrreducible} Assume that $G$ is connected, and that both $D$ and $T$ are positive. For $\vect{a}>0$, $f(\vect{a})=0$ if and only if there exists a constant $c$ such that $\vect{a}$ solves the following system of quadratic equations: \begin{equation}\label{QuadraticSystem} (a_i -\frac{D}{T})N_i(\vect{a})+ a_i = c\ a_i N_i(\vect{a}),\ i=1,\cdots, L. \end{equation} \end{lemma} \begin{proof} Let $\mu_i = (a_i -D/T)N_i$, $i=1,\cdots, L$. Then $\vect{\mu} = (\mu_i)_{1\le i\le L}$ satisfies \begin{eqnarray*} (\vect{\mu} Q(0,T,\vect{a}))_i &=&\sum_{j\sim i}\mu_j q_{ji}(\vect{a})+\mu_i q_{ii}(\vect{a})\\ &=&T \sum_{j\sim i} (a_j -\frac{D}{T})N_j \frac{a_i}{N_j}-T(a_i-\frac{D}{T})N_i \sum_{j\sim i}\frac{a_j}{N_i}\\ &=&T \sum_{j\sim i}(a_j -\frac{D}{T})a_i - T (a_i-\frac{D}{T})\sum_{j\sim i}a_j\\ &=& D\sum_{j\sim i}(a_j -a_i), \end{eqnarray*} which gives the diffusion term contained in $f$. Hence, one can rewrite the equation $f(\vect{a})=0$ as $$(\vect{\mu} + \vect{a})Q(0,T,\vect{a})=0.$$ By assumption, $\vect{a} > 0$, so that $Q(0,T,\vect{a})$ is irreducible as a Markov generator, and hence has only one invariant probability measure. The linear space composed of invariant measures is one-dimensional, so that the measure $\vect{\mu} + \vect{a}$ is proportional to $\pi(\vect{a})$. The result is a consequence of the expression for $\pi(\vect{a})$ given in (\ref{InvariantMeasureTransport}). \end{proof} The next paragraph generalizes the diffusive part to model the effect of potentials on the auxin flux. \subsubsection{Inclusion of potentials\label{Mechanical}} As stated in the Introduction, experiments have shown that both mechanical and biochemical processes play a role in plant patterning. We here adapt some ideas of \citet{Newell} and \citet{Newell2} to our discrete setting. The former considered the discrete model (\ref{eq:jonsson}) by taking a continuous limit, resulting in a p.d.e. describing the time evolution of auxin concentrations, which is coupled to the von Karman equations from elasticity theory. These equations describe the deformations of an elastic shell or plate subject to various loading conditions. Usually, the in-plane stress is described using an Airy function, which is a potential for the stress field. Here, we will simply suppose that this potential is given by some function $(\phi_i)_{1\le i\le L}$. We also suppose that the auxin flux is directed in part by this potential, and assume a model of the form \begin{equation}\label{potential} \frac{{\rm d}a_i(t)}{{\rm d}t}=f_i(\vect{a})+\sum_{j\sim i}(a_j \phi_i -a_i \phi_j), \end{equation} $i=1,\ldots,L$. We will see in the sequel that the critical points associated with (\ref{eq:jonsson}) exhibit regular geometrical patterns locally, but not necessarily globally. The potential might be defined in such a way as to reproduce the patterns obtained when considering mechanical buckling, and the model defined by (\ref{potential}) might then lead to more regularly spaced auxin peaks, see Figure \ref{Fig4}. \begin{lemma}\label{Coupling} Assume a model of the form (\ref{potential}), with $D>0$ and $T>0$. Let $\vect{a}>0$.
Then $ f_i(\vect{a})+\sum_{j\sim i}(a_j \phi_i -a_i \phi_j)=0$ for all $i$ if and only if there exists a constant $c\in\mathbb{R}$ such that $$(a_i-\frac{D}{T}-\frac{1}{T}\phi_i)N_i(\vect{a}) + a_i= c\ a_i N_i(\vect{a}),\ i=1,\cdots, L.$$ \end{lemma} The proof of Lemma \ref{Coupling} is identical to the proof of Lemma \ref{CharacterizationIrreducible}. \subsection{The reducible case \label{Reducible}} We can adapt the previous notions to the case $D=0$ with a reducible transition kernel $Q(0,T,\vect{a})$, that is, when some $a_i$ vanish. In this case, there is a pair of nodes $i$ and $j$ such that $$\prod_{k=1}^m q_{i_{k-1}i_k}(\vect{a})=0,$$ for all paths $\gamma:\ i_0=i\to i_1\to\cdots\to i_m =j$ joining $i$ to $j$ in the graph $G=(\Lambda,E)$. Example \ref{CriticalCircle} shows that the critical points associated with (\ref{eq:jonsson}) on a circle form a manifold when $L$ is a multiple of 4. We also assert that the boundary points obtained from $M_c$ by setting $a_1=0$ are stable. We will thus consider subsets $I\subset \{1,\cdots,L\}$ corresponding to the sites $i$ where $a_i =0$. We will denote by $\vect{a}\vert_I$ the restriction of any $\vect{a}$ to $I$. The same notations apply to generators and adjacency matrices, where one keeps only the transition rates $q_{ij}(\vect{a})$ such that $i$, $j \in\Lambda\setminus I$. According to Lemma \ref{ReversibilityPureTransport}, these sub-transition kernels are reversible for $\vect{a}$ such that $\vect{a}\vert_{\Lambda\setminus I} > 0$. If one removes the nodes $i\in I$, the graph decomposes into a product of connected components $\gamma$, which form the sub-graph of $G$ induced by the nodes of $J=\Lambda\setminus I$. The special form of the vector field associated with (\ref{eq:jonsson}) ensures, however, that the set of critical values such that $\vect{a}\vert_I =0$, $I\subset \{1,\cdots,L\}$, can be obtained by considering a family of transition functions $Q_\gamma(0,T,\vect{a}\vert_\gamma)$. For each component $\gamma$, Corollary \ref{CriticalAdjacency} shows that the related critical points are obtained by solving linear systems of the form \begin{equation}\label{LocalSolution} \Gamma_\gamma \vect{a}\vert_\gamma = c_\gamma {\rm\bf 1}\vert_\gamma, \end{equation} where $\Gamma_\gamma$ is the adjacency matrix of the sub-graph $\gamma$, and the $c_\gamma$ are normalization constants chosen in such a way that $\sum_i a_i = \rho L$. The set of critical points is then obtained by taking the direct product of the sets of critical values associated with the sub-graphs $\gamma$. \section{Asymptotic properties of the auxin flux for pure transport processes\label{s.Convergence} } We consider the convergence of the dynamical system (\ref{eq:jonsson}) when $D=0$, using the method of Lyapunov functions. Suppose, without loss of generality, that $T=1$. We look for a function $H(\vect{a})$ such that $$\frac{{\rm d}H(\vect{a}(t))}{{\rm d}t} =\langle\nabla H(\vect{a}(t)),\frac{{\rm d}\vect{a}(t)}{{\rm d}t}\rangle \le 0,\ \forall t \geq 0.$$ If, furthermore, this function is bounded, then $H(\vect{a}(t))$ converges, and we can in this way get useful information concerning the convergence (e.g. toward the set of critical points) of the solution $\vect{a}(t)$ of (\ref{eq:jonsson}). \begin{lemma}\label{Lyapunov} Assume that $D=0$ and set $T=1$.
Let \begin{equation}\label{LyapounovFunction} H(\vect{a}) =-\frac{1}{2}\sum_{k\in\Lambda}a_k (N_k(\vect{a})+\kappa)=-\kappa \sum_{k\in\Lambda}a_k - \frac{1}{2} \sum_k\sum_{j\sim k}a_j a_k, \end{equation} where the functions $N_k(\vect{a})$ have been defined in (\ref{N}). Let $\vect{a}(t)$ be a solution of the o.d.e. (\ref{eq:jonsson}) such that $a_i(0)\ge 0$. Then \begin{equation}\label{LyapounovFormula} \frac{{\rm d}H(\vect{a}(t))}{{\rm d}t} =-\frac{1}{2}\sum_{k \in \Lambda}\sum_{j\sim k}q_{kj}q_{jk}(N_k-N_j)^2 \le 0,\ \forall t\geq 0. \end{equation} \end{lemma} Notice that \begin{equation*} \frac{\partial H}{\partial a_k}(\vect{a})=- N_k(\vect{a}), \end{equation*} since the function $N_k =N_k(\vect{a})=\kappa+\sum_{j\sim k}a_j$ does not depend on the variable $a_k$. \begin{proof} One can write \begin{eqnarray*} \frac{{\rm d}H(\vect{a}(t))}{{\rm d}t} &=&-\sum_{k\in\Lambda}N_k \sum_{j\sim k} (a_j \frac{a_k}{N_j}-a_k\frac{a_j}{N_k}) =- \sum_{k\in\Lambda}N_k\sum_{j\sim k}\frac{a_j}{N_k}\frac{a_k}{N_j}(N_k-N_j)\\ &= & -\sum_{k\in\Lambda}N_k\sum_{j\sim k}q_{kj}q_{jk}(N_k-N_j) \\ &=&-\frac{1}{2}\sum_{k \in \Lambda}\sum_{j\sim k}q_{kj}q_{jk}\Big(N_k(N_k-N_j)+N_j(N_j-N_k)\Big)\\ &=&-\frac{1}{2}\sum_{k \in \Lambda}\sum_{j\sim k}q_{kj}q_{jk}(N_k-N_j)^2. \end{eqnarray*} By Proposition \ref{Invariance}, $a_i(0)\ge 0$, $\forall i$, implies that $a_i(t)\ge 0$, $\forall i$, $\forall t > 0$, so that $q_{kj}\ge 0$ and $q_{jk}\ge 0$, $\forall k\sim j$ and $\forall t > 0$, proving the assertion. \end{proof} To prove the convergence of the auxin flux, we use a theorem of Lyapunov--LaSalle type (see \citet{LaSalle}). Introduce the notation $$\dot{H}(\vect{x})= \sum_{i=1}^L\frac{\partial H}{\partial x_i}f_i(\vect{x})=-\frac{1}{2}\sum_{k \in \Lambda}\sum_{j\sim k}q_{kj}q_{jk}(N_k-N_j)^2.$$ Consider the sets $$ \Omega = \{\vect{x}\in[0, 2\rho]^L \, \mid \, \sum_i x_i=\rho L\} \text{ and } E_{\Omega} =\{\vect{x}\in\Omega \, \mid \, \dot{H}(\vect{x})=0\}.$$ \begin{lemma}\label{CriticalInvariant} The set $E_\Omega$ is the set of critical points. \end{lemma} \begin{proof} Let $\vect{x}\in\Omega$. Then $\dot{H}(\vect{x})=0$ if and only if for all pairs $j\sim k$, either $x_j =0$, $x_k =0$ or $N_j = N_k$. Let $I_x :=\{i\in\Lambda;\ x_i =0\}$. Then $\dot{H}(\vect{x})=0$ if and only if, for all pairs of neighbours $j\sim k$ such that $j\in\Lambda\setminus I_x$ and $k\in\Lambda\setminus I_x$, one has that $N_j =N_k $. Let $\gamma$ be the connected component of the graph containing this pair (see Section \ref{Reducible}), with $N_j = N_k = c_\gamma$, for some positive constant $c_\gamma$. Then, $N_i\equiv c_\gamma$, $\forall i\in \gamma$. One then gets that $\dot{H}(\vect{x})=0$ if and only if the function $N$ is constant on the connected components $\gamma$ associated with $I_x$. Hence, for each such component, one has that $\Gamma_\gamma \vect{x}\vert_\gamma = c_\gamma {\bf 1}\vert_\gamma$. The result is a consequence of Corollary \ref{CriticalAdjacency} and of the results of Section \ref{Reducible}. \end{proof} Let $M_{\Omega}$ be the largest invariant subset of $E_{\Omega}$. As $E_{\Omega}$ contains only the critical points of $f$, $E_{\Omega}$ is invariant. Hence, $M_{\Omega}=E_{\Omega}$. \begin{proposition}\label{ConvergenceAuxin} Let $\vect{a}(t)$ be the unique solution of the o.d.e. (\ref{eq:jonsson}) with $\vect{a}(0)\in\Omega$. Then $\vect{a}(t)\in\Omega$, $\forall t > 0$, and $\vect{a}(t)$ converges to $M_\Omega$ as $t\to\infty$.
\end{proposition} \begin{proof} Proposition \ref{Invariance} shows that the compact set $\Omega$ is invariant. The continuously differentiable function $H$ is such that $\dot{H}(\vect{x})\leq 0$, $\forall \vect{x} \in\Omega$. The result then follows from a theorem of \citet{LaSalle}. \end{proof} \begin{corollary}\label{limit point} Every limit point of a trajectory $\vect{a}(t)$ is a critical point, i.e., if $t_n \nearrow \infty$ and $\vect{a}(t_n)\rightarrow \vect{a}_{\infty}$, then $\vect{a}_{\infty}\in M_{\Omega}$. \end{corollary} \begin{proof} If $\vect{a}_{\infty}\not\in M_{\Omega}$, then, since $E_{\Omega}=M_{\Omega}$ is closed, $d(\vect{a}_{\infty}, M_{\Omega})>0$; since $\vect{a}(t_n)\rightarrow \vect{a}_{\infty}$, the distance $d(\vect{a}(t_n), M_{\Omega})$ does not tend to zero, which contradicts Proposition \ref{ConvergenceAuxin}. \end{proof} \begin{remark}[Global minimizers of $H$]\label{Global} The literature contains results on the set $\mu(G)$ of minimizers of $H$ when $\sum_{i\in \Lambda}a_i = 1$. The authors of \citet{Motzkin} proved that $\max_{\vect{a}}\langle\vect{a},\Gamma \vect{a}\rangle= (\omega(G)-1)/\omega(G)$, where $\omega(G)$ is the clique number of $G$, that is, the order of the largest complete sub-graph of $G$. Moreover, they obtained that the absolute minimum of $H$ is achieved at an interior point of the unit simplex if and only if $G$ is a complete multipartite graph. Various results were then obtained in \citet{Waller}, where for example it is proved that $\mu(G)$ is a simplicial complex, having an automorphism group similar to that of $G$. In some sense, $\mu(G)$ mirrors some of the geometry of the graph $G$. \end{remark} \begin{proposition} If $D=0$, then system \eqref{Diff-ai} does not admit non-constant periodic solutions. \end{proposition} \begin{proof} Every point of a periodic solution is a limit point and, according to our preceding results (Corollary \ref{limit point}), it is a critical point. Uniqueness of solutions then provides a contradiction. \end{proof} \begin{proposition} If $D=0$, then the set of critical points of system \eqref{Diff-ai} is uncountable. \end{proposition} \begin{proof} Let $\sum_{k=1}^L a_k(0)=C>0$. We know that the corresponding solution has to remain in the hyperplane $(\Pi): \sum_{k=1}^Lx_k=C$. Since the path is bounded, it admits at least one limit point and, according to our preceding results (Corollary \ref{limit point}), the latter is a critical point belonging to $(\Pi)$. Consequently, for every positive value of $C$, we obtain distinct critical points. \end{proof} \section{Stability of pure transport processes\label{SpecialClass}} \subsection{The irreducible case} We consider pure transport processes (i.e. $D=0$) on general graphs. We first discuss the stability of the special class of critical points $\vect{a}> 0$ solving equations of the form $\Gamma \vect{a} = c {\bf 1}$. Without loss of generality, we set $T=1$. For such $\vect{a}$, $N_i(\vect{a})\equiv N=\kappa +c$, and therefore, when the graph is regular, one obtains for example the uniform solution $\vect{a}=(\rho)=(\rho,\ldots,\rho)$. When $G$ is the complete graph $K_L$ of $L$ nodes, where every pair of nodes $i\ne j$ are nearest neighbours, a simple computation shows that the Jacobian ${\rm d}f((\rho))$ associated with (\ref{eq:jonsson}) and evaluated at the uniform configuration $(\rho)$ is given by $$\frac{\partial f_i ((\rho))}{\partial a_j}=\frac{\rho^2}{N^2}, \ \frac{\partial f_i ((\rho))}{\partial a_i}=-\sum_{j\ne i}\frac{\rho^2}{N^2}.$$ Consequently, ${\rm d}f((\rho))$ is a symmetric generator, and thus admits only non-positive real eigenvalues. The uniform configuration is then stable for the complete graph.
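These spectral computations are straightforward to reproduce numerically. The following sketch (an illustration of ours, with arbitrary sizes) evaluates the Jacobian formula of Proposition \ref{StabilityGeneral} below at the uniform configuration, for the complete graph and for a cycle:

\begin{verbatim}
import numpy as np

def jacobian(a, Gamma, kappa=1.0):
    # d f(a) = d(a) Gamma (c id - d(a) Gamma) / N^2, for Gamma a = c 1
    c = float(Gamma[0] @ a)
    N = kappa + c
    D = np.diag(a)
    return D @ Gamma @ (c * np.eye(len(a)) - D @ Gamma) / N**2

L, rho = 12, 1.0
complete = np.ones((L, L)) - np.eye(L)        # complete graph K_L
cycle = np.zeros((L, L))
for i in range(L):
    cycle[i, (i + 1) % L] = cycle[i, (i - 1) % L] = 1.0

for name, G in (("complete", complete), ("cycle", cycle)):
    lam = np.linalg.eigvals(jacobian(rho * np.ones(L), G)).real
    print(name, "max Re(eigenvalue) =", round(lam.max(), 6))
\end{verbatim}

For the complete graph the largest real part is $0$, consistent with stability in the sense of our definition, while for the cycle it is strictly positive, in agreement with Example \ref{CriticalCircle}.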
\begin{proposition}\label{StabilityGeneral} Let $\vect{a} > 0$ be such that $\Gamma \vect{a} = c {\bf 1}$, for some positive constant $c>0$. According to Corollary \ref{CriticalAdjacency}, $\vect{a}$ is a critical point, with $N_i(\vect{a})\equiv N = c+\kappa$. Assume that $D=0$ and set $T=1$. The Jacobian ${\rm d}f(\vect{a}) = (\partial f_i/\partial a_j)$ evaluated at $\vect{a}$ is then given by $${\rm d}f(\vect{a})=\frac{1}{N^2}d(\vect{a})\Gamma \Big(c\ {\rm id}-d(\vect{a})\Gamma\Big),$$ where $d(\vect{a})$ is the diagonal matrix with diagonal given by $\vect{a}$, and where $\Gamma$ is the adjacency matrix of the graph. \end{proposition} The proof of Proposition \ref{StabilityGeneral} is given in Section \ref{Appendix}. We now characterize the set of stable configurations using the spectral gap of the matrix $P(\vect{a})=\Gamma d(\vect{a})/c$. Let $P$ be a stochastic matrix associated with a Markov chain on the state space $\Lambda$. We assume that $P$ is reversible with invariant probability measure $\pi$. Let $A$ be the matrix defined by $A_{ij}=\pi_i p_{ij}\equiv \pi_j p_{ji}$, $i\ne j$. The eigenvalues of $P$ are real, given by $-1\le\beta_L\le\cdots\le\beta_2 <\beta_1 = 1$, and the spectral gap is given by $C=1-\beta_2$. Let ${\cal L} = {\rm id}-P$ be the associated Laplace operator, of eigenvalues $\lambda_k = 1-\beta_k$, $k=1,\cdots, L$. Then (see e.g. \citep{Diaconis}) \begin{equation}\label{Gap} C =\lambda_2 = \inf\{\frac{{\cal E}_\pi(\phi,\phi)}{{\rm Var}_\pi(\phi)}:\ \phi \text{ is nonconstant}\}, \end{equation} where $${\cal E}_\pi (\phi,\phi)=\frac{1}{2}\sum_{i,j}(\phi(j)-\phi(i))^2 A_{ij}$$ is the Dirichlet form associated with ${\cal L}$, and where ${\rm Var}_\pi (\phi)$ is the variance of the random variable $\phi$ with respect to the invariant probability measure $\pi$. One can check that $${\rm Var}_\pi (\phi)=\frac{1}{2}\sum_{i,j}(\phi(j)-\phi(i))^2 \pi_i \pi_j.$$ We can also reformulate the above variational problem in a different way: set $\langle \phi\rangle_\pi = \sum_{i\in\Lambda}\phi (i)\pi_i$. Then \begin{equation}\label{Gap2} C = \inf\{\frac{{\cal E}_\pi(\phi,\phi)}{{\rm Var}_\pi(\phi)}:\ \langle\phi\rangle_\pi =0 \}. \end{equation} \begin{lemma}\label{SpectralGeneral} Let $G$ be a connected graph of adjacency matrix $\Gamma$, and let $\vect{a}>0$ satisfy $\Gamma \vect{a} = c\ {\bf 1}$ for some $c>0$. The matrix $P(\vect{a})$ defined by \begin{equation}\label{StochasticMatrix} P(\vect{a}) = \frac{1}{c}\Gamma d(\vect{a}), \end{equation} is stochastic, irreducible and reversible, with invariant measure $\pi'(\vect{a})$ given by $\pi'(\vect{a})_i = a_i/(\rho L)$, and with a real spectrum $-1\le \beta_{L}\le \beta_{L-1}\le \cdots\le \beta_2< \beta_1 = 1$. Let $ C(\vect{a})$ be the spectral gap of $P(\vect{a})$, defined by $C(\vect{a}) = 1-\beta_2$. Then $\vect{a}$ is stable if and only if $C(\vect{a}) \ge 1$. Moreover, the spectral gap is given by $$C(\vect{a})=\delta \inf_\phi \frac{\sum_{i,j}(\phi(j)-\phi(i))^2 \gamma_{ij} \pi'(\vect{a})_i \pi'(\vect{a})_j}{ \sum_{i,j}(\phi(j)-\phi(i))^2 \pi'(\vect{a})_i \pi'(\vect{a})_j}\le \delta, $$ where $\delta = \rho L /c > 1$, and where the infimum is taken over all nonconstant functions $\phi$. \end{lemma} \begin{proof} The matrix is stochastic since, by assumption, $\Gamma \vect{a}= c \vect{1}$. Let $\pi'(\vect{a})=\left(\frac{a_i}{\rho L}\right)_{i\in \Lambda}$. Then $P(\vect{a})$ is reversible with invariant measure given by $\pi'$.
Notice next that $$ A_{ij}=\pi'(\vect{a})_i P(\vect{a})_{ij}=\delta \gamma_{ij}\pi'(\vect{a})_i \pi'(\vect{a})_j,$$ where we recall that $\gamma_{ij}\in\{0,1\}$ is the $(i,j)$ entry of the adjacency matrix $\Gamma$. Hence, using the variational characterization of the spectral gap given in (\ref{Gap}), $$ C \le \frac{{\cal E}_{\pi'(\vect{a})}(\phi,\phi)}{{\rm Var}_{\pi'(\vect{a})}(\phi)} = \delta \frac{\sum_{i,j}(\phi(j)-\phi(i))^2 \gamma_{ij} \pi'(\vect{a})_i \pi'(\vect{a})_j}{ \sum_{i,j}(\phi(j)-\phi(i))^2 \pi'(\vect{a})_i \pi'(\vect{a})_j} \le \delta, $$ when $\phi$ is non-constant. The configuration $\vect{a}$ is stable if and only if the eigenvalues of the Jacobian matrix $df(\vect{a})$ given in Proposition \ref{StabilityGeneral} are all non-positive. The adjacency matrix $\Gamma$ is symmetric, so that $\left(\Gamma d(\vect{a})\right)^* = d(\vect{a})\Gamma$. It follows that the eigenvalues $\tilde{\beta}_i$ of $d(\vect{a})\Gamma$ are equal to $c \beta_i$, $i=1,\cdots,L$. The eigenvalues of $N^2 df(\vect{a})$ are given by $ \tilde{\beta}_i (c-\tilde{\beta}_i )= c^2\beta_i (1-\beta_i )$. Hence, $\vect{a}$ is stable if and only if $\beta_2 \le 0$, that is, if and only if $C \geq 1$. \end{proof} \begin{corollary}\label{Unstable} Let $G$ be a connected graph of adjacency matrix $\Gamma$, and let $\vect{a}>0$ satisfy $\Gamma \vect{a} = c\ {\bf 1}$ for some $c>0$. For $i\in\Lambda$, let ${\cal V}_i = \{j\in\Lambda;\ j\sim i\}$ be the neighbourhood of $i$. Assume that there exist elements $i_0$, $i_1$, $i_2$ and $i_3$ of $\Lambda$ such that \begin{equation}\label{InstabilityCriterium} i_1\in {\cal V}_{i_0}, \ i_2 \in {\cal V}_{i_1}\setminus {\cal V}_{i_0}\setminus \{i_0\},\ i_3 \in {\cal V}_{i_2}\setminus {\cal V}_{i_0}\setminus \{i_0\}. \end{equation} Then $\vect{a}$ is unstable. \end{corollary} \begin{example}\label{ExampleSubgaphs} When $G$ is a sub-graph of a two-dimensional grid, a solution to the linear system $\Gamma \vect{a} = c {\bf 1}$ can possibly be stable only when $G$ belongs to the list given in Figure \ref{subgraph}, which consists of the square, the star, and the various parts of the star. \end{example} \begin{proof} We use Lemma \ref{SpectralGeneral} to express the spectral gap of $P(\vect{a})$ as \begin{eqnarray*} C(\vect{a})&=&\delta \inf_{\langle\phi\rangle_{\pi'(\vect{a})}=0} \frac{\sum_i \phi (i)^2 \pi'(\vect{a})_i \sum_j \gamma_{ij}\frac{a_j}{\rho L} -\sum_{i,j}\gamma_{ij}\phi (i) \phi (j) \pi'(\vect{a})_i \pi'(\vect{a})_j}{\sum_i \phi (i)^2 \pi'(\vect{a})_i}\\ &=&\delta \inf_{\langle\phi\rangle_{\pi'(\vect{a})}=0} \frac{\sum_i \phi(i)^2 \pi'(\vect{a})_i \frac{c}{\rho L} - \sum_{i,j}\gamma_{ij}\phi(i)\phi(j)\pi'(\vect{a})_i \pi'(\vect{a})_j} { \sum_i \phi(i)^2 \pi'(\vect{a})_i }\\ &=& \delta \inf_{\langle\phi\rangle_{\pi'(\vect{a})}=0} \frac{\sum_i \phi(i)^2 \pi'(\vect{a})_i \delta^{-1} - \sum_{i,j}\gamma_{ij}\phi(i)\phi(j)\pi'(\vect{a})_i \pi'(\vect{a})_j} { \sum_i \phi(i)^2 \pi'(\vect{a})_i } \end{eqnarray*} We will prove that $C(\vect{a}) < 1$ by choosing a test function $\phi$ satisfying ${\langle\phi\rangle_{\pi'(\vect{a})}=0}$ for which $$ \delta \frac{\sum_i \phi(i)^2 \pi'(\vect{a})_i \delta^{-1} - \sum_{i,j}\gamma_{ij}\phi(i)\phi(j)\pi'(\vect{a})_i \pi'(\vect{a})_j} { \sum_i \phi(i)^2 \pi'(\vect{a})_i } < 1,$$ which is equivalent to requiring that $$\sum_{i,j}\gamma_{ij}\phi(i)\phi(j)\pi'(\vect{a})_i \pi'(\vect{a})_j > 0.$$ We set $\phi(j) = 0$, $\forall j\in {\cal V}_{i_0}$.
For $j\in \Lambda\setminus {\cal V}_{i_0}\setminus \{i_0\}$, we choose $\phi(j)$ to be arbitrary but positive. For $j=i_0$, we choose $\phi(i_0)$ so that $$a_{i_0}\phi (i_0)=-\sum_{j\ne i_0}a_j \phi(j).$$ Consequently $\langle\phi\rangle_{\pi'(\vect{a})}=0$ and $\sum_{i,j}\gamma_{ij}\phi(i)\phi(j)a_i a_j > 0$: every edge incident to $i_0$ joins it to a node of ${\cal V}_{i_0}$, where $\phi$ vanishes, all remaining edge products are non-negative, and the edge $\{i_2,i_3\}$ guaranteed by (\ref{InstabilityCriterium}) contributes a strictly positive term. \end{proof} Corollary \ref{Unstable} provides a simple condition ensuring the instability of configurations $\vect{a}$ satisfying $\Gamma \vect{a} = c {\bf 1}$. \subsection{The reducible case} We next consider the stability of critical points $\vect{a}$ such that $a_i = 0$ for $i\in I \subset\Lambda$ with $I\ne \emptyset$. Set $J=\Lambda\setminus I$, and let $\{\gamma_1,\cdots,\gamma_P\}$ be the collection of sub-graphs of $G$ induced by the nodes of $J$, of node sets $J_{\gamma_p}$ and adjacency matrices $\Gamma_{\gamma_p}$, $p=1,\cdots, P$. We again assume that $\Gamma_{\gamma_p}\vect{a}\vert_{\gamma_p}=c_{\gamma_p} \vect{1}\vert_{\gamma_p}$ for some $c_{\gamma_p}>0$. \begin{proposition}\label{StabilityBoundary} Assume that $D=0$ and set $T=1$. Let $\vect{a}$ be a critical point of \eqref{eq:jonsson} such that $a_i=0$ for $i \in I$. Let $\{\gamma_1,\cdots,\gamma_P\}$ be the collection of sub-graphs of $G$ obtained by deleting the nodes of $I$, of adjacency matrices $\Gamma_{\gamma_p}$, $p=1,\cdots, P$. The critical points $\vect{a}$ are obtained by solving linear systems of the form $\Gamma_{\gamma_p}\vect{a}\vert_{\gamma_p}=c_{\gamma_p} \vect{1}\vert_{\gamma_p}$ for some $c_{\gamma_p}>0$ (see Section \ref{Reducible}). The spectrum of the Jacobian evaluated at $\vect{a}$ is given by \begin{equation} {\rm spec}({\rm d}f(a))= \bigcup_{p=1}^P {\rm spec}\left({\rm d}f\vert_{\gamma_p}(\vect{a}\vert_{\gamma_p})\right) \cup \left\{\sum_{k\sim i} \frac{a_k}{N_k} - \frac{N_i-\kappa}{N_i}, i\in I \right\}. \end{equation} \end{proposition} The proof of Proposition \ref{StabilityBoundary} is given in Section \ref{Appendix}. Proposition \ref{StabilityBoundary} shows that such configurations are stable when 1) each $\vect{a}\vert_{\gamma_p}$ is stable and 2) $\sum_{k\sim i} a_k / N_k(\vect{a})-(N_i(\vect{a})-\kappa)/N_i(\vect{a}) < 0$ for all $i\in I$. Here, if $k\sim i$, $i\in I$, $k\in \Lambda\setminus I$, $N_k(a)$ is given by the constant $\kappa + c_{\gamma_p}$ when $k\in J_{\gamma_p}$. To go further, we need the following \begin{definition} Let $J\subset\Lambda$. The outer boundary of $J$, denoted by $\partial J$, is the subset of $\Lambda$ given by $$\partial J = \{j \in\Lambda\setminus J;\ j\sim J\}.$$ \end{definition} \subsection{Example: the rectangular grid\label{StabilityGrid}} We now illustrate the various stable patches that can be formed using the building blocks given in Figures \ref{subgraph} and \ref{stable}. It is easy to provide examples of unstable configurations when the outer boundary of some component $\gamma$ is such that \begin{equation}\label{NotIsolated} \partial\Big(\partial J_\gamma\Big)\cap J_{\gamma'} \ne\emptyset, \text{ for some component } \gamma'\ne\gamma, \end{equation} as illustrated in Figure \ref{FigureStable}(a). \begin{figure}[h!]
\begin{minipage}{0.5\textwidth} \begin{center} \includegraphics[height=4cm]{Fig5} \vspace{3mm} (a) Unstable configuration \end{center} \end{minipage} \begin{minipage}{0.5\textwidth} \begin{center} \includegraphics[height=4cm]{Fig6} \vspace{3mm} (b) Stable configuration \end{center} \end{minipage} \caption{(a) One can check in this example that (\ref{NotIsolated}) implies the non-stability of the configuration for well chosen parameters. Red dots indicate cells with $a_i \neq 0$. (b) One can check in this example that (\ref{Isolated}) is satisfied, ensuring the stability of the configuration. \label{FigureStable}} \end{figure} Next, the reader can verify, using Proposition \ref{StabilityBoundary}, that any patch composed of building blocks arranged in such a way that \begin{equation}\label{Isolated} \partial\Big(\partial J_p \cup J_p\Big)\cap \Big(\cup_{p'\ne p}J_{p'}\Big) = \emptyset,\ \forall p=1,\cdots, P, \end{equation} is stable. Figure \ref{FigureStable}(b) exhibits a typical example of a stable configuration in this setting. \subsection{Example: the pure transport process on the circle \label{Circle}} We here assume that $D=0$ and $T=1$. Corollary \ref{Unstable} yields the instability of the uniform solution $(\rho)=(\rho, \ldots, \rho)$ when the length $L$ of the cycle is larger than 4. The adjacency matrix of the circle is circulant, with eigenvalues given by $$\mu_k= e^{2\pi i \frac{k}{L}}+e^{2\pi i \frac{(L-1)k}{L}}=2\cos\left(2\pi \frac{k}{L}\right).$$ The determinant of $\Gamma$ vanishes if and only if there exists $j\in\{1,...,L\}$ such that $\mu_j=0$, that is, if $$\cos\left(2\pi \frac{j}{L}\right)=0 \Leftrightarrow 2\pi \frac{j}{L}= \frac{\pi}{2}+k\pi \text{ for } k\in\mathbb{N},$$ or equivalently if there is a $k\in\mathbb{N}$ such that $j=\frac{L}{4} +k\frac{L}{2}\in \mathbb{N}$. Hence, the determinant of $\Gamma$ vanishes if and only if $L$ is a multiple of 4. In this case, the set $M_c$ of critical values $\vect{a}$ (that is, satisfying $\Gamma \vect{a}=c \vect{1}$) such that $a_i > 0$, $\forall i\in\Lambda$, is such that $$ a_3= c-a_1, a_4= c-a_2, a_5= a_1, a_6= a_2, a_7= c-a_1, ...$$ with $a_1\neq 0 \neq a_2$. Recalling that we impose the normalization $\sum_{i=1}^L a_i=\rho L$, we obtain $$\sum_{i=1}^L a_i=\rho L \Leftrightarrow 2c\frac{L}{4}= \rho L \Leftrightarrow c=2 \rho.$$ The set of critical values $M_c$ is then composed of configurations of the form $$\vect{a}=(a_1,a_2,2 \rho-a_1,2 \rho-a_2 ,a_1, a_2,2 \rho-a_1 ,2 \rho-a_2 ,...,a_1,a_2,2 \rho-a_1 ,2 \rho-a_2)$$ with $(a_1,a_2)\in (0,2\rho)\times (0,2\rho)$. Corollary \ref{Unstable} then implies that this set contains only unstable points when $L>4$. For $L=4$, the critical point $\vect{a}=(a_1,a_2,2 \rho-a_1,2 \rho-a_2)$ is stable since the eigenvalues of the Jacobian matrix are such that $$\lambda_1=\lambda_2=\lambda_3=0 \text{ and } \lambda_4=-\frac{2c^2}{(\kappa+c)^2}.$$ We can summarize these results in the following corollary: \begin{corollary}\label{StabilityCriticalOneDim} Assume that the nodes are arranged on a circle of size $L$. The set $M_c$ of critical values $\vect{a}>0$ such that $f(\vect{a})=0$ contains only the uniform configuration $(\rho,...,\rho)$ if $L$ is not a multiple of 4. In the case where $L = 4n$, for some $n\in \mathbb{N}$ with $n\geq 1$, $M_c$ is given by $$ M_c =\{(a_1,a_2,-a_1+2\rho,-a_2+2\rho,a_1,a_2,-a_1+2\rho,-a_2+2\rho,\cdots ); \ a_k \in (0,2\rho),\ k=1, 2\}. $$ Every element of $M_c$ is unstable when $L>4$, while for $L=4$ the elements of $M_c$ are stable.
\end{corollary} The set $M_c^{tot}$ of all critical points is obtained by decomposing the circle into sub-graphs $\gamma$ such that $\vect{a}\vert_\gamma>0$ and by solving the system $$\Gamma_\gamma \vect{a}\vert_\gamma = c_\gamma {\rm\bf 1}\vert_\gamma,$$ for these sub-graphs. We can prove that this system has a positive solution $\vect{a}\vert_\gamma$ if and only if $|I|<4$ (where $|I|$ denotes the length of the path $\gamma$), because for $|I|\geq 4$ we would get $a_4=0$, in contradiction with the hypothesis. When $|I|=3$, the critical points take the form $\vect{a}\vert_\gamma=(z_1,c_\gamma,c_\gamma-z_1)$, with $z_1\in(0,c_\gamma)$, and when $|I|=2$, $\vect{a}\vert_\gamma=(c_\gamma,c_\gamma)$. In these two cases, the critical points are stable, as the Lyapunov function $H$ defined in (\ref{LyapounovFunction}) takes its minimal value $H(a)=-(\kappa+\frac{\rho L}{4})\rho L$. The global minimum of $H$ is obtained by adapting the result of \citet{Motzkin}, see Remark \ref{Global}. Finally, if $|I|=1$, we have $\vect{a}\vert_\gamma=(c_\gamma)$; $H$ is maximal and hence $\vect{a}$ is unstable. The set $M_c^{tot}$ of critical points is then obtained by taking the direct product of the sets of critical values associated with the paths $\gamma$. For example, if $L$ is a multiple of 4, the subset of $M_c^{tot}$ defined by $$ \tilde M_c = \{(a_1,a_2,-a_1+2\rho,-a_2+2\rho,a_1,a_2,-a_1+2\rho,-a_2+2\rho,\cdots ); \ a_1=0,\ a_2 \in (0,2\rho)\}, $$ is composed of critical values which are stable, since $$\lambda=0 \text{ with multiplicity } 3\frac{L}{4} \text{ and } \lambda=\frac{-2c^2}{(\kappa+c)^2} \text{ with multiplicity } \frac{L}{4}. $$ \subsection{An explicit computation when $D=0$ on the circle\label{SpecialComputation} } As we have seen, when $\vert I\vert =3$, the stable configurations are given by triplets of the form $(z_1,c_\gamma,c_\gamma -z_1)$, where $z_1$ is such that $z_1 \in (0,c_\gamma)$, for some positive constant $c_\gamma > 0$. Consider a path composed of five cells $i-1$, $i$, $i+1$, $i+2$ and $i+3$ such that $a_{i-1}=a_{i+3}=0$, so that the dynamical system (\ref{eq:jonsson}) associated with these cells becomes \begin{eqnarray} \frac{{\rm d}a_i}{{\rm d}t}&=& \frac{a_{i+1} a_i}{\kappa + a_{i}+a_{i+2}}- \frac{a_i a_{i+1}}{\kappa + a_{i+1}},\label{eq1}\\ \frac{{\rm d}a_{i+2}}{{\rm d}t}&=& \frac{a_{i+1} a_{i+2}}{\kappa + a_{i}+a_{i+2}}- \frac{a_{i+2} a_{i+1}}{\kappa + a_{i+1}},\label{eq2}\\ \frac{{\rm d}a_{i+1}}{{\rm d}t}&=& \frac{a_i a_{i+1}}{\kappa + a_{i+1}}+\frac{a_{i+2} a_{i+1}}{\kappa + a_{i+1}} -\frac{a_{i+1} a_i}{\kappa + a_{i}+a_{i+2}}-\frac{a_{i+1} a_{i+2}}{\kappa + a_{i}+a_{i+2}}.\label{eq3} \end{eqnarray} Dividing (\ref{eq1}) by (\ref{eq2}) yields that $$\frac{\frac{{\rm d}a_i}{{\rm d}t}}{\frac{{\rm d}a_{i+2}}{{\rm d}t}} =\frac{a_i}{a_{i+2}}.$$ Thus there is a positive constant $c>0$ such that \begin{equation}\label{Relation1} a_{i+2} = c a_i. \end{equation} Plugging this identity in (\ref{eq3}), one obtains $$\frac{{\rm d}a_{i+1}}{{\rm d}t} = (1+c)a_i a_{i+1}(\frac{1}{\kappa + a_{i+1}}-\frac{1}{\kappa + a_i +a_{i+2}}),$$ and finally $$\frac{\frac{{\rm d}a_{i+1}}{{\rm d}t}}{\frac{{\rm d}a_{i}}{{\rm d}t}} = -(1+c).$$ Hence there exists a constant $d$ such that $a_{i+1}=d-(1+c)a_i$. Normalizing the total mass in such a way that $a_i +a_{i+1}+a_{i+2} = 3\rho$, one gets that $3\rho = d$ and \begin{equation}\label{Relation2} a_{i+1}= 3\rho -(1+c)a_i.
\end{equation} Plugging (\ref{Relation1}) and (\ref{Relation2}) in equation (\ref{eq1}) yields the differential equation $$ \frac{{\rm d}a_i}{{\rm d}t} = \frac{ a_i (3\rho -(1+c)a_i)(3\rho -2(1+c)a_i)}{(\kappa + a_i(1+c))(3\rho +\kappa -(1+c)a_i)}.$$ Setting $u = (1+c)a_i$, one gets the o.d.e. $$\frac{{\rm d}u}{{\rm d}t} = \frac{u(3\rho -u)(3\rho -2u)}{(\kappa +u)(3\rho +\kappa -u)}.$$ Solving by partial fraction expansion, one obtains $$\frac{3\kappa\rho +\kappa^2}{9\rho^2}(\ln(u) +\ln(3\rho-u))-\frac{9 \rho^ 2 +4(\kappa^2+3 \rho \kappa)}{18\rho^2}\ln(3\rho-2u) =t+\alpha,$$ for some constant $\alpha$; this expression requires $u<3\rho /2$ (for $u>3\rho/2$, replace $\ln(3\rho-2u)$ by $\ln(2u-3\rho)$). \begin{lemma} \label{lem} For any initial condition $u(0)\in\, ]0,3\rho[$, as $t \to \infty$, $u(t)=(1+c)a_i(t) \longrightarrow \frac{3\rho}{2}$. \end{lemma} \begin{proof} The preceding considerations show that we only have to consider initial conditions of the form $0\leq u(0)\leq 3\rho$. Clearly $0$, $\frac{3\rho}{2}$ and $3\rho$ are critical points of our equation. We can easily find a compact interval $I$ whose interior contains $J=[0,3\rho]$ and such that the right-hand side of the o.d.e., call it $g$, has a continuous, hence bounded, derivative over $I$. As a consequence, $g$ satisfies a Lipschitz condition over $I$. According to the general theory, for any initial condition $u(0)\in J$ our equation admits a unique solution defined over a maximal interval $I_m$. If $u(0)=0$, then $u\equiv 0$ is the corresponding solution. If $u(0)\in ]0,\frac{3\rho}{2}[$, then $\dot u(0)>0$. By uniqueness, the solution cannot reach a critical point in finite time, and thus cannot reach the boundary of $]0,\frac{3\rho}{2}[$. Moreover, the solution is obviously bounded, entailing $I_m=[0,+\infty[$. For the preceding reasons, the derivative of $u(t)$ is never $0$, and thus always positive since $\dot u(0)>0$. Thus $u(t)$ increases to $\frac{3\rho}{2}$ as $t\rightarrow +\infty$. The same reasoning shows that $u(t)$ decreases to $\frac{3\rho}{2}$ as $t\rightarrow +\infty$ for $u(0)\in ]\frac{3\rho}{2}, 3\rho[$. Finally, if $u(0)=3\rho$, then $u\equiv 3\rho$. \end{proof} Furthermore, (\ref{Relation1}) yields $$c=\frac{a_{i+2}(0)}{a_i(0)}.$$ As $(1+c)a_i=a_i+a_{i+2}$ tends to $c_{\gamma}$ as time goes to infinity, Lemma \ref{lem} yields that $c_{\gamma}=3\rho/2$, and $$a_i(t) \longrightarrow \frac{3\rho}{2(1+c)}=\frac{c_{\gamma}}{1+c},$$ as $t \to \infty$. \eqref{Relation1} and \eqref{Relation2} show that $$a_{i+1}=3\rho-(1+c)a_i \longrightarrow \frac{3\rho}{2}= c_{\gamma} \text{ and } a_{i+2}=c a_i\longrightarrow \frac{c}{1+c}c_{\gamma}=c_{\gamma}-\frac{c_{\gamma}}{1+c}.$$ In summary, one obtains that an orbit defined by initial conditions of the form $$(a_{i-1}(0),a_i(0),a_{i+1}(0),a_{i+2}(0),a_{i+3}(0)) \text{ with } a_{i-1}(0)=a_{i+3}(0)=0$$ converges to the critical point $(z_1,c_\gamma,c_\gamma -z_1)$, with $z_1= \frac{c_{\gamma}}{1+c}$, $c_\gamma=\frac{3\rho}{2}$ and ${c=\frac{a_{i+2}(0)}{a_i(0)}}$. Finally, if the system starts from a symmetric initial state $a_i(0)=a_{i+2}(0)$, the constant $c$ is equal to $1$ and the system tends to $(0,\frac{3\rho}{4},\frac{3\rho}{2},\frac{3\rho}{4},0)$ as $t \to \infty$. \section{Appendix\label{Appendix}} \subsection{Proof of Proposition \ref{Invariance}} First, we easily check that the system $\dot{\vect{a}}=f(\vect{a})$ is conservative, i.e. $$\forall t \in \mathbb{R}_{\geq 0}, \; \sum_{i=1}^L a_i(t) =\sum_{i=1}^L a_i(0).$$ In the following, we use the notation $\dot{\vect{a}}$ instead of $\frac{{\rm d}\vect{a}}{{\rm d}t}$. The conservation property is equivalent to $$\sum_{i=1}^L \dot{a}_i(t) = \sum_{i=1}^L f_i(\vect{a})= 0.
$$ In fact, one can write \begin{eqnarray*} \sum_i \dot{a}_i (t)&=&D \sum_i\sum_{k \sim i} (a_k -a_i)+T \sum_i \sum_{k\sim i}a_k a_i\left(\frac{N_i-N_k}{N_k N_i}\right)\\ &=&D \sum_i(d_i a_i-d_i a_i)+T \sum_{\{i,k\}:\, k\sim i}\frac{a_k a_i}{N_k N_i}\big((N_i-N_k)+(N_k-N_i)\big)=0, \end{eqnarray*} where the second sum runs over unordered pairs of neighbours, where $N_k=\kappa+\sum_{j\sim k}a_j$, and where $d_i$ is the degree of $i$ (that is, the number of neighbours of $i$). Next, system (\ref{eq:jonsson}) can be written as \begin{equation}\label{eq:jonssonb} \dot a_i=D\sum_{k\sim i}a_k+T\sum_{k\sim i}\left(\frac{a_k}{\kappa+\sum_{j\sim k}a_j}-\frac{a_k}{\kappa+\sum_{j\sim i}a_j}-\frac{D}{T}\right)a_i. \end{equation} Let $\vect{a}$ be a solution of (\ref{eq:jonssonb}) with $\vect{a}(0)\in\mathbb{R}^L_{\geq 0}$. We say that a function $\psi:\mathbb{R}_+\rightarrow \mathbb{R}$ is instantaneously positive (i.p.) if there exists $\delta>0$ such that $\psi$ is strictly positive over $(0,\delta)$. If $\psi(0)>0$ and $\psi$ is continuous to the right at $0$, then $\psi$ is i.p. It is also clear that if $\psi$ admits a strictly positive right-hand derivative at $0$, then it is i.p. Let $U$ be the open set $U=\{\vect{x}=(x_1,x_2,...,x_L)\in\mathbb{R}^L;-\frac{\kappa}{2L}<x_i\}$. Since the right-hand member of (\ref{eq:jonssonb}) is continuous over $U$, the general theory of o.d.e.'s provides the existence of a solution defined over a maximal interval $0\in J^+\subset \mathbb{R}_+$ for any initial condition $\vect{ a}(0)\in U$. Moreover, the solution is unique because the right-hand member of (\ref{eq:jonssonb}) is locally Lipschitzian. Set for convenience \begin{eqnarray*} h_i(t)&=& D\sum_{k\sim i}a_k(t)\quad \hbox{ and } \\ g_i(t)&=&T\sum_{k\sim i}\left(\frac{a_k(t)}{\kappa+\sum_{j\sim k}a_j(t)}-\frac{a_k(t)}{\kappa+\sum_{j\sim i}a_j(t)}-\frac{D}{T}\right). \end{eqnarray*} The variation of constants formula allows us to write, $\forall t\in J^+$, \begin{equation}\label{F} a_i(t)=a_i(0)e^{\int_0^tg_i(s)ds}+\int_0^t h_i(u)e^{\int_u^tg_i(v)dv}du. \end{equation} Since $a_i(0)\geq 0$, the first term in (\ref{F}) is non-negative. Moreover, if $a_k(t)$ is i.p. for some $k\sim i$, then according to (\ref{F}), the same property holds for $a_i(t)$. In particular, if $a_k(0)>0$ for some $k\sim i$, then by continuity $a_k(t)$ is i.p. and thus also $a_i(t)$. \noindent \textbf{The case $D>0$:} Clearly, if $\vect{ a}(0)= \vect{0}$, then the unique solution is identically $0$. Otherwise, there exists $1\leq i_0\leq L$ with $a_{i_0}(0)>0$, and $a_j(t)$ is i.p. for all $j\sim i_0$. Since the graph is supposed to be connected, the i.p. property propagates along edges, so that $a_i(t)$ is i.p. for all $i$, $1\leq i\leq L$. The preceding arguments show that for any initial condition $\vect{ a}(0)\in \mathbb{R}_{\geq 0}^L\subset U$, all components of the solution of (\ref{eq:jonssonb}) are i.p. Let us suppose that one of them takes the value $0$ in $J^+\backslash\{0\}$. Since all components are continuous and their number is finite, there exists a first time $t_0>0$ for which at least one component $a_{i_0}(t_0)=0$ and all of them are strictly positive over $(0,t_0)$. According to (\ref{F}), we have $$a_{i_0}(t_0)=0=a_{i_0}(0)e^{\int_0^{t_0}g_{i_0}(s)ds}+\int_0^{t_0}h_{i_0}(u)e^{\int_u^{t_0}g_{i_0}(v)dv}du.$$ Clearly $h_{i_0}(t)>0$ over $J^+\backslash\{0\}$ and, since the first term is non-negative, we conclude that $a_{i_0}(t_0)>0$, a contradiction. Therefore all $a_i(t)$ are strictly positive over $J^+\backslash \{0\}$.
\noindent \textbf{The case $D=0$:} If $a_i(0)=0$, the homogeneous equation for $a_i(t)$ admits only the zero solution, and we remove the related $i$th component from (\ref{eq:jonssonb}). Otherwise $a_i(0)>0$ and, by continuity, $a_i(t)$ is i.p. In that case $a_i(t)=a_i(0)e^{\int_0^tg_i(s)ds}>0$ over $J^+$.\\ \\ \\ In both cases, the solution of (\ref{eq:jonssonb}) has strictly positive components over $J^+$. We also proved that $\forall t \in J^+$ we have: $$\sum_{1\leq i\leq L}a_i(t)=\sum_{1\leq i\leq L}a_i(0).$$ As a consequence, the solution of (\ref{eq:jonssonb}) is bounded and thus the unique solution of our problem is defined over $J^+=[0,+\infty)$. \subsection{Proof of Proposition \ref{StabilityGeneral}} We first give the Jacobian, for general $\vect{a}$. We have \begin{equation}\label{Jacobian1} \frac{\partial f_i(\vect{a})}{\partial a_j} =\frac{a_i}{N_j}-\frac{a_i}{N_i}+\sum_{k\sim i}\frac{a_i a_k}{N_i^2}-\sum_{k\sim i, k\sim j}a_k \frac{a_i}{N_k^2}, \end{equation} (where the last term is due to the triangles in the graph) when $j\sim i$, that is, $i$ and $j$ are nearest neighbours. When $i=j$, one gets \begin{equation}\label{Jacobian2} \frac{\partial f_i(\vect{a})}{\partial a_i} =\sum_{k\sim i}\frac{a_k}{N_k}-a_i\sum_{k\sim i}\frac{a_k}{N_k^2} -\frac{\sum_{k\sim i}a_k}{N_i}. \end{equation} The remaining non-vanishing partial derivatives correspond to nodes $j$ located at distance 2 of $i$ in the graph, that is, to nodes $j$ such that $j\sim k$ for some $k\sim i$, $j\ne i$ but $i\not \sim j$. Then \begin{equation}\label{Jacobian3} \frac{\partial f_i(\vect{a})}{\partial a_j} =-\sum_{j\sim k,\ k\sim i}\frac{a_i a_k}{N_k^2}. \end{equation} When $N_i = N$, $\forall i$, these expressions simplify, using $\sum_{k\sim i}a_k=N-\kappa$, as follows. If $j\sim i$, $$\frac{\partial f_i(\vect{a})}{\partial a_j} =\sum_{k\sim i}\frac{a_i a_k}{N^2}-\sum_{k\sim i, k\sim j}\frac{a_i a_k}{N^2}=\frac{N-\kappa}{N^2}a_i-\frac{a_i}{N^2}\sum_{k\sim i, k\sim j}a_k. $$ On the diagonal, $$\frac{\partial f_i(\vect{a})}{\partial a_i}=-\frac{N-\kappa}{N^2}a_i,$$ and $$\frac{\partial f_i(\vect{a})}{\partial a_j} =-\sum_{k\sim i, k\sim j}\frac{a_i a_k}{N^2}=-\frac{a_i}{N^2}\sum_{k\sim i, k\sim j}a_k, $$ if $j\sim k$ for some $k\sim i$, $j\ne i$ but $i\not \sim j$.\\ Consider the sub-matrix $L$ given by $L=(\partial f_i(\vect{a})/\partial a_j)_{j\sim i}$. Let $d(\vect{a})$ be the diagonal matrix of diagonal given by $\vect{a}$. The perturbation associated with the triangles contained in the graph is represented by the term $-\frac{a_i}{N^2}\sum_{k\sim i, k\sim j}a_k$ in $\frac{\partial f_i(\vect{a})}{\partial a_j}$ for $j\sim i$, and the related matrix is given by \begin{eqnarray*} \left(-\frac{a_i}{N^2}\sum_{k\sim i, k\sim j}a_k\right)\gamma_{ij} &=& \left(-\frac{a_i}{N^2}\sum_{k}\gamma_{ik}a_k\gamma_{kj}\right)\gamma_{ij}\\ &=&\left( -\frac{1}{N^2}\left(d(a)\Gamma d(a)\Gamma-{\rm diag}( d(a)\Gamma d(a)\Gamma)\right)_{ij}\right)\gamma_{ij}\\ &=&\left( -\frac{1}{N^2}(d(a)\Gamma d(a)\Gamma)_{ij}+\frac{N-\kappa}{N^2}d(a)_{ij}\right)\gamma_{ij}.\\ \end{eqnarray*} The matrix $L$ is now given by $$L=\frac{d(a)}{N^2}(N-\kappa)(\Gamma-id)- \frac{1}{N^2}(d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a))\circ\Gamma ,$$ where $\circ$ represents the Hadamard product, i.e.
the componentwise multiplication.\\ Likewise, the perturbation of $L$ by $\left(\frac{\partial f_i(\vect{a})}{\partial a_j}\right)_{i\sim k, k\sim j, i\not \sim j, i\neq j}$ can be written as \begin{eqnarray*} \left(-\frac{a_i}{N^2}\sum_{\substack{k\sim i, k\sim j, \\i\not \sim j, i\neq j}} a_k \right)\gamma_{ij}&=& \left(-\frac{a_i}{N^2}\sum_{k}\gamma_{ik}a_k\gamma_{kj}\right)(1-\gamma_{ij}-id_{ij})\\ &=& \left( -\frac{1}{N^2}(d(a)\Gamma d(a)\Gamma)_{ij}+\frac{N-\kappa}{N^2}d(a)_{ij}\right)(1-\gamma_{ij}-id_{ij}).\\ \end{eqnarray*} The related Jacobian is thus given by $L+\left(\frac{\partial f_i(\vect{a})}{\partial a_j}\right)_{i\sim k, k\sim j, i\not \sim j, i\neq j}$, that is \begin{eqnarray*} df(a)&=&\frac{d(a)}{N^2}(N-\kappa)(\Gamma-id)- \frac{1}{N^2}(d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a)) \circ \Gamma\\ & &- \frac{1}{N^2}(d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a)) \circ(\bold{1}-\Gamma-id)\\ &=& \frac{d(a)}{N^2}(N-\kappa)(\Gamma-id)- \frac{1}{N^2}(d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a)) \circ (\bold{1}-id)\\ &=& \frac{d(a)}{N^2}(N-\kappa)(\Gamma-id)- \frac{1}{N^2}(d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a)),\\ \end{eqnarray*} where $\bold{1}$ is the matrix composed only of ones. The last equality is a consequence of the fact that the diagonal of $d(a)\Gamma d(a) \Gamma-(N-\kappa)d(a)$ vanishes. Hence, $$df(a)=\frac{d(a)\Gamma}{N^2}((N-\kappa)id-d(a)\Gamma ) =\frac{d(a)\Gamma}{N^2}( c\ id-d(a)\Gamma ), $$ proving the result. \subsection{Proof of Proposition \ref{StabilityBoundary}} Set $I=\{i \in \Lambda : a_i=0\}$, and consider the sub-graphs $\gamma_p$ of $G$ induced by the nodes of $J=\Lambda\setminus I$, with $\gamma_p=(\Lambda_p,E_p)$, $1 \leq p \leq P$. The related critical points $\vect{a}$ are such that the restrictions $\vect{a}\vert_{\gamma_p}$ satisfy the linear systems $\Gamma_{\gamma_p}\vect{a}\vert_{\gamma_p}=c_{\gamma_p} \vect{1}\vert_{\gamma_p}$. Set $N_{\gamma_p}=c_{\gamma_p}+ \kappa$. \noindent \eqref{Jacobian1}--\eqref{Jacobian3} allow us to compute the entries of the Jacobian matrix, by first looking at the diagonal entries. When $i\in \Lambda_p$, one has $$ \frac{\partial f_i(\vect{a})}{\partial a_i} = - a_i \frac{N_{\gamma_p}-\kappa}{N_{\gamma_p}^2}, $$ providing the diagonal entry of the Jacobian of $f\vert_{\gamma_p}(\vect{a}\vert_{\gamma_p})$. When $i\not\in \Lambda_p$, a similar computation yields $$ \frac{\partial f_i(\vect{a})}{\partial a_i} = \sum_{k\sim i} \frac{a_k}{N_k} - \frac{N_i-\kappa}{N_i}. $$ We then compute the entries $(i,j)$ for $j \sim i$: $$ \frac{\partial f_i(\vect{a})}{\partial a_j} =a_i \frac{N_{\gamma_p} - \kappa}{N_{\gamma_p}^2}-\sum_{k\sim i, k\sim j} a_k \frac{a_i}{N_k^2} = a_i \frac{N_{\gamma_p} - \kappa}{N_{\gamma_p}^2}-\sum_{k\sim i, k\sim j, k \in \Lambda_p} \frac{a_k a_i}{N_{\gamma_p}^2} , $$ for $i,j \in \Lambda_p$ and $1\leq p \leq P$, which corresponds to the $(i,j)$ entry of the Jacobian of $f\vert_{\gamma_p}(\vect{a}\vert_{\gamma_p})$. Likewise, $$ \frac{\partial f_i(\vect{a})}{\partial a_j} = \frac{a_i}{N_j}- \frac{a_i}{N_{\gamma_p}} + \sum_{k\sim i} \frac{a_i a_k}{N_{\gamma_p}^2}-\sum_{\substack{k\sim i, k\sim j, \\ k \in \Lambda_p}} \frac{a_k a_i}{N_{\gamma_p}^2} = \frac{a_i}{N_j} - a_i \frac{\kappa}{N_{\gamma_p}^2} - \frac{a_i}{N_{\gamma_p}^2} \sum_{\substack{k\sim i, k\sim j,\\ k \in \Lambda_p}} a_k , $$ when $i \in \Lambda_p$ for some $p$ and $j \not \in \Lambda_p$. Finally, $$ \frac{\partial f_i(\vect{a})}{\partial a_j}=0, $$ when $i,j \not \in \cup_p\Lambda_p$, or equivalently when both $i$ and $j$ belong to $I$.
We next consider $(i,j)$ entries where $j$ is at a distance 2 of $i$ in the graph $G$, that is, when $j$ is such that $j\sim k$ for some $k\sim i$, $j\neq i$ and $j \not\sim i$. One obtains that $$ \frac{\partial f_i(\vect{a})}{\partial a_j}=-a_i \sum_{j \sim k, k \sim i} \frac{a_k}{N_{\gamma_p}^2}, $$ when $i,j,k \in \Lambda_p$, which is the $(i,j)$ entry of the Jacobian of $f\vert_{\gamma_p}(\vect{a}\vert_{\gamma_p})$. Likewise, $$ \frac{\partial f_i(\vect{a})}{\partial a_j}=-a_i \sum_{j \sim k, k \sim i} \frac{a_k}{N_{\gamma_p}^2} , $$ when $i,k \in \Lambda_p$ and $j\not \in \Lambda_p$ (so that $j \in I$). Finally, $$ \frac{\partial f_i(\vect{a})}{\partial a_j}=0 $$ when $i$ or $k \not \in \Lambda_p$, for all $j \in \Lambda$. Permuting the indices conveniently, the Jacobian ${\rm d}f(\vect{a})$ can be written as \begin{equation} {\rm d}f(\vect{a}) = \begin{pmatrix} d_n & \vect{0} \\ \ast & {\rm d}f^{\gamma} \end{pmatrix} \end{equation} where $d_n$ is an $n \times n$ diagonal matrix, $n=|I|$, with entries given by $\lambda_i:= \sum_{k\sim i} \frac{a_k}{N_k} - \frac{N_i-\kappa}{N_i}$, for $i \in I$, and ${\rm d}f^{\gamma}$ is a block diagonal matrix, each block being equal to the Jacobian of $f$ restricted to a sub-graph $\gamma_p$. The permutation allows us to group all indices $i \in I$ in the same block, while all indices related to the sub-graphs $\gamma_p$ are also arranged together. It follows that the eigenvalues of ${\rm d}f(\vect{a})$ are given by the diagonal entries $(\lambda_i)_{i\in I}$, and by the eigenvalues of the Jacobian matrices ${\rm d}f\vert_{\gamma_p}(\vect{a}\vert_{\gamma_p})$, $p=1,\cdots,P$. {\bf Acknowledgements} This work was supported by the University of Fribourg, and by the SystemsX "Plant growth in changing environments" project funding. Many thanks to D. Kierzkowski and C. Kuhlemeier for providing us with the picture given in Figure \ref{phyllo}, and to Ale\v s Janka for his help with the Matlab programming. We are very grateful to Patrick Favre and Didier Reinhardt for giving us the opportunity to learn about the current state of knowledge on the role of auxin flux in plant patterning.
\section{Introduction} The study of the scan statistics dates back\footnote{Naus himself cites even earlier work in the 1940's by \citet{silberstein1945xliii}, \citet{berg1945xliv}, and \citet{mack1948xc}.} to \citet{naus1965distribution}, who derived the probability that an interval of a certain length contains a certain fraction of independent and identically distributed (iid) samples from the uniform distribution on $[0,1]$. Specifically, let $U_1, \dots, U_n$ be iid random variables from Unif$(0,1)$ with empirical cumulative distribution function (CDF) denoted by $F_n$, and let $h$ be the length of the underlying interval of interest. \citet{naus1965distribution} studied the distribution of \begin{equation}\label{scan0} \sup_{0 \le a \le 1} F_n(a+h) - F_n(a). \end{equation} The scan statistic of the uniform empirical distribution can be used to detect an elevated signal relative to any continuous null distribution, after an appropriate inverse CDF transformation. Knowing the distribution of \eqref{scan0} is essential to calibrating the scan statistic in the context of detecting, in a uniform background, the presence of an interval of a certain length with an unusually high density of points. This is considered today a quintessential detection problem, with applications in the detection of disease clusters \cite{besag1991detection} and syndromic surveillance \cite{heffernan2004ssp}, among many others \cite{glaz2001scan,glaz2009scan,glaz2012scan, handbook}. In practice, even in the simplest case where only a single anomalous interval may be present, the length of that interval is almost always unknown. In that case, it is natural to consider intervals of various lengths, but standardize the counts, leading to \begin{equation}\label{scan} \sup_{0 \le a \le 1} \sup_{h_- \le h \le h_+} \frac{\sqrt{n}(F_n(a+h) - F_n(a) - h)}{\sqrt{h (1-h)}}. \end{equation} This can be seen to approximate the likelihood ratio test \cite{kulldorff1997spatial}. The parameters $h_-$ and $h_+$ limit the search to intervals that are neither too short nor too long. The main goal of this paper is to derive the asymptotic (as $n \to \infty$) distribution of \eqref{scan} along with its studentized counterpart \begin{equation}\label{studentizedscan} \sup_{0 \le a \le 1} \sup_{h_- \le F_n(a+h) - F_n(a) \le h_+} \frac{\sqrt{n}(F_n(a+h) - F_n(a) - h)}{\sqrt{(F_n(a+h) - F_n(a) ) (1- F_n(a+h) + F_n(a) )}}. \end{equation} \begin{rem} From the four theorems in Section \ref{sec:main}, one sees that relatively small scales $h$ dominate in \eqref{scan} and \eqref{studentizedscan}. Some works apply scale corrections to the scan \cite{dumbgen2001multiscale, sharpnack2016exact, konig2018multidimensional}; under such scale corrections, no single scale dominates. \end{rem} \subsection{Related work: point processes} In one of the most celebrated results in what is now the empirical process literature, \citet{kolmogorov1933sulla} derived the limiting distribution of $\sqrt{n} \, \sup_{0 \le a \le 1} (F_n(a) - a)$. This is the Kolmogorov-Smirnov statistic, and it can be seen as scanning over intervals of the form $[0, a]$, $0 \le a \le 1$.
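The reduction to the uniform null mentioned above is the standard probability integral transform; we record it here only for convenience, as a short illustration that is not needed in the sequel. If $X_1, \dots, X_n$ are iid from a continuous CDF $F$, with empirical CDF $\hat F_n$, then $U_i := F(X_i)$ are iid from Unif$(0,1)$ and, almost surely, for any $s < t$,
\begin{equation*}
\hat F_n(t) - \hat F_n(s) = \frac{1}{n} \sum_{i=1}^n \mathbbm{1}\{s < X_i \le t\} = F_n(F(t)) - F_n(F(s)),
\end{equation*}
so that scanning the interval $[s, t]$ on the original scale amounts to scanning $[F(s), F(t)]$ on the uniform scale, with $h = F(t) - F(s)$ playing the role of the interval length in \eqref{scan0}.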
For reasons similar to those that motivated the introduction of the normalized scan statistic \eqref{scan} as an improvement over the unnormalized one \eqref{scan0}, \citet{anderson1952asymptotic} introduced and studied normalized variants of the Kolmogorov-Smirnov statistic, some of them of the form $\sqrt{n} \sup_a (F_n(a) - a) \sqrt{\psi(a)}$, where $\psi$ is a given weight function. The choice $\psi(a) = [a (1-a)]^{-1}$ is particularly compelling, leading to the statistic \begin{equation}\label{AD} \sup_{0 \le a \le 1} \frac{\sqrt{n} (F_n(a) - a)}{\sqrt{a(1-a)}}. \end{equation} \citet{eicker1979asymptotic} and \citet{jaeschke1979asymptotic} obtained the limiting distributions of this statistic, its variants of the form \begin{equation}\label{vn} V_n = \sup_{\epsilon_n \le a \le \delta_n}\frac{\sqrt{n} (F_n(a) - a)}{\sqrt{a(1 - a)}}, \end{equation} and its Studentized counterpart \begin{equation}\label{hvn} \hat V_n = \sup_{\epsilon_n \le a \le \delta_n}\frac{\sqrt{n} (F_n(a) - a)}{\sqrt{F_n(a)(1 - F_n(a))}}, \end{equation} for some given $0 \le \epsilon_n \le \delta_n \le 1$. We note that these statistics can be directly expressed in terms of the order statistics, $U_{(1)} \le \cdots \le U_{(n)}$; when $\epsilon_n = 0$ and $\delta_n = 1$, they take the forms \begin{equation}\label{hc} \max_{1 \le i \le n}\frac{i - nU_{(i)}}{\sqrt{nU_{(i)}(1 - U_{(i)})}}, \end{equation} and \begin{equation}\label{hcstu} \max_{1 \le i < n}\frac{i - nU_{(i)}}{\sqrt{i(1 - \frac{i}{n})}}, \end{equation} respectively. \citet{berk1979goodness} proposed to look at each order statistic individually, combining the resulting tests using Tippett's method, leading to \begin{equation} \min_{1 \le i \le n} B(U_{(i)}; i, n - i + 1), \end{equation} with $B(\cdot; a, b)$ denoting the distribution function of the Beta$(a, b)$ distribution. \citet{moscovich2016exact} and \citet{gontscharuk2017asymptotics} derived the asymptotic distribution of this statistic. Other goodness-of-fit tests include the reversed Berk-Jones statistic \cite{jager2004new} and Phi-divergence tests \cite{jager2007goodness}, etc. We note that two-sided versions of the above-mentioned tests have also been considered and studied. \subsection{Related work: signals} Closely related to the work above is the setting where, instead of observing a point cloud, one observes a signal. The simplest situation is that of a one-dimensional signal defined on a regular lattice, that is, of the form $X_1, \dots, X_n$. The null situation is when these are iid from some underlying distribution on the real line, for example, the standard normal distribution. By writing \begin{equation} R_n = \max_{1 \le i \le n - k} \{S_{i + k} - S_i\},~~~k = \lfloor c \log n\rfloor,~~ c > 0, \end{equation} \citet{erdos1970new} investigated the strong limit of $R_n/(\alpha k)$ when $X_1$ has a finite moment generating function in a neighborhood of zero. \citet{deheuvels1986exact} studied the $\limsup$ and $\liminf$ of $(R_n - \alpha k)/\log k$.
When the goal is to detect an interval where the observations are unusually large, and the length of the (discrete) interval is unknown, it becomes of interest to study the following scan statistic \begin{equation}\label{maxincre} Z_n = \max_{1 \le i < j \le n} \frac{S_j - S_i}{\sqrt{j - i}}, \end{equation} where $S_k = \sum_{i = 1}^k X_i$. The study of such statistics dates back to the work of \citet{darling1956limit}, who derived the limiting distribution of \begin{equation} \max_{1 \le j \le n} \frac{S_j}{\sqrt{j}}, \end{equation} which can be seen as scanning intervals of the form $\{1, \dots, j\}$. \citet{siegmund1995using} provided the limiting distribution of the statistic \eqref{maxincre} under the assumption that the $X_i$'s are iid normal. This study was extended by \citet{mikosch2010limit} to the case where the underlying distribution is heavy-tailed, and by \citet{kabluchko2014limiting} to the case where the underlying distribution has a finite moment generating function in a neighborhood of the origin. \citet{kabluchko2011extremes} generalized the result to the multivariate setting where the variables are indexed by a multi-dimensional lattice; see also \cite{sharpnack2016exact, konig2018multidimensional}. \citet{proksch2018multiscale} studied more general scanning procedures motivated within the framework of inverse problems. There is a parallel literature for continuous processes, where one observes instead $X_t, t \in [0,1]$ (in dimension 1). See, for example, \citet{aldous2013probability, qualls1973asymptotic} and \citet{chan2006maxima}. \subsection{Related work: Lipschitz-1/2 modulus of the uniform empirical process} The results of \citet{mason1983strong} on the Lipschitz-1/2 modulus of the uniform empirical process, defined by \begin{equation}\label{eq:osci} \sup_{t \le h \le 1}\, \sup_{0 \le a \le 1 - h} \frac{\sqrt{n}|F_n(a+h) - F_n(a) - h|}{\sqrt{h}}, \end{equation} are most closely related to the present results. They proved strong limit theorems for \eqref{eq:osci} with $t = t_n \to 0$ at various rates. We refer to \citet[Chapter 14.2]{shorack2009empirical} for a review. \subsection{Content} The rest of the paper is organized as follows. We state our main results in \secref{main}, where we provide asymptotic distributions of some scan statistics and their variants. The proofs are provided in \secref{proof}. \section{Main results} \label{sec:main} Recall that $U_1,\ldots,U_n$ are iid from the uniform distribution on $[0, 1]$, and that $U_{(1)} \leq \cdots \leq U_{(n)}$ denote the order statistics. (Whenever needed, we write $U_{(0)} \equiv 0$ and $U_{(n + 1)} \equiv 1$.) \subsection{Studentized scan statistics} We derive the asymptotics for \eqref{studentizedscan} before those for \eqref{scan}, as this is more convenient for the proofs. As we did earlier, we may rewrite \eqref{studentizedscan} directly in terms of the order statistics, in the form of \begin{equation}\label{mnpluskl} M_n^+(k, l) = \max_{0 \le i < j \le n:\, k \le j - i < l} M_{i, j}, \end{equation} where \begin{equation}\label{normalizedorder} M_{i, j} = \frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{(j - i)(1 - \frac{j - i}{n})}}. \end{equation} We will be particularly interested in the following special case \begin{equation}\label{stats:mnplus} M_n^+ := M_n^+(1, n), \end{equation} which is the analog of \eqref{hcstu}.
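To make the rewriting explicit, here is a brief verification; it uses only the fact that $F_n$ is piecewise constant, so that, for a fixed number of captured observations, the supremum in \eqref{studentizedscan} is approached by shrinking the window. For a window with $a = U_{(i)}$ and $a + h = U_{(j)}$, we have $F_n(a+h) - F_n(a) = (j - i)/n$, and therefore
\begin{equation*}
\frac{\sqrt{n}\big(F_n(a+h) - F_n(a) - h\big)}{\sqrt{(F_n(a+h) - F_n(a))(1 - F_n(a+h) + F_n(a))}} = \frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{(j - i)\big(1 - \frac{j - i}{n}\big)}} = M_{i, j}.
\end{equation*}
The constraint $h_- \le F_n(a+h) - F_n(a) \le h_+$ then corresponds to restricting $j - i$ in \eqref{mnpluskl} through the parameters $(k, l)$.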
Not surprisingly, the limiting distribution is an extreme value distribution, specifically, a Gumbel distribution. Indeed, we have the following. \begin{thm}\label{thm:mnplus} For any $\tau \in \mathbb{R}$, \begin{equation}\label{result:mnplus} \lim_{n \to \infty}\P \bigg\{ M_n^+ \le \sqrt{2 \log n} - \frac{3\log\log n}{2\sqrt{2 \log n}} + \frac{\tau}{\sqrt{2 \log n}} \bigg\} = \exp\big(-c\, \exp(-\tau)\big), \end{equation} where $c = \tfrac{8}{9\sqrt{\pi}}$. \end{thm} Similarly, define the opposite one-sided statistics \begin{equation} M_n^-(k, l) = -\min_{0 \le i < j \le n:\, k \le j - i < l} M_{i, j}, \end{equation} and \begin{equation}\label{stats:mnminus} M_n^- := M_n^-(1, n). \end{equation} Finally, define the two-sided statistics \begin{equation} M_n(k, l) = \max\{M_n^+(k, l), M_n^-(k, l)\} = \max_{0 \le i < j \le n:\, k \le j - i < l} |M_{i, j}|, \end{equation} and \begin{equation}\label{stats:mn} M_n := M_n(1,n) = \max\{M_n^+, M_n^-\}. \end{equation} For these statistics too, the limiting distribution is a Gumbel distribution, but what is surprising here is that these statistics do not behave the same way as $M_n^+$. In particular, $M_n^- = (1 + o_P(1)) \log n$, and therefore dominates $M_n^+$ in the large-sample limit, implying that $M_n = M_n^-$ with probability tending to~1. Indeed, we have the following. \begin{thm} \label{thm:mnminus} For any $\tau \in \mathbb{R}$, \begin{equation} \label{result:mnminus} \lim_{n \to \infty}\P\big\{ M_n^- \le \log n + \tau \big\} = \exp(-\exp(1 - \tau)). \end{equation} Moreover, \begin{equation} \label{result:mn} \lim_{n \to \infty} \P\big\{ M_n = M_n^- \big\} = 1. \end{equation} \end{thm} \subsection{Standardized scan statistics} We also examine the large-sample behavior of the standardized scan statistic \eqref{scan}. Proceeding in the same way as when rewriting \eqref{studentizedscan} above, define \begin{equation} \tilde M_n^+(k, l) := \max_{0 \le i < j \le n: k \le j - i \le l}\tilde M_{i, j}, \end{equation} where \begin{equation} \tilde M_{i, j} := \frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{n(U_{(j)} - U_{(i)})(1 - U_{(j)} + U_{(i)})}}. \end{equation} Note that \begin{equation}\label{stats:tmnplus} \tilde M_n^+ := \tilde M_n^+(1, n), \end{equation} is the analog of \eqref{hc}. The behavior of $\tilde M_n^+$ turns out to be very different from that of its studentized analog $M_n^+$. However, we recover a similar behavior if we appropriately bound the length of the scanning interval from below. \begin{thm}\label{thm:tmnplus} For any $\tau > 0$, \begin{equation}\label{eq:tmnplusall} \lim_{n \to \infty}\P \bigg\{ \tilde M_n^+ \le \sqrt{\frac{n}{\tau}} \bigg\} = \exp(-\tau). \end{equation} Moreover, for any $A > 0$, defining $k_n = \lceil A(\log n)^3 \rceil$, \begin{equation}\label{eq:tmnpluscubelocal} \lim_{n \to \infty}\P\bigg\{ \tilde M_n^+(k_n, n) \le \sqrt{2 \log n} - \frac{3\log\log n}{2\sqrt{2 \log n}} + \frac{\tau}{\sqrt{2 \log n}} \bigg\} = \exp(- c_A\, \exp(-\tau)) , \end{equation} where $c_A = \int_A^\infty \Lambda_1(a)da$ with $\Lambda_1(a) = \frac{1}{2\sqrt{\pi}a^2} \exp\big(\frac{\sqrt{2}}{3\sqrt{a}}\big)$.
\end{thm} \begin{rem} Here we choose $k_n \propto (\log n)^3$ because we want to examine the behavior of $\tilde M_n^+(k, l)$, compared to its counterpart $M_n^+(k, l)$, over the range of scales that contributes most to the maximum, as reflected in the proof of Theorem \ref{thm:mnplus}. For readers who are curious about other choices of $k_n$, we note that $\tilde M_{i, j}$ exhibits sub-Gaussian behavior, termed ``sublogarithmic'' in \cite{kabluchko2014limiting}. Roughly speaking, $\tilde M_n^+(k_n, n)$ is likely to attain its maximum at indices $i$, $j$ spanning a short interval, that is, with $j - i$ close to $k_n$. \end{rem} Define the standardized analog of \eqref{stats:mnminus} \begin{equation} \tilde M_n^-(k, l) = -\min_{0 \le i < j \le n:\, k \le j - i \le l}\tilde M_{i, j}, \end{equation} with \begin{equation}\label{stats:tmnminus} \tilde M_n^- := \tilde M_n^-(1, n), \end{equation} as well as the analog of \eqref{stats:mn} \begin{equation} \tilde M_n(k, l) = \max\{\tilde M_n^+(k, l), \tilde M_n^-(k, l)\}, \end{equation} with \begin{equation}\label{stats:tmn} \tilde M_n := \tilde M_n(1, n) = \max\{\tilde M_n^+, \tilde M_n^-\}. \end{equation} \begin{thm}\label{thm:tmnminus} We have \begin{equation} \lim_{n \to \infty} \P\big( \tilde M_n = \tilde M_n^+ \big) = 1. \end{equation} Thus for any $\tau > 0$, \begin{equation}\label{eq:tmnsall} \lim_{n \to \infty}\P \bigg( \tilde M_n \le \sqrt{\frac{n}{\tau}} \bigg) = \exp(-\tau). \end{equation} \end{thm} \begin{rem} While the behavior of the Studentized statistic $M_n^-$ is driven by the smallest intervals, this is much less the case for the standardized statistic $\tilde M_n^-$. Indeed, a large value of $M_n^-$ comes from some $n(U_{(j)} - U_{(i)})$ being large compared to $j - i$; however, $n(U_{(j)} - U_{(i)})$ being in the denominator defining $\tilde M_n^-$, its impact is lessened. This explains why, in contrast with Theorem \ref{thm:mnminus}, the two-sided statistic $\tilde M_n$ is dominated by $\tilde M_n^+$. \end{rem} \section{Proofs of Main Results}\label{sec:proof} Our proof arguments are based on standard moderate and large deviation results, Kolmogorov's theorem, a Poisson approximation \cite{arratia1989two}, as well as some technical results developed by \citet{kabluchko2014limiting} in their study of the limiting distribution of the scan statistic in the form of \eqref{maxincre}. \subsection{Preliminaries} Throughout the paper, we assume that $\{X_k, k \in \mathbb{Z}\}$ are iid with density \begin{equation} f(x) = \mathbbm{1}(x \leq 1)\exp(x-1), \end{equation} noting that $-X_1 + 1$ follows the standard exponential distribution. This distribution has zero mean and unit variance. Define the two-sided partial sums, \begin{equation} S_k^+ = \sum_{i=1}^k X_i, ~~~~ S_0^+= 0, ~~~~S_{-k}^+ = -\sum_{i=1}^k X_{-i}, ~~k \in \mathbb{N} \end{equation} and \begin{equation} S_k^- := - S_k^+. \end{equation} They will play a central role in what follows. Define the normalized increments \begin{equation} Z_{i, j}^\pm = \frac{S_j^\pm - S_i^\pm}{\sqrt{j - i}}, \end{equation} \begin{equation} Z_n^\pm(k, l) := \max_{1 \le i < j \le n: k \le j - i \le l} Z^\pm_{i, j}, \qquad Z_n^\pm := Z_n^\pm(1, n). \end{equation} Let $\varphi^\pm(t)$ be the cumulant generating functions of $\pm X_1$, respectively.
We have \begin{equation} \varphi^+(t) = t - \log(1 + t), \quad \text{if } t \ge 0, \end{equation} and \begin{equation} \varphi^-(t) = \begin{cases} -t - \log(1 - t), \quad &\text{if }0 \le t < 1,\\ \infty, &\text{if }t \ge 1. \end{cases} \end{equation} Also, define $I^+(s)$ and $I^-(s)$ as the respective Legendre-Fenchel transforms (a.k.a. rate functions). We have \begin{equation} I^+(s) = \begin{cases} -s - \log(1 - s), &\text{if } 0 \le s < 1,\\ \infty, &\text{if } s \ge 1, \end{cases} \end{equation} and \begin{equation} I^-(s) = s - \log(1 + s), \end{equation} with respective Taylor expansions at $0$ (as $s \to 0$) \begin{align*} I^+(s) &= s^2/2 + s^3/3+ o(s^3),\\ I^-(s) &= s^2/2 - s^3/3+ o(s^3). \end{align*} We also prepare several useful lemmas. The first two lemmas are well-known moderate and large deviations results \cite{cramer1938sommes, bahadur1960deviations}. \begin{lem}\label{lem:moderatedev} Let $(x_k)$ be a sequence satisfying $x_k \to \infty$ and $x_k = o(\sqrt{k})$ as $k \to \infty$. Then, as $k \to \infty$, \begin{equation} \P\bigg( \frac{S_k^\pm}{\sqrt{k}} \ge x_k \bigg) \sim \frac{1}{\sqrt{2\pi}x_k} \exp\bigg\{-k I^\pm \bigg(\frac{x_k}{\sqrt{k}} \bigg) \bigg\}. \end{equation} \end{lem} \begin{lem} For every $k \in \mathbb{N}$ and $x > 0$, we have \begin{equation}\label{eq:rescumulantineq} \P\bigg( \frac{S_k^\pm}{\sqrt{k}} \ge x \bigg) \le \exp\bigg\{-k I^\pm \bigg( \frac{x}{\sqrt{k}} \bigg) \bigg\} . \end{equation} Moreover, for every $A \le s_\infty$, where $s_\infty = \sup\{s \in \mathbb{R}: \P(X_1 \le s) < 1\}$, there is $C_A > 0$ such that, for all $k \in \mathbb{N}$ and $x \in (0, A\sqrt{k})$, \begin{equation}\label{eq:cumulantineq} \P\bigg( \frac{S_k^\pm}{\sqrt{k}} \ge x \bigg) \le \frac{C_A}{x} \exp\bigg\{-k I^\pm \bigg( \frac{x}{\sqrt{k}} \bigg) \bigg\}. \end{equation} \end{lem} The following result is obtained from a simple application of Theorem 2.4 in \cite{petrov1995limit}, which provides an upper bound on the tail distribution of $\max_{1 \le k \le n}S_k^\pm$ in terms of that of $S_n^\pm$. \begin{lem}\label{lem:petrov} We have \begin{equation} \P\bigg\{\max_{1 \le k \le n}S_k^\pm \ge x\bigg\} \le 2 \P\Big\{ S_n^\pm \ge x - \sqrt{2(n - 1)} \Big\}. \end{equation} \end{lem} For completeness, we include Lemmas 4.4 and 4.5 from \citep{kabluchko2014limiting} below. For integers $r > 0$ and $x < y$, define \begin{equation}\label{T} \mathbb{T}_r(x, y) := \big\{(i, j) \in \mathbb{I}: x - r \le i \le x\text{ and }y \le j \le y + r\big\}. \end{equation} \begin{lem}\label{lem:weakcontrolzijplus} Fix constants $B_1, B_2 > 0$. Then for all $x \in \mathbb{Z}$, $l, r \in \mathbb{N}$ and all $u > 0$ such that $B_1l > u^2$ and $r \le B_2lu^{-2}$, we have \begin{equation} \mathcal{Q}(l, r, u) := \P\bigg( \max_{(i, j) \in \mathbb{T}_r(x, x + l)}\frac{S_j^+ - S_i^+}{\sqrt{l}}\ge u\bigg) \le \frac{C}{u}\exp\bigg(-\frac{u^2}{2} - \frac{c u^3}{\sqrt{l}} \bigg), \end{equation} where the constants $c$ and $C$ depend on $B_1$ and $B_2$ but do not depend on $x$, $l$, $r$, $u$. \end{lem} \begin{lem}\label{lem:interchange} Let $\nu$, $\nu_n$, $n \in \mathbb{N}$, be measures on $[0, \infty)$ which are finite on compact intervals.
Let $G$, $G_n$, $n \in \mathbb{N}$, be measurable functions on $[0, \infty)$ which are uniformly bounded on compact intervals. Assume that \begin{enumerate} \item $\nu_n$ converges to $\nu$ weakly on every interval $[0, t]$, $t \ge 0$; \item for $\nu$-a.e. $s \ge 0$, we have $\lim_{n \to \infty} G_n(s_n) = G(s)$, for every sequence $s_n \to s$; \item $\lim_{T \to \infty} \int_T^\infty |G_n|d\nu_n = 0$ uniformly in $n \ge N$, for some $N \in \mathbb{N}$. \end{enumerate} Then, $\lim_{n \to \infty} \int_0^\infty G_n d\nu_n = \int_0^\infty G d\nu$. \end{lem} We also provide an upper bound on the tail distribution of $\max_{(i, j) \in \mathbb{T}_r(x, x + l)}(S_j^- - S_i^-)/\sqrt{l}$, which is cruder than its counterpart for $S_k^+$ in Lemma \ref{lem:weakcontrolzijplus} but suffices for our purposes. \begin{lem}\label{lem:weakcontrolzijminus} For all $x \in \mathbb{Z}$, $l, r \in \mathbb{N}^+$ and all $u > 40$ such that $l > u^2r$ and $r > 10u^2$, we have \begin{equation} \mathcal{Q}(l, r, u) := \P\bigg( \max_{(i, j) \in \mathbb{T}_r(x, x + l)}\frac{S_j^- - S_i^-}{\sqrt{l}}\ge u\bigg) \le C\exp\bigg(-\frac{u^2}{3}\bigg), \end{equation} where the constant $C$ does not depend on $x$, $l$, $r$, $u$. \end{lem} \begin{proof} Before proceeding with the proof, we record one easily checked fact about $I^-(s)$: \begin{equation}\label{eq:iminusbound} I^-(s) \ge \frac{1.01s^2}{3}, ~~~0 \le s \le 0.5. \end{equation} Define $V_{l,u} := u^2 - uS_l^-/\sqrt{l}$, and let $S_{k_1}^{(1)-}$ and $S_{k_2}^{(2)-}$ be two partial sums of the $-X_i$'s, independent of each other and of $S_l^-$. By translation invariance, we bound $\mathcal{Q}(l, r, u)$ as follows, \begin{align*} \mathcal{Q}(l, r, u) &= \P\bigg( \max_{(i, j) \in \mathbb{T}_r(0, 0 + l)}\frac{S_j^- - S_i^-}{\sqrt{l}}\ge u\bigg)\\ &= \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} + \frac{S_l^-}{\sqrt{l}} \ge u\bigg)\\ &= \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} \ge \frac{V_{l, u}}{u}\bigg)\\ &\le \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} \ge \frac{V_{l, u}}{u}, V_{l, u} \le u^2\sqrt{\frac{r}{l}}\bigg)\\ &+ \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} \ge \frac{V_{l, u}}{u}, V_{l, u} > u^2\sqrt{\frac{r}{l}}\bigg)\\ &\le \P(V_{l, u} \le u^2\sqrt{r/l})+ \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} > u\sqrt{\frac{r}{l}}\bigg), \end{align*} and we bound these two terms individually. By the assumptions on $u, l, r$, we have $u(1 - \sqrt{r/l})/\sqrt{l} \le 0.5$. Thus with \eqref{eq:rescumulantineq} and \eqref{eq:iminusbound}, we have \begin{equation} \P\bigg(V_{l, u} \le u^2\sqrt{\frac{r}{l}}\bigg) = \P\bigg(\frac{S_l^-}{\sqrt{l}} \ge u - u\sqrt{\frac{r}{l}}\bigg) \le \exp\bigg[ -lI^-\bigg\{\frac{u(1 - \sqrt{r/l})}{\sqrt{l}}\bigg\}\bigg] \le \exp\bigg(-\frac{u^2}{3}\bigg).
\end{equation} Now we turn to the second term; with Lemma \ref{lem:petrov}, \eqref{eq:rescumulantineq} and the assumptions $r > 10u^2$ and $u > 40$, \begin{align*} \P\bigg(\max_{0 \le k_1, k_2 \le r} \frac{S_{k_1}^{(1)-} + S_{k_2}^{(2)-}}{\sqrt{l}} \ge u\sqrt{\frac{r}{l}}\bigg) &\le 2\P\bigg(\max_{0 \le k \le r} \frac{S_k^-}{\sqrt{r}} \ge \frac{u}{2}\bigg)\\ &\le 4\P\bigg(\frac{S_r^-}{\sqrt{r}} \ge \frac{u}{2} - \sqrt{2}\bigg)\\ &\le C\exp\bigg\{-rI^-\bigg(\frac{u - 2\sqrt{2}}{2\sqrt{r}}\bigg)\bigg\}\\ &\le C\exp\bigg(-\frac{u^2}{3}\bigg). \end{align*} Putting the two terms together, we get the stated bound. \end{proof} We now adapt Lemma \ref{lem:weakcontrolzijplus} to the proof of Theorem \ref{thm:tmnplus}, in which we need to deal with \begin{equation}\label{eq:tzij} \tilde Z_{i, j}^+ := \frac{S_j^+ - S_i^+}{\sqrt{j - i - (S_j^+ - S_i^+)}}. \end{equation} Define the function \begin{equation} \phi(x) = \frac{x}{\sqrt{1 - x}}, ~~x < 1, \end{equation} so that we have \begin{equation} \frac{\tilde Z_{i, j}^+}{\sqrt{j - i}} = \phi\bigg(\frac{Z_{i, j}^+}{\sqrt{j - i}}\bigg). \end{equation} Since $\phi(x)$ is strictly increasing on $(-\infty, 1)$ with range $\mathbb{R}$, we write its inverse function as \begin{equation}\label{eq:gplus} g^+(x) := \frac{1}{2}(x\sqrt{x^2 + 4} - x^2), ~~x \in \mathbb{R}, \end{equation} which is also strictly increasing. Therefore, $\tilde Z_{i, j}^+ \ge u$ if and only if \begin{equation}\label{eq:tzijtransform} Z_{i, j}^+ \ge \sqrt{j - i} \cdot g^+\bigg(\frac{u}{\sqrt{j - i}}\bigg). \end{equation} This is an important transformation which enables us to deal with $Z_{i, j}^+$ instead. We compute the Taylor expansion of $I^+(g^+(s))$ at $s = 0$, \begin{equation}\label{eq:iplusgplustaylor} I^+(g^+(s)) = \frac{s^2}{2} - \frac{s^3}{6} + O(s^4). \end{equation} We have \begin{lem}\label{lem:weakcontroltzijplus} Fix constants $B_1$, $B_2 > 0$. Then for all $x \in \mathbb{Z}$, $l,r \in \mathbb{N}$ and all $u > 0$ such that $B_1l > u^2$ and $r < B_2lu^{-2}$, we have \begin{equation}\label{eq:weakcontroltzijplus} \mathcal{Q}(l, r, u) := \P\bigg(\max_{(i, j) \in \mathbb{T}_r(x, x + l)} \tilde Z_{i, j}^+ \ge u\bigg) \le \frac{C}{u} \exp\bigg(-\frac{u^2}{2} + \frac{cu^3}{\sqrt{l}}\bigg), \end{equation} where the constants $c, C > 0$ depend on $B_1$ and $B_2$ but do not depend on $x,l,r,u$. \end{lem} \begin{proof} By the transformation \eqref{eq:tzijtransform}, translation invariance and the fact that $g^+(x)/x^2$ is strictly decreasing (so that $m \mapsto m\, g^+(u/\sqrt{m})$ is non-decreasing), \begin{align} \mathcal{Q}(l, r, u) & = \P\bigg(\max_{(i, j) \in \mathbb{T}_r(0, l)} \tilde Z_{i, j}^+ \ge u\bigg) \\ &= \P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} - (l + k_1 + k_2) \cdot g^+\bigg(\frac{u}{\sqrt{l + k_1 + k_2}}\bigg)\bigg\} + S_{l}^+ \ge 0\bigg]\\ &\le \P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} \bigg\}- l \cdot g^+\bigg(\frac{u}{\sqrt{l}}\bigg) + S_{l}^+ \ge 0\bigg], \label{eq:tzijdecompose} \end{align} where $S_{k_1}^{(1)+}$, $S_{k_2}^{(2)+}$ are two partial sums of the $X_i$'s, independent of each other and of $S_l^+$. Define \begin{equation}\label{eq:vludefn} V_{l, u} = u\bigg(u - \frac{S_l^+}{\sqrt{l - S_l^+}}\bigg).
\end{equation} Thus \begin{equation} \frac{S_l^+}{\sqrt{l - S_l^+}} = \frac{l \cdot S_l^+/l}{\sqrt{l}\sqrt{1 - S_l^+/l}} = \sqrt{l} \cdot \phi\bigg(\frac{S_l^+}{l}\bigg) = u - \frac{V_{l, u}}{u}, \end{equation} which gives \begin{equation} S_l^+ = l\cdot g^+\bigg(\frac{u - V_{l, u}/u}{\sqrt{l}}\bigg). \end{equation} Therefore, \begin{align} &\mathcal{Q}(l,r,u) \\ &\le \P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} \bigg\}- l \cdot g^+\bigg(\frac{u}{\sqrt{l}}\bigg) + l\cdot g^+\bigg(\frac{u - V_{l, u}/u}{\sqrt{l}}\bigg) \ge 0, V_{l, u} \le 0\bigg]\\ &+\P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} \bigg\}- l \cdot g^+\bigg(\frac{u}{\sqrt{l}}\bigg) + l\cdot g^+\bigg(\frac{u - V_{l, u}/u}{\sqrt{l}}\bigg) \ge 0, V_{l, u} > 0\bigg]\\ &= \P(V_{l, u} \le 0) \\ &+\P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} \bigg\}- l \cdot g^+\bigg(\frac{u}{\sqrt{l}}\bigg) + l\cdot g^+\bigg(\frac{u - V_{l, u}/u}{\sqrt{l}}\bigg) \ge 0, V_{l, u} > 0\bigg]\\ &= F_{l,u}(0)+ \int_0^\infty G_{l,r,u}(s)dF_{l,u}(s),\label{eq:tqlrudecompose} \end{align} where the last equality is obtained by conditioning on $V_{l,u} = s$, which is independent of $S_{k_1}^{(1)+}$, $S_{k_2}^{(2)+}$. Here $F_{l, u}$ denotes the distribution function of $V_{l, u}$ and \begin{align*} G_{l, r, u}(s) :=& \P\bigg[\max_{0 \le k_1, k_2 \le r} \bigg\{ S_{k_1}^{(1)+} + S_{k_2}^{(2)+} \bigg\}- l \cdot g^+\bigg(\frac{u}{\sqrt{l}}\bigg)+ l\cdot g^+\bigg(\frac{u - s/u}{\sqrt{l}}\bigg) \ge 0\bigg], \end{align*} which is decreasing. To obtain an upper bound for $\mathcal{Q}(l,r,u)$, first we bound $F_{l,u}(s)$ for $s \in [0, \frac{3}{4}u^2]$, so that $u - s/u \in [u/4, u]$. Applying \eqref{eq:cumulantineq}, \begin{align*} F_{l,u}(s) &= \P\bigg(\frac{S_l^+}{\sqrt{l - S_l^+}} \ge u - \frac{s}{u}\bigg)\\ &= \P\bigg\{\frac{S_l^+}{\sqrt{l}} \ge \sqrt{l} \cdot g^+\bigg(\frac{u - s/u}{\sqrt{l}}\bigg)\bigg\}\\ & \le C\bigg\{\sqrt{l}\cdot g^+\bigg(\frac{u - s/u}{\sqrt{l}}\bigg)\bigg\}^{-1}\exp\bigg[-l\cdot I^+\bigg\{g^+\bigg(\frac{u - s/u}{\sqrt{l}}\bigg)\bigg\}\bigg]\\ & \le \frac{C}{u}\exp\bigg[-l\cdot I^+\bigg\{g^+\bigg(\frac{u - s/u}{\sqrt{l}}\bigg)\bigg\}\bigg], \end{align*} where the last inequality follows from the fact that $x\, g^+(1/x) = \big(\sqrt{x^{-2} + 4} - x^{-1}\big)/2$ is increasing in $x$, hence bounded below by a positive constant depending only on $B_1$ for $x = \sqrt{l}/(u - s/u) \ge B_1^{-1/2}$; this constant is absorbed into $C$. By Taylor expansion of $I^+(g^+(s))$, we have \begin{align} F_{l, u}(s) &\le \frac{C}{u}\exp\bigg\{-\frac{1}{2}\bigg(u - \frac{s}{u}\bigg)^2 + \frac{c}{2\sqrt{l}}\bigg(u - \frac{s}{u}\bigg)^3\bigg\}\nonumber\\ &\le \frac{Ce^s}{u}\exp\bigg(-\frac{u^2}{2} + \frac{c u^3}{\sqrt{l}}\bigg).\label{eq:tflucontrol} \end{align} It is however easy to see that this inequality continues to hold for $s \ge \frac{3}{4}u^2$. Indeed, if $c$ is sufficiently small, then the assumption $B_1l > u^2$ implies that $cu^3/\sqrt{l} \le u^2/8$. Hence, when $s \ge \frac{3}{4}u^2$, the above inequality becomes \begin{equation} F_{l, u}(s) \le \frac{C}{u}\exp\bigg(\frac{3u^2}{8}\bigg). \end{equation} If $C$ is sufficiently large, the right-hand side of the previous inequality is greater than $1$, and hence the inequality trivially holds.
We now bound $G_{l,r,u}(s)$ for $s \ge 0$: \begin{align*} G_{l,r,u}(s) &\le \P\bigg\{\max_{0 \le k_1, k_2 < r} S_{k_1}^{(1)+} + S_{k_2}^{(2)+} > \frac{s}{2u}\sqrt{\bigg(u - \frac{s}{u}\bigg)^2 + 4l} + \frac{s^2}{2u^2} - s\bigg\}\\ &\le 2\P\bigg\{\max_{0 \le k < r} S_{k}^+ > \frac{s}{4u}\sqrt{\bigg(u - \frac{s}{u}\bigg)^2 + 4l} - \frac{s}{2}\bigg\}\\ &\le 2\P\bigg\{\max_{0 \le k < r} S_{k}^+ > \frac{s}{2u}\sqrt{l} - \frac{s}{2}\bigg\}. \end{align*} Applying Lemma \ref{lem:petrov} to the above inequality, we obtain \begin{align*} G_{l,r,u}(s) &\le 4\P\bigg(S_r^+ > \frac{s}{2u}\sqrt{l} -\frac{s}{2} - \sqrt{2r}\bigg)\\ &\le 4\P\bigg(\frac{S_r^+}{\sqrt{r}} > \frac{s}{2u\sqrt{r}}\sqrt{l}- \frac{s}{2\sqrt{r}} - \sqrt{2}\bigg)\\ &\le 4\exp\bigg\{-rI^+\bigg(\frac{cs - \sqrt{2}}{\sqrt{r}}\bigg)\bigg\}. \end{align*} In the last inequality, we used the assumption $r < B_2lu^{-2}$. Since $I^+(s) \ge s^2/2$, we have \begin{equation}\label{eq:tglrucontrol} G_{l,r,u}(s) \le Ce^{-cs^2}. \end{equation} Strictly speaking, this is valid only as long as $cs \ge \sqrt{2}$; however, we can choose the constant $C$ so large that \eqref{eq:tglrucontrol} continues to hold in the case $cs < \sqrt{2}$. To obtain \eqref{eq:weakcontroltzijplus}, by \eqref{eq:tqlrudecompose}, \eqref{eq:tflucontrol} and \eqref{eq:tglrucontrol}, it is clear that \begin{align*} \mathcal{Q}(l, r, u) &\le F_{l,u}(0) + \sum_{k = 0}^\infty G_{l, r, u}(k)F_{l, u}(k + 1)\\ &\le \frac{C}{u}\bigg(1 + \sum_{k = 0}^\infty e^{-ck^2}e^k\bigg)\exp\bigg(-\frac{u^2}{2} + \frac{cu^3}{\sqrt{l}}\bigg)\\ &\le \frac{C}{u}\exp\bigg(-\frac{u^2}{2} + \frac{cu^3}{\sqrt{l}}\bigg). \end{align*} \end{proof} \subsection{Proof of \thmref{mnplus} and \thmref{mnminus}} {\bf The roadmap of our proof.} We know that $(U_{(1)}, U_{(2)}, \dots, U_{(n)})$ has the same distribution as \begin{equation} \Big(\frac{Y_1}{\sum_{i = 1}^{n + 1}Y_i}, \frac{Y_1 + Y_2}{\sum_{i = 1}^{n + 1}Y_i}, \dots, \frac{\sum_{i = 1}^n Y_i}{\sum_{i = 1}^{n + 1}Y_i}\Big), \quad \text{where $Y_1, \dots, Y_{n+1}$ are iid standard exponential;} \end{equation} in particular, $Y_i$ can be taken as $1 - X_i$. We use this fact, together with a comparison of $\sum_{i = 1}^{n + 1}Y_i$ with its mean via the central limit theorem, to deal with the dependency among the order statistics above, effectively reducing the problem to partial sums of iid random variables. We then divide the intervals into smaller intervals, which end up contributing the most to the maximum, and larger intervals, whose contribution we show to be negligible. Although $U_{(i)}$ and $Y_i$ may be defined on different probability spaces with different probability measures, we may switch between them when there is no confusion. Because we only prove convergence in distribution, from now on we put $U_{(j)} = \sum_{i = 1}^jY_i/ \sum_{i = 1}^{n + 1}Y_i$ throughout the proof. \subsubsection{Proof of (\ref{result:mnplus})} We study the asymptotic behavior of the statistic on different regions of $j - i$. For $b > 0$, define the event \begin{align*} A_{i, j}^{n+}(b) & = \bigg\{\frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{(j - i)(1 - \frac{j - i}{n})}} \le b\bigg\}\\ &=\bigg\{U_{(j)} - U_{(i)} \ge \frac{j - i}{n} - \frac{b}{\sqrt{n}} w_{i, j}^n \bigg\}, \end{align*} where \begin{equation}\label{w} w_{i, j}^n := \sqrt{\frac{j - i}{n}\bigg(1 - \frac{j - i}{n}\bigg)}.
\end{equation} Under this notation, we have \begin{equation} \big\{M_n^+ \le b\big\} = \bigcap_{0 \le i < j \le n}A^{n+}_{i, j}(b). \end{equation} Define \begin{equation} u_n(\tau) = \bigg(1 + \frac{-3\log\log n + 2\tau}{4 \log n} \bigg) \sqrt{2 \log n}. \end{equation} Throughout the proof, we abbreviate $u_n(\tau)$ as $u_n$ with $\tau$ fixed. With this choice, we have $u_n \sim \sqrt{2\log n}$. \paragraph{Step 1: Upper bound} For the upper bound, it suffices to focus on the optimal range in which the maximum is achieved; this turns out to be $j - i \propto (\log n)^3$, as discussed below. Define the event \begin{equation}\label{eq:omegancn} \Omega_n = \big\{|S_{n + 1}^+| \le (\log\log n) \sqrt{n} \big\}. \end{equation} By the central limit theorem, \begin{equation}\label{eq:omegan} \P(\Omega_n) \to 1 ~\mbox{ as }~ n\to \infty. \end{equation} When $j - i \le \frac{n}{\log n\log\log n}$, \begin{align*} &A_{i, j}^{n+}(u_n) \\ &\subseteq \Omega_n^\mathsf{c}\bigcup \{ \Omega_n \bigcap A_{i, j}^{n+}(u_n)\} \\ &= \Omega_n^\mathsf{c}\bigcup\bigg(\Omega_n \bigcap \bigg\{\frac{j - i - S_j^+ + S_i^+}{n + 1 - S_{n + 1}^+}\ge \frac{j - i}{n} - \frac{u_n}{\sqrt{n}}w_{i,j}^n\bigg\}\bigg)\\ &\subseteq \Omega_n^\mathsf{c}\bigcup\bigg(\Omega_n \bigcap\bigg\{S_j^+ - S_i^+ \le (j - i)\frac{-1 + S_{n + 1}^+}{n} + \frac{u_n}{\sqrt{n}}(n + 1 - S_{n + 1}^+)w_{i,j}^n\bigg\}\bigg)\\ &\subseteq \Omega_n^\mathsf{c}\bigcup \bigg\{S_j^+ - S_i^+ \le (j - i)\frac{\log\log n}{\sqrt{n}} + \frac{u_n}{\sqrt{n}}(n + 1 + (\log\log n)\sqrt{n})w_{i,j}^n\bigg\}\\ &= \Omega_n^\mathsf{c}\bigcup\bigg\{Z_{i,j}^+ \le \frac{(\log\log n)}{\sqrt{n}}\sqrt{j - i} + u_n\cdot\bigg(1 + \frac{(\log\log n)\sqrt{n} + 1}{n}\bigg)\sqrt{1 - \frac{j - i}{n}}\bigg\}\\ &\subseteq \Omega_n^\mathsf{c}\bigcup\bigg\{Z_{i,j}^+ \le \sqrt{\frac{\log\log n}{\log n}} + u_n\cdot\bigg(1 + \frac{(\log\log n)\sqrt{n} + 1}{n}\bigg)\bigg\}\\ &\subseteq \Omega_n^\mathsf{c}\bigcup\{Z_{i,j}^+ \le u_n(\tau + \varepsilon)\}, \end{align*} for any fixed $\varepsilon > 0$, provided that $n$ is large enough. To deal with the standardized sums $Z_{i,j}^+$, we need Theorems 1.1 and 1.2 in \citep{kabluchko2014limiting}. Because $X_1 \leq 1$, it belongs to the superlogarithmic family defined in \citep{kabluchko2014limiting}. Applying Theorems 1.1 and 1.2 in \citep{kabluchko2014limiting}, we obtain \begin{equation}\label{eq:lefttail} \lim_{n \to \infty}\P\{ Z_n^+ \le u_n \} = \exp\bigg\{-\frac{8}{9\sqrt{\pi}}e^{-\tau}\bigg\}, \end{equation} and \begin{equation}\label{eq:lefttail_bounds} \lim_{A \to \infty}\liminf_{n \to \infty}\P \{ Z_n^+ = Z_n^+(A^{-1}(\log n)^3, A(\log n)^3) \} = 1. \end{equation} By \eqref{eq:omegan}, \eqref{eq:lefttail} and the fact that $(\log n)^3 \ll \frac{n}{\log n(\log\log n)}$, \begin{align*} &\limsup_{n \to \infty}\P(M_n^+ \le u_n ) \\ &= \limsup_{n \to \infty}\P\bigg\{ \bigcap_{0 \le i < j \le n}A_{i, j}^{n+}(u_n)\bigg\} \\ &\le \limsup_{n \to \infty}\P\bigg\{Z_n^+\bigg(1, \frac{n}{\log n \log\log n}\bigg) \le u_n(\tau + \varepsilon)\bigg\} + \limsup_{n \to \infty} \P(\Omega_n^\mathsf{c})\\ &=\exp\bigg\{-\frac{8}{9\sqrt{\pi}}e^{-\tau - \varepsilon}\bigg\}.
\end{align*} Since $\varepsilon > 0$ is arbitrary, we get \begin{equation} \limsup_{n \to \infty}\P(M_n^+ \le u_n ) \le \lim_{\varepsilon \to 0} \exp\bigg\{-\frac{8}{9\sqrt{\pi}}e^{-\tau - \varepsilon}\bigg\} = \exp\bigg\{-\frac{8}{9\sqrt{\pi}}e^{-\tau}\bigg\}. \end{equation} \paragraph{Step 2: Lower bound} Define \begin{equation} k_n = \frac{n}{\log n(\log\log n)}, \quad K_n = \frac{n\log\log n}{\log n}. \end{equation} We establish the lower bound by dividing the range of $j - i$ into five regions: \begin{align*} R_1 &= [1, u_n^2), & R_2 &= [u_n^2, k_n), \\ R_3 &= [k_n, K_n), & R_4 &= [K_n, n - K_n), \\ R_5 &= [n - K_n, n). & \end{align*} \medskip \noindent $\bullet$ For $R_1$, note that \begin{equation} \frac{j - i}{n} - \frac{u_n}{\sqrt{n}}w_{i, j}^n \le 0 \end{equation} is equivalent to \begin{equation} j - i \le \frac{u_n^2}{1 + u_n^2/n}. \end{equation} Since $u_n^4 \ll n$ and $i, j$ only take integer values, this is further equivalent to $j - i \le u_n^2$ when $n$ is large enough, which is exactly $R_1$. Therefore, when $n$ is large enough, \begin{equation} A^{n+}_{i, j}(u_n) = \Omega \end{equation} for any $(i,j)$ satisfying $j-i \in R_1$, so that \begin{equation} \bigcap_{0 \le i < j \le n: j - i \in R_1}A^{n+}_{i, j}(u_n) = \Omega. \end{equation} \noindent $\bullet$ For $R_2$, following the same argument that was used to prove the upper bound, it can be shown that \begin{equation} \liminf_{n \to \infty}\P\bigg\{ \bigcap_{\substack{0 \le i < j \le n : j - i \in R_2}}A^{n+}_{i, j}(u_n)\bigg\} \ge \exp\bigg\{-\frac{8}{9\sqrt{\pi}}e^{-\tau}\bigg\}. \end{equation} \noindent $\bullet$ Turning to $R_3$, we shall show that \begin{equation} \P\bigg(\max_{0 \le i \le n - k_n} \frac{S_{i + k_n}^+ - S_i^+}{\sqrt{k_n}} \le \log\log n \bigg) \to 1, \end{equation} and then use this fact to show that the maximum of $M_{i, j}^+$ over $R_3$ is negligible. First we bound $\max_{0 \le i \le n - k_n} (S_{i + k_n}^+ - S_i^+)$. Define \begin{equation} q_n = \frac{k_n}{(\log\log n)^2} \ll k_n, \end{equation} and introduce a positive sequence $\varepsilon_n$ such that $q_n \ll \varepsilon_n \ll k_n$. Consider the following two-dimensional grid with mesh size $q_n$: \begin{equation} \mathcal{J}_n = \{ (x, y) \in q_n\mathbb{Z}^2 : x \in [-\varepsilon_n, n + \varepsilon_n], y - x \in [0.9k_n - \varepsilon_n, 1.1k_n + \varepsilon_n] \}. \end{equation} By the union bound, \begin{equation} \P \big\{ Z_n^+(0.9k_n, 1.1k_n) > \log\log n \big\} \le \sum_{(x, y) \in \mathcal{J}_n}\P\bigg\{ \max_{(i, j) \in \mathbb{T}_{q_n}(x, y)} Z_{i, j}^+ \ge \log\log n \bigg\}. \end{equation} Note that the cardinality of $\mathcal{J}_n$ satisfies \begin{equation} |\mathcal{J}_n| \sim \frac{(1.1 - 0.9)nk_n}{(q_n)^2} = 0.2(\log\log n)^5 \log n . \end{equation} By the translation invariance property of $\mathbb{T}_{q_n}(x, y)$ and Lemma \ref{lem:weakcontrolzijplus}, taking $l = y - x$, $r = q_n$ and $u = \log\log n$ (the conditions of Lemma \ref{lem:weakcontrolzijplus} are satisfied for $n$ large enough), we have \begin{align*} \P\big\{ Z_n^+(0.9k_n, 1.1k_n) \ge \log\log n \big\} &\le C|\mathcal{J}_n|\exp\bigg\{-\frac{(\log\log n)^2}{2} \bigg\} \to 0, \end{align*} where $C > 0$ is a constant.
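To see explicitly why the last bound vanishes, write $L = \log\log n$; then \begin{equation} |\mathcal{J}_n|\exp\bigg\{-\frac{(\log\log n)^2}{2}\bigg\} \asymp \exp\bigg\{\log\log n + 5\log L - \frac{L^2}{2}\bigg\} \to 0, \end{equation} since $L^2/2 = (\log\log n)^2/2$ eventually dominates $\log\log n + 5\log L$.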
Since \begin{equation} \max_{0 \le i \le n - k_n} \frac{S_{i + k_n}^+ - S_i^+}{\sqrt{k_n}}\le Z_n^+(0.9k_n, 1.1k_n), \end{equation} it follows that \begin{equation}\label{eq:fixscan} \limsup_{n \to \infty}\P\bigg( \max_{0 \le i \le n - k_n} \frac{S_{i + k_n}^+ - S_i^+}{\sqrt{k_n}} \ge \log\log n \bigg) = 0. \end{equation} We can now show that the maximum of $M_{i, j}^+$ over $R_3$ is negligible. Define \begin{equation} \Omega_{1n} := \bigg\{\max_{0 \le i \le n - k_n} \frac{S_{i + k_n}^+ - S_i^+}{\sqrt{k_n}} \le \log\log n\bigg\}. \end{equation} By \eqref{eq:fixscan}, $\P(\Omega_{1n}) \to 1$ as $n \to \infty$. Recall that $u_n(\cdot)$ is a function. For $j - i \in R_3$, \begin{align*} &A_{i, j}^{n+}(u_n) \\ &\supseteq \Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{S_j^+ - S_i^+ \le (j - i)\frac{S_{n + 1}^+ - 1}{n} + \frac{u_n}{\sqrt{n}}(n + 1 - S_{n + 1}^+)w_{i,j}^n\bigg\}\\ &=\Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{S_j^+ - S_{i + k_n}^+ \le (j - i)\frac{S_{n + 1}^+ - 1}{n} - S_{i + k_n}^+ + S_i^+ + \frac{u_n}{\sqrt{n}}(n + 1 - S_{n + 1}^+)w_{i,j}^n\bigg\}\\ &\supseteq\Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{S_j^+ - S_{i + k_n}^+ \le -(j - i)\frac{\log \log n}{\sqrt{n}} - \sqrt{k_n}\log\log n + \frac{u_n}{\sqrt{n}}(n + 1 - \log\log n\sqrt{n})w_{i,j}^n\bigg\}\\ &\supseteq\Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{\frac{S_j^+ - S_{i + k_n}^+}{\sqrt{j - i - k_n}} \le\sqrt{\frac{j - i}{j - i - k_n}}\bigg[u_n\cdot\bigg( 1 - \frac{\log \log n}{\sqrt{n}} \bigg) - \sqrt{\frac{(\log\log n)^3}{\log n}}-\log\log n\bigg]\bigg\}\\ &\supseteq\Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{\frac{S_j^+ - S_{i + k_n}^+}{\sqrt{j - i - k_n}} \le\sqrt{1 + \frac{k_n}{K_n}}\bigg[u_n\cdot\bigg(1 -\frac{\log \log n}{\sqrt{n}} \bigg) - \sqrt{\frac{(\log\log n)^3}{\log n}} -\log\log n\bigg]\bigg\}\\ &\supseteq\Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{\frac{S_j^+ - S_{i + k_n}^+}{\sqrt{j - i - k_n}} \le u_n(\log\log n)\bigg\}, \end{align*} where the last line follows by noting that $k_n/K_n = 1/(\log\log n)^2$. Thus \begin{align*} &\bigcap_{0 \le i < j \le n: ~k_n + 1 \le j - i \le K_n}A_{i, j}^{n+}(u_n) \\ &\supset \Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{\max_{0 \le i < j \le n: ~k_n + 1 \le j - i \le K_n}\frac{S_j^+ - S_{i + k_n}^+}{\sqrt{j - i -k_n}} \le u_n(\log\log n)\bigg\}\\ &\supset \Omega_n\bigcap\Omega_{1n}\bigcap \bigg\{\max_{0 \le i < j \le n: ~j - i \le K_n}\frac{S_j^+ - S_i^+}{\sqrt{j - i}} \le u_n(\log\log n)\bigg\}. \end{align*} Since $(\log n)^3 \ll K_n$, \eqref{eq:lefttail} and \eqref{eq:lefttail_bounds} together imply that \begin{align*} &\liminf_{n \to \infty}\P\big\{ M_n^+(k_n + 1, K_n) \le u_n(\tau) \big\} \\ &\ge \liminf_{n \to \infty}\P\bigg[\Omega_n\bigcap\Omega_{1n}\bigcap \{Z_n^+(1, K_n) \le u_n(\log\log n) \}\bigg] \\ &\ge \liminf_{n \to \infty}\P\bigg[\Omega_n\bigcap\Omega_{1n}\bigcap \{Z_n^+(1, K_n) \le u_n(\tau') \}\bigg]= \exp\bigg(-\frac{8}{9\sqrt{\pi}}e^{-\tau'}\bigg), \end{align*} for any $\tau$ and $\tau'$. Taking $\tau' \to \infty$ then yields \begin{equation} \liminf_{n \to \infty}\P\big\{ M_n^+(k_n + 1, K_n) \le u_n(\tau) \big\} \ge \lim_{\tau' \to \infty}\exp\bigg( -\frac{8}{9\sqrt{\pi}}e^{-\tau'} \bigg)= 1. \end{equation} \noindent $\bullet$ Next we apply Kolmogorov's Theorem to deal with $R_4$. Define the centered order statistics \begin{equation} \bar{U}_{(i)} = U_{(i)} - \frac{i}{n + 1}.
\end{equation} Note that when $n$ is large enough, \begin{align*} A_{i, j}^{n+}(u_n) &= \bigg\{\bar{U}_{(j)} - \bar{U}_{(i)} \ge \frac{j - i}{n(n + 1)} - \frac{u_n}{\sqrt{n}}w_{i, j}^n \bigg\}\\ &= \bigg\{\sqrt{n}(\bar{U}_{(j)} - \bar{U}_{(i)}) \ge \frac{j - i}{\sqrt{n}(n + 1)} - u_nw_{i, j}^n \bigg\}\\ &\supseteq \bigg\{\sqrt{n}(\bar{U}_{(j)} - \bar{U}_{(i)}) \ge - 0.9u_nw_{i, j}^n \bigg\}\\ &\supseteq \bigg\{0.9u_nw_{i, j}^n \ge \sqrt{n}(\bar{U}_{(j)} - \bar{U}_{(i)}) \ge - 0.9u_nw_{i, j}^n \bigg\}\\ &\supseteq \bigg\{2\sqrt{n} \max\{|\bar{U}_{(i)}|, |\bar{U}_{(j)}|\} \le 0.9u_nw_{i, j}^n \bigg\}. \end{align*} For $(i,j)$ such that $j-i \in R_4$, $w_{i, j}^n$ is minimized at either $j - i = \frac{n\log\log n}{\log n}$ or $j - i = n - \frac{n\log\log n}{\log n}$. Consequently, \begin{align*} \bigcap_{0 \le i < j \le n: j - i \in R_4}A^{n+}_{i, j}(u_n) &\supseteq \bigg\{\sqrt{n} \max_{1 \le i \le n}\{|\bar{U}_{(i)}|\} \le \frac{0.9u_n}{2}\min_{0 \le i < j \le n: j - i \in R_4}w_{i, j}^n\bigg\}\\ &=\bigg\{\sqrt{n} \max_{1 \le i \le n}\{|\bar{U}_{(i)}|\} \le \frac{0.9u_n}{2}\sqrt{\frac{\log\log n}{\log n}\bigg(1 - \frac{\log\log n}{\log n} \bigg)}\bigg\}. \end{align*} Kolmogorov's Theorem states that for any $y\ge0$, \begin{equation} \lim_{n\to\infty} \P\bigg(\sqrt{n}\max_{1 \le i \le n}|\bar{U}_{(i)}| \le y\bigg) = K(y) := 1 - 2e^{-2y^2} +2e^{-8y^2} - \cdots~. \end{equation} In particular, the sequence $\sqrt{n}\max_{1 \le i \le n}|\bar{U}_{(i)}|$ is tight. Therefore, by the fact that \begin{equation} \frac{0.9u_n}{2}\sqrt{\frac{\log\log n}{\log n}\bigg(1 - \frac{\log\log n}{\log n}\bigg)} \asymp \sqrt{\log\log n} \to \infty, \end{equation} we obtain \begin{equation}\label{eq:kolplus} \lim_{n \to \infty}\P\bigg\{ \bigcap_{0 \le i < j \le n: j - i \in R_4}A_{i, j}^{n+}(u_n)\bigg\} = 1. \end{equation} \noindent $\bullet$ For $R_5$, define $j' = n - j$ and $U_{(j' + 1)}' = 1 - U_{(n + 1- j' - 1)} = 1 - U_{(j)}$. A simple change of indices gives \begin{align*} M_n^+(n - K_n, n) &= \max_{\substack{0 \le i < j \le n\\n - K_n \le j - i < n}}\frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{(j - i)(1 - \frac{j - i}{n})}}\\ &\le \max_{\substack{i, j' \ge 0\\ i + j' < K_n}} \frac{nU_{(j' + 1)}' - (j' + 1) + nU_{(i)} - i}{\sqrt{(i + j') (1 - \frac{i + j'}{n} )}} \\ &\le 1.01\max_{\substack{i, j\ge 0\\ 1 \le i + j < K_n}} \frac{nU_{(j)}' - j + nU_{(i)} - i}{\sqrt{i + j}} + 1.01, \end{align*} where the last inequality holds when $n$ is large enough since $K_n \ll n$. By the above statements, to prove \begin{equation} \limsup_{n \to \infty}\P(M_n^+(n - K_n, n) \ge u_n) = 0, \end{equation} it suffices to prove \begin{equation} \limsup_{n \to \infty}\P\bigg(\max_{\substack{i, j \ge 0\\ 1 \le i + j < K_n}} \frac{nU_{(i)} - i + nU_{(j)}' - j}{\sqrt{i + j}} \ge \sqrt{1.9\log n}\bigg) = 0.
\end{equation} Assuming the convention $0/0 = 0$, observe that \begin{align*} &\P\bigg(\max_{\substack{i, j \ge 0\\ 1 \le i + j < K_n}} \frac{nU_{(i)} - i + nU_{(j)}' - j}{\sqrt{i + j}} \ge \sqrt{1.9\log n}\bigg)\\ &= \P\bigg\{\max_{\substack{i, j \ge 0\\ 1 \le i + j < K_n}} \bigg(\frac{nU_{(i)} - i}{\sqrt{i + j}} + \frac{nU_{(j)}' - j}{\sqrt{i + j}} \bigg) \ge \sqrt{1.9\log n}\bigg\}\\ &\le \P\bigg\{\max_{\substack{i, j \ge 0\\ 1 \le i + j < K_n}} \bigg(\frac{nU_{(i)} - i}{\sqrt{i}}+ \frac{nU_{(j)}' - j}{\sqrt{j}}\bigg) \ge \sqrt{1.9\log n}\bigg\}\\ &\le \P\bigg(\max_{0 \le i \le n} \frac{nU_{(i)} - i}{\sqrt{i}} + \max_{0 \le j \le n}\frac{nU_{(j)}' - j}{\sqrt{j}} \ge \sqrt{1.9\log n}\bigg)\\ &\le 2\P\bigg(\max_{0 \le i \le n} \frac{nU_{(i)} - i}{\sqrt{i}} \ge \frac{\sqrt{1.9\log n}}{2}\bigg)\\ &\le 2\P\bigg(\max_{0 \le i \le n} \frac{nU_{(i)} - i}{\sqrt{i(1 - i/n)}} \ge \frac{\sqrt{1.9\log n}}{2}\bigg). \end{align*} Now, \citet{eicker1979asymptotic} showed that \begin{equation} \max_{0 \le i \le n} \frac{nU_{(i)} - i}{\sqrt{i(1 - i/n)}} \sim \sqrt{2\log\log n}, \end{equation} which finishes the proof for $R_5$. \noindent $\bullet$ Combining all the results gives the lower bound, which, together with the upper bound, completes the proof of \thmref{mnplus}. \qed \subsubsection{Proof of (\ref{result:mnminus})} In what follows, we let \begin{equation} u_n = u_n(\tau) := \log n + \tau, \end{equation} with $\tau$ fixed. Define \begin{align*} A_{i, j}^{n-}(u_n) & = \bigg\{\frac{n(U_{(j)} - U_{(i)}) - (j - i)}{\sqrt{(j - i)(1 - \frac{j - i}{n})}} \le u_n\bigg\} =\bigg\{U_{(j)} - U_{(i)} \le \frac{j - i}{n} + \frac{u_n}{\sqrt{n}}w_{i, j}^n \bigg\}, \end{align*} where $w_{i, j}^n$ is defined in \eqref{w}, and note that \begin{equation} \big\{ M_n^- \le u_n \big\} = \bigcap_{0 \le i < j \le n} A_{i,j}^{n-}(u_n). \end{equation} \paragraph{Step 1: Upper bound} For the upper bound, again, we only consider a particular order of magnitude for the length, the one that contributes the most to the maximum. When $j - i \le \frac{n \log\log n}{(\log n)^2}$, \begin{align*} A_{i, j}^{n-}(u_n) &\subset \Omega_n^\mathsf{c} \bigcup \{ \Omega_n \bigcap A^{n-}_{i, j}(u_n) \} \\ &\subset \Omega_n^\mathsf{c} \bigcup \bigg\{\frac{S_j^- - S_i^-}{\sqrt{j - i}} \le (\log\log n)\sqrt{\frac{j - i}{n}} + u_n\cdot\bigg(1 + \frac{\log\log n}{\sqrt{n}}\bigg)\bigg\}\\ &\subset \Omega_n^\mathsf{c} \bigcup \bigg\{\frac{S_j^- - S_i^-}{\sqrt{j - i}} \le u_n(\tau + \varepsilon) \bigg\}, \end{align*} for any $\varepsilon > 0$, where $\Omega_n$ is given in \eqref{eq:omegancn}. By \eqref{eq:omegan}, it suffices to consider the second event on the right-hand side. By Theorem 1.7 in \citep{kabluchko2014limiting}, the limiting distribution of $Z_n^-$ is the same as that of $\max_{1 \le i \le n}(-X_i)$. By the independence of $\{X_i\}$, we obtain \begin{align}\label{lem:righttail} \lim_{n \to \infty}\P( Z_n^- \le u_n ) &= \lim_{n \to \infty}\P\{ \max_{1 \le i \le n} (-X_i) \le u_n \} = \exp\{ -\exp(1 - \tau) \}.
\end{align} Therefore, taking $\varepsilon \to 0$, \begin{align*} \limsup_{n \to \infty}\P(M_n^- \le u_n) &= \limsup_{n \to \infty}\P\bigg\{\bigcap_{0 \le i < j \le n}A_{i, j}^{n-}(u_n)\bigg\}\\ &\le \limsup_{n \to \infty}\P\bigg\{ \bigcap_{\substack{0 \le i < j \le n: j - i \le \frac{n\log\log n}{(\log n)^2}}}A_{i, j}^{n-}(u_n)\bigg\} + \limsup_{n \to \infty}\P(\Omega_n^\mathsf{c})\\ &\le \lim_{\varepsilon \to 0}\exp \{ -\exp(1 - \tau - \varepsilon) \}\\ &=\exp \{ -\exp(1 - \tau) \}. \end{align*} \paragraph{Step 2: Lower bound} As in the proof of Theorem \ref{thm:mnplus}, we divide the range of $j - i$ into several subintervals. Similarly to the upper bound case, \begin{equation} \lim_{n\to \infty }\P \bigg\{ M_n^-\bigg(1, \frac{n\log\log n}{(\log n)^2}\bigg) \leq u_n \bigg\} = \exp \{ -\exp(1 - \tau) \}. \end{equation} With the same argument that was used to prove \eqref{eq:kolplus}, we obtain \begin{equation} \lim_{n \to \infty}\P\bigg\{ \bigcap_{\substack{0 \le i < j \le n : \frac{n\log\log n}{(\log n)^2} \le j - i \le n - \frac{n\log\log n}{(\log n)^2}}}A_{i, j}^{n-}(u_n)\bigg\} = 1. \end{equation} The case where $j - i \ge n - \frac{n\log\log n}{(\log n)^2}$ can be treated similarly to the region $R_5$ in the proof of Theorem \ref{thm:mnplus}; it is in fact easier since now $u_n \sim \log n$, and the details are omitted. \subsubsection{Proof of (\ref{result:mn})} This follows directly from \eqref{result:mnplus}, which gives $M_n^+ \asymp_P \sqrt{\log n}$, and \eqref{result:mnminus}, which gives $M_n^- \asymp_P \log n$ (here $A_n \asymp_P B_n$ means $A_n = O_P(B_n)$ and $B_n = O_P(A_n)$). Together they imply that $M_n^- \gg_P M_n^+$, and therefore $M_n = \max(M_n^-, M_n^+) = M_n^-$ with probability tending to 1 as $n$ increases. \subsection{Proof of \thmref{tmnplus}} \subsubsection{Proof of (\ref{eq:tmnplusall})} We first derive the asymptotic distribution of \begin{equation} \tilde M_n^+(1,2) = \max_{0 \le i \le n - 1}\frac{1 - n(U_{(i + 1)} - U_{(i)})}{\sqrt{n(U_{(i + 1)} - U_{(i)})(1 - U_{(i + 1)} + U_{(i)})}}, \end{equation} which is exactly the one claimed in \eqref{eq:tmnplusall}, and then show that $\tilde M_n^+(2, n) \ll_P \sqrt{n}$. These two facts together imply \eqref{eq:tmnplusall}. To get the asymptotic distribution of $\tilde M_n^+(1,2)$, note that \begin{align} \tilde M_n^+(1,2) \le \max_{0 \le i \le n - 1}\frac{1}{\sqrt{n(U_{(i + 1)} - U_{(i)})[1 - (U_{(i + 1)} - U_{(i)})]}} \label{eq:tmnoneoneupper}\\ \mbox{ and }~ \tilde M_n^+(1,2) \ge \max_{0 \le i \le n - 1}\frac{1 - n(U_{(i + 1)} - U_{(i)})}{\sqrt{n(U_{(i + 1)} - U_{(i)})}} \label{eq:tmnoneonelower}, \end{align} where both the upper and the lower bound are functions of \begin{equation} T := \min_{0 \le i \le n - 1} (U_{(i + 1)} - U_{(i)}), \end{equation} with the convention $U_{(0)} := 0$. Therefore it suffices to work with $T$ instead. It is easy to see that $T \le 1/n$. By symmetry, \begin{equation} \P(T \ge t) = n!\P(T \ge t, U_1 \le U_2 \le \cdots \le U_n). \end{equation} Define the subset \begin{equation} A_t = \{(u_1, \ldots, u_n) \in [0, 1]^n: u_i + t \le u_{i + 1}, i = 0, 1, \ldots, n-1 \}, \end{equation} where $u_0 = 0$.
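Before carrying out the general computation, it is instructive to check the answer in a small case. For $n = 2$, integrating directly over the ordered region, \begin{equation} \P(T \ge t) = 2\int_t^{1 - t}\int_{u_1 + t}^1 du_2\, du_1 = 2\int_t^{1 - t}(1 - u_1 - t)\, du_1 = (1 - 2t)^2, \end{equation} which matches the general formula $\P(T \ge t) = (1 - nt)^n$ derived below.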
Returning to the general case, we have \begin{equation} \{(U_1,\cdots, U_n) \in A_t\} = \{T \ge t, U_1 \le U_2 \le \cdots \le U_n\}, \end{equation} and hence \begin{equation} \P(T \ge t, U_1 \le U_2 \le \cdots \le U_n) = \lambda_n(A_t), \end{equation} where $\lambda_n$ is the Lebesgue measure on $\mathbb{R}^n$. Define the mapping \begin{equation} h: \quad A_t \longrightarrow Q \subset [0, 1 - nt]^n, \quad h(u_1,u_2,\cdots, u_n) = (u_1 - t, u_2 - 2t, \cdots, u_n - nt), \end{equation} where \begin{equation} Q := \{(y_1,\ldots,y_n): y_i \le y_{i + 1}, \forall ~ 1 \le i \le n - 1\} \cap [0, 1 - nt]^n. \end{equation} It is easy to verify that $h$ is a volume-preserving bijection. Hence \begin{equation} \P(T \ge t, U_1 \le U_2 \le \cdots \le U_n) = \lambda_n(A_t) = \lambda_n(Q) = \frac{(1 - nt)^n}{n!}. \end{equation} Therefore, for any $0 \le t \le 1/n$, \begin{equation} \P(T \ge t) = \frac{n!(1 - nt)^n}{n!} = (1 - nt)^n, \end{equation} which implies \begin{equation} \lim_{n \to \infty}\P\bigg( T \ge \frac{\tau}{n^2} \bigg) = \exp(-\tau). \end{equation} This, together with \eqref{eq:tmnoneoneupper} and \eqref{eq:tmnoneonelower}, implies \begin{equation} \lim_{n \to \infty}\P \bigg( \tilde M_n^+(1, 2) \le \sqrt{\frac{n}{\tau}} \bigg) = \exp(-\tau). \end{equation} It remains to show that $\tilde M_n^+(2 , n) \ll_P \sqrt{n}$. We divide this range into $\tilde M_n^+(2, (\log n)^2)$, $\tilde M_n^+((\log n)^2, n - (\log n)^2)$ and $\tilde M_n^+(n - (\log n)^2, n)$. When $2 \le j - i \le (\log n)^2$, note that \begin{align} 1 - (U_{(j)} - U_{(i)}) &= 1 - \frac{j - i}{n + 1} - (\bar{U}_{(j)} - \bar{U}_{(i)})\\ &\ge 1 - \frac{(\log n)^2}{n + 1} - 2\max_{1 \le i \le n} |\bar{U}_{(i)}|\nonumber\\ &= 1 + O_P(1/\sqrt{n})\nonumber\\ &\ge 0.5,\label{eq:oneujui} \end{align} where the last inequality holds, for $n$ large enough, on a sequence of events with probability tending to one, by Kolmogorov's Theorem mentioned in the proof of \thmref{mnplus}. Meanwhile, \begin{align} \frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{n(U_{(j)} - U_{(i)})}} &= \frac{j - i - \frac{n}{n + 1 - S_{n + 1}^+}(j - i - S_j^+ + S_i^+)}{\sqrt{\frac{n}{n + 1 - S_{n + 1}^+}(j - i - S_j^+ + S_i^+)}}\nonumber\\ &= (1 + O_P(1/\sqrt{n}))\tilde Z_{i, j}^+ +O_P(1/\sqrt{n})\nonumber\\ &\le 1.01\tilde Z_{i, j}^+ + 0.01,\label{eq:mnplusbound} \end{align} on the sequence of events $\Omega_n$ defined in \eqref{eq:omegancn}.
With these results, the union bound, \eqref{eq:rescumulantineq}, and the fact that $I^+(s) = -s - \log(1 - s)$ on $[0, 1)$, for any $\varepsilon > 0$, \begin{align*} &\P(\tilde M_n^+(2, (\log n)^2) \ge \varepsilon \sqrt{n}) \\ &\le \P(\tilde Z_n^+(2, (\log n)^2) \ge 0.9\varepsilon \sqrt{n}) + \P(\Omega_n^\mathsf{c})\\ &\le \sum_{0 \le i < j \le n: 2 \le j - i \le (\log n)^2}\P(\tilde Z_{i, j}^+ \ge 0.9\varepsilon \sqrt{n}) + \P(\Omega_n^\mathsf{c})\\ &\le n\sum_{2 \le k \le (\log n)^2}\P\bigg(\frac{S_k^+}{\sqrt{k - S_k^+}} \ge 0.9\varepsilon \sqrt{n}\bigg) + \P(\Omega_n^\mathsf{c})\\ &\le n\sum_{2 \le k \le (\log n)^2}\exp\bigg[ -kI^+\bigg\{g^+\bigg(\frac{0.9\varepsilon \sqrt{n}}{\sqrt{k}}\bigg)\bigg\}\bigg] + \P(\Omega_n^\mathsf{c})\\ &\le n\sum_{2 \le k \le (\log n)^2}\exp\bigg[ kg^+\bigg(\frac{0.9\varepsilon \sqrt{n}}{\sqrt{k}}\bigg) + k\log\bigg\{1 - g^+\bigg(\frac{0.9\varepsilon \sqrt{n}}{\sqrt{k}}\bigg)\bigg\} \bigg]+ \P(\Omega_n^\mathsf{c}). \label{eq:tmplusgplus} \end{align*} Write $a = 0.9\varepsilon\sqrt{n}/\sqrt{k}$; note that $a \to \infty$ uniformly over $2 \le k \le (\log n)^2$, and $g^+(a) \uparrow 1$ as $a \to \infty$. In addition, \begin{equation} 1 - g^+(a) = 1 - \frac{a(\sqrt{a^2 + 4} - a)}{2} = 1 - \frac{2a}{\sqrt{a^2 + 4} + a} = \frac{\sqrt{a^2 + 4} - a}{\sqrt{a^2 + 4} + a} = \frac{4}{(\sqrt{a^2 + 4} + a)^2}, \end{equation} and \begin{equation} \frac{0.9}{a^2} \le \frac{4}{(\sqrt{a^2 + 4} + a)^2} \le \frac{1}{a^2} \end{equation} when $a$ is large enough. Therefore, when $n$ is sufficiently large, \begin{align*} \P(\tilde M_n^+(2, (\log n)^2) \ge \varepsilon \sqrt{n}) &\le n\sum_{2 \le k \le (\log n)^2}\exp\bigg\{ k - k\log\bigg(\frac{0.9\varepsilon n}{k}\bigg) \bigg\} + \P(\Omega_n^\mathsf{c}) \\ &\le n\sum_{2 \le k \le (\log n)^2}\exp(- 0.9k\log n ) + \P(\Omega_n^\mathsf{c}) \\ &\le n\sum_{2 \le k \le (\log n)^2}\exp( -1.8\log n ) + \P(\Omega_n^\mathsf{c}) \to 0, \end{align*} where the third inequality uses $k \ge 2$ and the last step uses \eqref{eq:omegan}. When $(\log n)^2 \le j - i \le n - (\log n)^2$, by Theorem \ref{thm:mnplus} and Theorem \ref{thm:mnminus}, we have \begin{equation}\label{eq:orderdiffup} U_{(j)} - U_{(i)} \le \frac{j - i}{n} +\frac{1.01\log n}{ \sqrt{n}}w_{i, j}^n, \end{equation} \begin{equation}\label{eq:orderdiffup2} 1 - (U_{(j)} - U_{(i)}) \ge 1 - \frac{j - i}{n} - \frac{1.01\log n}{ \sqrt{n}}w_{i, j}^n, \end{equation} \begin{equation}\label{eq:orderdifflow} U_{(j)} - U_{(i)} \ge \frac{j - i}{n} - \frac{1.01\log n}{\sqrt{n}}w_{i, j}^n, \end{equation} and \begin{equation}\label{eq:orderdifflow2} 1 - (U_{(j)} - U_{(i)}) \le 1 - \frac{j - i}{n} + \frac{1.01\log n}{\sqrt{n}}w_{i, j}^n, \end{equation} all with probability tending to one. Together, \eqref{eq:orderdiffup} and \eqref{eq:orderdifflow} lead to \begin{equation} \bigg|\frac{n(U_{(j)} - U_{(i)})}{j - i}\bigg| = O_P(1), \end{equation} uniformly in $(i, j)$ satisfying $j-i \ge (\log n)^2$, while \eqref{eq:orderdiffup2} and \eqref{eq:orderdifflow2} imply \begin{equation} \bigg|\frac{1 - (U_{(j)} - U_{(i)})}{1 -(j - i)/n}\bigg| = O_P(1). \end{equation} These, combined with the definitions of $M_n^+$ and $\tilde M_n^+$, imply \begin{equation} \tilde M_n^+\{(\log n)^2, n - (\log n)^2\} \asymp_P M_n^+\{(\log n)^2, n - (\log n)^2\}. \end{equation} By Theorem \ref{thm:mnplus}, it follows that for any $\varepsilon > 0$, \begin{equation} \lim_{n \to \infty}\P [ \tilde M_n^+\{(\log n)^2, n - (\log n)^2\} \ge \varepsilon\sqrt{n} ] = 0.
\end{equation} Finally, when $n - (\log n)^2 \le j - i \le n$, define $j' = n - j$ and thus $U_{(j' + 1)}' = 1 - U_{(n + 1- j' - 1)} = 1 - U_{(j)}$. A simple change of indices gives \begin{align*} &\tilde M_n^+(n - (\log n)^2, n)\\ &= \max_{\substack{0 \le i < j \le n\\n - (\log n)^2 \le j - i \le n}}\frac{j - i - n(U_{(j)} - U_{(i)})}{\sqrt{n(U_{(j)} - U_{(i)})(1 - (U_{(j)} - U_{(i)}))}}\\ &=\max_{\substack{i, j' \ge 0\\ i + j' \le (\log n)^2}} \frac{nU_{(j' + 1)}' - (j' + 1) + nU_{(i)} - i}{\sqrt{n(U_{(i)} + U_{(j' + 1)}') (1 - U_{(i)} - U_{(j' + 1)}')}}\\ &=\max_{\substack{i, j \ge 0\\ 1\le i + j \le (\log n)^2}} \frac{nU_{(i)} - i + nU_{(j)}' - j}{\sqrt{n(U_{(i)} + U_{(j)}') (1 - U_{(i)} - U_{(j)}')}} + O_P(1). \end{align*} Notice that when $i, j \ge 0$ and $1 \le i + j \le (\log n)^2$, \begin{equation} 1 - U_{(i)} - U_{(j)}' > 1 -2 \max_{0 \le i \le (\log n)^2}U_{(i)} > 0.5, \end{equation} with probability tending to one, which can be seen by a simple application of Kolmogorov's Theorem. By an argument similar to the one used for $R_5$ in the proof of Theorem \ref{thm:mnplus}, \begin{align} &\P\bigg(\max_{\substack{i, j \ge 0\\ 1 \le i + j \le (\log n)^2}} \frac{nU_{(i)} - i + nU_{(j)}' - j}{\sqrt{n(U_{(i)} +U_{(j)}') (1 - U_{(i)} - U_{(j)}')}}\ge \varepsilon \sqrt{n} \bigg) \\ &\le \P\bigg(\max_{\substack{i, j \ge 0\\ 1 \le i + j \le (\log n)^2}} \frac{nU_{(i)} - i + nU_{(j)}' - j}{\sqrt{n(U_{(i)} +U_{(j)}')}}\ge 0.5 \varepsilon \sqrt{n} \bigg) \\ &\le 2\P\bigg(\max_{0 \le i \le (\log n)^2} \frac{nU_{(i)} - i}{\sqrt{nU_{(i)}}}\ge 0.25\varepsilon \sqrt{n} \bigg) \\ &\le 2\P\bigg(\max_{0 \le i \le (\log n)^2} \frac{nU_{(i)} - i}{\sqrt{nU_{(i)}(1 - U_{(i)})}}\ge 0.25\varepsilon \sqrt{n} \bigg) \\ &\to 0, \label{eq:tmnpluslast} \end{align} where the last line again follows from \citet{eicker1979asymptotic}. This establishes \eqref{eq:tmnplusall}. \subsubsection{Proof of (\ref{eq:tmnpluscubelocal})} {\bf The roadmap of our proof.} To derive the asymptotic distribution, we first focus on the range that contributes the most, i.e., pairs with length $j - i = l_n \sim a\log^3 n$ for $a > 0$. Define \begin{equation} u_n = u_n(\tau) := \sqrt{2\log n}\bigg(1 + \frac{-3\log\log n + 2\tau}{4\log n}\bigg). \end{equation} For any two constants $0 < A_1 < A_2 < \infty$, define $l_n^- = A_1 \log^3 n$ and $l_n^+ = A_2 \log^3 n$. We prove \begin{equation}\label{eq:tmnplusrestricted} \lim_{n \to \infty}\P\{ \tilde M_n^+(l_n^-, l_n^+) \le u_n\} = \exp\bigg\{ -e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da\bigg\}. \end{equation} It turns out that to prove \eqref{eq:tmnplusrestricted}, within that region it suffices to work with \begin{equation} \tilde Z_{i, j}^+ := \frac{S_j^+ - S_i^+}{\sqrt{j - i - (S_j^+ - S_i^+)}} \end{equation} instead, up to restricting to the event $\Omega_n$ defined in \eqref{eq:omegancn}. Write \begin{equation} \tilde Z_n^+(k, l) = \max_{0 \le i < j \le n: k \le j - i \le l } \tilde Z_{i, j}^+, \end{equation} and \begin{equation} \tilde Z_n^+ = \tilde Z_n^+(1, n).
\end{equation} We will use Lemma \ref{lem:interchange} to show that \begin{equation}\label{eq:local2} \mathcal{Q}_n := \P\bigg(\max_{(i, j) \in \mathbb{T}_{Bq_n}(x, x + l_n)} \tilde Z_{i, j}^+ \ge u_n\bigg) \sim P_n(0)\bigg\{ 1 + H^2\bigg(\frac{B}{a}\bigg) \bigg\}, \end{equation} where $B\geq 1$ is an integer and the quantities $P_n(0)$, $H(x)$, $q_n$ will be specified later. Next, with a domain $\mathbb{J}_n(z)$ (to be specified) larger than $\mathbb{T}_{Bq_n}$, we will show that \begin{equation}\label{eq:largelocalrate} \P\bigg(\max_{(i, j) \in \mathbb{J}_n(z)} \tilde Z_{i, j}^+ \ge u_n \bigg) \sim e^{-\tau}\frac{w_n}{n}\int_{A_1}^{A_2} \Lambda_1(a)da, \end{equation} which no longer depends on $B$, with $\Lambda_1(a)$ defined in the statement of the theorem. This enables us to apply the Poisson limit theorem in \cite{arratia1989two} to get \begin{equation}\label{eq:tznplus} \lim_{n \to \infty}\P\{ \tilde Z_n^+(l_n^-, l_n^+) \le u_n \} = \exp\bigg\{ -e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da\bigg\}. \end{equation} The final step will be to show that the region beyond $A_2(\log n)^3$ is negligible, that is, \begin{equation}\label{eq:tznplusbeyond} \limsup_{A_2 \to \infty}\limsup_{n \to \infty}\P\{ \tilde M_n^+(l_n^+, n) \ge u_n \} = 0. \end{equation} Therefore setting $A_1 = A$ and letting $A_2 \to \infty$ yield \eqref{eq:tmnpluscubelocal}. We first argue why we can focus on \eqref{eq:tzij} instead when $j - i \asymp \log^3 n$. Note that \eqref{eq:oneujui} and \eqref{eq:mnplusbound} continue to hold when $j - i \asymp (\log n)^3$. Hence, \begin{equation} \tilde M_n^+(l_n^-, l_n^+) = \{1 + O_P(1/\sqrt{n})\}\tilde Z_n^+(l_n^-, l_n^+) + O_P(1/\sqrt{n}), \end{equation} which implies \begin{align*} \P\{\tilde Z_n^+(l_n^-, l_n^+) \le u_n(\tau - \varepsilon)\} \le \P\{\tilde M_n^+(l_n^-, l_n^+) \le u_n(\tau)\} \le\P\{\tilde Z_n^+(l_n^-, l_n^+) \le u_n(\tau + \varepsilon)\}, \end{align*} for any $\varepsilon > 0$ and $n$ large enough. Thus, once \eqref{eq:tznplus} is established, taking $\varepsilon \to 0$ yields \eqref{eq:tmnplusrestricted}. We now turn to the main body of the proof. \medskip \noindent {\sc Proof of \eqref{eq:local2}}. We follow a strategy similar to that of \citet{kabluchko2014limiting}. Adjustments are still needed, since \citet{kabluchko2014limiting} worked with $Z_{i, j}^+$ while we are dealing with $\tilde Z_{i, j}^+$; we present the parts that need to be adjusted and refer to their results when nothing needs to be changed. First we work on $\mathcal{Q}_n$. For any $\tau \in \mathbb R$ and $a > 0$, let $l_n = a(\log n)^3$ and define \begin{equation}\label{eq:onedef2} P_n(s) = \P\bigg(\frac{S_{l_n}^+}{\sqrt{l_n - S_{l_n}^+}} \ge u_n -\frac{s}{u_n} \bigg). \end{equation} For ease of notation, define \begin{equation} b_n := \frac{u_n - s/u_n}{\sqrt{l_n}}.
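The following elementary expansion will be used repeatedly below and explains where the factor $e^{-\tau}\log n/n$ in \eqref{eq:pns2} comes from. Squaring the definition of $u_n = u_n(\tau)$, \begin{equation} u_n^2 = 2\log n - 3\log\log n + 2\tau + O\bigg(\frac{(\log\log n)^2}{\log n}\bigg), \end{equation} so that \begin{equation} \exp\bigg(-\frac{u_n^2}{2}\bigg) = \{1 + o(1)\}\, e^{-\tau}\frac{(\log n)^{3/2}}{n}, \qquad \frac{1}{\sqrt{2\pi}u_n}\exp\bigg(-\frac{u_n^2}{2}\bigg) = \{1 + o(1)\}\, \frac{1}{2\sqrt{\pi}}\frac{e^{-\tau}\log n}{n}. \end{equation}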
Since $u_n^3 \propto \sqrt{l_n}$ and $b_n \sim \sqrt{2/a}/\log n\to 0$, for fixed $s > 0$ and sufficiently large $n$, the transformation \eqref{eq:tzijtransform}, Lemma \ref{lem:moderatedev} and the Taylor expansion \eqref{eq:iplusgplustaylor} give \begin{align} P_n(s) &= \P\bigg\{ \frac{S_{l_n}^+}{\sqrt{l_n}} \ge \sqrt{l_n}g^+(b_n)\bigg\}\nonumber\\ &\sim \frac{1}{\sqrt{2\pi}u_n}\exp\bigg\{ -\frac{(u_n - s/u_n)^2}{2}\frac{2I^+(g^+(b_n ))}{b_n^2} \bigg\}\nonumber\\ &= \frac{1}{\sqrt{2\pi}u_n}\exp\bigg\{-\frac{(u_n - s/u_n)^2}{2}\bigg(1 - \frac{1}{3}b_n\bigg) + o(1)\bigg\}\nonumber\\ &\sim \frac{1}{2\sqrt{\pi}}e^{s + \frac{\sqrt{2}}{3} a^{-1/2}} \frac{e^{-\tau}\log n}{n} \label{eq:pns2}. \end{align} Recall that $\mathbb{T}_r(x, y)$ is defined in \eqref{T}. Define $q_n = (\log n)^2$. By the same techniques as in the proof of Lemma \ref{lem:weakcontroltzijplus}, we have \begin{align*} \mathcal{Q}_n & = \P\bigg(\max_{(i, j) \in \mathbb{T}_{Bq_n}(x, x + l_n)} \tilde Z_{i, j}^+ \ge u_n\bigg) \\ &= \P\bigg[\max_{(i, j) \in \mathbb{T}_{Bq_n}(x, x + l_n)} \bigg\{S_j^+ - S_i^+ - (j - i)g^+\bigg(\frac{u_n}{\sqrt{j - i}}\bigg)\bigg\} \ge 0\bigg]\\ &= \P\bigg[\max_{0 \le k_1, k_2 \le Bq_n} \bigg\{S_{k_1}^{(1)+} + S_{k_2}^{(2)+} -(l_n + k_1 + k_2)g^+\bigg(\frac{u_n}{\sqrt{l_n + k_1 + k_2}}\bigg)\bigg\} + S_{l_n}^+ \ge 0\bigg]\\ &= P_n(0)\bigg\{ 1 + \int_0^\infty G_n(s)d\nu_n(s) \bigg\}, \end{align*} where $P_n(s)$, defined in \eqref{eq:onedef2}, is in fact the distribution function of $V_{l_n, u_n}$ defined in \eqref{eq:vludefn}. Here \begin{align*} G_n(s) :=& \P\bigg[\max_{0 \le k_1, k_2 \le Bq_n} \bigg\{ S_{k_1}^{(1)+} + S_{k_2}^{(2)+} - (l_n + k_1 + k_2)g^+\bigg(\frac{u_n}{\sqrt{l_n + k_1 + k_2}}\bigg)\bigg\} \\ &~~~~~+ l_n \cdot g^+\bigg(\frac{u_n - s/u_n}{\sqrt{l_n}}\bigg) \ge 0\bigg], \end{align*} and \begin{equation} \nu_n(\cdot) := P_n(\cdot)/P_n(0). \end{equation} The first and second conditions in Lemma~\ref{lem:interchange} hold by directly mimicking the details in the proof of Lemma 4.3 in \citep{kabluchko2014limiting}; that is, for any fixed $s > 0$ and any sequence $s_n \to s$, \begin{equation} \lim_{n \to \infty}G_n(s_n) = \P( M_1 + M_2 \ge s ), \end{equation} and \begin{equation} \lim_{n \to \infty}\nu_n([0, s)) = \lim_{n \to \infty} \frac{P_n(s)}{P_n(0)} = e^s. \end{equation} Here $M_1$ and $M_2$ are independent copies of \begin{equation} M = \sup_{t \in [0, a^{-1}B]} \{ \sqrt{2}W(t) - t \}, \end{equation} where $W(t)$ is a standard Brownian motion (similar but more detailed arguments can be found in the proof of Lemma 4.3 in \cite{kabluchko2011extremes}). To verify the third condition in Lemma~\ref{lem:interchange}, we need to bound the integral $ \int_0^\infty G_n(s)d\nu_n(s)$ from above, which follows immediately from Lemma \ref{lem:weakcontroltzijplus}. Hence applying Lemma \ref{lem:interchange} completes the proof of \eqref{eq:local2}, with \begin{equation} H(x) := \operatorname{\mathbb{E}}\{\sup_{t \in [0, x]} e^{\sqrt{2}W(t) - t}\}, \quad x > 0. \end{equation} \medskip \noindent {\sc Proof of \eqref{eq:largelocalrate}}. Define $w_n = (\log n)^3$. For $z \in \mathbb{Z}$, define \begin{equation} \mathbb{J}_n(z) = \{(i, j) \in \mathbb{I}: z \le i < z + w_n, j - i \in [l_n^-, l_n^+]\}.
\end{equation} To derive the rate of $\P(\max_{(i, j) \in \mathbb{J}_n(z)} \tilde Z_{i, j}^+ \ge u_n )$, by translation invariance we may take $z = 0$. Let $\delta_n$ be a real sequence satisfying $\delta_n =o (w_n)$ and $q_n = o(\delta_n)$, e.g. $\delta_n = (\log n)^{2.5}$. For $B \in \mathbb{N}$, we introduce the following two-dimensional discrete grids with mesh size $Bq_n$: \begin{equation} \mathcal{J}_n(B) = \{(x, y) \in Bq_n\mathbb{Z} \times Bq_n\mathbb{Z} : x \in [-\delta_n, w_n + \delta_n], y - x \in [l_n^- - \delta_n, l_n^+ + \delta_n]\}, \end{equation} \begin{equation} \mathcal{J}_n'(B) = \{(x, y) \in Bq_n\mathbb{Z} \times Bq_n\mathbb{Z} : x \in [\delta_n, w_n -\delta_n], y - x \in [l_n^- + \delta_n, l_n^+ - \delta_n]\}. \end{equation} By the Bonferroni inequality, \begin{equation} S_n'(B) - S_n''(B) \le \P\bigg( \max_{(i, j) \in \mathbb{J}_n(0)} \tilde Z_{i, j}^+ \ge u_n \bigg) \le S_n(B), \end{equation} where \begin{equation}\label{eq:firstpart} S_n(B) = \sum_{(x, y) \in \mathcal{J}_n(B)}\P\bigg( \max_{(i, j) \in \mathbb{T}_{Bq_n}(x, y)} \tilde Z_{i, j}^+ \ge u_n \bigg), \end{equation} \begin{equation} S_n'(B) = \sum_{(x, y) \in \mathcal{J}_n'(B)}\P\bigg( \max_{(i, j) \in \mathbb{T}_{Bq_n}(x, y)} \tilde Z_{i, j}^+ \ge u_n \bigg), \end{equation} and \begin{equation}\label{eq:snppb} S_n''(B) = \sum_{(x_1, y_1), (x_2, y_2) }\P\bigg( \max_{(i, j) \in \mathbb{T}_{Bq_n}(x_1, y_1)} \tilde Z_{i, j}^+ \ge u_n, \max_{(i, j) \in \mathbb{T}_{Bq_n}(x_2, y_2)} \tilde Z_{i, j}^+ \ge u_n \bigg), \end{equation} where the summation is taken over $(x_1, y_1) \neq (x_2, y_2) \in \mathcal{J}_n'(B)$. As long as we can show \begin{equation}\label{eq:snbupper} \lim_{B \to \infty}\limsup_{n \to \infty}nw_n^{-1}S_n(B) \le e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da, \end{equation} \begin{equation}\label{eq:snpblower} \lim_{B \to \infty}\liminf_{n \to \infty}nw_n^{-1}S_n'(B) \ge e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da, \end{equation} and \begin{equation}\label{eq:snppbzero} \lim_{B \to \infty}\limsup_{n \to \infty}nw_n^{-1}S_n''(B) = 0, \end{equation} \eqref{eq:largelocalrate} will follow immediately. The proof of \eqref{eq:snpblower} is almost identical to that of \eqref{eq:snbupper}, so we only focus on proving \eqref{eq:snbupper}, which is based on the dominated convergence theorem. Define \begin{equation} \mathcal{L}_n(B) = Bq_n \mathbb{Z}\cap [l_n^- -\delta_n, l_n^+ +\delta_n], \end{equation} so that $|\mathcal{L}_n(B)| \sim (A_2 - A_1) (\log n)/B$. Since the probability on the right-hand side of \eqref{eq:firstpart} depends only on $l := y - x$, by translation invariance we have \begin{equation} S_n(B) \le \frac{w_n + \delta_n}{Bq_n} \sum_{l \in \mathcal{L}_n(B)}\P\bigg( \max_{(i, j) \in \mathbb{T}_{Bq_n}(0, l)} \tilde Z_{i, j}^+ \ge u_n \bigg). \end{equation} Next we apply \eqref{eq:local2} to bound each probability with $l$ fixed, and replace $ (B q_n)^{-1} \sum_{l \in \mathcal{L}_n(B)}$ by an integral as $n\to \infty$.
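The replacement of the normalized sum by an integral is a standard Riemann-sum approximation, which we record for clarity: writing $a = l/w_n$, the points $\{l/w_n : l \in \mathcal{L}_n(B)\}$ form a grid of mesh $Bq_n/w_n = B/\log n$ covering $[A_1 - o(1), A_2 + o(1)]$, so for any bounded function $h$ that is Riemann integrable on $[A_1, A_2]$, \begin{equation} \frac{B}{\log n}\sum_{l \in \mathcal{L}_n(B)} h\bigg(\frac{l}{w_n}\bigg) \to \int_{A_1}^{A_2} h(a)\, da, \qquad n \to \infty. \end{equation}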
By \eqref{eq:local2} and \eqref{eq:pns2}, \begin{equation} \lambda_{n,B}(a) : = \frac{n}{\log n}\P\bigg( \max_{(i,j) \in \mathbb{T}_{Bq_n}(0, l_{n,B}(a))} \tilde Z_{i,j}^+ \ge u_n \bigg) \to \frac{1}{2\sqrt{\pi}}e^{ \frac{\sqrt{2}}{3} a^{-1/2} - \tau}\bigg\{ 1 + H^2\bigg(\frac{B}{a}\bigg) \bigg\}, \end{equation} as $n \to \infty$, where \begin{equation} l_{n,B}(a) = \max\{l \in Bq_n\mathbb{Z}: l \le a w_n \}. \end{equation} The function $\lambda_{n,B}(a)$ takes constant values on sub-intervals of width $Bq_n/w_n = B/\log n$. It follows that \begin{equation} S_n(B) \le \frac{w_n + \delta_n}{B^2n} \sum_{l \in \mathcal{L}_n(B)} \frac{B\lambda_{n,B}(l/w_n)}{\log n} = \frac{w_n + \delta_n}{B^2 n}\int_{A_1 - \frac{2\delta_n}{w_n}}^{A_2 + \frac{2\delta_n}{w_n}}\lambda_{n,B}(a)da. \end{equation} From Lemma \ref{lem:weakcontroltzijplus}, we can bound the integrand $\lambda_{n,B}(a)$ from above by an integrable function that does not depend on $n$. Therefore, applying the reverse Fatou lemma gives \begin{equation}\label{eq:domchange} \limsup_{n \to \infty}nw_n^{-1}S_n(B) \le e^{-\tau}\int_{A_1}^{A_2} \frac{a^2\Lambda_1(a)}{B^2}\bigg\{ 1 + H^2\bigg(\frac{B}{a}\bigg)\bigg\}da. \end{equation} This result holds for any $B \in \mathbb{N}$. Note that $\lim_{B \to \infty} H(B)/B = 1$. Letting $B\to \infty$, we arrive at \eqref{eq:snbupper}. To prove \eqref{eq:snppbzero}, we bound $S_n''(B)$ by analogous quantities involving $Z_{i, j}^+$, which allows us to use the results in \citet{kabluchko2014limiting} directly. For any interval $(x, y)$, define the event \begin{equation} E_n(x, y) = \bigg\{\max_{(i,j) \in \mathbb{T}_{Bq_n}(x, y)} \tilde Z_{i,j}^+ \ge u_n\bigg\}. \end{equation} Note that \begin{equation} \frac{g^+(x)}{x} = \frac{1}{2}(\sqrt{x^2 + 4} - x) \ge 1 - \frac{x}{2}, \quad\text{for all } x \ge 0. \end{equation} When $y - x \propto (\log n)^3$, we have $u_n/\sqrt{y - x} \propto 1/\log n$, and hence \begin{align*} E_n(x, y) &= \bigg\{\max_{0 \le l_1, l_2 \le Bq_n} \bigg\{S_{y + l_2}^+ - S_{x - l_1}^+ - (y - x + l_1 + l_2)g^+\bigg(\frac{u_n}{\sqrt{y - x + l_1 + l_2}}\bigg)\bigg\} \ge 0 \bigg\}\\ &\subset \bigg\{\max_{0 \le l_1, l_2 \le Bq_n} \frac{S_{y + l_2}^+ - S_{x - l_1}^+}{\sqrt{y - x + l_1 + l_2}} \ge \sqrt{y - x + l_1 + l_2} g^+\bigg(\frac{u_n}{\sqrt{y - x + l_1 + l_2}}\bigg) \bigg\}\\ &\subset \bigg\{\max_{(i,j) \in \mathbb{T}_{Bq_n}(x, y)} Z_{i,j}^+ \ge u_n(\tau)\bigg(1 - \frac{u_n}{2\sqrt{y - x}}\bigg)\bigg\}\\ &\subset \bigg\{\max_{(i,j) \in \mathbb{T}_{Bq_n}(x, y)} Z_{i,j}^+ \ge u_n(\tau - 0.1)\bigg\}. \end{align*} Therefore, \begin{align*} &\P\{ E_n(i_1, j_1) \cap E_n(i_2, j_2) \} \\ &\le \P\bigg[\bigg\{\max_{(i,j) \in \mathbb{T}_{Bq_n}(i_1, j_1)} Z_{i,j}^+ \ge u_n(\tau - 0.1)\bigg\} \bigcap \bigg\{\max_{(i,j) \in \mathbb{T}_{Bq_n}(i_2, j_2)} Z_{i,j}^+ \ge u_n(\tau - 0.1)\bigg\}\bigg]. \end{align*} This allows us to work with $Z_{i, j}^+$ instead. Directly applying Lemma 4.12, Lemma 4.14, Lemma 4.15 and Lemma 4.16 in \citep{kabluchko2014limiting} yields \eqref{eq:snppbzero}. \medskip \noindent {\sc Proof of \eqref{eq:tznplus}}. We temporarily adopt the notation of \citet{arratia1989two}. Define \begin{equation} I = \{\alpha \in \mathbb{N}: \alpha w_n \le n\}, \end{equation} so that $|I| \le n/w_n$.
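Before verifying the conditions of the Chen--Stein bound, it may help to record the heuristic count behind the Poisson approximation: by \eqref{eq:largelocalrate}, each block contributes $p_\alpha \sim e^{-\tau}(w_n/n)\int_{A_1}^{A_2}\Lambda_1(a)da$, and there are $|I| \sim n/w_n$ blocks, so \begin{equation} \sum_{\alpha \in I} p_\alpha \sim \frac{n}{w_n}\cdot e^{-\tau}\frac{w_n}{n}\int_{A_1}^{A_2} \Lambda_1(a)da = e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da, \end{equation} which is exactly the Poisson mean $\lambda$ appearing below.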
For any $\alpha \in I$, define \begin{equation} X_\alpha = \mathbbm{1}\{\max_{(i, j) \in \mathbb{J}_n(\alpha w_n)} \tilde Z_{i, j}^+ \ge u_n \}, \end{equation} \begin{equation} p_\alpha = \operatorname{\mathbb{E}}(X_\alpha), \end{equation} and \begin{equation} B_\alpha = \{\beta \in I: |\beta - \alpha|w_n \le l_n^+ + w_n\}, \end{equation} so that $|B_\alpha| \le 2(A_2 + 1) + 1$. To apply Theorem 1 in \citep{arratia1989two}, we need to show that \begin{equation} b_1 := \sum_{\alpha \in I}\sum_{\beta \in B_\alpha} p_\alpha p_\beta, \end{equation} \begin{equation} b_2 := \sum_{\alpha \in I}\sum_{\alpha \neq \beta \in B_\alpha} p_{\alpha\beta}, \text{ where } p_{\alpha\beta} := \operatorname{\mathbb{E}}(X_\alpha X_\beta), \end{equation} and \begin{equation} b_3' := \sum_{\alpha \in I} s_\alpha' \end{equation} therein vanish as $n \to \infty$, where \begin{equation} s_\alpha' := \operatorname{\mathbb{E}}\bigg|\operatorname{\mathbb{E}}\bigg(X_\alpha - p_\alpha \Big|\sum_{\beta \in I - B_\alpha} X_\beta \bigg)\bigg|. \end{equation} By the definition of $B_\alpha$, $X_\alpha - p_\alpha$ and $\sum_{\beta \in I - B_\alpha} X_\beta$ are independent. Hence $s_\alpha' = 0$, and thus $b_3' = 0$. It follows from \eqref{eq:largelocalrate} that \begin{equation} b_1 \le |I||B_\alpha| \max_{\alpha, \beta} p_\alpha p_\beta \to 0. \end{equation} With a slight modification of \eqref{eq:largelocalrate}, \begin{equation} \P\bigg(\max_{(i, j) \in \mathbb{J}_n(\alpha w_n) \cup \mathbb{J}_n(\beta w_n)} \tilde Z_{i, j}^+ \ge u_n \bigg) \sim e^{-\tau}\frac{2w_n}{n}\int_{A_1}^{A_2} \Lambda_1(a)da. \end{equation} This, together with \eqref{eq:largelocalrate}, implies \begin{equation} p_{\alpha\beta} = \P\bigg(\max_{(i, j) \in \mathbb{J}_n(\alpha w_n)} \tilde Z_{i, j}^+ \ge u_n, \max_{(i, j) \in \mathbb{J}_n(\beta w_n)} \tilde Z_{i, j}^+ \ge u_n \bigg) = o\bigg(\frac{w_n}{n}\bigg). \end{equation} Thus, \begin{equation} b_2 \le |I||B_\alpha|\max_{\alpha \neq \beta} p_{\alpha\beta} \to 0. \end{equation} Now, by Theorem 1 in \citep{arratia1989two}, \begin{equation} \lim_{n\to \infty}\P\{\tilde Z_n^+(l_n^-, l_n^+) \le u_n\} = \lim_{n\to \infty}\P\bigg(\sum_{\alpha \in I}X_\alpha = 0\bigg) = e^{-\lambda}, \end{equation} where \begin{equation} \lambda = \sum_{\alpha \in I}p_\alpha \to e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da. \end{equation} Therefore, \begin{equation} \lim_{n\to \infty}\P\{\tilde M_n^+(l_n^-, l_n^+) \le u_n\} = \exp\bigg(-e^{-\tau}\int_{A_1}^{A_2} \Lambda_1(a)da\bigg), \end{equation} by the reduction stated at the beginning of the proof. \medskip \noindent {\sc Proof of \eqref{eq:tznplusbeyond}}. Divide $(l_n^+, n]$ into $(l_n^+, (\log n)^4]$, $((\log n)^4, n - (\log n)^4]$ and $(n - (\log n)^4, n]$. Within the first region, for any $k \in \mathbb{N}$, the set of pairs $(i, j)$ with length $2^k (\log n)^3 \le j - i \le 2^{k + 1} (\log n)^3$ can be covered by the union of at most $2^{-k}n/\log n$ disjoint discrete squares of the form $\mathbb{T}_{2^k(\log n)^2}(x, x + j - i)$. By \eqref{eq:orderdiffup2}, \begin{equation} 1 - (U_{(j)} - U_{(i)}) \ge 1 - 1.1(\log n)^4/n, \end{equation} with probability tending to one.
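For orientation, we sketch why each summand in the dyadic decomposition below is $O(2^{-k})$ uniformly: applying Lemma \ref{lem:weakcontroltzijplus} with $l \asymp 2^k(\log n)^3$, $r = 2^k(\log n)^2$ and $u = u_n(\tau - 0.1)$ (the constants $B_1, B_2$ can be chosen uniformly in $k$), \begin{equation} \frac{n}{\log n}\,\P\bigg\{ \max_{(i, j) \in \mathbb{T}_{2^k(\log n)^2}(0, 2^{k + 1}(\log n)^3)} \tilde Z_{i, j}^+ \ge u_n(\tau - 0.1)\bigg\} \le \frac{n}{\log n}\cdot\frac{C}{u_n}\exp\bigg(-\frac{u_n^2}{2} + \frac{cu_n^3}{\sqrt{l}}\bigg) \le C', \end{equation} since $\exp(-u_n^2/2) \asymp e^{-\tau}(\log n)^{3/2}/n$, $u_n \asymp \sqrt{\log n}$, and $cu_n^3/\sqrt{l} = O(1)$ uniformly in $k \ge \log_2 A_2$.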
With these facts, the union bound and Lemma \ref{lem:weakcontroltzijplus} give \begin{align*} &\P\{\tilde M_n^+(l_n^+, (\log n)^4) \ge u_n\} \\ &\le \P\bigg\{ \max_{k: \log_2 A_2 \le k \le \log_2 (\log n)}\tilde M_n^+(2^k(\log n)^3, 2^{k + 1}(\log n)^3) \ge u_n \bigg\} \\ &\le \P\bigg\{ \max_{k: \log_2 A_2 \le k \le \log_2 (\log n)}\tilde Z_n^+(2^k(\log n)^3, 2^{k + 1}(\log n)^3) \ge u_n(\tau - 0.1) \bigg\} + \P(\Omega_n^\mathsf{c}) \\ &\le \sum_{k \ge \log_2 A_2}2^{-k}\frac{n}{\log n}\P\bigg\{ \max_{(i, j) \in \mathbb{T}_{2^k(\log n)^2}(0, 2^{k + 1}(\log n)^3)} \tilde Z_{i, j}^+ \ge u_n(\tau - 0.1)\bigg\} + \P(\Omega_n^\mathsf{c}) \\ &\le C\sum_{k \ge \log_2 A_2}2^{-k} + \P(\Omega_n^\mathsf{c}). \end{align*} Taking $\limsup_{n \to \infty}$ and then letting $A_2 \to \infty$ gives the desired result. In the meantime, on $((\log n)^4, n - (\log n)^4]$, a finer examination of \eqref{eq:orderdiffup} and \eqref{eq:orderdifflow} yields \begin{equation} \bigg|\frac{n(U_{(j)} - U_{(i)})}{j - i} - 1\bigg| = O_P\bigg(\frac{1}{\log n}\bigg), \end{equation} while \eqref{eq:orderdiffup2} and \eqref{eq:orderdifflow2} imply \begin{equation} \bigg|\frac{1 - (U_{(j)} - U_{(i)})}{1 - (j - i)/n} - 1\bigg| = O_P\bigg(\frac{1}{\log n}\bigg). \end{equation} Therefore, \begin{align*} \P\{\tilde M_n^+((\log n)^4, n - (\log n)^4) \ge u_n\} \le \P\{M_n^+((\log n)^4, n - (\log n)^4) \ge u_n(\tau - 0.1)\} \to 0, \end{align*} by Theorem \ref{thm:mnplus}. The claim for the region $(n - (\log n)^4, n]$ follows the proof of \eqref{eq:tmnpluslast} and is omitted here. \qed \subsection{Proof of \thmref{tmnminus}} Define \begin{equation} \tilde Z_{i, j}^- := \frac{S_j^- - S_i^-}{\sqrt{j - i + S_j^- - S_i^-}} \end{equation} and \begin{equation} g^-(a) := \frac{1}{2}(a\sqrt{a^2 + 4} + a^2). \end{equation} One can verify that \begin{equation}\label{eq:iminusgminusbound} I^-(g^-(s)) \ge s^2/2. \end{equation} The theorem follows immediately after showing that \begin{equation} \limsup_{n \to \infty}\P(\tilde M_n^- \ge \varepsilon\sqrt{n}) = 0, \end{equation} for any $\varepsilon > 0$. This can be proved in the same manner, by dividing the range of $j - i$ into regions and transforming the statistic $\tilde M_{i, j}^-$ into $\tilde Z_{i, j}^-$, combined with \eqref{eq:iminusgminusbound}. We omit the details here. \section*{Acknowledgements} Andrew Ying was partially supported by the Achievement Rewards for College Scientists (ARCS) Scholarship. The authors are deeply grateful to Professor Ery Arias-Castro for building up the introduction and providing the motivation. The authors would also like to thank Professor Qi-Man Shao, Professor Xiao Fang, Professor Hock Peng Chan, and Professor David O.~Siegmund for stimulating discussions and pointers to the literature. \bibliographystyle{chicago}
\section{\bf INTRODUCTION} A pseudoscalar gluonium candidate, the so-called $E/\iota(1440)$, was observed in $p\bar{p}$ annihilation in 1967~\cite{baillon67} and in $J/\psi$ radiative decays in the 1980's~\cite{scharre80,edwards82e,augustin90}. After 1990, more and more observations revealed the existence of two resonant structures around 1.44 GeV/$c^{2}$ in the $a_0(980)\pi$, $K\bar{K}\pi$, and $K^*\bar{K}$ spectra~\cite{rath89,adams01,bai90c,augustin92,bertin95-97,cicalo99,nichitiu02}. They showed that the lower state, $\eta(1405)$, has large couplings to $a_0(980)\pi$ and $K\bar{K}\pi$, while the higher-mass state, $\eta(1475)$, favors $K^*\bar{K}$. The $\eta(1405)$ was also confirmed by MarkIII~\cite{bolton92b}, Crystal Barrel~\cite{amsler95f}, and DM2~\cite{augustin90} in decays into $\eta\pi\pi$ in $J/\psi$ radiative decays and $\bar{p}p$ annihilations. In contrast, although the $\eta(1475)$ was observed in $K\bar{K}\pi$ ($K^*\bar{K}$)~\cite{rath89,adams01,bai90c,augustin92,bertin95-97,cicalo99,nichitiu02}, it was not seen in $\eta\pi\pi$. Nonetheless, the study of the $K\bar{K}\pi$ and $\eta\pi\pi$ channels in $\gamma\gamma$ collisions~\cite{acciarri01g} showed that the $\eta(1475)$ appeared in $K\bar{K}\pi$ but not in $\eta\pi\pi$, while the $\eta(1405)$ appeared in neither channel. The study of the decays $J/\psi \rightarrow \{\gamma,\omega,\phi\}K\bar{K}\pi$ is a useful tool in the investigation of the quark and possible gluonium content of the states around 1.44 GeV/$c^{2}$. In this paper, we investigate the possible structure in the $K\bar{K}\pi$ final state in $J/\psi$ hadronic decays at around $1.44$ GeV/$c^{2}$, and measure the branching fraction of $J/\psi \rightarrow \eta K^{0}_{S} K^{\pm} \pi^{\mp}$ for the first time, based on $5.8 \times 10^{7}$ $J/\psi$ events collected with the Beijing Spectrometer (BESII) at the Beijing Electron-Positron Collider (BEPC). \section{\bf THE BES DETECTOR} BESII is a large solid-angle magnetic spectrometer that is described in detail elsewhere~\cite{jzbnpa}. Charged particle momenta are determined with a resolution of $\sigma_{p}/p=1.78\%\sqrt{1+p^{2}}$ ($p$ in GeV/$c$) in a 40-layer cylindrical main drift chamber (MDC). Particle identification (PID) is accomplished using specific ionization ($dE/dx$) measurements in the MDC and time-of-flight (TOF) information from a barrel-like array of 48 scintillation counters. The $dE/dx$ resolution is $\sigma_{dE/dx} \simeq 8.0\%$; the TOF resolution is $\sigma_{TOF}=180$ ps for Bhabha events. Outside of the time-of-flight counters is a 12-radiation-length barrel shower counter (BSC) composed of gas proportional tubes interleaved with lead sheets. The BSC measures the energy and direction of photons with resolutions of $\sigma_{E}/E \simeq 21\%/\sqrt{E}$ ($E$ in GeV), $\sigma_{\phi}=7.9$ mrad, and $\sigma_{z}=2.3$ cm. The iron flux return of the magnet is instrumented with three double layers of counters that are used to identify muons. A GEANT3-based Monte Carlo (MC) package (SIMBES), which includes a detailed description of the detector performance, is used. The consistency between data and MC has been carefully checked in many high-purity physics channels, and the agreement is reasonable~\cite{bessimulation2005}. The detection efficiencies and mass resolutions for each decay mode presented in this paper are obtained with uniform phase-space MC generators.
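For orientation, these design values imply, for example, a momentum resolution of $\sigma_{p}/p = 1.78\%\sqrt{1+1^{2}} \approx 2.5\%$ for a charged track with $p = 1$ GeV/$c$, and an energy resolution of $\sigma_{E}/E \approx 21\%/\sqrt{0.5} \approx 30\%$ for a photon with $E = 0.5$ GeV.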
\section{\bf ANALYSIS} In this analysis, $\omega$ mesons are observed in the $\omega \rightarrow \pi^{+}\pi^{-}\pi^{0}$ decay, $\phi$ mesons in the $\phi \rightarrow K^{+}K^{-}$ decay, and the other mesons are detected in the decays $K^{0}_{S} \rightarrow \pi^{+}\pi^{-}$, $\pi^0 \rightarrow \gamma \gamma$, and $\eta \rightarrow \pi^{+}\pi^{-}\pi^{0}$. The final states of the analyzed decays $J/\psi \rightarrow \{\omega,\eta\} K^{0}_{S}K^{\pm} \pi^{\mp}$, $\omega K^{+}K^{-}\pi^{0}$, $\phi K^{0}_{S}K^{\pm} \pi^{\mp}$, and $\phi K^{+}K^{-}\pi^{0}$ are $2(\pi^{+}\pi^{-})K^{\pm}\pi^{\mp}\gamma\gamma$, $\pi^{+}\pi^{-}K^{+}K^{-}\gamma \gamma \gamma \gamma$, $K^{+}K^{-}\pi^{+}\pi^{-}K^{\pm}\pi^{\mp}$, and $2(K^{+}K^{-})\gamma \gamma$, respectively.\\ \indent Candidate events are required to satisfy the following common selection criteria: \begin{enumerate} \item The correct number of charged tracks with net charge zero is required for each event. Each charged track should have a good helix fit in the MDC, and the polar angle $\theta$ of each track in the MDC must satisfy $|\cos \theta|<0.8$. The event must originate from the collision point; all tracks except the $\pi^{\pm}$ from the $K^{0}_{S}$ must satisfy $\sqrt{x^{2}+y^{2}} \leq 2$ cm and $|z| \leq 20$ cm, where $x$, $y$, and $z$ are the space coordinates of the point of closest approach of the track to the beam axis. \item Candidate events should have at least the minimum number of isolated photons associated with the different final states. Each photon is required to have an energy deposit in the BSC greater than $50$ MeV, an angle between the shower development direction and the photon emission direction of less than $30^{\circ}$, and an angle between the photon and any charged track larger than $8^{\circ}$. \item For each charged track in an event, $\chi^{2}_{PID}(i)$ is determined using both $dE/dx$ and TOF information: $$\chi^{2}_{PID}(i) =\chi^{2}_{dE/dx}(i)+ \chi^{2}_{TOF}(i),$$ where $i$ corresponds to the particle hypothesis. A charged track is identified as a $K$ ($\pi$) if $\chi^{2}_{PID}$ for the $K$ ($\pi$) hypothesis is less than those for the $\pi$ and $p$ ($K$ and $p$) hypotheses. \item The selected events are subjected to four-constraint kinematic fits (4C-fit), unless otherwise specified. When there are more than the minimum number of photons in an event, all combinations are tried, and the combination with the smallest $\chi^{2}$ is retained. \end{enumerate} The branching fraction is calculated using \begin{eqnarray} B(J/\psi \rightarrow X )=\frac{N_{obs}} {\epsilon_{J/\psi \rightarrow X \rightarrow Y} \times N_{J/\psi} \times B(X \rightarrow Y)}, \label{equ:all-branching-ratio} \end{eqnarray} and the upper limit for a branching fraction is determined using \begin{eqnarray} B(J/\psi \rightarrow X) < \frac{N_{up}}{ \epsilon_{J/\psi \rightarrow X \rightarrow Y} \times N_{J/\psi} \times B(X \rightarrow Y) \times (1-\sigma^{sys})}, \label{equ:upper-branching} \end{eqnarray} where $N_{obs}$ is the number of events observed, $N_{up}$ is the upper limit on the number of observed events at the $90\%$ C.L. calculated using a Bayesian method~\cite{pdg2006}, $\epsilon$ is the detection efficiency obtained from MC simulation, $N_{J/\psi}$ is the number of $J/\psi$ events, $(5.77 \pm 0.27) \times 10^{7}$~\cite{fangss2003}, $\sigma^{sys}$ is the corresponding systematic error, and $B(X \rightarrow Y)$ is the branching fraction, taken from the Particle Data Group (PDG)~\cite{pdg2006}, of the $X$ intermediate state into the $Y$ final state.
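As an illustration of Eq.~(\ref{equ:all-branching-ratio}), consider the $J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}$ measurement presented in the next subsection, where $N_{obs} = 1971.7$, $\epsilon = 1.48\%$ and $B(X \rightarrow Y) = B(\omega \rightarrow \pi^{+}\pi^{-}\pi^{0}) \times B(K^{0}_{S} \rightarrow \pi^{+}\pi^{-})$. Taking the approximate PDG values $B(\omega \rightarrow \pi^{+}\pi^{-}\pi^{0}) \approx 89.2\%$ and $B(K^{0}_{S} \rightarrow \pi^{+}\pi^{-}) \approx 69.2\%$, \begin{eqnarray} B(J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}) \approx \frac{1971.7}{0.0148 \times 5.77 \times 10^{7} \times 0.892 \times 0.692} \approx 3.7 \times 10^{-3},\nonumber \end{eqnarray} which reproduces, up to rounding of the efficiency and of the PDG inputs, the value quoted below.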
\subsection{\bf $J/\psi \rightarrow \{\omega, \eta \} K^{0}_{S}K^{\pm}\pi^{\mp}$} At least one charged track must be identified as a kaon using TOF and $dE/dx$ information. If there is more than one kaon candidate, the assigned kaon is the one with the largest kaon weight. Candidate events are fitted kinematically using energy-momentum conservation (4C-fit) under the $2(\pi^{+}\pi^{-})K^{\pm}\pi^{\mp} \gamma \gamma$ hypothesis, and $\chi^{2}<25$ is required. Each event is required to contain one $K^{0}_{S}$ candidate; there are six possible $\pi^{+}\pi^{-}$ combinations to test for consistency with the $K^{0}_{S}$ hypothesis. Looping over all combinations, we select the one with invariant mass, denoted $m_{\pi^{+}\pi^{-}}$, closest to the $K^{0}_{S}$ mass, provided it is within $15$ MeV/$c^{2}$ of the $K^{0}_{S}$ mass. After $K^{0}_{S}$ selection, the two remaining oppositely-charged pion combinations along with the two photons are used to calculate $m_{\pi^{+}\pi^{-}\gamma \gamma}$. Figure~\ref{fig:w-pi0} (a) shows the scatter plot of $m_{\gamma \gamma}$ versus $m_{\pi^{+}\pi^{-}\gamma \gamma}$ with two possible entries per event, where clear $\eta$ and $\omega$ signals are seen. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{scatter1_wkksp.eps} \includegraphics[width=0.35\textwidth]{w-eta-ratio-ksside.eps} \caption{(a) The scatter plot of $m_{\gamma \gamma}$ versus $m_{\pi^{+}\pi^{-}\gamma \gamma}$, and (b) the $\pi^{+}\pi^{-} \gamma \gamma$ invariant mass for $J/\psi \rightarrow \{\omega, \eta\}K^{0}_{S}K^{\pm}\pi^{\mp}$ candidate events with two possible entries per event. The curves in (b) are the results of the fit described in the text, and the shaded histogram in (b) shows the normalized background estimated from the $K^{0}_{S}$-sideband region ($0.025$ GeV/$c^{2}<|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|<0.055$ GeV/$c^{2}$).} \label{fig:w-pi0} \end{figure} The $\pi^{+} \pi^{-} \gamma \gamma$ invariant mass distribution with two possible entries per event is shown in Fig.~\ref{fig:w-pi0} (b), where $\eta$ and $\omega$ signals are apparent. The branching fractions of $J/\psi \to \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ and $\eta K^{0}_{S}K^{\pm}\pi^{\mp}$ are obtained by fitting this distribution. The backgrounds for $J/\psi \to \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ which contribute to the peak in the $\omega$ signal region mainly come from non-$K^{0}_{S}$ events and from $J/\psi \rightarrow \omega K^{0}_{S} K^{0}_{S}$ events that survive the selection criteria. The number of background events from $J/\psi \rightarrow \omega K^{0}_{S} K^{0}_{S}$ is estimated from Monte Carlo simulation to be less than two. Backgrounds for $J/\psi \to \eta K^{0}_{S}K^{\pm}\pi^{\mp}$ contributing to the peak in the $\eta$ signal region mainly come from non-$K^{0}_{S}$ events and from $J/\psi$ decays into $K^{*0}\bar{K^{*}_{2}}(1430)^{0}\rightarrow (K^{0}_{S} \pi^{0})(K^{0}_{S} \eta)$. The latter contribution is estimated from MC simulation to be less than one event. Non-$K^{0}_{S}$ events from the $K^{0}_{S}$ sideband region ($0.025$ GeV/$c^{2}<|m_{\pi^+\pi^-}-m_{K^{0}_{S}}|<0.055$ GeV/$c^{2}$) are shown in Fig.~\ref{fig:w-pi0} (b) as the shaded histogram; fitting this distribution with the possible signals plus a polynomial background yields $19.2\pm 15.6$ $\omega$ and $-4.1\pm7.0$ $\eta$ background events. A fit to the $m_{\pi^{+} \pi^{-} \gamma \gamma}$ distribution is performed using $\omega$ and $\eta$ Breit-Wigner (BW) functions folded with Gaussian resolution functions plus a quadratic polynomial, shown as the curves in Fig.~\ref{fig:w-pi0} (b).
The numbers of events in the $\omega$ and $\eta$ peaks are $1971.7 \pm 41.4$ and $231.6 \pm 23.1$, respectively. Here, the background events in the decays $J/\psi \to \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ and $J/\psi \rightarrow\eta K^{0}_{S}K^{\pm}\pi^{\mp}$ estimated above are not subtracted but are included in the background systematic error. The $J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}$ and $J/\psi \rightarrow \eta K^{0}_{S} K^{\pm} \pi^{\mp}$ detection efficiencies are obtained from MC simulation to be $1.48\%$ and $1.18\%$, respectively. The branching fractions are then determined to be: \begin{eqnarray} B(J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}) & = &(3.77 \pm 0.08) \times 10^{-3},\nonumber \end{eqnarray} \begin{eqnarray} B(J/\psi \rightarrow \eta K^{0}_{S} K^{\pm} \pi^{\mp}) & = & (2.18 \pm 0.22) \times 10^{-3}.\nonumber \end{eqnarray} Here the errors are statistical only. \subsubsection{\bf $J/\psi \to \omega K^{*}\bar{K}+c.c. \rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$} To select the $\omega$ signal, the $\pi^{+} \pi^{-} \gamma \gamma$ combination with mass closest to the $\omega$ mass is required to satisfy $|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.04$ GeV/$c^{2}$. Figure~\ref{fig:ksp-kp-wkksp} shows the scatter plot of $m_{K^{0}_{S}\pi}$ versus $m_{K\pi}$ for $J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}$ candidate events, where the events in the cross bands correspond to $J/\psi \rightarrow \omega K^{*}\bar{K}+c.c.$. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{scatter3.eps} \caption{The scatter plot of $m_{K^{0}_{S}\pi}$ versus $m_{K^{\pm}\pi^{\mp}}$ for $J/\psi \rightarrow \omega K^{0}_{S} K^{\pm} \pi^{\mp}$ candidate events.} \label{fig:ksp-kp-wkksp} \end{figure} Figure~\ref{fig:k892-wkksp} (a) shows the scatter plot of $m_{\pi^{+}\pi^{-}\gamma \gamma}$ versus $m_{\pi^{+}\pi^{-}}$, where there is an accumulation of events in the $\omega$ and $K^{0}_{S}$ cross bands. The combined mass spectrum of $K^{0}_{S} \pi^{\mp}$ and $K^{\pm} \pi^{\mp}$ in the signal region (box 1 in Fig.~\ref{fig:k892-wkksp} (a)), which is defined as $|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|<0.015$ GeV/$c^{2}$ and $|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.04$ GeV/$c^{2}$, is shown in Fig.~\ref{fig:k892-wkksp} (b), where a clear $K^{*}$ signal is observed. The $K^{*}$ signal is fitted with a BW function folded with a Gaussian resolution function plus a third-order polynomial, and $1208.3\pm 93.3$ $K^{*}$ events are obtained. Non-$\omega$ and non-$K^{0}_{S}$ backgrounds are studied using $\omega$ and $K^{0}_{S}$ sideband events. Figure~\ref{fig:k892-wkksp} (c) shows the fitted $K\pi$ mass spectrum for the $\omega$ sideband region ($|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|<0.015$ GeV/$c^{2}$, $0.06$ GeV$/c^{2}<|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.14$ GeV/$c^{2}$, shown as the horizontal sideband boxes $2$ in Fig.~\ref{fig:k892-wkksp} (a)) and the $K^{0}_{S}$ sideband region ($0.03$ GeV/$c^{2}<|m_{\pi^{+}\pi^{-}}- m_{K^{0}_{S}}|<0.06$ GeV/$c^{2}$, $|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.04$ GeV/$c^{2}$, shown as the vertical sideband boxes $3$); the fit yields $N_{sid1}=686.2\pm 56.0$ $K^{*}$ sideband events.
Figure~\ref{fig:k892-wkksp} (d) shows the background from the corner region ($0.03$ GeV/$c^{2}<|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|<0.06$ GeV/$c^{2}$, $0.06$ GeV/$c^{2}<|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.14$ GeV/$c^{2}$, shown as the diagonal boxes $4$); the fitted number of $K^{*}$ events is $N_{sid2}=134.1 \pm 25.5$. The number of background events in the signal region is half of the sum of $K^{*}$ events in the $\omega$ sideband and $K^{0}_{S}$ sideband regions ($N_{sid1}$) minus a quarter of the $K^{*}$ events in the corner regions ($N_{sid2}$), giving $N_{bg}=(686.2\pm 56.0)/2-(134.1 \pm 25.5)/4= 309.6\pm28.8$. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{scatter-w-ks2.eps} \includegraphics[width=0.35\textwidth]{bwk892-data.eps} \centering \includegraphics[width=0.35\textwidth]{totside2-k892-data1.eps} \includegraphics[width=0.35\textwidth]{corner2-k892-data1.eps} \caption{(a) The scatter plot of $m_{\pi^{+}\pi^{-}\gamma \gamma}$ versus $m_{\pi^{+}\pi^{-}}$, and the combined mass spectrum of $K^{0}_{S} \pi^{\mp}$ and $K^{\pm} \pi^{\mp}$, with two entries per event, for $J/\psi \rightarrow \omega K^{*}\bar{K}+c.c.$ candidate events in (b) the signal region (the central box 1); (c) the $\omega$ and $K^{0}_{S}$ sideband regions (two horizontal boxes 2 and two vertical sideband boxes 3); and (d) the corner region (four diagonal boxes 4). The curves are the results of the fit described in the text.} \label{fig:k892-wkksp} \end{figure} The detection efficiency is estimated from MC simulation to be $1.23\%$. After background subtraction, the branching fraction is determined to be \begin{eqnarray} B(J/\psi \rightarrow \omega K^{*} \bar{K}+c.c.) & = &(6.20 \pm 0.68) \times 10^{-3},\nonumber \end{eqnarray} where the error is statistical only. \subsubsection{\bf $J/\psi \to \omega X(1440)\rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$} Figure~\ref{fig:w-x1440-recoiling} (a) shows the scatter plot of $m_{K^{0}_{S}K^{\pm}\pi^{\mp}}$ versus $m_{\pi^{+}\pi^{-}\gamma \gamma}$, and Fig.~\ref{fig:w-x1440-recoiling} (b) shows the $K^{0}_{S}K^{\pm}\pi^{\mp}$ invariant mass spectrum after the $\omega$ selection ($|m_{\pi^{+}\pi^{-}\gamma \gamma}-m_{\omega}|<0.04$ GeV/$c^{2}$). Figures~\ref{fig:w-x1440-recoiling} (a) and (b) show a resonance near $1.44$ GeV/$c^{2}$, denoted as $X(1440)$. To ensure that this peak is not due to background, we have studied potential background processes using both data and MC simulations. Non-$\omega$ and non-$K^{0}_{S}$ processes are studied with $\omega$ and $K^{0}_{S}$ mass sideband events, respectively. The main background channel $J/\psi \rightarrow \omega 2(\pi^{+}\pi^{-})$ and other background processes with 6-prong events are studied by MC simulation. In addition, we also checked for possible backgrounds with an MC sample of $60 \times 10^{6}$ $J/\psi \to anything$ decays generated by the LUND-charm model~\cite{jcchen2004}. None of these background sources produces a peak around $1.44$ GeV/$c^{2}$ in the $K^{0}_{S}K^{\pm}\pi^{\mp}$ invariant mass spectrum. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{scatter2.eps} \includegraphics[width=0.35\textwidth]{m1440-1b.eps} \caption{(a) The scatter plot of $m_{K^{0}_{S}K^{\pm}\pi^{\mp}}$ versus $m_{\pi^{+}\pi^{-}\gamma \gamma}$ and (b) the $K^{0}_{S} K^{\pm} \pi^{\mp}$ invariant mass distribution for $J/\psi \rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ candidate events. The curves in (b) are the results of the fit described in the text.
} \label{fig:w-x1440-recoiling} \end{figure} The $K^{0}_{S}K^{\pm}\pi^{\mp}$ invariant mass distribution is fitted with a BW function convoluted with a Gaussian mass resolution function ($\sigma=7.44$ MeV/$c^{2}$) to represent the $X(1440)$ signal, plus a third-order polynomial background function. The mass and width obtained from the fit are $M=1437.6 \pm 3.2$ MeV/$c^{2}$ and $\Gamma=48.9 \pm 9.0$ MeV/$c^{2}$, and the fit yields $248.8 \pm 35.2$ events. Using the efficiency of $1.45\%$ determined from a uniform phase space MC simulation, we obtain the branching fraction \begin{eqnarray} B(J/\psi \rightarrow \omega X(1440))\cdot B( X(1440) \rightarrow K^{0}_{S} K^{\pm} \pi^{\mp}) & = &(4.86 \pm 0.69) \times 10^{-4},\nonumber \end{eqnarray} where the error is statistical only. \subsection{\bf $J/\psi \rightarrow \omega K^{+} K^{-} \pi^{0}$} At least one charged track is required to be a kaon, and the combined PID probability for the $K^{+}K^{-} \pi^{+}\pi^{-}$ hypothesis is required to be greater than those for the $K^{\pm}\pi^{\mp} \pi^{+}\pi^{-}$ and $\pi^{+}\pi^{-}\pi^{+}\pi^{-}$ hypotheses. A 4C kinematic fit is made under the $K^{+} K^{-} \pi^{+} \pi^{-} 4 \gamma$ hypothesis. There are three ways to pair the four photons into two $\pi^{0}$'s; a six-constraint kinematic fit (6C-fit) requiring two $\pi^{0}$'s is performed for each pairing, and the one with the smallest $\chi^{2}_{6C}$ is retained. Events with $\chi^{2}_{4C}<50$ and $\chi^{2}_{6C}<50$ are selected. To reject possible multi-photon background events, $\chi^{2}_{4C}$ is required to be less than those for the $K^{+} K^{-} \pi^{+} \pi^{-} 2 \gamma$, $K^{+} K^{-} \pi^{+} \pi^{-} 3 \gamma$, and $K^{+} K^{-} \pi^{+} \pi^{-} 5 \gamma$ hypotheses. Background events with $K^{0}_{S}$ decays, such as $J/\psi \rightarrow K^{*0}\bar{K^{*}_{2}}(1430)^{0} \rightarrow K^{0}_{S} K^{\pm} \pi^{\mp} \{\pi^{0}, 2\pi^{0}\}$ and $J/\psi \rightarrow \gamma K^{*} \bar{K^{*}} \rightarrow \gamma K^{0}_{S}K^{\pm} \pi^{\mp} \pi^{0}$, are eliminated by requiring the $\pi^{+}\pi^{-}$ invariant mass to satisfy $|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|>0.02$ GeV/$c^{2}$. There are two $\pi^{+}\pi^{-}\pi^{0}$ mass combinations, and the one closest to the $\omega$ mass, denoted as $m_{\pi^{+}\pi^{-}\pi^{0}}$, is selected. The scatter plot of $m_{K^{+}K^{-}\pi^{0}}$ versus $m_{\pi^{+}\pi^{-}\pi^{0}}$ is shown in Fig.~\ref{fig:w-x1440-wkkp0} (a), where the circle indicates an enhancement from $J/\psi \rightarrow \omega X(1440)$ events in the $\omega K^{+}K^{-}\pi^{0}$ decay. \begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{w-x1440_wkkpi0.eps} \includegraphics[width=0.3\textwidth]{k892-wside-new-norm.eps} \includegraphics[width=0.3\textwidth]{x1440-data4.eps} \caption{(a) The scatter plot of $m_{K^{+}K^{-}\pi^{0}}$ versus $m_{\pi^{+}\pi^{-}\pi^{0}}$, (b) the $K^{\pm}\pi^{0}$ invariant mass distribution with two possible entries per event, and (c) the $K^{+} K^{-}\pi^{0}$ invariant mass distribution for $J/\psi \rightarrow \pi^{+}\pi^{-}\pi^{0} K^{+}K^{-}\pi^{0}$ candidate events. The curves are the results of the fit described in the text, and the shaded histogram in (b) shows the normalized background estimated from the $\omega$-sideband region.} \label{fig:w-x1440-wkkp0} \end{figure} \subsubsection{\bf $J/\psi \to \omega K^{*\pm}K^\mp \rightarrow \omega K^{+}K^{-}\pi^{0}$} To suppress the main $K^{*0}$ backgrounds, $|m_{K^{\pm} \pi^{\mp}}-m_{K^{*0}}|>0.05$ GeV/$c^{2}$ is required. In addition to the above selection, the further requirement $|m_{\pi^{+}\pi^{-}\pi^{0}}-m_{\omega}|<0.04$ GeV/$c^{2}$ is imposed.
The combined mass spectrum of $K^{+}\pi^{0}$ and $K^{-}\pi^{0}$ is shown in Fig.~\ref{fig:w-x1440-wkkp0} (b), where a $K^{*\pm}$ signal is seen; it is fitted to obtain the branching fraction of $J/\psi \rightarrow \omega K^{*\pm}K^{\mp}$. Background events for $\omega K^{*\pm}K^{\mp}$ which could contribute to the peak in the $K^{*\pm}$ signal region mainly come from events with $K^{*}$ decays, such as $J/\psi \rightarrow K^{*0} \bar{K^{*}_{2}}(1430)^{0}$ into four prongs plus multiple photons, $J/\psi \rightarrow \phi K^{*} \bar{K}$, and $J/\psi \rightarrow \gamma K^{*} \bar{K^{*}}$, but their contributions can be ignored according to MC studies. It is further confirmed using $\omega$ and $\pi^{0}$ sideband events that the background for $J/\psi \rightarrow \omega K^{*\pm}K^{\mp}$ is negligible. The $K^{\pm} \pi^{0}$ invariant mass distribution in Fig.~\ref{fig:w-x1440-wkkp0} (b) (two entries per event) is fitted with a $K^{* \pm}$ BW function, with the mass and width fixed to the PDG values~\cite{pdg2006}, plus a third-order polynomial. The number of $K^{*\pm}$ events obtained is $175.6\pm27.4$. The detection efficiency is $0.32\%$, and the branching fraction of $J/\psi \rightarrow \omega K^{*} \bar{K}+c.c.$ is determined to be \begin{eqnarray} B(J/\psi \rightarrow \omega K^{*}\bar{K}+c.c.) & = &(6.53\pm1.02) \times 10^{-3},\nonumber \end{eqnarray} where the error is statistical only. \subsubsection{\bf $J/\psi \rightarrow \omega X(1440) \rightarrow \omega K^{+}K^{-}\pi^{0}$} Figure~\ref{fig:w-x1440-wkkp0} (c) shows the $K^{+}K^{-}\pi^{0}$ invariant mass recoiling against the $\omega$, where an $X(1440)$ signal is observed. We have also studied potential background processes using both data and MC simulations. Non-$\omega$ processes are studied with $\omega$ mass sideband events ($0.06$ GeV/$c^{2}<|m_{\pi^{+}\pi^{-}\pi^{0}}-m_{\omega}| <0.10$ GeV/$c^{2}$). Background with $\omega$ decays is studied by MC simulations, similar to those for $J/\psi \rightarrow \omega K^{*}\bar{K}+c.c. \rightarrow \omega K^{+}K^{-} \pi^{0}$. In addition, we also checked for possible backgrounds using an MC sample of $60 \times 10^{6}$ $J/\psi \to anything$ decays generated by the LUND-charm model. In each case, the $K^{+}K^{-}\pi^{0}$ mass distribution shows no evidence of an enhancement near $1440$ MeV/$c^{2}$. By fitting the $K^{+} K^{-}\pi^{0}$ mass spectrum in Fig.~\ref{fig:w-x1440-wkkp0} (c) with a BW function convoluted with a Gaussian mass resolution function ($\sigma=14.2$ MeV/$c^{2}$) plus a third-order polynomial background function, a mass and width of $M=1445.9\pm 5.7$ MeV/$c^{2}$ and $\Gamma=34.2 \pm18.5$ MeV/$c^{2}$ are obtained, and the number of events from the fit is $62.1\pm18.3$. A fit without a BW signal function returns a value of $-2 \ln L$ larger than that of the nominal fit by 31.7 with three degrees of freedom (d.o.f.), corresponding to a statistical significance of $5.0~\sigma$ for the signal. The efficiency is determined to be $0.64\%$ from a phase space MC simulation, and the branching fraction is \begin{eqnarray} B(J/\psi \rightarrow \omega X(1440)) \cdot B(X(1440) \rightarrow K^{+} K^{-} \pi^{0}) & = &(1.92 \pm 0.57) \times 10^{-4},\nonumber \end{eqnarray} where the error is statistical only. \subsection{\bf $J/\psi \rightarrow \phi K^{0}_{S} K^{\pm} \pi^{\mp}$} Events with six charged tracks are selected, and at least two charged tracks must be identified as kaons. If there are more than two kaon candidates, the two with the largest kaon PID probabilities are taken as the kaons.
Each of the remaining charged tracks is then assumed, one at a time, to be a kaon, with the other three taken as pions, and each resulting combination of three kaons and three pions is kinematically fitted. The hypothesis with the smallest $\chi^{2}$ is taken as the correct combination, and $\chi^{2}<20$ is required. Two combinations of oppositely charged pions are used to reconstruct the $K^{0}_{S}$ signal, and the one closest to the $K^{0}_{S}$ mass is required to be within $15$ MeV/$c^{2}$ of it. The invariant masses of the two combinations formed with oppositely charged kaons are shown in Fig.~\ref{fig:phi-phikksp}, where a clear $\phi$ signal is observed. A fit to the $K^{+}K^{-}$ mass distribution in Fig.~\ref{fig:phi-phikksp} is performed to obtain the number of $J/\psi \rightarrow\phi K^{0}_{S}K^{\pm}\pi^{\mp}$ events. Backgrounds contributing to the $\phi$ signal peak mainly come from $J/\psi$ decays into $\phi f^{\prime}_{2}(1525) \rightarrow \phi \eta \eta$, $\phi \eta^{\prime} \rightarrow \phi \eta \pi^{+} \pi^{-}$, $\phi K^{0}_{S}K^{0}_{S}$, and $\phi 2(\pi^{+}\pi^{-})$ (excluding $\phi K^{0}_{S}K^{0}_{S}$). From MC simulations of these background channels, the number of background $\phi$ events in the signal region is less than one, and $K^{0}_{S}$-sideband events also show that the background is negligible. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{ubin-phi_6prong_reso.eps} \centering \caption{The $K^{+}K^{-}$ invariant mass distribution for $J/\psi \rightarrow K^{+}K^{-} \pi^{+}\pi^{-} K^{\pm} \pi^{\mp}$ candidate events with two possible entries per event. The curves are the results of the fit described in the text.} \label{fig:phi-phikksp} \end{figure} The $K^{+} K^{-}$ mass distribution in Fig.~\ref{fig:phi-phikksp} is fitted with a BW function convoluted with a Gaussian mass resolution function ($\sigma=2.93$ MeV/$c^{2}$) plus a third-order polynomial background function. The number of $\phi$ events from the fit is $227.1 \pm 19.0$. Using the detection efficiency of $1.56\%$, the corresponding branching fraction is \begin{eqnarray} B(J/\psi \rightarrow \phi K^{0}_{S}K^{\pm}\pi^{\mp}) & = &(7.37 \pm 0.62) \times 10^{-4},\nonumber \end{eqnarray} where the error is statistical only. \subsubsection{\bf $J/\psi \to \phi K^{*} \bar{K}+c.c. \rightarrow \phi K^{0}_{S} K^{\pm} \pi^{\mp}$} To remove most non-$\phi$ background events, the $K^{+}K^{-}$ combination closest to the $\phi$ mass is required to satisfy $|m_{K^{+}K^{-}}-m_{\phi}|<0.015$ GeV/$c^{2}$. The scatter plot of $m_{K^{0}_{S}\pi}$ versus $m_{K\pi}$ for candidate events is shown in Fig.~\ref{fig:k892-phik892k-kksp} (a), where the events in the cross band correspond to the $K^{*}$ signal. The scatter plot of $m_{\pi^{+}\pi^{-}}$ versus $m_{K^{+}K^{-}}$ is shown in Fig.~\ref{fig:k892-phik892k-kksp} (b), where there is an accumulation of events in the $\phi$ and $K^{0}_{S}$ cross bands. Figure~\ref{fig:k892-phik892k-kksp} (c) shows the combined $K^{0}_{S} \pi^{\mp}$ and $K^{\pm} \pi^{\mp}$ mass spectrum for events in the signal region (box 1 in Fig.~\ref{fig:k892-phik892k-kksp} (b)), which is defined as $|m_{\pi^{+}\pi^{-}}-m_{K^{0}_{S}}|<0.015$ GeV/$c^{2}$ and $|m_{K^{+}K^{-}}-m_{\phi}|<0.015$ GeV/$c^{2}$. A fit yields $194.8 \pm 25.0$ $K^{*}$ events.
The same method as used in the $J/\psi \rightarrow \omega K^{*}\bar{K} +c.c.\rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ analysis is used to estimate the number of background events in the signal region, yielding $N_{bg}=10.0\pm6.6$, which is neglected in the branching fraction determination. \begin{figure}[htbp] \centering \includegraphics[width=0.3\textwidth]{kpi_kspi_phikksp.eps} \includegraphics[width=0.3\textwidth]{scatter-phi-ks.eps} \includegraphics[width=0.3\textwidth]{phikksp-k892fit.eps} \caption{(a) The scatter plot of $m_{K^{0}_{S}\pi}$ versus $m_{K\pi}$, (b) the scatter plot of $m_{\pi^{+}\pi^{-}}$ versus $m_{K^{+}K^{-}}$, and (c) the combined $K^{0}_{S} \pi^{\mp}$ and $K^{\pm} \pi^{\mp}$ invariant mass distributions for events in the signal region (box 1) for $J/\psi \rightarrow \phi K^{0}_{S} K^{\pm} \pi^{\mp}$ candidate events. The curves are the results of the fit described in the text.} \label{fig:k892-phik892k-kksp} \end{figure} The detection efficiency of $J/\psi \rightarrow \phi K^{*}\bar{K}+c.c.$ in this decay mode is $1.42\%$, and its branching fraction is determined to be \begin{eqnarray} B(J/\psi \rightarrow \phi K^{*}\bar{K}+c.c.) & = &(2.08 \pm 0.27) \times 10^{-3},\nonumber \end{eqnarray} where the error is statistical only. \subsubsection{\bf $J/\psi \to \phi X(1440) \rightarrow \phi K^{0}_{S} K^{\pm} \pi^{\mp}$} The distribution of the $K^{0}_{S} K^{\pm} \pi^{\mp}$ invariant mass recoiling against the $\phi$ signal is shown in Fig.~\ref{fig:x1440-phikksp} (a); there is no evidence for the $X(1440)$. The upper limit on the number of observed events at the $90\%$ C.L. is $8.1$~\cite{pdg2006}. The likelihood distribution and the $90\%$ C.L. limit are shown in Fig.~\ref{fig:x1440-phikksp} (b). The likelihood values for the number of events are obtained by fitting the $K^{0}_{S} K^{\pm} \pi^{\mp}$ distribution with an $X(1440)$ signal plus a third-order polynomial background, with the $X(1440)$ mass and width fixed to those obtained in the decay $J/\psi \rightarrow \omega X(1440)\rightarrow \omega K^{0}_{S} K^{\pm}\pi^{\mp}$. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{x1440_0-phikksp.eps} \includegraphics[width=0.35\textwidth]{fitnumber-kksp.eps} \caption{(a) The $K^{0}_{S} K^{\pm} \pi^{\mp}$ invariant mass recoiling against the $\phi$, and (b) the likelihood distribution for the number of $X(1440)$ events. The curve in (a) is a third-order polynomial describing the background, and the arrow in (b) indicates the upper limit on the number of events at the $90\%$ confidence level obtained using a Bayesian method.} \label{fig:x1440-phikksp} \end{figure} The detection efficiency is $2.53\%$, and the upper limit on the branching fraction at the $90\%$ C.L. is \begin{eqnarray} B(J/\psi\rightarrow \phi X(1440) \rightarrow \phi K^{0}_{S}K^{+}\pi^{-}+c.c.)<1.93 \times 10^{-5}. \end{eqnarray} \subsection{\bf $J/\psi \rightarrow \phi K^{+} K^{-} \pi^{0}$} At least three charged tracks must be identified as kaons. A 4C-fit is applied under the hypothesis $J/\psi \rightarrow \gamma \gamma 2(K^{+}K^{-})$, and $\chi^{2}<16$ is required. To reject possible background events from $J/\psi \rightarrow \gamma 2(K^{+}K^{-})$, the $\chi^{2}$ of the 4C-fit for $J/\psi \rightarrow \gamma\gamma 2(K^{+}K^{-})$ is required to be less than that for the $\gamma 2(K^{+}K^{-})$ hypothesis. There are four possible ways to combine the oppositely charged kaons in forming the $\phi$, and the $K^{+}K^{-}$ combination closest to the $\phi$ mass is chosen.
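Schematically, this combinatorial choice can be sketched as follows (a minimal Python illustration with hypothetical, made-up pair masses; it is not part of the BES analysis software):

\begin{verbatim}
from itertools import product

M_PHI = 1.019  # phi mass in GeV/c^2

def closest_phi_pair(kplus, kminus, pair_mass):
    # Enumerate the 2 x 2 = 4 oppositely charged kaon pairings and keep
    # the one whose invariant mass lies closest to the phi mass.
    pairs = product(kplus, kminus)
    return min(pairs, key=lambda p: abs(pair_mass[p] - M_PHI))

# Toy usage with hypothetical pair masses (GeV/c^2):
masses = {(0, 0): 1.021, (0, 1): 1.332, (1, 0): 1.210, (1, 1): 1.580}
print(closest_phi_pair([0, 1], [0, 1], masses))  # -> (0, 0)
\end{verbatim}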
Figure~\ref{fig:scatter-phik892k-phi-pi0} (a) shows the scatter plot of $m_{\gamma \gamma}$ versus $m_{K^{+}K^{-}}$, where clear $\phi$ and $\pi^{0}$ signals are seen. \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{scatter-phi105-111-pi055-135.eps} \includegraphics[width=0.35\textwidth]{phik892k-data_kkp0.eps} \caption{(a) The scatter plot of $m_{\gamma\gamma}$ versus $m_{K^{+}K^{-}}$, and (b) the $K^{\pm}\pi^{0}$ invariant mass distribution for events in the signal region (box 1) for $J/\psi \rightarrow \gamma \gamma 2(K^{+}K^{-})$ candidate events with two entries per event. The curves are the results of the fit described in the text.} \label{fig:scatter-phik892k-phi-pi0} \end{figure} \subsubsection{\bf $J/\psi \to \phi K^{*\pm} K^{\mp} \rightarrow \phi K^{+} K^{-} \pi^{0}$} Figure~\ref{fig:scatter-phik892k-phi-pi0} (b) shows the $K^{\pm}\pi^{0}$ combined mass spectrum for events in the signal region (box 1 in Fig.~\ref{fig:scatter-phik892k-phi-pi0} (a)), which is defined as $|m_{K^{+}K^{-}}-m_{\phi}|<0.015$ GeV/$c^{2}$ and $|m_{\gamma \gamma}-m_{\pi^{0}}|<0.04$ GeV/$c^{2}$; a clear $K^{*\pm}$ signal is seen. It is fitted with a BW function, whose mass and width are fixed to those of the $K^{*\pm}$ in the PDG, plus a third-order polynomial. The number of $K^{*\pm}$ events from the fit is $277.8 \pm 27.7$. The sidebands are used as before to estimate the number of the corresponding background events in the signal region, and the result is $N_{bg}=40.1\pm 10.1$. After subtracting this background and incorporating the efficiency of $1.71\%$ from MC simulation, the branching fraction is determined to be \begin{eqnarray} B(J/\psi \rightarrow \phi K^{*}\bar{K}+c.c. ) & = &(2.96 \pm 0.37) \times 10^{-3},\nonumber \end{eqnarray} where the error is statistical only. \subsubsection{\bf $J/\psi \to \phi X(1440) \rightarrow \phi K^{+} K^{-} \pi^{0}$} The distribution of the $K^{+} K^{-} \pi^{0}$ invariant mass recoiling against the $\phi$ is shown in Fig.~\ref{fig:x1440-phikkp0} (a). No evidence for the $X(1440)$ is observed near $1440$ MeV/$c^{2}$. The upper limit on the number of observed events at the $90\%$ C.L. is $10.5$~\cite{pdg2006}. The likelihood distribution and the $90\%$ C.L. limit are shown in Fig.~\ref{fig:x1440-phikkp0} (b). The likelihood values for the number of events are obtained by fitting the $K^{+} K^{-} \pi^{0}$ distribution with an $X(1440)$ signal, whose mass and width are fixed to those obtained in $J/\psi \rightarrow \omega X(1440) \rightarrow \omega K^{+} K^{-}\pi^{0}$, plus a third-order polynomial background. The detection efficiency is $2.49\%$, and the upper limit on the branching fraction at the $90\%$ C.L. is \begin{eqnarray} B( J/\psi \rightarrow \phi X(1440) \rightarrow \phi K^{+}K^{-}\pi^{0}) < 1.71 \times 10^{-5}. \end{eqnarray} \begin{figure}[htbp] \centering \includegraphics[width=0.35\textwidth]{x1440-0_phikkp0.eps} \includegraphics[width=0.35\textwidth]{fitnumber_phikkp0.eps} \caption{(a) The $K^{+} K^{-} \pi^{0}$ invariant mass recoiling against the $\phi$, and (b) the likelihood distribution for the number of $X(1440)$ events.
The curve in (a) is a third-order polynomial describing the background, and the arrow in (b) indicates the upper limit on the number of events at the $90\%$ confidence level obtained using a Bayesian method.} \label{fig:x1440-phikkp0} \end{figure} \section{\bf Systematic errors} In this analysis, the systematic errors on the branching fractions mainly come from the following sources: \begin{itemize} \item MDC tracking efficiency\\ The MDC tracking efficiency is measured in clean channels such as $J/\psi \rightarrow \Lambda \bar{\Lambda}$ and $\psi(2S) \rightarrow \pi^{+} \pi^{-} J/\psi$ with $J/\psi \rightarrow \mu^{+} \mu^{-}$. It is found that the MC simulation agrees with data within $1\%-2\%$ for each charged track. Therefore, $12\%$ is taken as the systematic error on the tracking efficiency for the channels with six charged tracks and $8\%$ for the channels with four charged tracks in the final state.\\ \item Photon detection efficiency \\ The photon detection efficiency is studied with $J/\psi \rightarrow \rho^{0} \pi^{0}$ events~\cite{lishm2004}. The results indicate that the difference between data and MC simulation is less than $2\%$ for each photon. Therefore, $4\%$ is taken as the systematic error from the photon efficiency for the channels with two photons and $8\%$ for the channels with four photons in the final state.\\ \item PID \\ The kaon PID efficiency is studied with $J/\psi \rightarrow K^{+} K^{-} \pi^{0}$ events. The average PID efficiency difference between data and MC is found to be less than $2\%$. In this paper, $2\%$, $4\%$, and $6\%$ are conservatively taken as the systematic errors on the PID efficiency for the channels with one, two, and three identified kaons, respectively. \\ \item $K^{0}_{S}$ reconstruction \\ The $K^{0}_{S}$ secondary vertex reconstruction is checked using $J/\psi \rightarrow K^{*\pm}K^{\mp} (K^{*\pm} \rightarrow K^{0}_{S} \pi^{\pm})$ events. The difference in efficiency between data and MC simulation is found to be $2.8\%$, which is taken as the systematic error from the $K^{0}_{S}$ secondary vertex reconstruction. \\ \item Intermediate decay branching fractions\\ The branching fractions for $\eta \rightarrow \pi^{+}\pi^{-}\pi^{0}$, $\omega \rightarrow \pi^{+}\pi^{-}\pi^{0}$, and $\phi \rightarrow K^{+} K^{-}$ are taken from the PDG~\cite{pdg2006}, and the errors on these branching fractions are included as systematic errors in our measurements. The error on the $K^{0}_{S} \rightarrow \pi^{+}\pi^{-}$ branching fraction is neglected in this analysis.\\ \item Kinematic fit\\ Kinematic fits are used to reduce backgrounds. Using the same method as in Ref.~\cite{rhopi2004}, the decay modes $J/\psi \rightarrow 3(\pi^{+}\pi^{-})\pi^{0}$, $J/\psi \rightarrow 2(\pi^{+}\pi^{-})\pi^{0}$, and $J/\psi \rightarrow 3(\pi^{+}\pi^{-})$ are studied~\cite{pi46eta} in order to estimate the corresponding systematic error. The kinematic fit efficiency differences between data and MC are $5.5\%$, $4.3\%$, and $8.7\%$, respectively. The efficiency difference between data and MC for the 6C kinematic fit to $\eta_{C} \rightarrow \omega \omega \rightarrow 2(\pi^{+}\pi^{-}\pi^{0})$ is about $10\%$~\cite{zhounf2005}.
Since the decays in this analysis are similar to the above decays, these differences are taken as the corresponding systematic errors.\\ \item Background uncertainty\\ The background uncertainties come from the uncertainties associated with the estimation of the sideband backgrounds and the events from other background channels, as well as from the background shapes, fit ranges, and binning. Therefore, the statistical error in the estimated number of background events, the largest difference from changing the background shape, the difference from changing the fit range, the difference from changing the fit binning, and the contribution of neglected background events are taken together as the systematic error due to the background uncertainty.\\ \item MC generator \\ There may be interference between the charged and neutral $K^{*}$ modes, which is not included in the MC generator. In the sample of $J/\psi \to \omega K^{*}\bar{K}+c.c. \rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ decays, the $K^{\pm}\pi^{\mp}$ mass distributions in the $K^{*\pm}$ sideband and signal regions and the $K^{0}_{S}\pi$ mass distributions in the $K^{*0}$ sideband and signal regions, obtained from the scatter plot of $m_{K^{\pm}\pi^{\mp}}$ versus $m_{K^{0}_{S}\pi^{\mp}}$, are studied in data and MC simulation. The difference between data and the MC sample is found to be $5.1\%$, so $5.1\%$ is taken as the systematic error from the MC model.\\ \item Number of $J/\psi$ events \\ The number of $J/\psi$ events is $(57.7\pm2.7) \times 10^{6}$, determined from $J/\psi$ inclusive four-prong events~\cite{fangss2003}. The uncertainty is taken as a systematic error in the branching fraction measurements. \end{itemize} Tables \ref{table:systematic-error} and \ref{table:x1440-systematic-error} list the systematic errors from all of the above sources; the total systematic error is obtained by adding them in quadrature. \begin{table} \centering \caption{Systematic errors in $B(J/\psi \rightarrow \{\eta,\omega,\phi\} K \bar{K} \pi)$.} \label{table:systematic-error} \begin{tabular}{l|c|c|c|c|c|c|c}\hline\hline $J/\psi \rightarrow$ & $\eta K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\omega K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\omega K^{*}K\rightarrow$ & $\omega K^{*}K\rightarrow$ & $\phi K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\phi K^{*}K\rightarrow$ & $\phi K^{*}K\rightarrow$ \\ & & & $\omega K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\omega K^{+}K^{-}\pi^{0}$ & & $\phi K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\phi K^{+}K^{-}\pi^{0}$ \\\hline Error source & \multicolumn{7}{c}{relative error $(\%)$} \\\hline MDC tracking & 12 & 12 & 12 & 8 & 12 & 12 & 8 \\\hline Photon efficiency & 4 & 4 & 4 & 8 & - & - & 4 \\\hline Particle ID & 2 & 2 & 2 & 2 & 4 & 4 & 6\\\hline $K^{0}_{S}$ 2nd vertex & 2.8 & 2.8 & 2.8 & - & 2.8& 2.8 & -\\\hline Intermediate decays & 1.8 & 0.8 & 0.8 & 0.8 & 1.2 & 1.2 & 1.2 \\\hline Kinematic fit & 5.5 & 5.5 & 5.5 & 10 & 8.2 & 8.2 & 4.3\\\hline
Background uncertainty & 1.8 & 2.0 & 5.9 & 11.7 & 8.4 & 6.1 & 7.3 \\\hline MC statistics & 1.9 & 1.3 & 2.3 & 3.2 & 3.2 & 3.4 & 2.2 \\\hline MC model & - & - & 5.1 & 5.1 & - & 5.1 & 5.1 \\\hline Number of $J/\psi$ events & \multicolumn{7}{c}{4.7} \\\hline Total & 15.4 & 15.2 & 17.1 & 20.7 & 18.4 &18.3 & 15.6\\\hline\hline \end{tabular} \end{table} \begin{table} \centering \caption{Systematic errors in $B(J/\psi \rightarrow \{\omega,\phi\} X(1440) \rightarrow \{\omega,\phi\}K \bar{K} \pi)$.} \label{table:x1440-systematic-error} \begin{tabular}{l|c|c|c|c}\hline\hline $J/\psi \rightarrow$ &$\omega X(1440)$ & $\omega X(1440)$ & $\phi X(1440)$ & $\phi X(1440)$ \\ & $\rightarrow \omega K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\rightarrow \omega K^{+}K^{-}\pi^{0}$ & $\rightarrow \phi K^{0}_{S}K^{\pm}\pi^{\mp}$ & $\rightarrow \phi K^{+}K^{-}\pi^{0}$\\\hline Error source & \multicolumn{4}{c}{relative error $(\%)$} \\\hline MDC tracking & 12 & 8 & 12 & 8 \\\hline Photon efficiency & 4 & 8 & - & 4 \\\hline Particle ID & 2 & 2 & 4 & 6 \\\hline $K^{0}_{S}$ 2nd vertex & 2.8 & - & 2.8 & - \\\hline Intermediate decays & 0.8 & 0.8 & 1.3 & 1.2 \\\hline Kinematic fit & 5.5 & 10 & 8.2 & 4.3 \\\hline MC statistics & 2.7 & 2.6 & 0.8 & 0.8 \\\hline Background uncertainty & 6.4 & 10.9 & - & - \\\hline Number of $J/\psi$ events & \multicolumn{4}{c}{4.7} \\\hline Total & 16.6 & 19.6 & 16.2 & 12.6 \\\hline\hline \end{tabular} \end{table} \section{\bf Results} Table \ref{table:wphikksp-bes} lists the branching fractions of $J/\psi\rightarrow\{\eta,\omega,\phi\} K^{0}_{S}K^{\pm}\pi^{\mp}$ and $J/\psi\rightarrow \{\omega,\phi\} K^{*}\bar{K}+c.c.$ obtained from the different decay modes. These branching fractions are somewhat larger than those of other experiments, listed in Table \ref{table:wphikksp-pdg}~\cite{wkkpmark3,dm288}, but they are still consistent within errors. The branching fraction for $J/\psi\rightarrow \eta K^{0}_{S}K^{\pm}\pi^{\mp}$ is measured for the first time. In the invariant mass spectra of $K^{0}_{S}K^{\pm}\pi^{\mp}$ and $K^{+}K^{-}\pi^{0}$ recoiling against the $\omega$, a resonance at $1.44$ GeV/$c^{2}$ is observed; its mass, width, and branching fractions are listed in Table \ref{table:x1440-branching-ratio}. In the invariant mass spectra of $K^{0}_{S}K^{\pm}\pi^{\mp}$ and $K^{+}K^{-}\pi^{0}$ recoiling against the $\phi$, no significant structure near 1.44 GeV/$c^{2}$ is seen, and upper limits on the $J/\psi$ decay branching fractions at the $90\%$ C.L. are given in Table \ref{table:x1440-branching-ratio}. \begin{table} \centering \caption{The branching fractions of $J/\psi$ decays in BESII.} \label{table:wphikksp-bes} \begin{tabular}{l|l|l|c|l}\hline\hline
Decay & final state & No. of events & efficiency & Branching fraction ($10^{-4}$)\\\hline $\omega K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(\pi^{+}\pi^{-}\pi^{0})K^{0}_{S}K^{\pm}\pi^{\mp}$& $1971.7\pm 41.4$ & $1.48\%$ & $37.7\pm0.8\pm5.8$ \\\hline $\eta K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(\pi^{+}\pi^{-}\pi^{0})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $231.6\pm 23.1$ & $1.18\%$ & $21.8\pm 2.2 \pm 3.4$ \\\hline $\omega K^{*}\bar{K}+c.c.$ & $(\pi^{+}\pi^{-}\pi^{0})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $898.7\pm97.7$ & $1.23\%$ & $62.0 \pm 6.8 \pm10.6$ \\ & $(\pi^{+}\pi^{-}\pi^{0})K^{+}K^{-}\pi^{0}$ & $175.6\pm27.4$ & $0.32\%$ & $65.3\pm10.2\pm13.5$ \\\hline $\phi K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(K^{+}K^{-})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $227.1\pm19.0$ & $1.56\%$ & $7.4\pm0.6 \pm1.4$\\\hline $\phi K^{*}\bar{K}+c.c.$ & $(K^{+}K^{-})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $194.8\pm25.0$ & $1.42\%$ & $20.8\pm2.7\pm3.9 $ \\ & $(K^{+}K^{-})K^{+}K^{-}\pi^{0}$ & $237.7\pm29.5$ & $1.71\% $ & $29.6\pm3.7\pm4.7$ \\\hline\hline \end{tabular} \end{table} \begin{table} \centering \caption{The branching fractions of $J/\psi$ decays from the MarkIII~\cite{wkkpmark3} and DM2~\cite{dm288} Collaborations.} \label{table:wphikksp-pdg} \begin{tabular}{l|l|l|c}\hline\hline & Decay & final state & Branching fraction ($10^{-4}$)\\\hline MarkIII & $\omega K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(\pi^{+}\pi^{-}\pi^{0})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $29.5\pm1.4\pm7.0$ \\\cline{2-4} & $\omega K^{*}\bar{K}+c.c.$ & $(\pi^{+}\pi^{-}\pi^{0})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $53\pm14\pm14$ \\ & & $(\pi^{+}\pi^{-}\pi^{0})K^{+}K^{-}\pi^{0}$ & \\\cline{2-4} & $\phi K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(K^{0}_{S}K^{0}_{L})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $7.0\pm0.6\pm1.0$ \\ & & $(K^{+}K^{-})K^{0}_{S}K^{+}\pi^{-}+c.c.$ & \\\hline DM2 &$\phi K^{0}_{S}K^{+}\pi^{-}+c.c.$ & $(K^{+}K^{-})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $7.4\pm0.9\pm1.1$ \\\cline{2-4} & $\phi K^{*}\bar{K}+c.c.$ & $(K^{+}K^{-})K^{0}_{S}K^{\pm}\pi^{\mp}$ & $20.8\pm2.7\pm3.7$ \\\hline\hline \end{tabular} \end{table} \begin{table} \centering \caption{The mass, width, and branching fractions of $J/\psi$ decays into ${\{\omega,\phi\}} X(1440)$.} \label{table:x1440-branching-ratio} \begin{tabular}{l|l}\hline\hline $J/\psi \rightarrow \omega X(1440)$ & $J/\psi \rightarrow \omega X(1440)$ \\ ($X\rightarrow K^{0}_{S}K^{+}\pi^{-}+c.c.$) & ($X\rightarrow K^{+}K^{-}\pi^{0}$) \\\hline $M=1437.6\pm 3.2$ MeV/$c^{2}$ & $M=1445.9\pm 5.7 $ MeV/$c^{2}$ \\ $\Gamma=48.9 \pm 9.0$ MeV/$c^{2}$ & $\Gamma=34.2\pm18.5$ MeV/$c^{2}$ \\\hline \multicolumn{2}{l}{$B( J/\psi\rightarrow \omega X(1440)\rightarrow \omega K^{0}_{S}K^{+}\pi^{-}+c.c.)=(4.86\pm0.69\pm0.81) \times 10^{-4}$}\\\hline \multicolumn{2}{l}{$B( J/\psi\rightarrow \omega X(1440) \rightarrow \omega K^{+}K^{-}\pi^{0})~~~~~~~~= (1.92\pm0.57\pm0.38) \times 10^{-4}$} \\\hline \multicolumn{2}{l}{$B(J/\psi\rightarrow \phi X(1440) \rightarrow \phi K^{0}_{S}K^{+}\pi^{-}+c.c.)<1.93 \times 10^{-5}$} ($90\%$ C.L.)\\\hline \multicolumn{2}{l}{$B( J/\psi \rightarrow \phi X(1440) \rightarrow \phi K^{+}K^{-}\pi^{0})~~~~~~~~< 1.71 \times 10^{-5}$} ($90\%$ C.L.)\\\hline \hline \end{tabular} \end{table} \acknowledgments The BES collaboration thanks the staff of BEPC and the computing center for their hard efforts. This work is supported in part by the National Natural Science Foundation of China under contracts Nos. 10491300, 10225524, 10225525, 10425523, 10625524, 10521003, the Chinese Academy of Sciences under contract No. KJ 95T-03, the 100 Talents Program of CAS under Contract Nos.
U-11, U-24, U-25, and the Knowledge Innovation Project of CAS under Contract Nos. U-602, U-34 (IHEP), the National Natural Science Foundation of China under Contract No. 10225522 (Tsinghua University), and the Department of Energy under Contract No. DE-FG02-04ER41291 (U. Hawaii).
\subsection*{Raman coherence in a harmonic potential} The coherence of an atomic ensemble can be accessed via a Ramsey experiment. Two $\pi/2$ pulses are applied to atoms initially in the ground state $\ket{1}$ of a two-level system, with a time delay $t$ in between. Immediately after the first pulse the internal atomic state in the rotating frame is given by $\ket{\psi} = \frac{1}{\sqrt{2}}\left(\ket{1}+ e^{i \vec{k}_{\text{eff}} \cdot \vec{r}_0} \ket{2}\right)$, where $\vec{k}_{\text{eff}} = \vec{k}_1-\vec{k}_2$, and $\vec{r}_0$ is the initial position of the atom. The Ramsey coherence upon readout is given by \eq{C(t) = \left| \left< e^{i \vec{k}_{\text{eff}} \cdot \left[\vec{r}_0 - \vec{r}(t)\right]+ i \phi(t)} \right> \right|, \label{eq:coherence_no_pi}} where $\vec{r}(t)$ is the atomic position at time $t$, and $\phi$ is the phase accumulated due to the differential light shift of the trapping laser~\footnote{\ensuremath{^{87}\text{Rb }} atoms trapped in an optical dipole trap experience a differential AC-Stark shift imposed by the different detunings of the trapping laser from their two ground-state hyperfine levels. Effectively this creates a stationary inhomogeneous broadening of the spectrum, decreasing the coherence time of the ensemble~\cite{Kuhr2005}.}. When the dephasing is dominated by the Doppler shift of the Raman beams, Eq.~\eqref{eq:coherence_no_pi} can be separated into a product of the MW and Raman coherences \eq{\begin{split} C(t) &= \left| \left< e^{i \phi(t)} \right> \right| \times \left| \left< e^{i \vec{k}_{\text{eff}} \cdot \left[\vec{r}_0 - \vec{r}(t)\right]} \right> \right| \\ & = C_\text{MW}(t) \times C_{\text{R}}(t),\end{split}\label{eq:split_coherence}} where $C_\text{R}(t)$ is the fast-evolving coherence due to the Raman control, and $C_\text{MW}(t)$ is the slowly-evolving coherence due to all other factors. The properties of $C_\text{MW}(t)$ have been extensively studied~\cite{Cornell2002,Treutlein2004,Sagi2010_Universal,Deutsch2010,Kleine2011,Coslovsky2017Collisions}. Typically, and in particular under our experimental conditions, $C_\text{MW}$ decays much more slowly than $C_\text{R}$, rendering its dynamics negligible. For non-interacting atoms trapped in a 1D harmonic trap of frequency $\omega$~\footnote{For non-interacting particles in a perfectly harmonic trap the generalization to 3D is trivial, as the different axes are uncoupled.}, the trajectory of each atom is given by $x(t) = x_0 \cos(\omega t) + \frac{v_0}{\omega}\sin(\omega t)$, where $x_0,v_0$ are its initial position and velocity. The Ramsey coherence can be derived analytically by substituting this into $C_\text{R}$ of Eq.~\eqref{eq:split_coherence} and averaging over the Boltzmann-distributed initial conditions $f(v_0,x_0)\sim \exp\left[-m(v_0^2+\omega^2 x_0^2)/2k_BT\right]$. Here $m$ is the atomic mass, $k_B$ is the Boltzmann constant and $T$ is the temperature of the atomic ensemble. The resulting coherence is given by \begin{equation} C_\text{R}(t) = \exp\left[-\ensuremath{\mathcal{N}}^2 \left[1-\cos(\omega t)\right]\right], \label{eq:coherence_harmonic} \end{equation} where we have defined the number of phase fringes on the atomic cloud as $\ensuremath{\mathcal{N}} = k_{\text{eff}} x_{\text{RMS}}$, with $x_{\text{RMS}} = \sqrt{\frac{k_B T}{m \omega^2}}$ the size of the atomic cloud.
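For completeness, the Gaussian average leading to Eq.~\eqref{eq:coherence_harmonic} can be made explicit. Writing $u \equiv k_{\text{eff}}\left[x_0 - x(t)\right] = k_{\text{eff}}\left[x_0\left[1-\cos(\omega t)\right] - \frac{v_0}{\omega}\sin(\omega t)\right]$, a zero-mean Gaussian variable, and using the identity $\left< e^{iu} \right> = e^{-\left<u^2\right>/2}$, one finds \eq{\left<u^2\right> = k_{\text{eff}}^2\left[\left<x_0^2\right>\left[1-\cos(\omega t)\right]^2 + \frac{\left<v_0^2\right>}{\omega^2}\sin^2(\omega t)\right] = 2\ensuremath{\mathcal{N}}^2\left[1-\cos(\omega t)\right],} since $\left<x_0^2\right> = x_{\text{RMS}}^2$ and $\left<v_0^2\right> = \omega^2 x_{\text{RMS}}^2$ for the Boltzmann distribution, and $\left[1-\cos(\omega t)\right]^2+\sin^2(\omega t) = 2\left[1-\cos(\omega t)\right]$.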
Figure~\ref{fig:fig1}(a) shows a plot of Eq.~\eqref{eq:coherence_harmonic} for three values of $\ensuremath{\mathcal{N}}$, revealing that whenever $\omega t$ is an even multiple of $\pi$ there is a full revival of coherence. Minimal coherence, $e^{-2\ensuremath{\mathcal{N}}^2}$, is reached whenever $\omega t$ is an odd multiple of $\pi$. For $\omega t \ll 1$ the coherence decays as a Gaussian $\sim \exp\left[-(t/\tau)^2\right]$, with $\tau = \sqrt{2}/\ensuremath{\mathcal{N}}\omega = \sqrt{2}/k_{\text{eff}} v_\text{RMS}$, where $v_\text{RMS} = \sqrt{k_BT/m}$ is the RMS velocity. In the limit $\omega t \rightarrow 0$, the initial decay rate converges to that of free atoms up to a factor of $\sqrt{2}$~\footnote{The reason for the $\sqrt{2}$ discrepancy is that under thermal equilibrium, the equipartition theorem dictates that as $\omega \rightarrow 0$ the size of the atomic cloud grows, so even in this limit there are always atoms experiencing strong potential gradients and the effect of the harmonic potential cannot be neglected.}. \subsection*{Effects of anharmonicity} For a thermal cloud in a symmetric trap that is not perfectly harmonic, each atom has a slightly different oscillation period, and therefore at time $t = T_\text{osc}= 2 \pi/\braket{\omega}$ the atoms do not all return to their initial positions, reducing the revival amplitude. We perform a Monte Carlo simulation, solving the equations of motion for $10^3$ atoms in a 3D Gaussian confining potential and calculating their coherence according to Eq.~\eqref{eq:coherence_no_pi}. Figures~\ref{fig:fig1}(a-b) compare the analytic result of Eq.~\eqref{eq:coherence_harmonic} to simulation results for different values of $\ensuremath{\mathcal{N}}$, revealing good agreement between the two. In the simulations [Fig.~\ref{fig:fig1}(b)], however, the effect of anharmonicity is manifested in a broadening and a reduction in amplitude of the Raman revivals as $\ensuremath{\mathcal{N}}$ increases. \begin{figure} \centering \begin{overpic} [width=\linewidth]{fig1} \put(40,280){ \textbf{(a)}} \put(148,280){ \textbf{(b)}} \put(40,190){ \textbf{(c)}} \put(148,190){ \textbf{(d)}} \put(40,90){ \textbf{(e)}} \end{overpic} \caption[Ramsey coherence in a trap.]{{Simulated Ramsey coherence in a trap.} {\bf (a)} Plot of the result of Eq.~\eqref{eq:coherence_harmonic} for $\ensuremath{\mathcal{N}}=1/3$ (purple, solid), $\ensuremath{\mathcal{N}}=1$ (green, dashed) and $\ensuremath{\mathcal{N}}=3$ (light blue, dash-dotted). The coherence drops rapidly on an $\ensuremath{\mathcal{N}}$-dependent timescale and revives fully after an oscillation period in the trap. {\bf (b)} In contrast to the perfectly harmonic trap case of (a), for a Gaussian trap (with anharmonicity $\mathcal{A}=0.025$) the simulated revivals do not reach unity coherence, but rather decrease monotonically. {\bf (c)} Scaling of $\epsilon$, the infidelity of the revival [Eq.~\eqref{eq:error}], with $\ensuremath{\mathcal{N}}$ gives a slope of 1.993~(2), in agreement with the prediction. This simulation is a single run at a given temperature in which we scan $\ensuremath{\mathcal{N}}$ by changing the angle between the Raman beams. {\bf (d)} $\log (\epsilon)$ as a function of $\log (T)$. The linear fit gives a slope of 2.96~(4), also in agreement with the prediction. {\bf (e)} An improvement over the anharmonic-trap Raman coherence (solid blue line) is achieved by applying echo pulses at the peaks of the revivals (dashed red line), or by using dynamical decoupling (dotted orange line).
In the dynamical decoupling simulation the rate of $\pi$-pulses is 40 pulses per $T_{\text{osc}}$. The dynamical decoupling pulses are applied until $t=4T_{\text{osc}}$, at which point the coherence drops and revives after yet another oscillation period. All errors correspond to a $1\sigma$ confidence level.} \label{fig:fig1} \end{figure} To further analyze this effect we define a dimensionless ensemble anharmonicity $\mathcal{A}\equiv\braket{\omega}/\omega-1$, under the assumption that the oscillation frequency shift of each atom is proportional to its initial energy. Here $\omega$ is the (harmonic) frequency at the trap bottom. For a perfectly harmonic trap $\mathcal{A} = 0$, and for our crossed Gaussian-beam dipole trap $\mathcal{A} < 0$. Calculating the phase at integer multiples $\nu$ of the trapping period $T_\text{osc}$ to leading order in $\mathcal{A}$ yields the error in the $\nu^\text{th}$ revival caused by anharmonicity \begin{equation}\label{eq:error} \epsilon_\nu\equiv1-C_\text{R}\left(t=\nu T_\text{osc}\right)\approx 6\pi^2\mathcal{A}^2\ensuremath{\mathcal{N}}^2\nu^2. \end{equation} We test this scaling by comparing it to simulations with a constant anharmonicity $\mathcal{A}=0.025$ and varying $\ensuremath{\mathcal{N}}$ [Fig.~\ref{fig:fig1}(c)]. The fitted slope on a log-log scale is 1.993~(2), very close to 2, as expected. This scaling prediction can also be extended to other experimental parameters, such as the temperature. Since $\left\vert\mathcal{A}\right\vert \sim T$ and $\ensuremath{\mathcal{N}} \sim T^{1/2}$, we get $\epsilon \sim T^3$~\footnote{The scaling for $\mathcal{A}$ comes from the assumption that the oscillation frequency shift of each atom is proportional to its initial energy. The scaling for \ensuremath{\mathcal{N}} is a consequence of the equipartition theorem.}. We test this prediction by comparing simulation results at different temperatures to the expected scaling law [Fig.~\ref{fig:fig1}(d)]. The fitted slope on the log-log scale is 2.96~(4), in agreement with the prediction. A well-established method for extending the coherence is the Hahn echo ($\pi/2\to\pi\to\pi/2$)~\cite{Hahn}, where the inverting $\pi$-pulse is applied at time $t_\pi$. Neglecting global phases, the effect of the $\pi$-pulse is given by $\ket{1} \rightarrow \ket{2}$ and $\ket{2} \rightarrow e^{-2i \vec{k}_{\text{eff}} \cdot \vec{r}(t_\pi)}\ket{1}$. The coherence is then \eq{C_\text{R}(t>t_\pi) = \left| \left< \exp\left[i \vec{k}_{\text{eff}} \cdot \left[2\vec{r}(t_\pi)-\vec{r}_0 - \vec{r}(t)\right]\right] \right> \right| \label{eq:coherence_Echo}.} In a perfectly harmonic trap, when $t_\pi = 2\pi/\omega = T_\text{osc}$ (the oscillation period in the trap), the echo pulse has no effect on the coherence. According to our simulations, when echo pulses are applied at the times of the coherence revivals in an anharmonic trap, the following revivals improve. This is because to first order the phase vanishes independently of the displacement of the atoms (which differs from atom to atom), and the echo pulse overcomes the anharmonicity [Fig.~\ref{fig:fig1}(e)]. Taking into account second-order effects, we find $\epsilon \sim \mathcal{A}^4 \ensuremath{\mathcal{N}}^2$. On the other hand, if $t_\pi = T_\text{osc}/2$, \eq{C_\text{R}(t>t_\pi) = \exp\left[-\ensuremath{\mathcal{N}}^2\left[5+3\cos{(\omega t)}\right]\right].
\label{eq:bad_echo}} This follows since each atom satisfies $x(t_\pi)=-x_0$ at $t_\pi=T_\text{osc}/2$, so the argument of the exponent in Eq.~\eqref{eq:coherence_Echo} becomes (in 1D) $-k_{\text{eff}}\left[3x_0+x(t)\right]$, a zero-mean Gaussian variable of variance $k_{\text{eff}}^2 x_{\text{RMS}}^2\left[10+6\cos(\omega t)\right]$. The coherence drops to $e^{-8\ensuremath{\mathcal{N}}^2}$ at $t=T_\text{osc}$, and experiences a small revival at $t=3T_\text{osc}/2$ to a value of $e^{-2\ensuremath{\mathcal{N}}^2}$. An echo pulse applied at $t=T_\text{osc}$ keeps the coherence high at the time of the following revival because the dynamics before and after the pulse are nearly identical. An echo pulse applied at $t = T_\text{osc}/2$ dramatically decreases the following coherence because here the dynamics before and after the pulse are opposite. A more robust method for reducing anharmonicity-driven decoherence is dynamical decoupling (DD)~\cite{Biercuk2009,Almog2011,Genov2017}. The application of a fast sequence of $\pi$-pulses (\emph{i.e.}, many pulses within the trap oscillation period) can be described by a generalization of Eq.~\eqref{eq:coherence_Echo}, yielding the result depicted in orange in Fig.~\ref{fig:fig1}(e). The coherence remains high up to $t = 4 T_{\text{osc}}$, which is when the simulated DD pulses stop. Then there is a decay of the coherence followed by a revival at $t = 5 T_{\text{osc}}$. DD can also be useful in mitigating the effect of interactions~\cite{Sagi2010_Process}. A prominent example of such interactions is elastic atomic collisions occurring at a rate $\Gamma_\text{coll}$, which can hinder the coherence significantly. A single collision reduces both the Ramsey and echo revivals because when an atom collides, its trajectory is changed and therefore it does not return to its original position at $t=T_{\text{osc}}$. Under the effect of a set of DD pulses applied at a rate $f_\text{DD}$, the attainable coherence time, $t_c$, can be derived by adapting the calculation of~\cite{Sagi2010_Process}. It is the time for which the width of the phase distribution is on the order of the effective wavelength of the Raman control $\lambda_\text{eff}\sim 1/k_\text{eff}$, yielding $t_c\sim \tau_D^2f_\text{DD}^2/\Gamma_\text{coll}$, with a notable quadratic dependence on $f_\text{DD}$. Here the Doppler time $\tau_D=\left(v_\text{RMS}k_\text{eff}\right)^{-1}$ is the inverse of the product of the RMS velocity of the cloud and the effective Raman wave number. \subsection*{Experimental setup and results} Our cold-atom apparatus is described in detail in \cite{Sagi2010_Universal}. In short, it consists of $6\times 10^4$ $^{87}$Rb atoms at a temperature of about $2.4~\mu$K trapped in a crossed optical trap. The trapping frequencies are $(\omega_x, \omega_y, \omega_z) \approx 2 \pi \times (385, 385, 110)$~Hz, giving $x_\text{RMS} \approx 6~\mu$m. The Raman transitions are performed between the $\ket{1} \equiv \ket{F=1,m_F=0}$ and $\ket{2} \equiv \ket{F=2,m_F=0}$ first-order Zeeman-insensitive states of the $5^2S_{1/2}$ manifold. At the beginning of each experiment the atoms are prepared in the $\ket{1}$ state. At the end of each experiment we use a state-selective fluorescence-detection (``normalized detection'') scheme to evaluate the fraction of atoms in state $\ket{2}$. For the Raman control we use a distributed feedback laser offset-locked to an atomic reference~\cite{offset_locking_paper}, yielding a one-photon detuning of $-3.4$~GHz relative to the $F'=2$ level of the $D_1 (5^2P_{1/2})$ transition at a wavelength of 795~nm. The output beam is split into two: one beam is transported directly to the atoms via a single-mode, polarization-maintaining optical fiber, while the other passes through a fiber-coupled electro-optic modulator before being transported to the atoms in the same way.
A 10~MHz frequency shift is used to select only one of the sidebands created by the electro-optic modulator. Each beam hits the trapped atomic cloud at a small angle ($\pm \alpha/2$) from the trap longitudinal symmetry axis. A magnetic field corresponding to 100~kHz in the direction of this symmetry axis sets the quantization axis. The two beams are circularly polarized to give a large $\sigma^+$ component, allowing only transitions that preserve $m_F$. By setting the two-photon detuning, only the $\ket{1} \leftrightarrow \ket{2}$ transition is resonant. In Fig.~\ref{fig:fig2} we present a set of practically Doppler-free spectroscopic measurements ($\alpha \approx 0.1^\circ$) obtained using our Raman control. Fig.~\ref{fig:fig2}(a) presents a Rabi spectrum. Fitting the obtained spectrum to a squared sinc function, we find that the resonance frequency is shifted from that of the hyperfine transition by about 1.4~kHz. This is mainly due to the differential light shift caused by the Raman control itself. For our chosen one-photon detuning, the Raman beams generate light shifts of opposite signs, meaning that by properly choosing the power ratio between them, the shift can be nullified [Fig.~\ref{fig:fig2}(b)]. This ability to control the light shift enables us to set a controlled detuning without sacrificing the quality of the pulses and the contrast of the Rabi fringes, providing rapid Ramsey oscillations and allowing exploration of the fast decay dynamics. Fig.~\ref{fig:fig2}(c) shows Raman Rabi oscillations, decaying to $1/e$ within about 6.4 oscillations (130~$\mu$s), dominated by the spatial inhomogeneity of the Raman control, which causes different atoms to experience a different effective Rabi frequency. This is sufficient for few-pulse sequences such as Ramsey and echo, but a limiting factor in the ability to perform many-pulse sequences such as DD. Fig.~\ref{fig:fig2}(d) presents a negligible-angle Raman Ramsey measurement, imitating the dynamics of the MW coherence. Represented by the contrast of the fringes, the coherence decays on a time scale of tens of milliseconds, comparable to decay times obtained under similar experimental conditions using MW control~\cite{Sagi2010_Universal}. \begin{figure} \centering \begin{overpic} [width=\linewidth]{fig2} \put(130,275){\large \textbf{(a)}} \put(180,275){\large \textbf{(b)}} \put(198,180){\large \textbf{(c)}} \put(198,83){\large \textbf{(d)}} \end{overpic} \caption[Raman Ramsey experiment]{{Doppler-free Raman spectroscopy ($\alpha\approx 0.1^{\circ}$).} {\bf (a)} Atomic Rabi spectrum. The atoms perform the $\ket{F=1,m_F = 0}\to\ket{F=2,m_F = 0}$ transition, indicated by the measurement of the normalized $\ket{F=2,m_F = 0}$ population (vertical axis). The horizontal axis is the detuning from the two-photon resonance of $\approx6.8~$GHz. {\bf (b)} Light shift induced by the Raman beams as a function of the ratio between their powers. By properly choosing the detuning and the power ratio, the light shift can be either canceled out or increased. The solid line is a linear fit. {\bf (c)} Rabi oscillations. The two-photon Rabi frequency is 49~kHz (the solid line is a fit to an exponentially decaying sine) and the decay time is 130~$\mu$s, dominated by inhomogeneity in the Raman beams. The data presented are an average over three realizations. {\bf (d)} Ramsey fringes. The extracted envelope (dash-dotted red line) shows long coherence times. The solid line is a fit to an exponentially decaying sine.
Normalized detection stands for the fraction of atoms in the $\ket{F=2}$ state.} \label{fig:fig2} \end{figure} Fig.~\ref{fig:fig3}(a) shows a Ramsey measurement induced by the Raman beams separated by an angle $\alpha \approx 0.9^\circ$, generating $\ensuremath{\mathcal{N}} \approx 0.78$ fringes on the cloud. The coherence decays on a timescale $\tau\approx 780~\mu$s, compared to the expected value of $\tau = \sqrt{2}/(\ensuremath{\mathcal{N}}\omega) \approx 750~\mu$s calculated from the independently measured temperature and trap oscillation frequencies. This is much faster than the decay in the small-angle case of Fig.~\ref{fig:fig2}(d), and justifies neglecting $C_\text{MW}$ in the derivation of Eq.~\eqref{eq:split_coherence}. The coherence decays to a minimum value of 0.29, in agreement with the predicted value of 0.3 obtained from Eq.~\eqref{eq:coherence_harmonic}. A clear revival can be seen around $t = T_{\text{osc}} = 2.6$~ms. The revival amplitude is 0.61. Fig.~\ref{fig:fig3}(b) presents an echo experiment in which the echo $\pi$-pulse is given at $t_\pi = T_{\text{osc}}$. The second revival, at $2T_{\text{osc}}$, becomes sharper. The echo pulse, however, does not improve the coherence of the subsequent revival. Fig.~\ref{fig:fig3}(c) shows an echo experiment with a $\pi$-pulse given at $t \approx T_{\text{osc}}/2$, predicted to cause a deterioration of the coherence. As expected from the analysis of Eq.~\eqref{eq:bad_echo}, a rapid decrease of the coherence is observed after the echo pulse, with a small revival at $t = 1.5T_{\text{osc}}$. \begin{figure} \centering \begin{overpic} [width=0.9\linewidth]{fig3} \put(175,250){\large \textbf{(a)}} \put(175,170){\large \textbf{(b)}} \put(175,90){\large \textbf{(c)}} \end{overpic}\\ \caption[Ramsey and echo with Raman beams]{{Raman Ramsey and echo.} {\bf (a)} For a finite, small angle ($\alpha \approx 0.9^{\circ}$), a rapid partial decay is observed, followed by clear partial revivals at $t = T_{\text{osc}}$ and $t = 2T_{\text{osc}}$. {\bf (b)} An echo experiment with a $\pi$-pulse given at $t = T_{\text{osc}}$. The contrast of the revivals is augmented. {\bf (c)} An echo experiment with a $\pi$-pulse given at $t = T_{\text{osc}}/2$. As a result, the revival at $t = T_{\text{osc}}$ disappears and a small revival appears at about $t = 1.5T_{\text{osc}}$. Red lines, summarized in Fig.~\ref{fig:fig4}(a), depict the coherence, extracted by taking the standard deviation over an oscillation.} \label{fig:fig3} \end{figure} Fig.~\ref{fig:fig4}(a) summarizes the results of the Ramsey and echo experiments, where the echo pulse is given either at $T_\text{osc}$ or at $T_\text{osc}/2$ [Fig.~\ref{fig:fig3}(a-c)], normalized to the natural scale of $C_\text{R}\in[0,1]$, and compares them to Monte Carlo simulations for a realistic trap (see appendix) and for an ideal Gaussian cross-beam trap. Fig.~\ref{fig:fig4}(b) presents the simulated coherence for the trap with the measured anharmonicity, in excellent agreement with all of the experimental data and without any free fitting parameters. Fig.~\ref{fig:fig4}(c) shows results for the ideal crossed-beam Gaussian trap, highlighting the effect of the added anharmonicity. \begin{figure} \centering \begin{overpic} [width=\linewidth]{fig4} \put(35,215){\large \textbf{(a)}} \put(35,125){\large \textbf{(b)}} \put(35,35){\large \textbf{(c)}} \end{overpic} \caption[Echo]{{Echo dynamics.} {\bf (a)} A summary of measurements (a-c) of Fig.~\ref{fig:fig3}.
{\bf (b)} Simulated coherence for a trap with the measured anharmonicity of $\mathcal{A}=0.4$ (see appendix), in excellent agreement with the experiment with no fitting parameters. {\bf (c)} Simulation results for an ideal crossed Gaussian beam trap, highlighting the deteriorating effect of the added anharmonicity on the atomic coherence.} \label{fig:fig4} \end{figure} Fig.~\ref{fig:fig5} shows results of a DD experiment (a) and compares them to simulation (b). As long as the DD is active, the coherence remains high, up to a slow decay appearing in the experimental data and not in the simulation. This is due to the finite number of Rabi oscillations available in the experiment before loss of coherence [Fig.~\ref{fig:fig2}(c)]. These effects can be further mitigated by the use of more advanced DD schemes~\cite{Sagi2010_Process,Almog2011}. Once the DD pulses cease, there is a rapid decay followed by a revival roughly $T_{\text{osc}}$ later, in qualitative agreement with a generalization of our predictions of Eq.~\eqref{eq:coherence_Echo}. \begin{figure} \centering \begin{overpic} [width=\linewidth]{fig5} \put(35,140){\large \textbf{(a)}} \put(35,42){\large \textbf{(b)}} \end{overpic} \caption[Dynamical decoupling]{{Dynamical decoupling.} {\bf (a)} Experimental results. During the DD, the coherence remains high. A revival appears about $T_{\text{osc}}$ after the DD has ended. The number of $\pi$-pulses is 10 and 6 for the 2.65~ms and 1.59~ms data, respectively. {\bf (b)} Simulation results in a trap with calibrated anharmonicity.} \label{fig:fig5} \end{figure} Let us now analyze a simple case of a realistic trapping potential with weaker anharmonicity: an Ioffe-Pritchard magnetic trap, commonly used in cold-atom experiments. In this trap, the anharmonicity is controlled simply by the bias magnetic field at the trap bottom~\cite{Esslinger1998,Steinhauer2002}. We calculate the trap ensemble anharmonicity $\mathcal{A}$ as a function of the bias magnetic field for realistic parameters and present the resulting Raman coherence time, defined as the time at which the coherence revival of Eq.~\eqref{eq:error} drops to 1/2~\footnote{For the magnetic trap we use the expansion to the $4^\text{th}$ order in $x$ of the magnetic field magnitude $|\vec{B}|=\sqrt{B_x^2+B_\text{bias}^2}$ for the $y=0$ plane with $\hat{z}$ pointing along the axis of the bias coil. Here $B_x$ is linear in $x$, $B_\text{bias}$ is the bias field, and we use the \ensuremath{^{87}\text{Rb }} wavelength $\lambda=780$~nm. For the correlation time $t_c$ we use the expression given in Eq.~\eqref{eq:error} and solve for $\epsilon_\nu=1/2$. Defining $t_c=\nu(\epsilon_\nu=1/2)T_\text{osc}$ we obtain $t_c=\left(6\pi\omega_\text{osc}\mathcal{A}^2\mathcal{N}^2\right)^{-1}$}. The obtained coherence times, isolating the effect of anharmonicity, are presented in Figure~\ref{fig:fig6}. The coherence time ranges from sub-$\mu$s for 10~$\mu$K atoms at a small bias (representing high anharmonicity) and a large angle between the beams ($180^\circ$, typical for atomic interferometers~\cite{Horikoshi2007,Burke2008,Segal2010,Kafle2011,Leonard2012,Fogarty2013,Mizrahi2013}) to $\sim10^4$~s for 1~$\mu$K atoms, a strong bias field, and a small angle ($1^\circ$, typical for atomic memories~\cite{Zhao2009Long,dudin2010light,dudin2013light}).
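A minimal numerical sketch of this closed-form estimate (our illustration, not the code behind Fig.~\ref{fig:fig6}; the trap frequency, anharmonicity, and fringe numbers below are assumed values chosen only to reproduce the two regimes just quoted):
\begin{verbatim}
import numpy as np

def coherence_time(omega_osc, A, N):
    """t_c = 1/(6*pi*omega_osc*A^2*N^2), the footnote's expression."""
    return 1.0 / (6.0 * np.pi * omega_osc * A**2 * N**2)

# Assumed, illustrative parameters (not read off Fig. 6):
t_memory = coherence_time(2 * np.pi * 100.0, A=1e-3, N=0.05)  # small angle
t_interf = coherence_time(2 * np.pi * 100.0, A=0.4, N=50.0)   # 180 degrees
print(t_memory, t_interf)  # ~3e4 s versus ~2e-7 s
\end{verbatim}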
Echo pulses given at trapping periods will increase these times even further~\footnote{We note that a revival of coherence for a large number of trap oscillations has been observed with a single ion in a linear Paul trap~\cite{Mizrahi2013}; however, the frequency there is high, on the order of tens of MHz, leading to an overall shorter coherence time.}. \begin{figure} \centering \begin{overpic} [width=\linewidth]{fig6} \end{overpic} \caption[Coherence time in a magnetic trap]{{Predicted coherence time in a magnetic trap.} Coherence time, defined as the time at which the coherence at the $\nu^\text{th}$ revival reaches a value of 1/2, as a function of the bias field. Solid lines represent a temperature of 1~$\mu$K and dash-dotted lines a temperature of 10~$\mu$K. Red color (thick) represents an angle between the Raman beams of $180^\circ$, typical for atomic interferometers, and blue color (thin) represents a $1^\circ$ angle, typical for atomic memories. For cold atomic ensembles and small angles the coherence is maintained for as long as $\sim10^4$~s. This analysis isolates the effect of anharmonicity and disregards other decoherence mechanisms.} \label{fig:fig6} \end{figure} \subsection*{Summary and outlook} In this paper we have demonstrated experimentally and analyzed theoretically and numerically the effect of a trapping potential on Raman-imprinted atomic coherence. Clear revivals of the coherence at integer multiples of the trapping period have been predicted and measured. We have shown that the amplitude of the observed revivals is strongly affected by the anharmonicity of the trapping potential. In addition, we predict that for traps with weak anharmonicity, quantum control methods such as echo and dynamical decoupling can increase the revival amplitude, thereby mitigating the deterioration of the coherence due to the trap anharmonicity and elastic atomic collisions. Raman atomic coherence in a trap is closely related to the field of free-oscillation atom interferometry~\cite{Horikoshi2007,Burke2008,Segal2010,Kafle2011,Leonard2012,Fogarty2013} and may help to further analyze the limitations and properties of such interferometers. Our results also apply to light storage~\cite{zhao2009millisecond,schnorrberger2009electromagnetically, dudin2010light, dudin2013light}, where the stored atomic coherence is released in the form of light emitted in a controlled direction. We expect that this type of experiment would generate results similar to those presented in this paper, \emph{i.e.} a fast reduction in the power of the diffracted light, a partial revival after the trapping period, and improved signals when spin echo or dynamical decoupling are used. Such a stored-light experiment can be used to store images~\cite{Shuker2008} whose number of resolution pixels corresponds to $\mathcal{N}$, the number of fringes used in Eqs.~\eqref{eq:coherence_harmonic} and~\eqref{eq:bad_echo}.
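As a self-contained illustration of the revival physics summarized above, the following minimal Monte Carlo sketch (our illustration, not the simulation code used for Figs.~\ref{fig:fig4} and~\ref{fig:fig5}; it assumes a purely harmonic trap, instantaneous pulses, a thermal Gaussian ensemble, and the convention $\ensuremath{\mathcal{N}}=k_\text{eff}x_\text{RMS}$) reproduces the Ramsey minimum of $e^{-2\ensuremath{\mathcal{N}}^2}\approx 0.3$ for $\ensuremath{\mathcal{N}}\approx 0.78$, the full harmonic revival, and the contrast between an echo pulse at $T_\text{osc}$ and one at $T_\text{osc}/2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
omega, N_fr, n = 2 * np.pi, 0.78, 400_000   # time units with T_osc = 1
k_eff = 1.0
x_rms = N_fr / k_eff                        # convention N = k_eff * x_RMS
x0 = rng.normal(0.0, x_rms, n)
v0 = rng.normal(0.0, x_rms * omega, n)      # equipartition in the trap

def x(t):                                   # classical harmonic trajectory
    return x0 * np.cos(omega * t) + (v0 / omega) * np.sin(omega * t)

def C(t, t_pi=None):
    """|<e^{i phi}>|: Ramsey phase, or spin-echo phase when t_pi is given."""
    if t_pi is None:
        phi = k_eff * (x(t) - x0)
    else:
        phi = k_eff * (x(t) - 2 * x(t_pi) + x0)
    return abs(np.exp(1j * phi).mean())

print(C(0.5), C(1.0))    # ~0.30 minimum, then a full revival
print(C(1.0, t_pi=0.5))  # echo at T/2 spoils the revival, ~e^{-8N^2}
print(C(2.0, t_pi=1.0))  # echo at T preserves the next revival, ~1
\end{verbatim}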
\section{Introduction}\label{sec:intro} A simultaneous embedding of two planar graphs embeds each graph in a planar way---using the same vertex positions for both embeddings. Edges of one graph are allowed to intersect edges of the other graph. There are two versions of the problem: In the first version, called \emph{Simultaneous Embedding with Fixed Edges} (\textsc{Sefe}), edges that occur in both graphs must be embedded in the same way in both graphs (and hence, cannot be crossed by any other edge). In the second version, these edges can be drawn differently for each of the graphs. Both versions of the problem have a geometric variant where edges must be drawn using straight-line segments. Simultaneous embedding problems have been extensively investigated over the last few years, starting with the work of Brass et al.~\cite{bcdeeiklm-spge-CG07} on simultaneous straight-line drawing problems. Bl\"asius et al.~\cite{bkr-sepg-HGDV13} recently surveyed the area. For example, it is possible to decide in linear time whether a pair of graphs admits a \textsc{Sefe} or not, if the common graph is biconnected~\cite{DBLP:journals/jda/AngeliniBFPR12}. When actually drawing these simultaneous embeddings, a natural choice is to use straight-line segments. Only very few graphs can be drawn in this way, however, and some existing results need exponential area. For instance, there exist a tree and a path which cannot be drawn simultaneously with straight-line segments~\cite{DBLP:journals/jgaa/AngeliniGKN12}, and the algorithm for simultaneously drawing a tree and a matching~\cite{cklmsv-gsegm-JGAA11} does not provide a polynomial area bound. For the case of edges with bends, that is, polygonal edges, Erten and Kobourov~\cite{ek-sefb-JGAA05} showed that three bends and quadratic area suffice for any pair of planar graphs (without fixed edges), and that one bend suffices for pairs of trees. Kammer~\cite{k-setbe-SWAT05} reduced the number of bends to two for the general planar case. In these results, however, the \emph{crossing angles} can be very small. We suggest a new approach that overcomes the aforementioned problems. We insist that crossings occur at right angles, thereby ``taming'' them. We do this while drawing on a grid of size $O(n) \times O(n)$ for $n$-vertex graphs, and we can still draw any pair of planar graphs simultaneously. We do not consider the problem of fixed edges. In a way, our results give a measure of the geometric complexity of simultaneous embeddability for various pairs of graph classes, some of which can be combined more easily (that is, with fewer bends) and some not as easily (needing more bends). Brightwell and Scheinermann~\cite{BS93} proved that the problem of simultaneously drawing a (primal) embedded graph and its dual always admits a solution if the input graph is a triconnected planar graph. Erten and Kobourov~\cite{EK05b} presented an $O(n)$-time algorithm that computes simultaneous drawings of a triconnected planar graph and its dual on an $O(n^2)$ integer grid, where $n$ is the total number of vertices in the graph and its dual. However, these drawings can have non-right-angle crossings. More formally, in this paper we study the \emph{RAC simultaneous drawing problem} (\emph{\textsc{RacSim}\xspace drawing problem}). Let~$G_1=(V,E_1)$ and~$G_2=(V,E_2)$ be two planar graphs on the same vertex set.
We say that~$G_1$ and~$G_2$ admit a \textsc{RacSim}\xspace drawing if we can place the vertices on the plane such that (i) each edge is drawn as a polyline, (ii) each graph is drawn planar, (iii) there are no edge overlaps and (iv) crossings between edges in~$E_1$ and~$E_2$ occur at right angles. Argyriou et al.~\cite{abks-grsdg-JGAA13} introduced and studied the geometric version of \textsc{RacSim}\xspace drawing. In particular, they proved that it is always possible to construct a geometric \textsc{RacSim}\xspace drawing of a cycle and a matching in quadratic area, while there exist a wheel and a cycle which do not admit a geometric \textsc{RacSim}\xspace drawing. The variant that we study was left as an open problem. \begin{table}[b] \centering \begin{tabular}{l@{\quad$+$\quad}lcc} \multicolumn{2}{c}{Graph classes} & Number of bends & Ref.\\ \hline Planar & Planar & $6+6$ & Thm.~\ref{thm:planar}\\ 2-page book embeddable & 2-page book embeddable & $4+4$ & Cor.~\ref{cor:twopage}\\ Outerplanar & Outerplanar & $3+3$ & Thm.~\ref{thm:outerplanar}\\ \hline Cycle & Cycle & $1+1$ & Thm.~\ref{thm:cycle}\\ Caterpillar & Cycle & $1+1$ & Thm.~\ref{thm:cater}\\ \multicolumn{2}{l}{Four Matchings} & $1+1+1+1$ & Thm.~\ref{thm:fourmatch}\\ Tree & Matching & $1+0$ & Thm.~\ref{thm:treematch}\\ \hline Wheel & Matching & $2+0$ & Thm.~\ref{thm:wheelmatch}\\ Outerpath & Matching & $2+1$ & Thm.~\ref{thm:outpmatch}\\ \hline \end{tabular} \medskip \caption{A short summary of our results.} \label{table:res} \end{table} \paragraph{Our contribution.} First, we look at the most general version of the problem: two planar graphs. (In a simultaneous drawing, certainly both graphs must---individually---be planar.) We give a linear-time algorithm for this case, which produces a drawing in quadratic area with at most six bends per edge. For 2-page book embeddable graphs and outerplanar graphs, we give algorithms that guarantee four and three bends per edge, respectively. Then we turn our attention to graph classes that are more restricted, but for which we can give algorithms that use very few bends. See Table~\ref{table:res} for a full list of results. The main approach in these algorithms is to find linear orders on the vertices of the two graphs and then to compute coordinates for the vertices based on these orders. (See, for example, Kaufmann and Wiese~\cite{kw-evpfb-JGAA02}.) \section{\textsc{RacSim}\xspace Drawings of general graphs} \label{sec:morebends} In this section, we study general classes of planar graphs and show how to efficiently construct \textsc{RacSim}\xspace drawings with more than two bends per edge in quadratic area. In particular, we prove that two planar graphs on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(14n-26) \times (14n-26)$ with six bends per edge (Theorem~\ref{thm:planar}). This result can also be applied to 2-page book embeddable graphs, where it gives a \textsc{RacSim}\xspace drawing on an integer grid of size $(11n - 32) \times (11n - 32)$. It also reduces the number of bends to four per edge (Corollary~\ref{cor:twopage}). If the input is two outerplanar graphs, the algorithm can be improved to get a \textsc{RacSim}\xspace drawing on an integer grid of size $(7n - 10) \times (7n - 10)$, with three bends per edge (Theorem~\ref{thm:outerplanar}).
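Throughout, the drawings are produced by fixing linear vertex orders and then routing edges with a fixed bend pattern. As a minimal illustration of this order-based scheme (a Python sketch of our own, not code from the paper; it instantiates the one-bend routing that Lemma~\ref{lem:path} below applies to two paths):
\begin{verbatim}
def racsim_two_paths(path1, path2):
    """Grid points and one bend per edge, following Lemma 1:
    vertex v is placed at (2*p1(v) - 1, 2*p2(v) - 1)."""
    p1 = {v: i + 1 for i, v in enumerate(path1)}
    p2 = {v: i + 1 for i, v in enumerate(path2)}
    pos = {v: (2 * p1[v] - 1, 2 * p2[v] - 1) for v in path1}

    bends1 = {}                          # edges of the x-monotone path
    for v, w in zip(path1, path1[1:]):
        if pos[v][0] > pos[w][0]:        # ensure v lies to the left of w
            v, w = w, v
        (xv, yv), (_, yw) = pos[v], pos[w]
        bends1[(v, w)] = (xv, yw - 1) if yv < yw else (xv, yw + 1)

    bends2 = {}                          # edges of the y-monotone path
    for v, w in zip(path2, path2[1:]):
        if pos[v][1] < pos[w][1]:        # ensure v lies above w
            v, w = w, v
        (xv, _), (xw, yw) = pos[v], pos[w]
        bends2[(v, w)] = (xv + 1, yw) if xv < xw else (xv - 1, yw)

    return pos, bends1, bends2

pos, b1, b2 = racsim_two_paths(list("abcd"), list("bdac"))
print(pos["a"])  # (1, 5): first along path1, third along path2
\end{verbatim}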
\begin{figure}[tb]% \hfill \begin{subfigure}{.44\textwidth} \centering \includegraphics{kaufmannwiese}% \caption{} \label{fig:kaufmannwiese}% \end{subfigure} \hfill \begin{subfigure}{.44\textwidth} \centering \includegraphics[page=1]{planar}% \caption{} \label{fig:planar}% \end{subfigure} \hfill \caption{(a)~A drawing of a planar graph by Kaufmann and Wiese~\cite{kw-evpfb-JGAA02} and (b)~the drawing by our algorithm with at most six bends per edge. The edge that crosses the spine is drawn dashed. The dummy vertex placed on this edge is drawn as a square.} \end{figure} \begin{theorem}\label{thm:planar} Two planar graphs on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(14n-26) \times (14n-26)$ with six bends per edge. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{pf} Let~$\ensuremath{\mathcal{G}}\xspace_1=(V,E_1)$ and~$\ensuremath{\mathcal{G}}\xspace_2=(V,E_2)$ be the two planar graphs. Central in our approach is an algorithm by Kaufmann and Wiese~\cite{kw-evpfb-JGAA02}, which, given a planar graph~$\ensuremath{\mathcal{G}}\xspace$, computes a mapping of the vertices of~$\ensuremath{\mathcal{G}}\xspace$ to the points of a point set~$\ensuremath{\mathcal{P}}\xspace$ restricted to a horizontal line (called \emph{spine}), such that when $\ensuremath{\mathcal{G}}\xspace$ is embedded on~$\ensuremath{\mathcal{P}}\xspace$, each edge of the graph crosses the spine at most once. (Note that the algorithm of Kaufmann and Wiese has been used in the past for the simultaneous drawing problem; see~\cite{ek-sefb-JGAA05}.) We subdivide all edges of~$\ensuremath{\mathcal{G}}\xspace_i$ that cross the spine by introducing a single \emph{dummy vertex} for each such edge. Let~$\xi_i$ be such an embedding of~$\ensuremath{\mathcal{G}}\xspace_i$ and denote by~$E_i^\text{A}$ and~$E_i^\text{B}$ the edges that are drawn completely above and below the spine in~$\xi_i$, respectively. Also, let~$\ensuremath{\mathcal{G}}\xspace_i'=(V_i',E_i')=(V \cup V_i, E_i^\text{A} \cup E_i^\text{B})$ be the resulting graph, where~$V_i$ contains the dummy vertices. We denote by~$\chi_i:V_i'\rightarrow\{1,\dots,|V_i'|\}$ the linear order of the vertices of~$\ensuremath{\mathcal{G}}\xspace_i'$ along the spine in~$\xi_i$, $i=1,2$. Let~$v_1,\dots,v_{|V_1'|}\in V_1'$ be the vertices with~$\chi_1(v_i)=i$. We place~$v_1$ in the first column. Between two consecutive vertices~$v_i$ and~$v_{i+1}$, we reserve several columns for bends of edges incident to~$v_i$ and~$v_{i+1}$, in the following order. \begin{compactenum}[(i)] \item One column for the edges $(v_i,\cdot)\in E_2^\text{A}$, if any exist. \item One column for every edge~$(v_i,v_j)\in E_1'$ with~$j>i$. \item One column for every edge~$(v_k,v_{i+1})\in E_1'$ with~$k<i+1$. \item One column for the edges~$(v_{i+1},\cdot)\in E_2^\text{B}$, if any exist. \end{compactenum} Note that, for (ii) and (iii), we can save some columns because an edge in~$E^\text{A}_1$ and an edge in~$E^\text{B}_1$ can use the same column for their bend. This procedure gives us the $x$-coordinates of the vertices. Analogously, we can get the $y$-coordinates of the vertices by rotating the drawing by~$90^\circ$ and applying the procedure to the order~$\chi_2$. Let~$R$ be the smallest rectangle enclosing all vertices.
We draw~$\ensuremath{\mathcal{G}}\xspace_1'$ and~$\ensuremath{\mathcal{G}}\xspace_2'$ with at most four bends per edge such that all edge segments of~$\ensuremath{\mathcal{G}}\xspace_1'$ in~$R$ are either vertical or of $y$-length exactly 1, and all edge segments of~$\ensuremath{\mathcal{G}}\xspace_2'$ in~$R$ are either horizontal or of $x$-length exactly 1; see Fig.~\ref{fig:planar}. First, we draw the edges~$(v_i,v_j)\in E^\text{A}_1$ with~$i<j$. We draw the edges in a nested order: When we place the edge~$(v_i,v_j)$, every edge~$(v_k,v_l)\in E^\text{A}_1$ with~$k\le i$ and~$l\ge j$ has already been drawn. Recall that the first column to the right and the first column to the left of every vertex are reserved for the edges in~$E_1$. We draw $(v_i,v_j)$ with at most~4 bends as follows. We start with a slanted segment that has its endpoint in the row above~$v_i$, and in the first unused column that does not lie to the left of~$v_i$. We follow with a vertical segment to the top that leaves~$R$. We add a horizontal segment above~$R$. In the last unused column that does not lie to the right of~$v_j$, we add a vertical segment that ends one row above~$v_j$. We close the edge with a slanted segment that has its endpoint at~$v_j$. We draw the edges in~$E^\text{B}_1$ symmetrically with the horizontal segment below~$R$. Note that this algorithm always uses the top and the bottom port of a vertex~$v$, if there is at least one edge~$(v,\cdot)$ in~$E^\text{A}_1$ and~$E^\text{B}_1$, respectively. Every dummy vertex~$t$ has exactly one edge~$(t,\cdot)$ in~$E^\text{A}_1$ and~$E^\text{B}_1$, respectively. Thus, the edges incident to~$t$ only use the top and the bottom port. We create a drawing of~$\ensuremath{\mathcal{G}}\xspace_1$ with at most~6 bends per edge by removing the dummy vertices from the drawing. In the same way, we create a drawing of~$\ensuremath{\mathcal{G}}\xspace_2$ with at most~6 bends per edge. We will now show that the drawing obtained by combining the drawing of~$\ensuremath{\mathcal{G}}\xspace_1$ and~$\ensuremath{\mathcal{G}}\xspace_2$ yields a \textsc{RacSim}\xspace drawing. By construction, all segments of~$E_1$ inside~$R$ are either vertical segments or slanted segments of $x$-length at least~2 and $y$-length exactly~1. All segments of~$E_2$ inside~$R$ are either horizontal segments or slanted segments of $x$-length exactly~1 and $y$-length at least~2. Thus, the slanted segments cannot overlap. Further, all crossings inside~$R$ occur between a horizontal and a vertical segment, and thus form right angles. Also, there are no segments in~$E_1$ that lie to the left or to the right of~$R$, and there are no segments in~$E_2$ that lie above or below~$R$. Hence, there are no crossings outside of~$R$ and the drawing is a \textsc{RacSim}\xspace drawing. We will now count the columns used by the drawing. For every vertex in~$V$ except the left-most and the right-most, we reserve two additional columns for the edges in~$E_2$; for the remaining two, we only have to reserve one additional column. For every edge in~$E_1$, we need up to~3 columns: One for each endpoint of the slanted segment at each vertex, and one for the vertical segment that crosses the spine, if it exists. Note that at least one edge per vertex does not need a slanted segment. For every edge in~$E_2$, we need up to~1 column for the vertical segment to the side of~$R$. Since there are at most~$3n-6$ edges, our drawing needs~$3n-2+3(3n-6)-n+3n-6=14n-26$ columns.
Analogously, we can show that the algorithm needs~$14n-26$ rows, and thus draws the graphs on a $(14n-26)\times(14n-26)$-grid. Since the algorithm of Kaufmann and Wiese runs in~$O(n)$ time, our algorithm also runs in~$O(n)$ total time. \end{pf} We can improve the results of Theorem~\ref{thm:planar} for 2-page book embeddable graphs. In a 2-page book embedding, there are no edges that cross the spine. Since these edges are the only ones that need six bends, we can reduce the number of bends per edge to four. Further, the numbers of columns and rows are each reduced by one per edge. This yields the following corollary. \begin{corollary}\label{cor:twopage} Two 2-page book embeddable graphs on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(11n-32) \times (11n-32)$ with four bends per edge. \end{corollary} \begin{theorem}\label{thm:outerplanar} Two outerplanar graphs on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(7n-10) \times (7n-10)$ with three bends per edge. \end{theorem} \begin{pf} It follows by Nash-Williams'~\cite{nw-dfgf-JLMS64} formula that every outerplanar graph has arboricity~2, that is, it can be decomposed into two forests. Let~$\ensuremath{\mathcal{O}}\xspace_1=(V,E_1)$ and~$\ensuremath{\mathcal{O}}\xspace_2=(V,E_2)$ be two outerplanar graphs. We draw each graph as a 2-page book embedding with each forest on one page. We create a 1-page book embedding for~$\ensuremath{\mathcal{O}}\xspace_1$ and for~$\ensuremath{\mathcal{O}}\xspace_2$; these give us the order of the $x$-coordinates and of the $y$-coordinates, respectively. It follows by Corollary~\ref{cor:twopage} that, by using the algorithm described in the proof of Theorem~\ref{thm:planar}, we create a \textsc{RacSim}\xspace drawing with at most four bends per edge. We will now show how to adjust the algorithm to reduce the number of bends by one. Let~$E_1^A$ and~$E_1^B$ be the two forests~$\ensuremath{\mathcal{O}}\xspace_1$ is decomposed into. We will draw the edges of~$E_1^A$ above the spine and the edges~$E_1^B$ below the spine. By rooting each tree in~$E_1^A$, we can direct the edges such that every vertex has at most one incoming edge. Recall that, in the drawing produced in Theorem~\ref{thm:planar}, one edge per vertex can use the top port. We adjust the algorithm such that every directed edge~$(v,w)$ enters the vertex~$w$ from the top port. Thus, we draw the edge as follows. We start with a slanted segment of $y$-length exactly~1. We follow with a vertical segment to the top. We proceed with a horizontal segment that ends directly above~$w$ and finish the edge with a vertical segment that enters~$w$ from the top port. We use the same approach for the edges in~$E_1^B$. The second outerplanar graph~$\ensuremath{\mathcal{O}}\xspace_2$ is drawn analogously. Since every port of a vertex is used at most once, the drawing has no overlaps. We now analyze the number of columns used. For every vertex but the left-most and right-most, we again reserve two additional columns for the edges in~$E_2$; for the remaining two vertices, we reserve one additional column. However, the edges in~$E_1$ now only need one column for the bend of the single slanted segment. For every edge in~$E_2$, we need up to~1 column for the vertical segment to the side of~$R$. Since there are at most~$2n-4$ edges, our drawing needs~$3n-2+2n-4+2n-4=7n-10$ columns. Analogously, we can show that the algorithm needs~$7n-10$ rows.
\end{pf} \section{\textsc{RacSim}\xspace Drawings with one bend per edge} \label{sec:onebend} In this section, we study simple classes of planar graphs and show how to efficiently construct \textsc{RacSim}\xspace drawings with one bend per edge in quadratic area. In particular, we prove that two cycles or four matchings (i.e., two classes of graphs of exactly the same size) on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $2n \times 2n$; see~Theorems~\ref{thm:cycle} and \ref{thm:fourmatch}, respectively. If the input to our problem is a caterpillar and a cycle, then a \textsc{RacSim}\xspace drawing with one bend per edge is also possible on an integer grid of size $(2n-1) \times 2n$; see Theorem~\ref{thm:cater}. For a tree and a matching, we can construct a \textsc{RacSim}\xspace drawing with one bend per tree-edge, and no bends in the edges of the matching, on an integer grid of size $n \times (n-1)$; see Theorem~\ref{thm:treematch}. \begin{lemma}\label{lem:path} Two paths on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(2n-1) \times (2n-1)$ with at most one bend per edge. The drawing can be computed in $O(n)$ time. \end{lemma} \begin{pf} Let~$\ensuremath{\mathcal{P}}\xspace_1=(V,E_1)$ and~$\ensuremath{\mathcal{P}}\xspace_2=(V,E_2)$ be the two input paths. Following standard practices from the literature~(see e.g., Brass et al.~\cite{bcdeeiklm-spge-CG07}), we draw~$\ensuremath{\mathcal{P}}\xspace_1$ $x$-monotone and~$\ensuremath{\mathcal{P}}\xspace_2$ $y$-monotone. This ensures that each of the two paths will eventually be drawn planar. Next, we describe how to compute the exact coordinates of the vertices (and how to route the edges) of $\ensuremath{\mathcal{P}}\xspace_1$ and $\ensuremath{\mathcal{P}}\xspace_2$, such that all (potential) crossings are at right angles and, more importantly, there are no edge-segment overlaps. More precisely, we denote by $p_i:V \rightarrow \{1,2,\ldots,n\}$ the function which maps a vertex of path $\ensuremath{\mathcal{P}}\xspace_i$ to its position in $\ensuremath{\mathcal{P}}\xspace_i$, $i=1,2$. Then, a vertex $v \in V$ is drawn at point $(2p_1(v)-1,2p_2(v)-1)$; see Fig.~\ref{fig:paths}. An edge $(v,v')\in E_1$, with $v$ to the left of $v'$, has a single bend either at point $(x(v),y(v')-1)$, if $y(v)<y(v')$, or at point $(x(v),y(v')+1)$, otherwise. On the other hand, an edge $(v,v')\in E_2$, with $v$ above $v'$, also has a single bend either at point $(x(v)+1,y(v'))$, if $x(v)<x(v')$, or at point $(x(v)-1,y(v'))$, otherwise. \begin{figure}[tb]% \hfill \begin{subfigure}{.44\textwidth} \centering \includegraphics{paths}% \caption{Two paths:~$\ensuremath{\mathcal{P}}\xspace_1$ (solid) and~$\ensuremath{\mathcal{P}}\xspace_2$ (dashed)} \label{fig:paths} \end{subfigure} \hfill \begin{subfigure}{.44\textwidth} \centering \includegraphics{cycles}% \caption{Two cycles:~$\ensuremath{\mathcal{C}}\xspace_1$ (solid) and~$\ensuremath{\mathcal{C}}\xspace_2$ (dashed)} \label{fig:cycles}% \end{subfigure} \hfill \caption{\textsc{RacSim}\xspace drawings with one bend per edge of: (a)~two paths and (b)~two cycles.} \end{figure} Clearly, the area required by the drawing is $(2n-1) \times (2n-1)$. From left to right, the edges of $\ensuremath{\mathcal{P}}\xspace_1$ leave the vertices vertically and enter them ``diagonally''. Similarly, from bottom to top, the edges of $\ensuremath{\mathcal{P}}\xspace_2$ leave the vertices horizontally and enter them ``diagonally''.
In addition, a non-rectilinear edge-segment is drawn between two consecutive (horizontal or vertical) grid lines. Hence, it cannot be involved in crossings or overlaps. Since $\ensuremath{\mathcal{P}}\xspace_1$ and $\ensuremath{\mathcal{P}}\xspace_2$ are $x$- and $y$-monotone, respectively, it follows that all (potential) crossings must involve a vertical edge-segment of $\ensuremath{\mathcal{P}}\xspace_1$ and a horizontal edge-segment of $\ensuremath{\mathcal{P}}\xspace_2$, which clearly yields right angles at the crossing points. \end{pf} We say that an edge uses the bottom (left/right/top, resp.) \emph{port} of a vertex if it enters the vertex from the bottom (left/right/top, resp.). \begin{theorem}\label{thm:cycle} Two cycles on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $2n \times 2n$ with at most one bend per edge. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{pf} Let~$\ensuremath{\mathcal{C}}\xspace_1=(V,E_1)$ and~$\ensuremath{\mathcal{C}}\xspace_2=(V,E_2)$ be the two input cycles and let $v \in V$ be an arbitrary vertex. We temporarily delete one edge incident to vertex $v$ from each of the two cycles; say~$(v,w_1) \in E_1$ from~$\ensuremath{\mathcal{C}}\xspace_1$ and~$(v,w_2) \in E_2$ from~$\ensuremath{\mathcal{C}}\xspace_2$ (refer to the bold-drawn edges of Figure~\ref{fig:cycles}). This results in two paths, say $\ensuremath{\mathcal{P}}\xspace_1$ and~$\ensuremath{\mathcal{P}}\xspace_2$, with endpoints $v$ and $w_1$, and, $v$ and $w_2$, respectively. We employ the algorithm supporting Lemma~\ref{lem:path} to construct a \textsc{RacSim}\xspace drawing of $\ensuremath{\mathcal{P}}\xspace_1$ and~$\ensuremath{\mathcal{P}}\xspace_2$ on an integer grid of size $(2n-1) \times (2n-1)$. In the resulting drawing, vertex $v$ is placed at the bottom-left corner of the bounding box containing the drawing,~$w_1$ along its right side, and~$w_2$ along its top side. By construction, the bottom port of vertex $w_1$ and the left port of vertex $w_2$ are both unoccupied. Hence, the edges $(v,w_1)$ and $(v,w_2)$ that complete $\ensuremath{\mathcal{C}}\xspace_1$ and~$\ensuremath{\mathcal{C}}\xspace_2$ can be drawn with a single bend each at points $(2n-1,0)$ and $(0,2n-1)$, respectively; see Figure~\ref{fig:cycles}. Since these two edges ``surround'' the remaining drawing, neither of them is involved in crossings, while the total area of the drawing grows by a single unit in each dimension. \end{pf} \begin{theorem}\label{thm:cater} A caterpillar and a cycle on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(2n-1) \times 2n$ with at most one bend per edge. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{pf} We denote by $\ensuremath{\mathcal{A}}\xspace=(V,E_\ensuremath{\mathcal{A}}\xspace)$ and $\ensuremath{\mathcal{C}}\xspace=(V,E_\ensuremath{\mathcal{C}}\xspace)$ the caterpillar and the cycle, respectively. Starting from a spine vertex that is an endpoint of the spine of $\ensuremath{\mathcal{A}}\xspace$, we perform a BFS traversal on $\ensuremath{\mathcal{A}}\xspace$ assuming that we first visit all leg vertices incident to a spine vertex before visiting its neighboring vertex along the spine. Assume, without loss of generality, that $V=\{v_1,v_2,\ldots,v_n\}$ is the ordered vertex set implied by the aforementioned traversal of $\ensuremath{\mathcal{A}}\xspace$; see Figure~\ref{fig:catercycle}.
As in the proof of Theorem~\ref{thm:cycle}, we temporarily delete an edge of $\ensuremath{\mathcal{C}}\xspace$ (incident to the first vertex of $V$; refer to the bold-drawn edge of Figure~\ref{fig:catercycle}) and obtain a path, say $\ensuremath{\mathcal{P}}\xspace=(V,E_\ensuremath{\mathcal{P}}\xspace)$. Let $p:V \rightarrow \{1,2,\ldots,n\}$ be a function which maps a vertex of path $\ensuremath{\mathcal{P}}\xspace$ to its position in $\ensuremath{\mathcal{P}}\xspace$. For $i=1,2,\ldots,n$, we draw vertex $v_i$ at point $(2i-1,2p(v_i)-1)$. An edge $(v,v')\in E_\ensuremath{\mathcal{P}}\xspace$, with $v$ above $v'$, has a single bend either at point $(x(v)+1,y(v'))$, if $x(v)<x(v')$, or at point $(x(v)-1,y(v'))$, otherwise. On the other hand, an edge $(v,v')\in E_\ensuremath{\mathcal{A}}\xspace$, with $v$ to the left of $v'$, also has a single bend either at point $(x(v'),y(v)+1)$, if $y(v)<y(v')$, or at point $(x(v'),y(v)-1)$, otherwise. \begin{figure}[tb]% \hfill \begin{subfigure}{.49\textwidth} \centering \includegraphics{catercycle}% \caption{} \label{fig:catercycle} \end{subfigure} \hfill \begin{subfigure}{.49\textwidth} \centering \includegraphics{fourmatchings}% \caption{} \label{fig:fourmatchings}% \end{subfigure} \hfill \caption{\textsc{RacSim}\xspace drawings with one bend per edge of: % (a)~a caterpillar~$\ensuremath{\mathcal{A}}\xspace$ (solid; its spine is drawn bold) and a cycle $\ensuremath{\mathcal{C}}\xspace$ (dashed); % (b)~four matchings~$\ensuremath{\mathcal{M}}\xspace_1$ (solid-plain), $\ensuremath{\mathcal{M}}\xspace_2$ (solid-bold), $\ensuremath{\mathcal{M}}\xspace_3$ (dashed-plain) and~$\ensuremath{\mathcal{M}}\xspace_4$ (dashed-bold).} \end{figure} The approach described above ensures that $\ensuremath{\mathcal{P}}\xspace$ is drawn $y$-monotone, hence planar. The spine of $\ensuremath{\mathcal{A}}\xspace$ is drawn $x$-monotone. The legs of a spine vertex of $\ensuremath{\mathcal{A}}\xspace$ are drawn ``to the right'' of their parent spine vertex and ``to the left'' of the next vertex along the spine. Hence, $\ensuremath{\mathcal{A}}\xspace$ is drawn planar as well. The non-rectilinear edge-segments of $\ensuremath{\mathcal{A}}\xspace$ are of $y$-length one, while the non-rectilinear edge-segments of $\ensuremath{\mathcal{P}}\xspace$ are of $x$-length one. Thus, they cannot be involved in crossings, which implies that all (potential) crossings occur between axis-parallel segments and hence form right angles. Now, it remains to describe how to draw the edge that we removed in order to transform cycle $\ensuremath{\mathcal{C}}\xspace$ to path $\ensuremath{\mathcal{P}}\xspace$. By construction, this edge connects vertex $v_1$ (which is drawn at the bottom-left corner of the bounding box containing the drawing) with the vertex drawn at the top side of the bounding box containing the drawing. As the top port of $v_1$ is unoccupied, if this edge bends at $(1,2n)$, then it is not involved in crossings; see Figure~\ref{fig:catercycle}. The total area required by the drawing is $(2n-1) \times 2n$. \end{pf} \begin{theorem}\label{thm:fourmatch} Four matchings on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $2n \times 2n$ with at most one bend per edge. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{pf} Let $\ensuremath{\mathcal{M}}\xspace_1=(V,E_1)$, $\ensuremath{\mathcal{M}}\xspace_2=(V,E_2)$, $\ensuremath{\mathcal{M}}\xspace_3=(V,E_3)$ and~$\ensuremath{\mathcal{M}}\xspace_4=(V,E_4)$ be the input matchings.
Let also $\ensuremath{\mathcal{M}}\xspace_{1,2}=(V,E_1 \cup E_2)$ and~$\ensuremath{\mathcal{M}}\xspace_{3,4}=(V,E_3 \cup E_4)$. Since $\ensuremath{\mathcal{M}}\xspace_1$ and $\ensuremath{\mathcal{M}}\xspace_2$ are defined on the same vertex set, $\ensuremath{\mathcal{M}}\xspace_{1,2}$ is a $2$-regular graph. Thus, each connected component of $\ensuremath{\mathcal{M}}\xspace_{1,2}$ corresponds to a cycle of even length which alternates between edges of $\ensuremath{\mathcal{M}}\xspace_1$ and $\ensuremath{\mathcal{M}}\xspace_2$; see Figure~\ref{fig:fourmatchings}. The same holds for $\ensuremath{\mathcal{M}}\xspace_{3,4}$. W.l.o.g. we further assume that $\ensuremath{\mathcal{M}}\xspace_{1,2} \cup \ensuremath{\mathcal{M}}\xspace_{3,4}$ is a connected graph. We seek to draw~$\ensuremath{\mathcal{M}}\xspace_{1,2}$ $x$-monotone and~$\ensuremath{\mathcal{M}}\xspace_{3,4}$ $y$-monotone. We start with choosing an arbitrary vertex~$v \in V$. Let~$\ensuremath{\mathcal{C}}\xspace$ be the cycle of $\ensuremath{\mathcal{M}}\xspace_{1,2}$ containing vertex~$v$. We determine the $x$-coordinates of the vertices of~$\ensuremath{\mathcal{C}}\xspace$ by traversing cycle $\ensuremath{\mathcal{C}}\xspace$ in some particular direction (starting from vertex $v$) and assigning to each vertex of $\ensuremath{\mathcal{C}}\xspace$ its discovery time as the $x$-coordinate. Next, we determine the $y$-coordinates of the vertices of all cycles, say $\ensuremath{\mathcal{C}}\xspace_1, \ensuremath{\mathcal{C}}\xspace_2, \ldots \ensuremath{\mathcal{C}}\xspace_k$, of~$\ensuremath{\mathcal{M}}\xspace_{3,4}$ that have at least one vertex with a determined $x$-coordinate. We order these cycles as follows: For a pair of cycles $\ensuremath{\mathcal{C}}\xspace_i$ and $\ensuremath{\mathcal{C}}\xspace_j$, $\ensuremath{\mathcal{C}}\xspace_i$ precedes $\ensuremath{\mathcal{C}}\xspace_j$ if and only if $\ensuremath{\mathcal{C}}\xspace_i$ contains a vertex (which we call \emph{anchor} vertex of $\ensuremath{\mathcal{C}}\xspace_i$) with a determined $x$-coordinate strictly smaller than the corresponding ones of all vertices of~$\ensuremath{\mathcal{C}}\xspace_j$, $i,j \in \{1,2,\ldots, k\}$. Assume w.l.o.g. that $\ensuremath{\mathcal{C}}\xspace_1 \rightarrow \ensuremath{\mathcal{C}}\xspace_2 \rightarrow \ldots \rightarrow \ensuremath{\mathcal{C}}\xspace_k$ is the computed order. In what follows, we start with the first cycle $\ensuremath{\mathcal{C}}\xspace_1$ of the computed order and determine the $y$-coordinates of its vertices. To do so, we traverse $\ensuremath{\mathcal{C}}\xspace_1$ in some particular direction (starting from its anchor vertex) and assign to each vertex of $\ensuremath{\mathcal{C}}\xspace_1$ the discovery time as $y$-coordinate. We proceed similarly with the remaining cycles, assuming that $\ensuremath{\mathcal{C}}\xspace_{i+1}$ is placed directly above $\ensuremath{\mathcal{C}}\xspace_i$, $i=1,2,\dots,k-1$. Now, there are no vertices with a determined $x$-coordinate but without a determined $y$-coordinate. However, the other way around is possible, i.e., there might exist vertices with a determined $y$-coordinate, but without a determined $x$-coordinate. If this is the case, we repeat the aforementioned procedure to determine the $x$-coordinates of the vertices of all cycles of~$\ensuremath{\mathcal{M}}\xspace_{1,2} \setminus \ensuremath{\mathcal{C}}\xspace$ that have at least one vertex with a determined $y$-coordinate.
Since at each step of our algorithm the number of vertices that have an undetermined $x$- or $y$-coordinate is reduced by the size of the cycles participating in this step and $\ensuremath{\mathcal{M}}\xspace_{1,2} \cup \ensuremath{\mathcal{M}}\xspace_{3,4}$ is a connected graph, our algorithm guarantees that all vertices will be assigned both an $x$- and a $y$-coordinate. Next, we describe how to route the edges of cycles in~$\ensuremath{\mathcal{M}}\xspace_{1,2}$ and $\ensuremath{\mathcal{M}}\xspace_{3,4}$, respectively. We draw each edge of~$\ensuremath{\mathcal{M}}\xspace_{1,2}$ by using a vertical edge-segment at its left end-vertex, and a slightly slanted horizontal segment at its right end-vertex; see Figure~\ref{fig:fourmatchings}. Note that the slanted edge-segments are all of $x$-length~1, so they cannot be intersected by vertical edge-segments. In addition, the last edge of each cycle (referred to as \emph{closing edge}) is drawn with a vertical edge-segment incident to its right end-point directed downwards below all vertices of the cycle, and a slightly slanted horizontal segment to its left end-vertex. Similarly, we draw the edges of~$\ensuremath{\mathcal{M}}\xspace_{3,4}$; see Figure~\ref{fig:fourmatchings} for an illustration. Our choice of coordinates guarantees that the $x$-coordinates of the cycles of~$\ensuremath{\mathcal{M}}\xspace_{1,2}$ and the $y$-coordinates of the cycles of~$\ensuremath{\mathcal{M}}\xspace_{3,4}$ form disjoint intervals. Thus, the area below a cycle of~$\ensuremath{\mathcal{M}}\xspace_{1,2}$ and the area to the left of a cycle of~$\ensuremath{\mathcal{M}}\xspace_{3,4}$ are free from vertices. Hence, the slanted segments of the closing edges cannot be involved in a crossing that violates the RAC restriction; see Fig.~\ref{fig:fourmatchings}. \end{pf} \begin{figure}[tb]% \begin{minipage}{.49\textwidth} \centering \includegraphics{treematching2}% \end{minipage} \hfill \begin{minipage}{.49\textwidth} \vspace{3.7em} \centering \includegraphics{wheelmatching}% \end{minipage} \begin{minipage}{.47\textwidth} \centering \caption{A \textsc{RacSim}\xspace drawing of a tree~(solid) and a matching (dashed)} \label{fig:treematching} \end{minipage} \hfill \begin{minipage}{.48\textwidth} \centering \caption{A \textsc{RacSim}\xspace drawing of a wheel (solid; its rim is drawn bold) and a matching (dashed)} \label{fig:wheelmatching}% \end{minipage} \end{figure} \begin{theorem}\label{thm:treematch} A tree and a matching on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size~$n \times (n-1)$ with one bend per tree-edge, and no bends in the edges of the matching. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{sketchofproof} We inductively place each matching edge in one row. In every step, we decide whether to add the next matching edge to the stack at the top or at the bottom. We determine the $x$-coordinates by using a specific post-order of the tree. An illustration is given in Fig.~\ref{fig:treematching}; a detailed proof is given in the appendix. \qed \end{sketchofproof} \section{\textsc{RacSim}\xspace Drawings with two bends per edge} \label{sec:twobends} In this section, we study more complex classes of planar graphs and show how to efficiently construct \textsc{RacSim}\xspace drawings with two bends per edge in quadratic area.
In particular, we prove that a wheel and a matching on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(1.5n-1) \times (n+2)$ with two bends per edge and no bends, respectively; see~Theorem~\ref{thm:wheelmatch}. If the input to our problem is an outerpath and a matching, then a \textsc{RacSim}\xspace drawing with two bends per edge and one bend, respectively, is also possible on an integer grid of size $(3n-2) \times (3n-2)$; see Theorem~\ref{thm:outpmatch}. \begin{theorem} A wheel and a matching on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(1.5n-1) \times (n+2)$ with two bends per edge and no bends, respectively. The drawing can be computed in $O(n)$ time. \label{thm:wheelmatch} \end{theorem} \begin{pf} We denote the wheel by $\ensuremath{\mathcal{W}}\xspace$ and the matching by $\ensuremath{\mathcal{M}}\xspace$. Let the common vertex set of $\ensuremath{\mathcal{W}}\xspace$ and $\ensuremath{\mathcal{M}}\xspace$ be $V = \{v_1, v_2, \ldots, v_n \}$, where $n \geq 4$. If $v_1$ is the center of $\ensuremath{\mathcal{W}}\xspace$ and $\ensuremath{\mathcal{C}}\xspace_w:~v_2 \rightarrow v_3 \rightarrow \ldots \rightarrow v_{n} \rightarrow v_2$ is the rim of $\ensuremath{\mathcal{W}}\xspace$, then $E(\ensuremath{\mathcal{W}}\xspace) = \{(v_i, v_{i+1});~i = 1,\ldots,n-1\} \cup \{(v_n, v_2)\} \cup \{(v_1, v_i);~i = 2,\ldots,n\}$. Let~$\ensuremath{\mathcal{M}}\xspace'$ be the matching~$\ensuremath{\mathcal{M}}\xspace$ without the edge incident to~$v_1$. We first compute the $x$-coordinates of the vertices of $\ensuremath{\mathcal{W}}\xspace \cup \ensuremath{\mathcal{M}}\xspace'$, such that $\ensuremath{\mathcal{C}}\xspace_w-\{(v_n,v_2)\}$ is $x$-monotone (if drawn with straight-line edges). More precisely, for $i=2,\ldots, n$ we set $x(v_i)=2i-3$. The $y$-coordinates of the vertices of $\ensuremath{\mathcal{W}}\xspace \cup \ensuremath{\mathcal{M}}\xspace'$ are computed based on matching $\ensuremath{\mathcal{M}}\xspace'$. W.l.o.g., we assume that the first edge of $\ensuremath{\mathcal{M}}\xspace'$ is the one incident to $v_2$; the remaining ones follow in some order. Let~$k<n/2$ be the number of matching edges in~$\ensuremath{\mathcal{M}}\xspace'$. Then, the end-points of the $i$-th edge of $\ensuremath{\mathcal{M}}\xspace'$ have $y$-coordinate $2i-1$, $i=1,2,\ldots,k$. Next, we assign the $y$-coordinate~$2k+1$ to the rim vertices that have no matching edge. Finally, the center~$v_1$ of~$\ensuremath{\mathcal{W}}\xspace$ is located at point~$(1,2k+3)$. We now have to describe where each edge of~$\ensuremath{\mathcal{W}}\xspace$ bends, as~$\ensuremath{\mathcal{M}}\xspace'$ is drawn bendless. We draw the spikes $(v_1,v_i),i=2,\ldots,n$ of $\ensuremath{\mathcal{W}}\xspace$ with exactly one bend at point $(x(v_i),2k+2)$. Since vertex $v_2$ is left-most in the constructed drawing, we can save the bend of spike $(v_1,v_2)$. We draw the edges $(v_i,v_{i+1})$ of~$\ensuremath{\mathcal{C}}\xspace_w-\{(v_n,v_2)\}$ from left to right. If~$y(v_{i+1})>y(v_i)$, we draw the edge with a single bend at~$(x(v_{i+1}),y(v_{i})+1)$. If~$y(v_{i-1})>y(v_i)>y(v_{i+1})$, we draw the edge with a single bend at the point $(x(v_{i+1}),y(v_{i})-1)$. If~$y(v_i)>y(v_{i-1}),y(v_{i+1})$, the bottom port at~$v_i$ is already used. Thus, we draw the edge with two bends at the point~$(x(v_{i+1}),y(v_{i})-1)$ and the point~$(x(v_{i+1}),y(v_{i+1})+1)$.
Finally, we delete the unused columns; see Figure~\ref{fig:wheelmatching}. Our approach ensures that $\ensuremath{\mathcal{C}}\xspace_w-\{(v_n,v_2)\}$ is drawn (non-strictly) $x$-monotone, hence planar. The last edge $(v_n,v_2)$ of $\ensuremath{\mathcal{C}}\xspace_w$ ``surrounds'' the drawing; so, it is crossing-free. Further, the spikes are not involved in crossings with the rim of $\ensuremath{\mathcal{W}}\xspace$, as they are drawn in its exterior. So, $\ensuremath{\mathcal{W}}\xspace$ is drawn planar, as desired. On the other hand, all edges of $\ensuremath{\mathcal{M}}\xspace'$ are drawn as horizontal, non-overlapping line-segments. So, $\ensuremath{\mathcal{M}}\xspace'$ is planar as well. The non-rectilinear edge-segments of $\ensuremath{\mathcal{W}}\xspace-\{(v_n,v_2)\}$ are of $y$-length one. So, they cannot be crossed by the edges of $\ensuremath{\mathcal{M}}\xspace'$. As the edge $(v_n,v_2)$ is not involved in crossings, it follows that all (potential) crossings between $\ensuremath{\mathcal{W}}\xspace$ and $\ensuremath{\mathcal{M}}\xspace'$ form right angles. Finally, we have to insert the matching edge~$(v_1,v_i)\in\ensuremath{\mathcal{M}}\xspace\setminus\ensuremath{\mathcal{M}}\xspace'$. Since~$v_i$ is not incident to a matching edge in~$\ensuremath{\mathcal{M}}\xspace'$, it is placed above all matching edges. Then~$(v_1,v_i)\in\ensuremath{\mathcal{W}}\xspace$ does not cross a matching edge, so we can use this edge as a double edge. We will now prove the area bound of the drawing algorithm. First, we count the rows used. Since we remove the matching edge incident to~$v_1$, the matching~$\ensuremath{\mathcal{M}}\xspace'$ has~$k\le n/2-1<n/2$ matching edges. We place the bottommost vertex in row~$1$ and the topmost vertex, that is vertex~$v_1$, in row~$2k+3$. We add one extra bend in row~$0$ for the edge~$(v_n,v_2)$. Thus, our drawing uses $2k+3+1\le n+2$ rows. Next, we count the columns used. The vertices~$v_2,\dots,v_n$ are each placed in their own column. Every spike has exactly one bend in the column of a vertex. An edge~$(v_i,v_{i+1})$ of the rim of~$\ensuremath{\mathcal{W}}\xspace$ has exactly one bend in a vertex column, except for the case where~$y(v_i)>y(v_{i-1}),y(v_{i+1})$, in which it needs an extra bend between~$v_i$ and~$v_{i+1}$, $i = 1,\ldots,n-1$. Clearly, there can be at most~$n/2-1$ vertices satisfying this condition. Since the edge~$(v_n,v_2)$ uses an extra column to the right of~$v_n$, our drawing uses $(n-1)+(n/2-1)+1=1.5n-1$ columns. \end{pf} \begin{theorem}\label{thm:outpmatch} An outerpath and a matching on a common set of $n$ vertices admit a \textsc{RacSim}\xspace drawing on an integer grid of size $(3n-2) \times (3n-2)$ with two bends per edge and one bend, respectively. The drawing can be computed in $O(n)$ time. \end{theorem} \begin{sketchofproof} Augment the outerpath to a maximal outerpath. Removing its outer cycle leaves a caterpillar, which determines the $x$-coordinates of the vertices. The $y$-coordinates are computed so that the matching is planar. An illustration is given in Fig.~\ref{fig:outerpathmatching}; a detailed proof is given in the appendix. \qed \end{sketchofproof} \section{Conclusions and Open Problems} In this paper, we have studied RAC simultaneous drawings with few bends per edge. We proved that two planar graphs always admit a RAC simultaneous drawing with at most six bends per edge. For more restricted classes of graphs, we drastically improved the number of bends per edge. All of these drawings are within quadratic area.
The results presented in this paper raise several questions that remain open, such as the following. \begin{compactenum} \item Is it possible to reduce the number of bends per edge for the classes of graphs that we presented in this paper? \item What additional non-trivial classes of graphs admit a \textsc{RacSim}\xspace drawing with fewer bends than in the general case? \item As a variant of the problem, it might be possible to reduce the required number of bends per edge by relaxing the strict constraint that intersections must occur at right angles and instead asking for drawings that have close-to-optimal crossing resolution. \item The computational complexity of the general problem remains open: Given two or more planar graphs on the same set of vertices and an integer $k$, is there a \textsc{RacSim}\xspace drawing in which each graph is drawn with at most $k$ bends per edge and the crossings are at right angles? \end{compactenum} \begin{small} \bibliographystyle{plain}
\section{Introduction} \indent\indent The recent discovery of the top quark \cite{TOP} has focused attention on top-quark physics. With the advent of accelerators able to produce copious numbers of top quarks, a comparison of the top quark's observed properties with those predicted by the Standard Model promises to be an important test of the model and may well provide insight into exciting new physics. In this paper we calculate the next-to-leading-order cross section for the weak process $q \bar{q} \to t \bar{b}$, which produces a single top quark via a virtual $s$-channel $W$ boson (Fig.~\ref{tree graph}) \cite{CPet,SW}. The most important corrections to the ${\sl O}(\alpha_W^2)$ leading-order cross section are the QCD correction of ${\sl O}(\alpha_s)$ and the Yukawa correction of ${\sl O}(\alpha_W m_t^2/M_W^2)$. The Yukawa correction, which arises from loops of Higgs bosons and the scalar components of virtual vector bosons, dominates the ordinary ${\sl O}(\alpha_W)$ electroweak correction in the large $m_t$ limit. For the known value of the top-quark mass, $m_t=175 \pm 6$ GeV, the Yukawa correction is expected to be at least as large as the ordinary electroweak correction. \begin{figure} \centerline{\psfig{figure=qq-tb_tree.ps,width=3in}} \caption{\footnotesize Single-top-quark production via $q \bar q \to t \bar b$.} \label{tree graph} \end{figure} A precise theoretical calculation of the cross section for $q \bar q \to t \bar b$ is necessary for a number of reasons. The cross section obviously determines the yield of single top quarks produced via this process. More importantly, the coupling of the top quark to the $W$ boson in $q \bar q \to t \bar b$ is proportional to the Cabibbo-Kobayashi-Maskawa (CKM) matrix element $V_{tb}$, one of the few Standard Model parameters not yet measured experimentally. If there are only three generations, unitarity of the CKM matrix implies that $|V_{tb}|$ must be very close to unity ($.9988 < |V_{tb}| < .9995$) \cite{PDB}. However, if there is a fourth generation, $|V_{tb}|$ could be anything between (almost) zero and unity, depending on the amount of mixing between the third and fourth generations. Measurement of the $q \bar q \to t \bar b$ cross section, coupled with an accurate theoretical calculation, may provide the best direct measurement of $|V_{tb}|$ \cite{SW}. Finally, in addition to being interesting in its own right, $q \bar q \to t \bar b$ is a significant background to other processes, such as $q \bar q \to W\!H$ with $H\to b\bar b$, where $H$ is the Higgs boson \cite{HIGGS}. In some ways, $q \bar q \to t \bar b$ is similar to the more-studied $W$-gluon fusion process (Fig.~\ref{Wg graph}) \cite{WG}. However, where that process involves a space-like $W$ boson with $q^2 < 0$, the process $q \bar q \to t \bar b$ proceeds via a time-like $W$ boson with $q^2 > (m_t + m_b)^2$. Thus these two processes, together with the decay of the top quark, $t \to Wb$ (where the $W$ boson has $q^2 \approx M_W^2$), probe complementary aspects of the top quark's weak charged current. The kinematic distributions of the final-state particles in the two processes also differ significantly. There is an additional jet present in $W$-gluon fusion, and the $\bar b$ quark is usually produced at low transverse momentum, while in $q \bar q \to t \bar b$, the $\bar b$ quark recoils against the $t$ quark with high transverse momentum. 
\begin{figure} \centerline{\psfig{figure=Wg-tb.ps,width=5in}} \caption{\footnotesize Single-top-quark production via $W$-gluon fusion.} \label{Wg graph} \end{figure} At the Fermilab Tevatron ($\sqrt{S}$ = 2 TeV $p \bar p$ collider), the sum of the cross sections for $q \bar q \to t \bar b$ and $q \bar q \to \bar t b$ is roughly a factor of seven smaller than the dominant $t \bar t$ production cross section \cite{TT}, and about a factor of two smaller than the $W$-gluon-fusion cross section \cite{WG}. Nevertheless, a recent study indicates that with double $b$ tagging, a signal is observable at the Tevatron with 2-3~fb$^{-1}$ of integrated luminosity \cite{SW}. Unfortunately, even though the $q \bar q \to t \bar b, \bar t b$ cross section is larger at the CERN Large Hadron Collider (LHC, $\sqrt{S}$ = 14 TeV $pp$ collider), the signal will likely be obscured by backgrounds from the even larger $t \bar t$ and $W$-gluon fusion processes, which are initiated by gluons \cite{SW}. An important feature of $q \bar q \to t \bar b$ is the accuracy with which the cross section can be calculated. The top-quark mass is much larger than $\Lambda_{QCD}$, so calculations are performed in a regime where perturbative QCD is very reliable. The correction to the initial state is identical to that occurring in the ordinary Drell-Yan process $q \bar q \to W^*\!\to \bar \ell \nu$ ($W^*$ denotes a virtual $W$ boson), which has been calculated to ${\sl O}(\alpha_s^2)$ \cite{DY2}. Furthermore, by experimentally measuring $q \bar q \to W^* \to \bar \ell \nu$, the initial quark-antiquark flux can be constrained without recourse to perturbation theory.\footnote{Since the longitudinal momentum of the neutrino cannot be reconstructed, the $q^2$ of the $W^*$ cannot be determined, so $q \bar q \to W^* \to \bar \ell \nu$ yields only a constraint on the quark-antiquark flux, rather than a direct measurement.} This provides a check of the parton distribution functions, and allows the reduction of systematic errors. The parton distribution functions are not expected to be a large source of uncertainty, as the dominant contribution to the cross section comes from quark and antiquark distribution functions evaluated at relatively high values of $x$, where they are well known. There is little sensitivity to the less-well-known gluon distribution function, in contrast to the case of $W$-gluon fusion. The final-state correction to the inclusive cross section is straightforward, and involves no collinear or infrared singularities. The QCD corrections to the initial and final states do not interfere at next-to-leading-order because the $t \bar b$ is in a color singlet if a gluon is emitted from the initial state, but a color octet if it is emitted from the final state. There is, however, interference at ${\sl O}(\alpha_s^2)$ from the emission of two gluons. This paper is organized as follows. In Section 2 we present the ${\sl O}(\alpha_s)$ QCD corrections to both the initial and final states, and discuss their dependence on the renormalization and factorization scales. In Section 3 we present the ${\sl O}(\alpha_W m_t^2/M_W^2)$ Yukawa correction. In Section 4 we present a summary of our results. We give an analytic expression for the Yukawa correction in an appendix. \section{QCD correction} \indent\indent The diagrams which contribute to the ${\sl O}(\alpha_s)$ correction to $q \bar q \to t \bar b$ are shown in Fig.~\ref{QCD graphs}. 
As mentioned in the Introduction, the QCD corrections to the initial and final states do not interfere at ${\sl O}(\alpha_s)$. Therefore, we may consider the corrections to the initial and final states separately. To this end, we break up the process $p \bar p \to t \bar b + X$ into the production of a virtual $W$ boson of mass-squared $q^2$, followed by its propagation and decay into $t \bar b$. \begin{figure} \centerline{\psfig{figure=qq-tb_qcd.ps,width=5in}} \caption{\footnotesize ${\sl O}(\alpha_s)$ correction to $q \bar q \to t \bar b$: (a)-(c) initial state, (d) final state.} \label{QCD graphs} \end{figure} The production cross section of the virtual $W$ boson is formally identical to that of the Drell-Yan process, to all orders in QCD. The modulus squared of the decay amplitude of the virtual $W$ boson, integrated over the phase space of all final-state particles, is obtained by the application of Cutkosky's rules \cite{CUT} as twice the imaginary part of the self-energy of the $W$ boson due to a $t \bar b$ loop, again to all orders in QCD. Furthermore, because the current to which the $W$ boson couples in the initial state is conserved to all orders in QCD (for massless quarks), we need only consider the $-g^{\mu\nu}$ term in the $W$-boson propagator and self-energy. Thus we may write the differential cross section as \begin{equation} \frac{d\sigma}{dq^2}(p\bar{p} \to t\bar{b}+X) = \sigma(p\bar{p}\to W^*\!+X) \frac{{\rm Im}\:\Pi(q^2,m_t^2,m_b^2)}{\pi(q^2-M_W^2)^2} \label{MASTER} \end{equation} where $\Pi$ is the coefficient of the $-g^{\mu\nu}$ term of the self-energy of a $W$ boson with mass-squared $q^2$. The total cross section is obtained by integrating over $q^2$. This equation is valid to ${\sl O}(\alpha_s)$, but not beyond, because it neglects the interference between the QCD corrections to the initial and final states. To demonstrate this procedure, we obtain the leading-order cross section for $p \bar p \to t \bar b$ using \begin{eqnarray} \sigma(p\bar{p}\to W^*\!+X) & = & \sum_{i,j} \int dx_1 \int dx_2 \, [q_i(x_1,\mu_F)\bar{q}_j(x_2,\mu_F) +\bar{q}_i(x_1,\mu_F)q_j(x_2,\mu_F)] \\ & & \times |V_{ij}|^2 \frac{\pi^2\alpha_W}{3}\delta(x_1x_2S-q^2) \nonumber \end{eqnarray} where $\alpha_W=g^2/4\pi \equiv \sqrt 2 G_{\mu}M_W^2/\pi$, $S$ is the square of the total hadronic center-of-mass energy, $q$ and $\bar{q}$ are the parton distribution functions, $\mu_F$ is the factorization scale, and the sum on $i$ and $j$ runs over all contributing quark-antiquark combinations. At leading order, the coefficient of the $-g^{\mu\nu}$ term in the imaginary part of the $W$-boson self-energy is \begin{equation} {\rm Im}\:\Pi(q^2,m_t^2,m_b^2) = \frac{\alpha_W\lambda^{1/2}|V_{tb}|^2}{2} \left[ 1 - \frac{m_t^2+m_b^2}{2q^2} - \frac{(m_t^2-m_b^2)^2}{2q^4} \right] \end{equation} where $\lambda$ is the triangle function associated with two-particle phase space, \begin{equation} \lambda = \lambda(q^2,m_t^2,m_b^2) = q^4 + m_t^4 + m_b^4 - 2q^2m_t^2 - 2q^2m_b^2 - 2m_t^2m_b^2 \;. \end{equation} Using Eq.~(1), the differential cross section is thus \begin{eqnarray} \frac{d\sigma}{dq^2}(p\bar{p} \to t\bar{b} + X) & = & \sum_{i,j}\int dx_1 \int dx_2 \, [q_i(x_1,\mu_F)\bar{q}_j(x_2,\mu_F) +\bar{q}_i(x_1,\mu_F)q_j(x_2,\mu_F)] \\ & & \times |V_{ij}|^2 \frac{\pi\alpha_W^2\lambda^{1/2}|V_{tb}|^2} {12(q^2-M_W^2)^2} \left[ 1 - \frac{m_t^2+m_b^2}{2q^2} - \frac{(m_t^2-m_b^2)^2}{2q^4} \right] \delta(x_1x_2S-q^2) \; . 
\nonumber \label{LO} \end{eqnarray} At leading order, the integration over $q^2$ to obtain the total cross section is trivial due to the delta function. At next-to-leading order, however, it is necessary to perform the integration numerically. The ${\sl O}(\alpha_s)$ corrections to the Drell-Yan process \cite{DY1} and the $W$-boson self-energy \cite{STN,CGN} were both calculated many years ago. We use the expression for $\sigma(p\bar{p}\to W^*+X)$ as given in Eqs.~(9.5) and (12.3) of Ref.~\cite{TASI}, and Im $\Pi$ as derived from Eq.~(3.3) of Ref.~\cite{CGN}.\footnote{The exact correspondence between our notation and that of Ref.~\cite{CGN} is \mbox{Im $\Pi = 3\pi\alpha_W|V_{tb}|^2 {\rm Im}(\Pi_1^V+\Pi_1^A)$}.} We use $m_t$ = 175 GeV, $m_b$ = 5 GeV, $M_W$ = 80.33 GeV, $|V_{tb}|$ = 1, $G_{\mu} = 1.16639 \times 10^{-5}\;{\rm GeV}^{-2}$, and $\alpha_s$ as given by the parton distribution functions. The calculation of the initial-state correction includes divergences arising from collinear parton emission. These divergences cancel with corresponding divergences present in the QCD correction to the parton distribution functions. The finite terms remaining depend on the factorization scale $\mu_F$, both through the parton distribution functions and explicitly in the partonic cross section. The variation of the leading-order and next-to-leading-order cross sections with $\mu_F/\sqrt {q^2}$, where $\sqrt {q^2}$ is the mass of the virtual $W$ boson,\footnote{We have chosen to refer the scale $\mu_F$ to the $q^2$ of the virtual $W$ boson because this is the quantity which appears in the factorization logarithms. Thus the factorization scale $\mu_F$ varies when integrating over $q^2$ to obtain the total cross section.} is shown in Fig.~\ref{muf} at both the Tevatron and the LHC. The leading-order cross section is calculated with the CTEQ3L leading-order parton distribution functions, and the next-to-leading-order cross section with the CTEQ3M next-to-leading-order parton distribution functions \cite{CTEQ}. The leading-order cross section varies considerably with $\mu_F$, while the next-to-leading-order cross section is appreciably less sensitive. The next-to-leading-order cross section shown in Fig.~\ref{muf} contains only the initial-state correction. We see that for $\mu_F=\sqrt {q^2}$ the initial-state correction is +36\% at the Tevatron and +33\% at the LHC.\footnote{If both the leading-order and next-to-leading-order cross sections are calculated with the CTEQ3M next-to-leading-order parton distribution functions, the initial-state correction is $+27\%$ at the Tevatron and $+15\%$ at the LHC. Thus $+9\%$ of the initial-state correction at the Tevatron, and $+18\%$ at the LHC, is due to the increase in the leading-order cross section when it is calculated with next-to-leading-order parton distribution functions.} In what follows, we set $\mu_F=\sqrt{q^2}$. \begin{figure} \input{muf_tev.tex} \input{muf_lhc.tex} \caption{\footnotesize Factorization-scale dependence of the leading-order (LO) and next-to-leading-order (NLO) cross sections for $q\bar q\to t\bar b, \bar t b$ at the Tevatron and the LHC. The NLO cross sections include only the initial-state QCD correction, and not the final-state correction.
The LO cross sections are calculated with the CTEQ3L LO parton distribution functions, and the NLO cross sections with the CTEQ3M NLO parton distribution functions.} \label{muf} \end{figure} The cross section at next-to-leading order also depends on the renormalization scale, $\mu_R$, at which $\alpha_s$ is evaluated. In Fig.~\ref{mur} we show the next-to-leading-order cross section, including both initial- and final-state corrections, as a function of $\mu_R/\sqrt {q^2}$, at both the Tevatron and the LHC. The dependence of the cross section on the renormalization scale first appears at next-to-leading order and is therefore mild. In what follows, we set $\mu_R=\sqrt{q^2}$. The final-state correction increases the cross section by $+18\%$ of the leading-order cross section at the Tevatron and $+17\%$ at the LHC. \begin{figure} \input{mur_tev.tex} \input{mur_lhc.tex} \caption{\footnotesize Renormalization-scale dependence of the leading-order (LO) and next-to-leading-order (NLO) cross sections for $q\bar q\to t\bar b, \bar t b$ at the Tevatron and the LHC. The NLO cross sections include both the initial-state and final-state correction. The LO cross sections are calculated with the CTEQ3L LO parton distribution functions, and the NLO cross sections with the CTEQ3M NLO parton distribution functions.} \label{mur} \end{figure} We show in Fig.~\ref{dsig} the leading-order and next-to-leading-order differential cross section as a function of the mass of the virtual $W$ boson, $\sqrt {q^2}$, at both the Tevatron and the LHC. Also shown are the separate ${\sl O}(\alpha_s)$ corrections from the initial and final states. These corrections have different shapes from the leading-order cross section, and from each other. In order to observe $t\bar b$ production experimentally, it is necessary to detect the $\bar b$ quark \cite{SW}. Thus the measured cross section will exclude some region near threshold, where the $\bar b$ quark does not have sufficient transverse momentum to be detected with high efficiency. Therefore the measured cross section, as well as the QCD correction, will depend on the acceptance for the $\bar b$ quark. \begin{figure} \input{dsig_tev.tex} \input{dsig_lhc.tex} \caption{\footnotesize Differential cross section for $q\bar q\to t\bar b,\bar t b$ versus the mass of the virtual $s$-channel $W$ boson, at the Tevatron and the LHC. Both the leading-order (LO) and next-to-leading-order (NLO) cross sections are shown, as well as the separate contributions from the initial-state (IS) and final-state (FS) corrections. The LO cross sections are calculated with the CTEQ3L LO parton distribution functions, and the NLO cross sections with the CTEQ3M NLO parton distribution functions.} \label{dsig} \end{figure} If the top and bottom quarks were stable, they would form quarkonium bound states just below threshold \cite{CPras}. We estimate the distance below threshold at which the ground state would occur, by analogy with the hydrogen atom, to be $E \approx (4\alpha_s/3)^2 m_b/2 \approx 50$ MeV.\footnote{Here $m_b$ is the approximate reduced mass of the system, and $C_F = 4/3$ is the usual SU(3) group theory factor associated with the fundamental representation.} The formation time of the ground state is approximately $1/E$. This is much greater than the top-quark lifetime, $\Gamma_t^{-1} \approx (1.5\:{\rm GeV})^{-1}$, so there is not sufficient time for quarkonium bound states to form \cite{FK}.
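This estimate is simple to check numerically. The following back-of-envelope sketch (in Python) uses the inputs quoted above; the value $\alpha_s = 0.11$, roughly $\alpha_s(m_t)$, is our assumption, chosen only to illustrate the arithmetic.
\begin{verbatim}
# Back-of-envelope check of the quarkonium estimate in the text.
# alpha_s = 0.11 (roughly alpha_s(m_t)) is an assumed value; the other
# inputs are taken from the text.
alpha_s = 0.11
m_b     = 5.0     # GeV, approximate reduced mass of the t-bbar system
Gamma_t = 1.5     # GeV, approximate top-quark width

E = (4.0 * alpha_s / 3.0) ** 2 * m_b / 2.0     # Coulombic binding energy
print(f"E ~ {1e3 * E:.0f} MeV")                # ~54 MeV, i.e. ~50 MeV
print(f"formation time 1/E ~ {1.0 / E:.1f} GeV^-1")            # ~19 GeV^-1
print(f"top lifetime 1/Gamma_t ~ {1.0 / Gamma_t:.2f} GeV^-1")  # ~0.67 GeV^-1
\end{verbatim}
The formation time exceeds the top-quark lifetime by more than an order of magnitude, which is the statement made above.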
Because the top-quark width is small compared to its mass, interference between the corrections to production and decay amplitudes has a negligible effect, of order $\alpha_s\Gamma_t/m_t$, on the total cross section \cite{FKM}. This interference also has a negligible effect on differential cross sections, such as the invariant-mass distribution of the decay products of the top quark \cite{P}. Our final results for the cross section and uncertainty will be presented in Section 4. \section{Yukawa correction} \indent\indent The diagrams which contribute to the ${\sl O}(\alpha_W m_t^2/M_W^2)$ Yukawa correction to $q \bar q \to t \bar b$ are shown in Fig.~\ref{Yukawa graphs}. The dashed lines represent the Higgs boson and the unphysical scalar $W$ and $Z$ bosons associated with the Higgs field (in the $R_{\xi}$ gauge). The effect of a top-quark loop in the $W$-boson propagator, which might be expected to contribute a term of Yukawa strength, is absorbed by the renormalized weak coupling constant, which we express in terms of $G_{\mu}$, the Fermi constant measured in muon decay ($\alpha_W=g^2/4\pi \equiv \sqrt 2 G_{\mu}M_W^2/\pi$). We use standard Feynman integral techniques with dimensional regularization to calculate the loop diagrams \cite{PV}, and work in the approximation where the bottom quark is massless. Our other parameters are $m_t$ = 175 GeV, $M_W$ = 80.33 GeV, $|V_{tb}|$ = 1, and $G_{\mu} = 1.16639 \times 10^{-5}\;{\rm GeV}^{-2}$. \begin{figure} \centerline{\psfig{figure=qq-tb_Yuk.ps,width=5in}} \caption{\footnotesize ${\sl O}(\alpha_W m_t^2/M_W^2)$ corrections to $q \bar q \to t \bar b$. The dashed lines represent the Higgs boson and the unphysical scalar $W$ and $Z$ bosons (in $R_{\xi}$ gauge): (a) wavefunction renormalization, (b) vertex correction.} \label{Yukawa graphs} \end{figure} In the $m_b=0$ approximation, the matrix element of the $t \bar b$ current may be written as \begin{eqnarray} i \bar{u}(p_t)\Gamma^{\mu A} v(p_b) & = & \left( \frac{-igT^A}{2\sqrt{2}} \right) \left\{ \bar{u}(p_t) \gamma^{\mu}(1-\gamma^5) v(p_b) \frac{}{} \right. \\ & & \left. + \left( \frac{m_t^2 G_{\mu}}{8\sqrt{2}\pi^2} \right) \bar{u}(p_t) \left[ \gamma^{\mu} F_1(q^2) + \frac{(p_t^{\mu} - p_b^{\mu})}{m_t} F_2(q^2) \right] (1-\gamma^5) v(p_b) \right\} \nonumber \end{eqnarray} where $p_t$ and $p_b$ are the outgoing four-momenta of the $t$ and $\bar b$ quarks, respectively; the form factors $F_1$ and $F_2$ are functions of $q^2=(p_t+p_b)^2$ and the Higgs-boson mass; and $T^A$ is an SU(3) matrix [Tr$(T^A T^B)=\frac{1}{2} \delta^{AB}$]. The fractional change in the differential cross section as a function of the $q^2$ of the virtual $W$ boson is \begin{equation} \frac{\Delta d\sigma_Y/d\sqrt{q^2}}{d\sigma_{LO}/d\sqrt{q^2}} = \left(\frac{m_t^2 G_{\mu}}{8 \sqrt 2 \pi^2} \right) \left[ 2F_1(q^2) + F_2(q^2) \frac{(q^2 - m_t^2)^2} {q^4 -\frac{1}{2} q^2 m_t^2 - \frac{1}{2} m_t^4} \right] \;. \end{equation} Analytic expressions for the form factors $F_1$ and $F_2$ are given in an appendix. The fractional change in the total cross section, $\Delta\sigma_Y/\sigma_{LO}$, is plotted in Fig.~\ref{Yuk} vs.~the Higgs-boson mass, $M_H$, at both the Tevatron and the LHC. For values of $M_H$ between 60 GeV and 1 TeV, the absolute value of the Yukawa correction is never more than one percent of the leading-order cross section. Thus the Yukawa correction is negligible for this process, as has also been found to be the case for $t\bar t$ production \cite{STANGE,BDHMSW}.
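To see why the correction is so small, it helps to evaluate the overall factor multiplying the form factors in the expression above. A minimal Python sketch, using the parameter values quoted in the text:
\begin{verbatim}
import math

# Size of the overall Yukawa factor in the fractional correction above.
m_t  = 175.0         # GeV
G_mu = 1.16639e-5    # GeV^-2
prefactor = m_t ** 2 * G_mu / (8.0 * math.sqrt(2.0) * math.pi ** 2)
print(f"m_t^2 G_mu / (8 sqrt(2) pi^2) = {prefactor:.4f}")   # ~0.0032
\end{verbatim}
With form factors of order unity, the correction is therefore a few tenths of a percent, consistent with the sub-percent effect found above.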
Since $W$-gluon fusion also involves the $t \bar b$ weak charged current, our calculation suggests that the Yukawa correction to that process is also negligible. As previously mentioned, the ordinary weak correction is expected to be comparable to the Yukawa correction, so it too should be negligible.\footnote{The complete calculation of the ordinary weak correction would require a set of parton distribution functions which are extracted with weak corrections included. Such a set is not available at this time.} The Yukawa correction could potentially be much larger in models with enhanced couplings of Higgs bosons to top or bottom quarks \cite{STANGE,KAO}. \begin{figure} \input{yuk_tev.tex} \input{yuk_lhc.tex} \caption{\footnotesize Fractional change in the total cross section for $q\bar q\to t\bar b, \bar t b$ due to the Yukawa correction vs.~the Higgs-boson mass at the Tevatron and the LHC.} \label{Yuk} \end{figure} \section{Conclusions} \indent\indent The cross section for $q\bar q \to t\bar b, \bar t b$ at both the Tevatron and the LHC is given in Table~\ref{table}. The leading-order cross section, next-to-leading-order cross section including only the initial-state QCD correction, and the full next-to-leading-order cross section are given. The factorization and renormalization scales are both set equal to $\sqrt{q^2}$, the mass of the virtual $W$ boson. We give results for three different sets of next-to-leading-order parton distribution functions: CTEQ3M \cite{CTEQ}, MRS(A$'$), and MRS(G) \cite{MRSA}.\footnote{The leading-order CTEQ cross section is calculated with the CTEQ3L leading-order parton distribution functions.} The QCD correction to the cross section is significant: about $+54\%$ at the Tevatron and $+50\%$ at the LHC, with the leading-order cross section evaluated with leading-order parton distribution functions, and the next-to-leading-order cross section evaluated with next-to-leading-order parton distribution functions.\footnote{If next-to-leading-order parton distribution functions are used at both leading and next-to-leading order, the correction is about $+45\%$ at the Tevatron and $+32\%$ at the LHC.} The size of the ${\sl O}(\alpha_s)$ correction improves the outlook for observation of this process in Run II at the Tevatron. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{$m_t = 175\;{\rm GeV}, \mu_R = \mu_F = \sqrt{q^2}$} & \makebox[8em]{CTEQ3L,3M} & \makebox[8em]{MRS(A$'$)} & \makebox[8em]{MRS(G)} \\ \hline & $\sigma_{LO}$ & .578 & .601 & .602 \\ \cline{2-5} Tevatron & $\sigma_{NLO(IS)}$ & .789 & .766 & .758 \\ \cline{2-5} $\sqrt{S}$ = 2 TeV & $\sigma_{NLO}$ & .894 & .868 & .860 \\ \cline{2-5} & \multicolumn{4}{|c|}{\bf\boldmath $\sigma_{\bf NLO}$ (avg) = .88 $\pm$ .05 pb} \\ \hline & $\sigma_{LO}$ & 6.76 & 7.83 & 7.81 \\ \cline{2-5} LHC & $\sigma_{NLO(IS)}$ & 9.02 & 9.01 & 9.03 \\ \cline{2-5} $\sqrt{S}$ = 14 TeV & $\sigma_{NLO}$ & 10.19 & 10.17 & 10.21 \\ \cline{2-5} & \multicolumn{4}{|c|}{\bf\boldmath $\sigma_{\bf NLO}$ (avg) = 10.2 $\pm$ 0.6 pb} \\ \hline \end{tabular} \caption{\footnotesize Leading-order (LO) and next-to-leading-order (NLO) cross sections (pb) for $q \bar q \to t \bar b,\bar t b$ at the Tevatron and the LHC for three different sets of parton distribution functions (PDFs). The NLO cross section including only the initial state (IS) correction is also given. The CTEQ LO cross section is computed with the CTEQ3L LO PDFs; all other cross sections are computed with NLO PDFs.
The final NLO cross section is the average of the CTEQ3M and MRS(A$'$) cross sections, with an uncertainty of $\pm 6\%$, as discussed in the text.} \label{table} \end{center} \end{table} As shown in Fig.~\ref{muf}, varying the factorization scale between one half and twice $\sqrt {q^2}$ changes the cross section by only $\pm 2\%$. Varying the renormalization scale over this same range yields a similar change in the cross section, as shown in Fig.~\ref{mur}. Using these results to estimate the contribution from higher-order QCD corrections, we conclude that the uncertainty in the cross section is at the level of $\pm 4\%$. This conclusion is supported by the known next-to-next-to-leading-order correction to the Drell-Yan process, which is about $2\%$ (in the modified minimal subtraction ($\overline{\rm MS}$) scheme) \cite{DY2}. It is difficult to reliably ascertain the uncertainty in the cross section from the parton distribution functions at this time. The small difference in the next-to-leading-order cross sections using MRS(A$'$) and MRS(G) supports the contention that the calculation is insensitive to the gluon distribution function. The difference between the cross section using CTEQ3M and MRS(A$'$) suggests that the uncertainty in the cross section from the parton distribution functions is on the order of $\pm 2\%$. However, since each set of parton distribution functions represents the best fit to some set of data, the uncertainty is certainly larger than this. Therefore, we assign an uncertainty of $\pm 4\%$ from the parton distribution functions. For our final estimate, we average the next-to-leading-order cross sections using the CTEQ3M and MRS(A$'$) parton distribution functions. We assign an uncertainty of $\pm 6\%$, which reflects the uncertainties above, added in quadrature. We quote as our final result for $q\bar q\to t\bar b, \bar t b$ a cross section of $.88 \pm .05$ pb at the Tevatron,\footnote{For $\sqrt{S} = 1.8$ TeV, the cross section at the Tevatron is $.73 \pm .04$ pb.} and $10.2 \pm 0.6$ pb at the LHC. An additional source of uncertainty is the top-quark mass. A plot of the next-to-leading-order cross section for $q\bar q\to t\bar b, \bar t b$ as a function of the top-quark mass is shown in Fig.~\ref{mt}. It is anticipated that the uncertainty in the mass will be $\pm 6$ GeV when the data from Run I at the Tevatron are fully analyzed. This yields an uncertainty of $\pm 15\%$ in the cross section at the Tevatron. The uncertainty in the mass is expected to decrease to $\pm 4$ GeV in Run II \cite{tev2000}, corresponding to an uncertainty in the cross section of $\pm 10\%$. A high-luminosity Tevatron might be capable of reducing the uncertainty in the mass to $\pm 2$ GeV \cite{tev2000}, which would yield an uncertainty in the cross section of $\pm 5\%$. The uncertainty in the cross section at the LHC is comparable. \begin{figure} \input{mt_tev.tex} \input{mt_lhc.tex} \caption{\footnotesize Next-to-leading-order cross section for $q\bar q\to t\bar b, \bar t b$ as a function of the top-quark mass.} \label{mt} \end{figure} Much can be done to reduce the uncertainty in the calculation. The next-to-next-to-leading-order correction to the Drell-Yan process is already known \cite{DY2}. The full next-to-next-to-leading-order QCD correction to $q \bar q \to t \bar b$ can and should be completed in the near future. This should reduce the uncertainty in the cross section from yet higher orders to below the $1\%$ level.
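As an aside, the $\pm 6\%$ quoted above is simply the quadrature combination of the two $\pm 4\%$ uncertainties; a one-line Python check of the arithmetic:
\begin{verbatim}
import math

higher_order = 0.04   # estimate from the scale variations above
pdf          = 0.04   # assigned parton-distribution uncertainty
print(math.sqrt(higher_order ** 2 + pdf ** 2))   # 0.0566 -> quoted as +-6%
\end{verbatim}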
A reliable estimate of the uncertainty in the parton distribution functions requires a set with built-in uncertainties, which we hope will be available in the near future. It seems likely that by the time the process $q\bar q\to t\bar b$ is observed in Run II at the Tevatron, the theoretical uncertainty in the cross section will be slightly larger than $\pm 10\%$, due mostly to the uncertainty in the mass. This is adequate in comparison with the anticipated experimental errors. The statistical error on the measured cross section in Run II will be about $\pm 20\%$ \cite{SW}. This corresponds to a measurement of $|V_{tb}|$ with an accuracy of $\pm 10\%$ (assuming $|V_{tb}| \approx 1$). A high-luminosity Tevatron, which could potentially deliver 30 fb$^{-1}$ over several years, would allow a measurement of the cross section with a statistical uncertainty of about $6\%$, with a comparable theoretical uncertainty. Combining the statistical and theoretical uncertainties in quadrature, this corresponds to a measurement of $|V_{tb}|$ with an accuracy of about $\pm 4\%$. The process $q\bar q \to t\bar b$ is also important as a background to the process $q\bar q\to WH$ with $H\to b\bar b$ at the Tevatron. We show in Fig.~\ref{wh} the next-to-leading-order cross sections for $q\bar q \to W^{\pm}H$, as well as $q\bar q \to ZH$, at both the Tevatron and the LHC \cite{HW}. The significant increase in the $q\bar q\to t\bar b$ cross section at next-to-leading order could have a negative impact on the ability to find an intermediate-mass Higgs boson at the Tevatron. \begin{figure} \input{wh_tev.tex} \input{wh_lhc.tex} \caption{\footnotesize Cross section for $q \bar q \to W^{\pm}H$ and $q \bar q \to ZH$ at the Tevatron and the LHC, calculated at next-to-leading order using CTEQ3M parton distribution functions.} \label{wh} \end{figure} \section*{Acknowledgements} \indent\indent We are grateful for conversations with S.~Keller, B.~Kniehl, S.~Kuhlmann, and T.~Stelzer. This work was supported in part by Department of Energy grant DE-FG02-91ER40677. We gratefully acknowledge the support of a GAANN fellowship, under grant number DE-P200A40532 from the U.~S.~Department of Education for MS. \section*{Appendix} \indent\indent Below are the form factors for the Yukawa correction to the matrix element of the $t \bar b$ charged current. These corrections arise from loops of Higgs bosons and the unphysical scalar $W$ and $Z$ bosons associated with the Higgs field in the $R_{\xi}$ gauges. The masses-squared of the unphysical scalar bosons are $\xi M_W^2,\xi M_Z^2$. In the numerical calculations, we set $\xi = 0$ (Landau gauge). The integrals were reduced to the standard one-, two-, and three-point scalar loop integrals and then evaluated with the aid of the code FF \cite{FF}. The notation is adopted from \cite{PV}; the arguments of the functions give the internal masses-squared followed by the external momenta squared. 
\begin{eqnarray*} F_1 & = & {\small \frac{1}{2}} [4C_{24}(\xi M_W^2,M_H^2,m_t^2;q^2,m_t^2,0) + 4C_{24}(\xi M_W^2,\xi M_Z^2,m_t^2;q^2,m_t^2,0) \\ & & \mbox{} + B_1(M_H^2,m_t^2;m_t^2) + (M_H^2 - 4m_t^2) B_0^{'}(M_H^2,m_t^2;m_t^2) \\ & & \mbox{} + B_1(\xi M_Z^2,m_t^2;m_t^2) + \xi M_Z^2 B_0^{'} (\xi M_Z^2,m_t^2;m_t^2) \\ & & \mbox{} + B_1(\xi M_W^2,0;m_t^2) + (\xi M_W^2 - m_t^2) B_0^{'}(\xi M_W^2,0;m_t^2) \\ & & \mbox{} + B_1(\xi M_W^2,m_t^2;0) + (\xi M_W^2 + m_t^2) B_0^{'}(\xi M_W^2,m_t^2;0)] \\ &&\\ F_2 & = & m_t^2 [C_{23}(\xi M_W^2,M_H^2,m_t^2;q^2,m_t^2,0) + C_{23}(\xi M_W^2,\xi M_Z^2,m_t^2;q^2,m_t^2,0) \\ & & \mbox{} + C_{21}(\xi M_W^2,M_H^2,m_t^2;q^2,m_t^2,0) + C_{21}(\xi M_W^2,\xi M_Z^2,m_t^2;q^2,m_t^2,0) \\ & & \mbox{} + 2C_{11}(\xi M_W^2,M_H^2,m_t^2;q^2,m_t^2,0)] \end{eqnarray*} Note: In the reduction of the three-point integrals, a misprint was discovered in Ref.~\cite{PV}. On p.~199 in Appendix E, in an unnumbered equation near the bottom of the page, $C_{22}$ and $C_{23}$ were transposed. The correct equation is \mbox{$(C_{23},C_{22}) = X^{-1}(R_4,R_6)$}.
\section{Introduction}\label{intro} Although it has been around for over a century, margarine was not always the preferred tablespread in the United States. In 1930, per capita consumption of margarine was only 2.6 pounds (vs. 17.6 pounds of butter). Times have changed for the better, though (if you're a margarine manufacturer, that is). Today, per capita consumption of margarine in the United States is 8.3 pounds (including vegetable oil spreads) whereas butter consumption is down to about 4.2 pounds. Furthermore, as shown in Figure \ref{frontier}, it is always butter, not margarine, that is traded off\footnote{Who would expect that?} against guns. This leads to the announcement of our result.\footnote{This is really silly. We will demonstrate here what an endnote should look like if it is longer than just one line.} \begin{theorem} \label{marg-butt-th} In a reverse dictionary, $(\mbox{\bi marg}\succ\mbox{\bi butt\/}\; \land\; \mbox{\bi arine}\succ\mbox{\bi er})$. Moreover, continuous reading of a compact subset of the dictionary attains the minimum of patience at the moment of giving up. \end{theorem} The proof will be given in the e-companion to this paper. \begin{figure}[t] \begin{center} \includegraphics[height=2in]{Sample-Figure} \caption{Production Possibilities Frontier.} \label{frontier} \end{center} \end{figure} \section{Motivation} Margarine or butter? According to the website of the \cite{namm}, ``Despite the recommendations of health professionals and leading health organizations to choose margarine, many consumers are confused.'' But whether or not they are confused, consumers are voting with their pocketbooks. The \cite{abi}, whose slogan is ``Things are better with butter!'', presents many tempting recipes on its website, but also reports declining sales in its marketing releases. \begin{hypothesis} Things are better with butter. \end{hypothesis} Indeed, even though a reputed chain email letter claims that margarine is ``but one molecule from being plastic'' \citep{btc}, American consumers appear to be sliding away from butter. Given this trend, a historical review of margarine is in order. \begin{lemma} Many consumers are confused. \end{lemma} \begin{lemma} Whether or not the consumers are confused, they are voting with their pocketbooks. \end{lemma} \begin{proposition} American consumers are sliding away from butter. \end{proposition} \section{Historical Timeline} The following are milestones in the history of margarine as reported by the \cite{namm2}. Note that they have been transcribed verbatim here, which is generally bad practice. Even if the material is explicitly indicated as a quotation, having this much content from another source will almost certainly result in rejection of the paper for lack of originality.
But if not called out {\em as a quotation}, lifting even a single sentence (or less) from another source is plagiarism, even if the source is cited. Plagiarism is a very serious offense, which will not only lead to rejection of a paper, but will also bring more serious sanctions, such as being banned from the journal, notification of your dean or department chair, etc. So don't do it! There are many on-line resources to help determine what constitutes plagiarism and how to avoid it (see, e.g., CollegeBoard.com). But the simplest rule to follow is ``when in doubt, call it out.'' That is, make very plain what comes from other sources, in properly cited word-for-word quotations or paraphrases. \section{1800s} \begin{quotation} \begin{description} \item[\bf 1870] Margarine was created by a Frenchman from Provence, France -- Hippolyte M\`ege-Mouriez -- in response to an offer by the Emperor Louis Napoleon III for the production of a satisfactory substitute for butter. To formulate his entry, M\`ege-Mouriez used margaric acid, a fatty acid component isolated in 1813 by Michael Chevreul and named because of the lustrous pearly drops that reminded him of the Greek word for pearl -- margarites. From this word, M\`ege-Mouriez coined the name margarine for his invention that claimed the Emperor's prize. \item[\bf 1873] An American patent was granted to M\`ege-Mouriez who intended to expand his French margarine factory and production to the United States. While demand for margarine was strong in northern Europe and the potential equally as promising in the U.S., M\`ege-Mouriez's operations nevertheless failed and he died obscurely. \item[\bf 1878] Unilever began manufacturing margarine in Europe. \item[\bf 1871-73] The U. S. Dairy Company in New York City began production of ``artificial butter.'' \item[\bf 1877] State laws requiring identification of margarine were passed in New York and Maryland as the dairy industry began to feel the impact of this rapidly growing product. \item[\bf 1881] Improvements to M\`ege-Mouriez's formulation were made; U.S. Dairy created a subsidiary, the Commercial Manufacturing Company, to produce several million pounds annually of this new product. \item[\bf 1885] When a court voided a ban on margarine in New York, dairy militants turned their attention to Washington, resulting in Congressional passage of the Margarine Act of 1886. The Act imposed a tax of two cents per pound on margarine and required expensive licenses for manufacturers, wholesalers and retailers of margarine. President Grover Cleveland, from the dairy state of New York, signed the law, describing it as a revenue measure. However, the 1886 law failed to slow the sale of margarine principally because it did not require identification of margarine at the point of sale and margarine adversaries turned their attention back to the states. \item[\bf 1886] More than 30 manufacturing facilities were reported to be engaged in the production of margarine. Among them were Armour and Company of Chicago and Lever Brothers of New York. Seventeen states required the product to be specifically identified as margarine. Various state laws to control margarine were passed in a number of states, but were not enforced. Later that year, New York and New Jersey prohibited the manufacture and sale of yellow-colored margarine. \end{description} \end{quotation} \section{1900s} \subsection{Before the End of WWII} \begin{quotation} \begin{description} \item[\bf 1902] 32 states and 80\% of the U.S.
population lived under margarine color bans. While the Supreme Court upheld such bans, it did strike down forced coloration (pink) which had begun in an effort to get around the ban on yellow coloring. During this period coloring in the home began, with purveyors providing capsules of food coloring to be kneaded into the margarine. This practice continued through World War II. \item[\bf 1902] Amendments to the Federal Margarine Act raised the tax on colored margarine five-fold, but decreased licensing fees for white margarine. But demand for colored margarine remained so strong, that bootleg colored margarine flourished. \item[\bf 1904] Margarine production suffered and consumption dropped from 120 million pounds in 1902 to 48 million. \item[\bf 1910] Intense pressure by competitors to keep prices low and new product innovations, as well as dairy price increases, returned production levels of margarine back to 130 million pounds. The Federal tax remained despite many efforts to repeal it, but consumption grew gradually in spite of it. \item[\bf 1920] With America's entry into World War I, the country began to experience a fat shortage and a sharp increase in the cost of living, both factors in driving margarine consumption to an annual per capita level of 3.5 pounds. \item[\bf 1930] The Margarine Act was again amended to place the Federal tax on naturally-colored (darkened with the use of palm oil) as well as artificially-colored margarine. During the Depression dairy interests again prevailed upon the states to enact legislation equalizing butter and margarine prices. Consumers reacted and consumption of margarine dropped to an annual per capita level of 1.6 pounds. \item[\bf 1932] Besides Federal taxes and licenses, 27 states prohibited the manufacture or sale of colored margarine, 24 imposed some kind of consumer tax and 26 required licenses or otherwise restricted margarine sales. The Army, Navy and other Federal agencies were barred from using margarine for other than cooking purposes. \item[\bf 1941] Through production innovations, advertising and improved packaging, margarine consumption regained lost ground. A Federal standard was established recognizing margarine as a spread of its own kind. With raised awareness of margarine's health benefits from a 1941 National Nutrition Conference, consumers began to take notice of restrictions on margarine that were keeping the product from them and artificially inflating the price. \item[\bf 1943] State taxes on margarine were repealed in Oklahoma. The courts removed color barriers in other states shortly after World War II (see \citealt{tjp}). \end{description} \end{quotation} \subsection{After the End of WWII} \begin{quotation} \begin{description} \item[\bf 1947] Residual war shortages of butter sent it to a dollar a pound and Margarine Act repeal legislation was offered from many politicians. \item[\bf 1950] Some of the more popular brands prior up until now were Cloverbloom, Mayflower, Mazola, Nucoa, Blue Plate, Mrs. Filbert's, Parkay, Imperial, Good Luck, Nu-Maid, Farmbelle, Shedd's Safflower, Churngold, Blue Bonnet, Fleischmann's, Sunnyland and Table Maid. \item[\bf 1950] Margarine taxes and restrictions became the talk of the country. Finally, following a significant effort by the National Association of Margarine Manufacturers, President Truman signed the Margarine Act of 1950 on March 23 of that year. \item[\bf 1951] The Federal margarine tax system came to an end. 
Pre-colored margarine was enjoyed by a consumer also pleased with lower prices. Consumption almost doubled in the next twenty years. State color bans, taxes, licenses and other restrictions began to fall. \item[\bf 1960s] The first tub margarine and vegetable oil spreads were introduced to the American public. \item[\bf 1967] Wisconsin became the last state to repeal restrictions on margarine \citep{w}. \item[\bf 1996] A bill introduced by Rep. Ed Whitfield would signal an end to the last piece of legislation that adversely affects the sale of margarine. Currently, federal law prohibits the retail sale of margarine in packages larger than one pound, as well as detailed requirements regarding the size and types of labeling of margarine and a color requirement. This new legislation would remove these restrictions from the Federal Food, Drug, and Cosmetic Act (FFDCA). Rep. Whitfield's bill, the Margarine Equity Act, is part of HR 3200, the Food and Drug Administration (FDA) reform package and addresses dated requirements that are not applicable to the marketplace. \item[\bf 1998] 125th anniversary of the U.S. patent for margarine. \noindent{{\em Source:} \cite{namm}.} \end{description} \end{quotation} \section{Proof of Theorem \ref{marg-butt-th}.} To avoid confusion, theorems that we repeat for readers' convenience will have the same appearance as when they were mentioned for the first time. However, here they should be coded by \verb+repeattheorem+ instead of \verb+theorem+ to keep labels/pointers uniquely resolvable. Other predefined theorem-like environments work similarly if they need to be repeated in what becomes the \mbox{e-companion.} \begin{repeattheorem}[Theorem 1.] In a reverse dictionary, $(\mbox{\bi marg}\succ\mbox{\bi butt\/}\; \land\; \mbox{\bi arine}\succ\mbox{\bi er})$. Moreover, continuous reading of a compact subset of the dictionary attains the minimum of patience at the moment of giving up. \end{repeattheorem} \subsection{Preparatory Material} \begin{lemma} \label{aux-lem1} In a reverse dictionary, $\mbox{\bi g}\succ\mbox{\bi t\/}$. \end{lemma} \begin{lemma} \label{aux-lem2} In a reverse dictionary, $\mbox{\bi e}\succ\mbox{\bi r\/}$. \end{lemma} \proof{Proof of Lemmas \ref{aux-lem1} and \ref{aux-lem2}.} See the alphabet and the tebahpla.\Halmos \endproof \begin{remark} Note that the title of the proof should be keyed explicitly in each case. Authors can hardly agree on what would be the default proof title, so there is no default. Even \verb+\proof{Proof.}+ should be keyed out. \end{remark} \subsection{Proof of the Main Result} \proof{Proof of Theorem 1.} The first statement is a consequence of Lemmas~\ref{aux-lem1} and~\ref{aux-lem2}. The rest relies on the fact that the continuous image of a compact set into the reals is a closed interval, thus having a minimum point.\Halmos \endproof \begin{figure}[t] \begin{center} \includegraphics[height=1.5in]{Sample-Figure} \caption{Production Possibilities Frontier Again.} \label{ECfrontier} \end{center} \end{figure} \section{Conclusions} Since we didn't do anything original in this paper, we don't actually have any conclusions. But we have to have a conclusions section in here, so we're writing one. Don't the margins look good? How about those section headings? Pretty snappy, eh? However, just because we didn't produce any results, doesn't mean that there isn't good butter research going on out there.
Many researchers (e.g., \citealt{trbn}, \citealt{h}) continue to push the envelope of our understanding of butter and its health effects. Others are focusing on related products, such as cheese (see, e.g., \citealt{fo}). Still others are investigating the linguistic \citep{fs} and sociopolitical \citep{g} implications of butter. So butter remains a hot research area with lots of potential for the future. All the potential in the world won't amount to much if research isn't cited correctly, though. Make sure you include complete citation information for your references, including publication or retrieval dates for website citations, publication year and volume and issue numbers for journal articles, publisher names and locations for books, reports, and conference proceedings, and page numbers for everything, but especially for direct quotes. For citations of unpublished work, you need to include the date of update, as well as the name and address of the organization that sponsored the work. Take a look at the reference section below to see how references should be formatted. \theendnotes \ACKNOWLEDGMENT{The authors gratefully acknowledge the existence of the Journal of Irreproducible Results and the support of the Society for the Preservation of Inane Research.} \section{Proofs} \subsection{Proof of Theorem \ref{thm:stable}} Recall that $Q_i = \Psi_i^T (2 \gamma_i \Psi_i \Sigma_i \Psi_i^T)^{-1} \Psi_i$, $F = \{(i, j): 1 \leq i < j \leq n, \Psi_i \bm{e}_j \neq \bm{0}\}$, and $\textrm{uvec}(X)_F\in\mathbb{R}^{|F|}$ is a vector whose entries are the ordered set $\{X_{ij}\mid (i,j)\in F\}$. Note that $\Psi_i \Sigma_i \Psi_i^T$ is positive definite, since it is a principal submatrix of the positive definite matrix $\Sigma_i$. \begin{repeattheorem}[Restatement of Theorem \ref{thm:stable}] Define $n\times n$ matrices $A$, $B_{(i,j)}$, and $C_{(i,j)}$ as follows: \begin{align*} A_{ij} &=\bm{e}_i^T Q_j M \bm{e}_j, & B_{(i,j)} &= \bm{e}_i\bm{e}_j^T Q_i, & C_{(i,j)} &= (B_{(i,j)} - B_{(j,i)}) - (B_{(i,j)} - B_{(j,i)})^T. \end{align*} Let $Z_F$ be the $|F|\times |F|$ matrix whose rows are the ordered sets $\{\textrm{uvec}(C_{(i,j)})_F \mid (i,j)\in F\}$. Then, we have the following: \begin{enumerate} \item A stable point $(W, P)$ under $\{\Psi_i\}$ exists if $\textrm{uvec}(A-A^T)_F$ lies in the column space of $Z_F$. \item A unique stable point always exists if $Z_F$ is full rank. \end{enumerate} \end{repeattheorem} \proof{Proof.} For clarity of exposition, we first prove the result when all edges are allowed, and then consider the case of disallowed edges. \noindent{\bf (1) All edges allowed.} Here, $E=\{(i, j): 1 \leq i < j \leq n\}$, and we use $\textrm{uvec}(.)$ and $Z$ to refer to $\textrm{uvec}(.)_E$ and $Z_E$ in the theorem statement. For any price matrix $P$ with $P=-P^T$, consider the matrix $W$ whose $j^{th}$ column has the utility-maximizing contract sizes for agent $j$: \begin{align*} W_{ij} &= \bm{e}_i^T \Psi_j^T (2 \gamma_j \Psi_j \Sigma_j \Psi_j^T)^{-1} \Psi_j (M - P)\bm{e}_j = \bm{e}_i^T Q_j (M - P) \bm{e}_j. \end{align*} The tuple $(W, P)$ is stable if $W=W^T$.
So, for all $i<j$, we require \begin{align} W_{ij} &= W_{ji} \nonumber\\ \Leftrightarrow \bm{e}_i^T Q_j (M - P) \bm{e}_j &= \bm{e}_j^T Q_i (M - P) \bm{e}_i \nonumber\\ \Leftrightarrow \bm{e}_i^T Q_j M \bm{e}_j - \bm{e}_j^T Q_i M \bm{e}_i &= \bm{e}_i^T Q_j P \bm{e}_j - \bm{e}_j^T Q_i P \bm{e}_i \nonumber\\ \Leftrightarrow \bm{e}_i^T (A-A^T) \bm{e}_j &= \bm{e}_i^T (Q_j P - (Q_i P)^T) \bm{e}_j.\label{eq:stability1} \end{align} Since $P=-P^T$, we must have $P=R - R^T$, where $R$ is upper-triangular with zero on the diagonal. Hence, using $Q_i=Q_i^T$, we have \begin{align*} \bm{e}_i^T (Q_j P - (Q_i P)^T) \bm{e}_j &= \bm{e}_i^T (Q_j P + P Q_i) \bm{e}_j\\ &= \mathrm{tr}\, P (\bm{e}_j\bm{e}_i^T Q_j + Q_i\bm{e}_j\bm{e}_i^T)\\ &= \mathrm{tr}\, (R - R^T) (B_{(j,i)} + B_{(i,j)}^T)\\ &= \mathrm{tr}\, R^T C_{(i,j)}\\ &= \textrm{uvec}(R)^T \textrm{uvec}(C_{(i,j)}), \end{align*} where we used the upper-triangular nature of $R$ in the last step. Plugging into Eq.~\ref{eq:stability1}, a stable point exists if there is an appropriate vector $\bm{p}:=\textrm{uvec}(R)\in\mathbb{R}^{n(n-1)/2}$ such that for all $1 \leq i < j \leq n$, $\bm{e}_i^T (A-A^T) \bm{e}_j = \textrm{uvec}(C_{(i,j)})^T \bm{p}$. This is equivalent to $\textrm{uvec}(A-A^T) = Z \bm{p}$. \smallskip\noindent{\bf (2) Disallowed edges.} If $\{i, j\}$ is a prohibited edge then $\Psi_i \bm{e}_j = \Psi_j \bm{e}_i = \bm{0}$, so $B_{(i, j)} = B_{(j, i)} = 0$, so $\bm{e}_{ij}^T Z = \bm{0}^T$. Also, $A_{ij} = A_{ji} = 0$, so $\textrm{uvec}(A - A^T)_{ij} = 0$. Therefore, the equality $\bm{e}_i^T (A-A^T) \bm{e}_j = \textrm{uvec}(C_{(i,j)})^T \bm{x}$ is achieved for any solution vector $\bm{x}$ if $\{i, j\}$ is a prohibited edge. We can therefore reduce the linear system $Z \bm{p} = \textrm{uvec}(A-A^T)$ from part (1) by deleting rows of $Z$ corresponding to prohibited edges. Similarly, since the system is constrained by $\bm{p}_{ij} = 0$ for prohibited edges $\{i, j\}$, the columns of $Z$ corresponding to such edges have no effect on the solution set. We conclude that the linear system in (1) is equivalent to the (unconstrained) reduced system $Z_F \bm{p}_F = \textrm{uvec}(A - A^T)_F$. If $Z_F$ has full rank then the unique reduced solution is $\bm{p}_F = Z_F^{-1} \textrm{uvec}(A - A^T)_F$. \Halmos \endproof
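The linear-algebraic construction in this proof is easy to exercise numerically. The following sketch (Python with numpy; randomly generated data, the all-edges case $\Psi_i = I_n$, and variable names of our own choosing rather than from any accompanying code) assembles $Z$ from the matrices $B_{(i,j)}$ and $C_{(i,j)}$, solves $Z \bm{p} = \textrm{uvec}(A - A^T)$, and verifies that the resulting pair $(W, P)$ is stable:
\begin{verbatim}
import numpy as np

# Numerical sanity check of the construction in part (1) of the proof,
# with Psi_i = I_n and randomly generated data.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
gamma = rng.uniform(0.5, 2.0, size=n)
Q = []
for i in range(n):
    G = rng.standard_normal((n, n))
    Sigma_i = G @ G.T + n * np.eye(n)     # random SPD covariance
    Q.append(np.linalg.inv(2.0 * gamma[i] * Sigma_i))  # Q_i for Psi_i = I_n

edges = [(i, j) for i in range(n) for j in range(i + 1, n)]

def uvec(X):
    # strict upper-triangular entries of X, in edge order
    return np.array([X[i, j] for (i, j) in edges])

def B(i, j):
    # B_(i,j) = e_i e_j^T Q_i: row i equals row j of Q_i, other rows zero
    out = np.zeros((n, n))
    out[i, :] = Q[i][j, :]
    return out

A = np.zeros((n, n))
for j in range(n):
    A[:, j] = (Q[j] @ M)[:, j]            # A_ij = e_i^T Q_j M e_j

Z = np.zeros((len(edges), len(edges)))
for r, (i, j) in enumerate(edges):
    D = B(i, j) - B(j, i)
    Z[r, :] = uvec(D - D.T)               # row r = uvec(C_(i,j))

p = np.linalg.solve(Z, uvec(A - A.T))     # assumes Z is full rank (part 2)
R = np.zeros((n, n))
for r, (i, j) in enumerate(edges):
    R[i, j] = p[r]                        # strictly upper-triangular R
P = R - R.T                               # antisymmetric price matrix
W = np.column_stack([Q[j] @ (M - P)[:, j] for j in range(n)])
print(np.allclose(W, W.T))                # stable point: prints True
\end{verbatim}
For generic data $Z$ is full rank, so the solve succeeds and the symmetry check passes; a singular $Z$ would instead correspond to the existence condition in part (1).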
\subsection{Proof of Theorem \ref{thm:stable-common}} We begin with a perturbation lemma. \begin{lemma}\label{lemma:perturb-inv} Let $F$ and $Z_F$ be defined as in Theorem~\ref{thm:stable}, and let $\underline{\lambda}(A) \vcentcolon= \min \{\abs{\lambda}: \lambda \in \sigma(A) \setminus \{0\}\}$. Then for all $\epsilon \in (0, \underline{\lambda}(Z_F))$, the perturbed matrix $Z_F + \epsilon I_{|F|}$ is full rank. \end{lemma} \proof{Proof of Lemma \ref{lemma:perturb-inv}.} The eigenvalues of $(Z_F + \epsilon I)$ are given by $\{\lambda + \epsilon: \lambda \in \sigma(Z_F)\}$. So, any zero eigenvalues are shifted to $\epsilon > 0$. For all other eigenvalues $\lambda \in \sigma(Z_F) \setminus \{0\}$, we have $\lambda+\epsilon\neq 0$ since $\epsilon < \underline{\lambda}(Z_F)$. Hence, $Z_F+\epsilon I_{|F|}$ is full rank. \Halmos \endproof \begin{repeattheorem}[Restatement of Theorem \ref{thm:stable-common}] For any network setting $S=(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$ and any $\epsilon>0$, there exist uncountably many network settings $S^\prime=(\bm{\mu}_i, \gamma_i, \Sigma_i^\prime, \Psi_i)_{i \in [n]}$ such that $\|\Sigma_i^\prime-\Sigma_i\|<\epsilon$ for all $i\in[n]$, and $S^\prime$ has a unique stable point. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:stable-common}.} Define $F$, $Z_F$ and $Q$ as in Theorem~\ref{thm:stable}. Let $Z_F^\prime = Z_F + \beta I_{\abs{F}}$ for some $\beta>0$. For $\beta$ small enough, Lemma~\ref{lemma:perturb-inv} and Theorem~\ref{thm:stable} show that if $Z_F^\prime$ arises according to a network setting $S^\prime$, then that setting $S^\prime$ has a unique stable point. Therefore, it remains to show that there exists an $S^\prime$ such that the matrix of the corresponding system is $Z_F^\prime$. For $i \in [n]$ let $X_i \in \mathbb{R}^{n \times n}$ be a diagonal matrix such that for $j \in [n]$, \begin{align*} (X_i)_{j, j} = \begin{cases} 1 & (i,j)\in F \\ 0 & \text{otherwise} \end{cases} \end{align*} We claim that $S^\prime$ results in the system $Z_F^\prime$ if for all $i \in [n]$, we define $\Sigma_i^\prime$ such that $\Psi_i \Sigma_i^\prime \Psi_i^T = \frac{1}{2 \gamma_i} (\Psi_i (Q_i + \beta X_i) \Psi_i^T)^{-1}$. The entries of $\Sigma_i^\prime$ and $\Sigma_i$ are kept the same outside of the submatrix selected by $\Psi_i \Sigma_i\Psi_i^T$. To see that this choice of $S^\prime$ results in $Z^\prime$, notice that $Q_i^\prime = \Psi_i^T (2 \gamma_i \Psi_i \Sigma_i^\prime \Psi_i^T)^{-1} \Psi_i = \Psi_i^T \Psi_i (Q_i + \beta X_i) \Psi_i^T \Psi_i$. This has the effect of shifting the diagonal of $Q_i$: \begin{align*} (Q_i^\prime)_{j, k} = \begin{cases} (Q_i)_{j, k} + \beta & j=k, (i,j)\in F \\ (Q_i)_{j, k} & \text{otherwise} \end{cases} \end{align*} Working through the matrices $B_{(i,j)}$ and $C_{(i,j)}$ in the statement of Theorem~\ref{thm:stable}, we find that $Z_F^\prime = Z_F + \beta I_{\abs{F}}$ under network setting $S^\prime$. Next, we want to bound $\|\Sigma_i^\prime-\Sigma_i\|$. Since $\Sigma_i^\prime$ has entries equal to that of $\Sigma_i$ outside of the submatrix selected by $\Psi_i \Sigma_i \Psi_i^T$, and since both matrices are positive definite symmetric, a permutation argument implies that to bound the spectral norm it suffices to analyze the principal submatrix that is changed. That is, $\| \Sigma_i - \Sigma_i^\prime \|_2 = \| \Psi_i (\Sigma_i - \Sigma_i^\prime) \Psi_i^T \|_2$. Let $r_i \vcentcolon= \| \beta (\Psi_i Q_i \Psi_i^T)^{-1} (\Psi_i X_i \Psi_i^T) \|$. We choose $\beta$ small enough that $r_i < 1$ for all $i\in [n]$. Then, using the power series identity $(I - X)^{-1} = \sum_{k=0}^{\infty} X^k$ for $\| X \| < 1$, we have: \begin{align*} \| \Psi_i (\Sigma_i - \Sigma_i^\prime) \Psi_i^T \| &= \frac{1}{2 \gamma_i} \| (\Psi_i (Q_i + \beta X_i) \Psi_i^T)^{-1} - (\Psi_i Q_i \Psi_i^T)^{-1} \| \\ &= \frac{1}{2 \gamma_i} \bigg\| \bigg( (\Psi_i Q_i \Psi_i^T) (I + \beta (\Psi_i Q_i \Psi_i^T)^{-1} (\Psi_i X_i \Psi_i^T)) \bigg)^{-1} - (\Psi_i Q_i \Psi_i^T)^{-1} \bigg\| \\ &= \frac{1}{2 \gamma_i} \bigg\| \bigg( \big( I + \beta (\Psi_i Q_i \Psi_i^T)^{-1} (\Psi_i X_i \Psi_i^T) \big)^{-1} - I \bigg) (\Psi_i Q_i \Psi_i^T)^{-1} \bigg\| \\ &\leq \frac{\| (\Psi_i Q_i \Psi_i^T)^{-1} \|}{2 \gamma_i} \sum\limits_{k=1}^{\infty} \bigg\| \bigg(-\beta (\Psi_i Q_i \Psi_i^T)^{-1} (\Psi_i X_i \Psi_i^T) \bigg)^k \bigg\|\\ &= \frac{\| (\Psi_i Q_i \Psi_i^T)^{-1} \|}{2 \gamma_i} \frac{r_i}{1-r_i}. \end{align*} Thus, for $\beta$ small enough (and hence $\{r_i\}_{i\in[n]}$ small enough), $\|\Sigma_i -\Sigma^\prime_i\|<\epsilon$ for all $i\in[n]$. \Halmos \endproof \subsection{Proof of Corollary \ref{cor:stability:sharedSigma}} \begin{repeatcorollary}[Restatement of Corollary \ref{cor:stability:sharedSigma}]
Suppose $\Sigma_i=\Sigma$ and $\Psi_i=I_n$ for all $i\in[n]$. Let $(\lambda_i, \bm{v}_i)$ denote the $i^{th}$ eigenvalue and eigenvector of $\Gamma^{-1/2}\Sigma\Gamma^{-1/2}$. Then, the network $W$ can be written in two equivalent ways: \begin{align*} \textrm{vec}(W) &= \frac 1 2 (\Gamma\otimes \Sigma + \Sigma \otimes \Gamma)^{-1}\textrm{vec}(M+M^T),\\ W &= \Gamma^{-1/2}\left( \sum_{i=1}^n \sum_{j=1}^n \frac{\bm{v}_i^T \Gamma^{-1/2}(M+M^T)\Gamma^{-1/2} \bm{v}_j}{2 (\lambda_i + \lambda_j)} \bm{v}_i \bm{v}_j^T \right) \Gamma^{-1/2}. \end{align*} \end{repeatcorollary} \proof{Proof.} We first prove the identity with $\textrm{vec}(W)$. For each agent $i$ the optimal set of contracts is given as $\bm{w}_i = (2 \gamma_i \Sigma_i)^{-1} (M - P)\bm{e}_i$. Since $\Sigma_i = \Sigma$ for all $i$, we obtain $W = \frac{1}{2} \Sigma^{-1} (M - P) \Gamma^{-1}$. Hence $M - P = 2 \Sigma W \Gamma$. Using $W = W^T$ and $P^T = -P$ for a stable feasible point $(W, P)$, we obtain $\Sigma W \Gamma + \Gamma W \Sigma = \frac{1}{2}(M + M^T)$. Vectorization implies $(\Gamma\otimes \Sigma + \Sigma \otimes \Gamma) \textrm{vec}(W) = \frac 1 2 \textrm{vec}(M+M^T)$. It remains to show that $(\Gamma\otimes \Sigma + \Sigma \otimes \Gamma)$ is invertible. Let $K \vcentcolon= (\Gamma\otimes \Sigma + \Sigma \otimes \Gamma)$ for shorthand. Notice $K = (\Gamma^{1/2} \otimes \Gamma^{1/2}) (I \otimes \Gamma^{-1/2} \Sigma \Gamma^{-1/2} + \Gamma^{-1/2} \Sigma \Gamma^{-1/2} \otimes I) (\Gamma^{1/2} \otimes \Gamma^{1/2})$. Let $K^\prime = (I \otimes \Gamma^{-1/2} \Sigma \Gamma^{-1/2} + \Gamma^{-1/2} \Sigma \Gamma^{-1/2} \otimes I)$. Since $(\Gamma^{1/2} \otimes \Gamma^{1/2})$ is invertible it suffices to show $K^\prime$ is invertible. Properties of Kronecker products imply that if a matrix $A \in \mathbb{R}^{n \times n}$ has strictly positive eigenvalues then $\sigma(I \otimes A + A \otimes I) = \{\lambda + \mu: \lambda, \mu \in \sigma(A)\}$ counting multiplicities \citep{horn-johnson-topics-2008}. Let $\bm{v} \neq \bm{0}$. Then, since $\Sigma \succ 0$ and $\Gamma^{-1/2} \succ 0$ we obtain $\bm{v}^T \Gamma^{-1/2} \Sigma \Gamma^{-1/2} \bm{v} = (\Gamma^{-1/2} \bm{v})^T \Sigma (\Gamma^{-1/2} \bm{v}) > 0$. Hence $\Gamma^{-1/2} \Sigma \Gamma^{-1/2} \succ 0$, so $K^\prime$ is invertible and hence $K$ is invertible. This proves the first identity. Next, we prove the second identity. Properties of Kronecker products imply that $(K^\prime)^{-1}$ has eigendecomposition $(K^\prime)^{-1} = \sum_{i=1}^n \sum_{j=1}^n \frac{1}{\lambda_i + \lambda_j} (\bm{v}_i \otimes \bm{v}_j) (\bm{v}_i \otimes \bm{v}_j)^T$.
Therefore, since $(\Gamma^{1/2} \otimes \Gamma^{1/2})^{-1} = (\Gamma^{-1/2} \otimes \Gamma^{-1/2})$ we obtain: \begin{align*} \textrm{vec}(W) &= (\Gamma^{-1/2} \otimes \Gamma^{-1/2}) \sum_{i=1}^n \sum_{j=1}^n \frac{1}{\lambda_i + \lambda_j} (\bm{v}_i \otimes \bm{v}_j) (\bm{v}_i \otimes \bm{v}_j)^T (\Gamma^{-1/2} \otimes \Gamma^{-1/2}) \textrm{vec}\big(\frac{M + M^T}{2}\big) \\ &= (\Gamma^{-1/2} \otimes \Gamma^{-1/2}) \sum_{i=1}^n \sum_{j=1}^n \frac{1}{2(\lambda_i + \lambda_j)} (\bm{v}_i \bm{v}_i^T \otimes \bm{v}_j \bm{v}_j^T) \textrm{vec}\big(\Gamma^{-1/2} (M + M^T) \Gamma^{-1/2} \big) \\ &= (\Gamma^{-1/2} \otimes \Gamma^{-1/2}) \textrm{vec}\left( \sum_{i=1}^n \sum_{j=1}^n \frac{\bm{v}_i^T \Gamma^{-1/2}(M+M^T)\Gamma^{-1/2} \bm{v}_j}{2 (\lambda_i + \lambda_j)} \bm{v}_i \bm{v}_j^T \right) \\ W &= \Gamma^{-1/2} \left( \sum_{i=1}^n \sum_{j=1}^n \frac{\bm{v}_i^T \Gamma^{-1/2}(M+M^T)\Gamma^{-1/2} \bm{v}_j}{2 (\lambda_i + \lambda_j)} \bm{v}_i \bm{v}_j^T \right) \Gamma^{-1/2}. \end{align*} \Halmos \endproof \subsection{Proof of Theorem \ref{thm:domination}} \begin{repeattheorem}[Restatement of Theorem \ref{thm:domination}]. Let $(W^*, P^*)$ be a stable feasible point. Then there is no feasible $(W, P)$ such that $(W, P) \succ (W^*, P^*)$. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:domination}.} {\em Case 1: $P = P^*$.} First, consider a feasible $(W, P)$ such that $P = P^*$. Then $W \neq W^*$. Since $W^*$ is stable, by definition each agent optimizes contracts with respect to $P^*$, so no agent is worse off under $(W^*, P^*)$ than $(W, P^*)$. Hence $(W, P) \not\succ (W^*, P^*)$. {\em Case 2: $P \neq P^*$.} Second, suppose that $P \neq P^*$. Let $\Delta_i \vcentcolon= g_i(W, P) - g_i(W, P^*)$. It follows that $\Delta_i = (W \bm{e}_i)^T ((P^* - P) \bm{e}_i)$. Let $A \in \mathbb{R}^{n \times n}$ be defined as $A_{ij} = W_{ij} (P_{ij}^* - P_{ij})$. Then $\Delta_i = \bm{e}_i^T A \bm{1}$. Next, notice that $A_{ji} = - A_{ij}$. Therefore, $\sum_i \Delta_i = \bm{1}^T A \bm{1} = 0$. Hence, either $\Delta_i = 0$ for all $i$, or there exists $k$ such that $\Delta_k < 0$. {\em Case 2(i)}. Suppose there exists $k$ such that $\Delta_k < 0$. Then $g_k(W, P) < g_k(W, P^*)$. By case $1$, we have $g_k(W, P^*) \leq g_k(W^*, P^*)$. Therefore agent $k$ is strictly worse off, so $(W, P) \not\succ (W^*, P^*)$. {\em Case 2(ii)}. Suppose $\Delta_i = 0$ for all $i$. Then $g_i(W, P) = g_i(W, P^*)$ for all $i$. And by case $1$, we have $g_i(W, P^*) \leq g_i(W^*, P^*)$. Therefore no agent is better off, so $(W, P) \not\succ (W^*, P^*)$. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:pairwise-nash}} \begin{repeattheorem}[Restatement of Theorem \ref{thm:pairwise-nash}] Any stable point $(W, P)$ is Higher-Order Nash Stable. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:pairwise-nash}.} First, we argue $(W, P)$ is a Nash equilibrium. Suppose that agent $i$ wants to shift some of their contracts at the stable feasible point $(W, P)$. Suppose they propose $(w_{i, j_1}^\prime, p_{i, j_1}^\prime), \dots, (w_{i, j_m}^\prime, p_{i, j_m}^\prime)$ for $j_1, \dots, j_m \in [n]$. Let $(W^\prime, P^\prime)$ denote the new feasible point that occurs if all changes are accepted. By Theorem \ref{thm:domination} we know that $(W^\prime, P^\prime) \not\succ (W, P)$, so at least one agent does not prefer $(W^\prime, P^\prime)$. Since the only changes are to edges $\{i, j_1\}, \dots, \{i, j_m\}$, there must exist a $j \in \{j_1, \dots, j_m\}$ who does not prefer $(W^\prime, P^\prime)$.
Therefore, they will reject the proposal of agent $i$ to shift to $(w_{ij}^\prime, p_{ij}^\prime)$. Then, agent $i$ can choose to either maintain the existing contract $(w_{ij}, p_{ij})$ or delete the edge $\{i, j\}$. We claim that agent $i$ prefers to keep the edge: agent $i$ could have chosen to set $w_{ij} = 0$ during the network formation process, no matter what price was offered, but $w_{ij} \neq 0$ at equilibrium $(W, P)$. By stability of $(W, P)$ we know $w_{ij}$ is the optimal choice for agent $i$ at prices $P$. Therefore, after agent $j$ rejects $(w_{ij}^\prime, p_{ij}^\prime)$, it follows that the edge remains at $(w_{ij}, p_{ij})$. Since $(W^\prime, P^\prime)$ was arbitrary, we conclude that at equilibrium, agent $i$ cannot propose any set of changes that results in a strictly better network for them. Therefore, their optimal action at $(W, P)$ is to not deviate from the equilibrium. Next, we show cartel robustness. Suppose $S \subset [n]$ is a strict subset and $(W^\prime, P^\prime) \neq (W, P)$ is a feasible point differing only at indices $\{i, j\}$ such that $i, j \in S$. By Theorem \ref{thm:domination}, we know $(W^\prime, P^\prime)$ cannot dominate $(W, P)$, so there is some agent $k \in [n]$ that does not prefer $(W^\prime, P^\prime)$ to $(W, P)$. Since $(W^\prime, P^\prime)$ only changes contracts where both members are in $S$, the utility of agents in $[n]\setminus S$ must be unchanged. Therefore $k \in S$, and hence not all members of the cartel have higher utility under $(W^\prime, P^\prime)$. \Halmos \endproof \subsection{Proof of Proposition \ref{price-update-rule}} \begin{repeatproposition}[Restatement of Proposition \ref{price-update-rule}] Consider a network setting $(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$. Let $Q_i$ be as in Theorem \ref{thm:stable}. Given a price matrix $P=-P^T$ and a pair of firms $(i,j)$ that are permitted to trade, let $P^\prime$ be another skew-symmetric price matrix such that (a) $P^\prime$ differs from $P$ only in the cells $(i,j)$ and $(j,i)$, (b) $i$ and $j$ both maximize their utility at the same contract size under $P^\prime$, and (c) $i$ and $j$ can choose their optimal contract sizes with all other agents given these prices. Then, \begin{align*} P^\prime_{ij} &= \frac{1}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j (M - P) \bm{e}_j - \bm{e}_j^T Q_i (M - P) \bm{e}_i \Big) + P_{ij} \end{align*} \end{repeatproposition} \proof{Proof.} Let $A_i \vcentcolon= \gamma_i Q_i$ for $i \in [n]$. Since $\Sigma_i \succ 0$ and $\Psi_i \Sigma_i \Psi_i^T$ is a principal submatrix, we know $\Psi_i \Sigma_i \Psi_i^T$ is real symmetric and positive definite, and hence its inverse is as well. Therefore $A_i$ is real symmetric and PSD. (It is not full rank in general, unless $\Psi_i = I$.) Since $\{i, j\}$ is a permitted edge, $\Psi_i \bm{e}_j \neq \bm{0}$ and $\Psi_j \bm{e}_i \neq \bm{0}$. Therefore $A_{i;j,j} = \bm{e}_j^T A_i \bm{e}_j = (\Psi_i \bm{e}_j)^T (2 \Psi_i \Sigma_i \Psi_i^T)^{-1} (\Psi_i \bm{e}_j) > 0$ since $(2 \Psi_i \Sigma_i \Psi_i^T)^{-1}$ is positive definite. So, $A_{i;j,j} > 0$ and similarly $A_{j;i,i} > 0$. Now, the optimal contracts for agent $i$ under prices $P^\prime$ are given by $\bm{w}_i = A_i (M - P^\prime) \Gamma^{-1} \bm{e}_i$. Note that $P^\prime = P + (P^\prime_{ij}-P_{ij})(\bm{e}_i\bm{e}_j^T - \bm{e}_j\bm{e}_i^T)$.
Since both $i$ and $j$ maximize their utility at the same contract size, we have: \begin{align*} \bm{w}_{i;j} &= \bm{w}_{j;i} \\ \Rightarrow \bm{e}_j^T \bm{w}_{i} &= \bm{e}_i^T \bm{w}_{j} \\ \Rightarrow \bm{e}_j^T (A_i (M - P^\prime) \Gamma^{-1}) \bm{e}_i &= \bm{e}_i^T (A_j (M - P^\prime) \Gamma^{-1}) \bm{e}_j \\ \Rightarrow \gamma_j \bm{e}_j^T A_i M \bm{e}_i - \gamma_i \bm{e}_i^T A_j M \bm{e}_j &= \gamma_j \bm{e}_j^T A_i P^\prime \bm{e}_i - \gamma_i \bm{e}_i^T A_j P^\prime \bm{e}_j \\ &= \gamma_j \bm{e}_j^T A_i P \bm{e}_i - \gamma_i \bm{e}_i^T A_j P \bm{e}_j - (P^\prime_{ij}-P_{ij})\left( \gamma_j \bm{e}_j^T A_i \bm{e}_j + \gamma_i \bm{e}_i^T A_j \bm{e}_i \right)\\ \Rightarrow P^\prime_{ij} - P_{ij} &= \frac{1}{\gamma_j A_{i;j,j} + \gamma_i A_{j;i,i}} \Big(\bm{e}_i^T \Gamma A_j (M - P) \bm{e}_j - \bm{e}_j^T \Gamma A_i (M - P) \bm{e}_i \Big) \\ &= \frac{1}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j (M - P) \bm{e}_j - \bm{e}_j^T Q_i (M - P) \bm{e}_i \Big) \end{align*} \Halmos \endproof \subsection{Proof of Theorem \ref{thm:asymp}\label{dynamics-with-masking-appendix}} First, we characterize pairwise negotiation dynamics as linear in the price updates. \begin{theorem} Consider a network setting $(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$. Define $Q_i$ as in Theorem~\ref{thm:stable}. Let $s_{ij}=1$ if $\{i, j\}$ is a permitted edge and $0$ otherwise. Let $L, R \in \mathbb{R}^{n^2 \times n^2}$ be diagonal matrices such that $L_{(i-1)n + j, (i-1)n + j} = Q_{i;jj} + Q_{j;ii}$ and $R_{(i-1)n+j, (i-1)n+j} = s_{ij}$, and $L^\dagger$ be the pseudoinverse of $L$. Let $\Delta_{(t + 1)} = P(t + 1) - P(t)$, where $P(t)$ is the price matrix at time step $t$ of pairwise negotiations. Then, \begin{align*} \textrm{vec}(\Delta_{(t+1)}) &= R \Big(I_{n^2} -\eta L^\dagger K \Big) \textrm{vec}(\Delta_{(t)}), & \text{where } K &= \sum\limits_{r=1}^{n} \big( \bm{e}_r \bm{e}_r^T \otimes Q_r + Q_r \otimes \bm{e}_r \bm{e}_r^T \big). \end{align*} \label{thm:linearization-dynamics-general} \end{theorem} \proof{Proof.} Let $\{i, j\}$ be a permitted edge. From Proposition~\ref{price-update-rule}, with each price moving a fraction $\eta$ (the step size of the negotiation dynamics) of the way towards the pairwise-optimal price at each step, we obtain: \begin{align*} (\Delta_{(t + 1)})_{ij} &= \frac{\eta}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j (M - P(t)) \bm{e}_j - \bm{e}_j^T Q_i (M - P(t)) \bm{e}_i \Big) \\ \Rightarrow (\Delta_{(t + 1)})_{ij} - (\Delta_{(t)})_{ij} &= \frac{\eta}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j (-\Delta_{(t)}) \bm{e}_j - \bm{e}_j^T Q_i (-\Delta_{(t)}) \bm{e}_i \Big) \\ &= \frac{- \eta}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j \Delta_{(t)} \bm{e}_j - \bm{e}_j^T Q_i \Delta_{(t)} \bm{e}_i \Big) \\ &= \frac{- \eta}{Q_{i;j,j} + Q_{j;i,i}} \bm{e}_i^T \Big(Q_j \Delta_{(t)} - (Q_i \Delta_{(t)})^T \Big) \bm{e}_j \\ \Rightarrow (Q_{i;j,j} + Q_{j;i,i}) \Big((\Delta_{(t + 1)})_{ij} - (\Delta_{(t)})_{ij}\Big) &= -\eta s_{ij} \bm{e}_i^T \Big(Q_j \Delta_{(t)} + \Delta_{(t)} Q_i \Big) \bm{e}_j. \end{align*} We assumed that $\{i, j\}$ was a permitted edge above, but notice the identity is also true for prohibited $\{i, j\}$ since both the numerator and denominator become $0$, and we can define their ratio to be $0$. Defining $Y_{ij} = \bm{e}_i^T \Big(Q_j \Delta_{(t)} + \Delta_{(t)} Q_i \Big) \bm{e}_j$, and recalling the definitions of $L$ and $R$ from the theorem statement, the above formula becomes \begin{align} L \textrm{vec}(\Delta_{(t+1)} - \Delta_{(t)}) &= -\eta R \textrm{vec}(Y). \label{eq:LRtmp1} \end{align} We show next that $\textrm{vec}(Y) = K \textrm{vec}(\Delta_{(t)})$, where $K$ is defined in the theorem statement.
Let $tr$ denote the trace operator. Then, \begin{align*} (\bm{e}_j^T\otimes \bm{e}_i^T) \textrm{vec}(Y) = Y_{ij} &= \bm{e}_i^T \Big(Q_j \Delta_{(t)} + \Delta_{(t)} Q_i \Big) \bm{e}_j \\ &= tr\Big(\bm{e}_i^T Q_j \Delta_{(t)} \bm{e}_j\Big) + tr\Big(\bm{e}_i^T\Delta_{(t)} Q_i \bm{e}_j \Big) \\ &= tr\Big(\bm{e}_j^T \Delta_{(t)}^T Q_j^T \bm{e}_i\Big) + tr\Big(\bm{e}_i^T\Delta_{(t)} Q_i \bm{e}_j \Big) \\ &= tr\Big(\Delta_{(t)}^T Q_j^T \bm{e}_i \bm{e}_j^T \Big) + tr\Big(Q_i \bm{e}_j \bm{e}_i^T \Delta_{(t)} \Big) \\ &= \textrm{vec}(\Delta_{(t)})^T \textrm{vec}(Q_j^T \bm{e}_i \bm{e}_j^T + (Q_i \bm{e}_j \bm{e}_i^T)^T ) \\ &= \textrm{vec}(Q_j \bm{e}_i \bm{e}_j^T + \bm{e}_i \bm{e}_j^T Q_i)^T \textrm{vec}(\Delta_{(t)}), \end{align*} where we used $Q_i=Q_i^T$. Hence we need to show $(\bm{e}_j^T \otimes \bm{e}_i^T) K = \textrm{vec}(Q_j \bm{e}_i \bm{e}_j^T + \bm{e}_i \bm{e}_j^T Q_i)^T$. Now, \begin{align} (\bm{e}_j^T \otimes \bm{e}_i^T) K &= (\bm{e}_j^T \otimes \bm{e}_i^T) (\sum\limits_{r=1}^{n} \bm{e}_r \bm{e}_r^T \otimes Q_r + Q_r \otimes \bm{e}_r \bm{e}_r^T) \nonumber\\ &= \sum\limits_{r=1}^{n} \delta_{jr} (\bm{e}_j^T \otimes \bm{e}_i^T Q_r) + \delta_{ir} (\bm{e}_j^T Q_r \otimes \bm{e}_i^T)\nonumber\\ &= (\bm{e}_j^T \otimes \bm{e}_i^T Q_j) + (\bm{e}_j^T Q_i \otimes \bm{e}_i^T)\nonumber\\ &= \left( \bm{e}_j\otimes Q_j \bm{e}_i + Q_i\bm{e}_j\otimes \bm{e}_i \right)^T.\label{eq:RK} \end{align} Now, we observe that $\bm{e}_j\otimes Q_j\bm{e}_i$ is the vectorization of a matrix whose $j^{th}$ column is $Q_j\bm{e}_i$, i.e., the matrix $Q_j\bm{e}_i\bm{e}_j^T$. Similarly, $Q_i\bm{e}_j\otimes \bm{e}_i$ is the vectorization of a matrix whose $i^{th}$ row is $(Q_i\bm{e}_j)^T$, i.e., the matrix $\bm{e}_i\bm{e}_j^TQ_i$. Hence, $(\bm{e}_j^T \otimes \bm{e}_i^T) K = \textrm{vec}(Q_j\bm{e}_i\bm{e}_j^T+\bm{e}_i\bm{e}_j^TQ_i)^T$, as desired. Plugging into Eq.~\ref{eq:LRtmp1}, \begin{align*} L \textrm{vec}(\Delta_{(t+1)} - \Delta_{(t)}) &= -\eta RK \textrm{vec}(\Delta_{(t)}) \\ \Rightarrow L \textrm{vec}(\Delta_{(t+1)}) &= L \textrm{vec}(\Delta_{(t)}) -\eta R K\textrm{vec}(\Delta_{(t)}) \\ \Rightarrow \textrm{vec}(\Delta_{(t+1)}) &= \Big(L^\dagger L -\eta L^\dagger R K \Big) \textrm{vec}(\Delta_{(t)}) \\ \Rightarrow \textrm{vec}(\Delta_{(t+1)}) &= \Big(R -\eta R L^\dagger K \Big) \textrm{vec}(\Delta_{(t)}) \\ &= R \Big(I_{n^2} -\eta L^\dagger K \Big) \textrm{vec}(\Delta_{(t)}), \end{align*} where we used the facts that $(\Delta_{(t)})_{ij}=(\Delta_{(t+1)})_{ij}=0$ for disallowed edges, and $L^\dagger L = R$ and $LR = RL = L$, which can be easily confirmed by inspection of these diagonal matrices. \Halmos \endproof We use Lyapunov theory to analyze the convergence of pairwise negotiation dynamics. In particular, we need the discrete Lyapunov equation, also called the Stein equation. \begin{theorem}[\cite{callier-desoer} 7.d] \label{thm:callier} For the discrete-time dynamical system $\bm{x}_{t + 1} = A \bm{x}_{t}$, with $\bm{x}_t \in \mathbb{R}^n$, the following are equivalent: \begin{enumerate} \item The system is globally asymptotically stable towards $\bm{0}$. \item For any positive definite $R \in \mathbb{R}^{n \times n}$, there exists a unique solution $X \succ 0$ to the equation $$A X A^T - X = - R$$ \item For any eigenvalue $\lambda$ of $A$, $\abs{\lambda}<1$. \end{enumerate} \end{theorem} Pairwise negotiation dynamics can be described as a discrete-time linear system in $\textrm{vec}(\Delta_{(t)})$, where $\Delta_{(t)}$ is the price difference at time $t$.
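As a concrete illustration, the following minimal sketch (in Python with \texttt{numpy}; all constants and variable names are ours, purely for illustration) simulates the synchronous pairwise price updates of Theorem~\ref{thm:linearization-dynamics-general} for a small network with all edges permitted, using a step size below the threshold $\eta^\star$ established in Proposition~\ref{real-part-eigs-condition-stability-prop} below. At the resulting fixed point the contract matrix is symmetric, as required of a stable point:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 5
gamma = rng.uniform(0.5, 1.5, size=n)        # risk aversions
A = rng.normal(size=(n, n))
Sigma = A @ A.T + n * np.eye(n)              # shared covariance, PD
M = rng.normal(size=(n, n))                  # expected-return beliefs
Q = [np.linalg.inv(2 * g * Sigma) for g in gamma]

# The matrices L and K of the linearized dynamics (no prohibited edges).
L = np.zeros((n * n, n * n))
K = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        L[i * n + j, i * n + j] = Q[i][j, j] + Q[j][i, i]
for r in range(n):
    E = np.zeros((n, n)); E[r, r] = 1.0
    K += np.kron(E, Q[r]) + np.kron(Q[r], E)
eta_star = 2.0 / max(abs(np.linalg.eigvals(np.linalg.solve(L, K))))
eta = 0.9 * eta_star

# Synchronous pairwise negotiations (price-update rule, step size eta).
P = np.zeros((n, n))
for t in range(500):
    P_new = P.copy()
    for i in range(n):
        for j in range(i + 1, n):
            num = Q[j][i] @ (M - P)[:, j] - Q[i][j] @ (M - P)[:, i]
            step = eta * num / (Q[i][j, j] + Q[j][i, i])
            P_new[i, j] += step
            P_new[j, i] -= step
    P = P_new

# At the (approximate) stable point, contracts agree: W[i,j] == W[j,i].
W = np.column_stack([Q[i] @ (M - P)[:, i] for i in range(n)])
print(np.max(np.abs(W - W.T)))  # close to zero
\end{verbatim}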
Clearly, the system converges iff $\Delta_{(t)}$ approaches zero. Therefore, we can use the Stein equation to prove global asymptotic stability conditions. We will also need the {\em commutation matrix}. \begin{lemma}[\cite{horn-johnson-topics-2008}] Let $\Pi^{(n, n)}: \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ be a permutation matrix (called the {\em $(n, n)$ commutation matrix}) defined as $\Pi^{(n, n)} = \sum\limits_{i = 1}^{n} \sum\limits_{j = 1}^{n} \bm{e}_i \bm{e}_j^T \otimes \bm{e}_j \bm{e}_i^T$. Then for any $A, B \in \mathbb{R}^{n \times n}$, we have \begin{align*} A \otimes B = \Pi^{(n, n)} (B \otimes A) (\Pi^{(n, n)})^T \end{align*} \label{lemma:commutation-matrix} \end{lemma} Recall that for a linear operator $T$, $\sigma(T)$ denotes the eigenvalues of $T$. We are ready to prove Part 1 of Theorem \ref{thm:asymp}. \begin{proposition}[Part 1 of Theorem \ref{thm:asymp}]\label{real-part-eigs-condition-stability-prop} Let $L, R, K$ be defined as in Theorem \ref{thm:linearization-dynamics-general}. For a matrix $X \in \mathbb{R}^{n^2 \times n^2}$ let $X\mid_R$ denote the principal submatrix of $X$ corresponding to the nonzero rows/columns of $R$. Define $\eta^\star = 2 / \left\|(L^\dagger K)\mid_R\right\|$. Then, for any $\eta \in (0, \eta^\star)$, $\textrm{vec}(\Delta_{(t)})$ is globally asymptotically stable towards $\bm{0}$. \end{proposition} \proof{Proof of Proposition \ref{real-part-eigs-condition-stability-prop}.} Let $T = R(I - \eta L^\dagger K)$. By Theorem~\ref{thm:callier}, the dynamics are globally asymptotically stable towards $\bm{0}$ iff for all $\lambda \in \sigma(T)$, we have $\abs{\lambda} < 1$. From Eq.~\ref{eq:RK} for a prohibited edge $(i,j)$, we see that $(\bm{e}_j^T\otimes \bm{e}_i^T)K=\bm{0}^T$, since $Q_i\bm{e}_j=\bm{0}=Q_j\bm{e}_i$. Hence, $K=RK$. Taking transposes and noting that both $K$ and $R$ are symmetric, we find $KR=K$. Hence, $T=R(I-\eta L^\dagger K)=R(I-\eta L^\dagger K)R$, where we used $R^2=R$. Thus, $T$ is zero except for the principal submatrix corresponding to the nonzero columns of $R$. So, to apply Theorem~\ref{thm:callier}, we only require $\abs{\lambda}<1$ for $\lambda \in \sigma(T\mid_R)$. For clarity of exposition we will first consider the case where $R = I$ (no prohibited edges). Then, the eigenvalues of $T\mid_R=T$ equal $1-\eta \lambda$, where $\lambda\in\sigma(L^{-1}K)=\sigma(L^{-1/2}KL^{-1/2})$ by a similarity transformation. Also, $K=U_1+U_2$, where $U_1=\sum\limits_{r =1}^{n} \big(\bm{e}_r \bm{e}_r^T \otimes Q_r\big)$ and $U_2 \vcentcolon= \sum\limits_{r =1}^{n} \big(Q_r \otimes \bm{e}_r \bm{e}_r^T \big)$. The matrix $U_1$ is block diagonal with positive-definite blocks $Q_r\succ 0$, so $U_1\succ 0$. By Lemma~\ref{lemma:commutation-matrix}, $U_2$ is similar to $U_1$ via a permutation matrix, so $U_2\succ 0$. Hence, $K\succ 0$, and $L^{-1/2}KL^{-1/2}\succ 0$. So, the eigenvalues of $L^{-1}K$ are real and positive. Hence, we have convergence iff for all $\lambda\in\sigma(L^{-1}K)$, we have $1 > (1-\eta\lambda)^2 = 1 - 2\eta\lambda + \eta^2\lambda^2$, i.e., $\lambda < 2/\eta$. Hence, $\eta^\star=2/\|L^{-1}K\|$ as required. Now we consider the prohibited edges setting ($R\neq I$). Here, convergence occurs iff $|1-\eta\lambda|<1$ for all $\lambda\in\sigma((L^\dagger K)\mid_R)$. Since $RL^\dagger R=L^\dagger$ and $RKR=K$, we have $(L^\dagger K)\mid_R = L^\dagger\mid_R K\mid_R = (L\mid_R)^{-1} K\mid_R$. Arguing as above, it suffices to show that $K\mid_R \succ 0$.
We claim $K\mid_R = V_1 + V_2$ where $V_1$ is a block diagonal matrix with $i^{th}$ block equal to $(2 \gamma_i \Psi_i \Sigma_i \Psi_i^T)^{-1} \succ 0$, and $V_2$ is similar to $V_1$ via Lemma~\ref{lemma:commutation-matrix}. Hence $K\mid_R \succ 0$ and the expression for $\eta^\star$ follows. \Halmos \endproof \begin{proposition}[Part 2 of Theorem \ref{thm:asymp}]\label{prop:expConvergence} We define $\eta^\star$ as in Proposition~\ref{real-part-eigs-condition-stability-prop}, and $L, R, K$ as in Theorem \ref{thm:linearization-dynamics-general}. Let $\eta\in(0,\eta^\star)$, and let $\alpha \vcentcolon= \max\{ \abs{1-\eta\lambda_{\textrm{min}}}, \abs{1-\eta\lambda_{\textrm{max}}} \}$, where $\lambda_{\textrm{max}}, \lambda_{\textrm{min}}$ denote the largest and smallest eigenvalues of the matrix $(L^\dagger K)\mid_R$ respectively. Then, \begin{align*} \|P(t)-P^\star\|_F &\leq \frac{\alpha^t}{1-\alpha} \cdot \|P(1)-P(0)\|_F \end{align*} Here, $P^\star$ is the stable point to which the negotiation converges. \end{proposition} \proof{Proof.} Let $\beta$ denote the greatest eigenvalue in absolute value of $R(I_{n^2} - \eta L^\dagger K)$. From Theorem~\ref{thm:linearization-dynamics-general}, we have $\|\Delta_{t+1}\|_F \leq \abs{\beta} \|\Delta_t\|_F$. Since $\| R \| = 1$, it follows that $\abs{\beta} = \max\{ \abs{1-\eta\lambda_{\textrm{min}}}, \abs{1-\eta\lambda_{\textrm{max}}} \} = \alpha$. Then, \begin{align*} \|P^\star-P(t)\|_F &\leq \sum_{i>t} \|\Delta_i\|_F \\ &\leq \|\Delta_t\|_F(\alpha + \alpha^2 + \ldots) \\ &\leq \|\Delta_t\|_F \frac{\alpha}{1-\alpha}\\ &\leq (\alpha^{t-1} \|\Delta_1\|_F)\frac{\alpha}{1-\alpha} \\ &= \|\Delta_1\|_F \frac{\alpha^t}{1-\alpha} \end{align*} Since $\|\Delta_1\|_F = \| P(1) - P(0) \|_F$ we are done. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:asymp:random}} We will use a series of Lemmas to reduce the result of Theorem \ref{thm:asymp:random} to a matrix concentration inequality in each of the $\hat{\Sigma}_i$. \begin{lemma} Let $\widehat{\eta^*}, \eta^*$ be as in Theorem \ref{thm:asymp:random}. Suppose all edges are permitted. Suppose that for all $i \in [n]$, we have $\|\hat{\Sigma}_i^{-1}-\Sigma^{-1}\|=o(1)$. Then, $\widehat{\eta^*} \geq \eta^*(1-o(1))$. \label{lemma:sigma-inv-sufficient} \end{lemma} \proof{Proof.} Let $\hat{L}, \hat{K} \in \mathbb{R}^{n^2 \times n^2}$ be as in Theorem \ref{thm:asymp:random}, but built using $\hat{\Sigma}_1, \dots, \hat{\Sigma}_n$ instead of $\Sigma, \dots, \Sigma$. Let $L, K$ be defined similarly to $\hat{L}, \hat{K}$ but using $\Sigma$ in place of all $\hat{\Sigma}_i$. Then $\widehat{\eta^*} \vcentcolon= \frac{2}{\max\sigma(\hat{L}^{-1} \hat{K})}$ and $\eta^* \vcentcolon= \frac{2}{\max\sigma(L^{-1} K)}$. Let $\epsilon_L, \epsilon_K \in \mathbb{R}^{n^2 \times n^2}$ be such that $\hat{L}^{-1} = L^{-1} + \epsilon_L$ and $\hat{K} = K + \epsilon_K$. We will bound $\| \epsilon_L \|, \| \epsilon_K \|$. Let $Q_i, \hat{Q}_i$ be defined as in Theorem \ref{thm:stable}, so $Q_i \vcentcolon= (2 \gamma_i \Sigma)^{-1}$ and $\hat{Q}_i \vcentcolon= (2 \gamma_i \hat{\Sigma}_i)^{-1}$. Let $\alpha = \max\limits_{i \in [n]} \| \hat{Q}_i - Q_i \|$. Notice $\| \Gamma^{-1} \| = O(1)$, so $\alpha = o(1)$. First, since $L$ and $\hat{L}$ are diagonal, $\| \hat{L} - L \| \leq \max\limits_{i, j \in [n]} \abs{(\hat{Q}_{i;jj} - Q_{i;jj}) + (\hat{Q}_{j;ii} - Q_{j;ii})} \leq 2 \max\limits_{i \in [n]} \| \hat{Q}_i - Q_i \| = 2\alpha$; since the diagonal entries of $L$ are bounded away from zero (equivalently, $\|L^{-1}\| = O(1)$; see below), this implies $\| \epsilon_L \| = \| \hat{L}^{-1} - L^{-1} \| = O(\alpha)$.
Second, let $\hat K \vcentcolon= \hat U_1 + \hat U_2$ where $\hat U_1, \hat U_2$ are defined analogously to $U_1, U_2$ in the proof of Theorem \ref{thm:asymp}. Letting $\Pi$ be the $(n, n)$ commutation matrix of Lemma \ref{lemma:commutation-matrix}, we know $\hat U_2 = \Pi \hat U_1 \Pi^T$, so $\| \epsilon_K \| \leq 2 \| \hat U_1 - U_1 \|$. Since $U_1, \hat U_1$ are block diagonal with $i^{th}$ blocks $Q_i, \hat Q_i$ respectively, it follows $\| \hat U_1 - U_1 \| = \max\limits_{i \in [n]} \| \hat{Q}_i - Q_i \| = \alpha$. Hence $\| \epsilon_K \| \leq 2 \alpha$. Third, notice that since $\| \Sigma \|$ and $\| \Gamma \|$ are assumed to be $O(1)$, we have $\| L^{-1} \| = O(1)$ and $\| K \| = O(\max_{i} \| Q_i \|) = O(1)$. So, \begin{align*} \| \hat{L}^{-1} \hat{K} - L^{-1} K \|_2 &\leq \| \epsilon_L \| \| K \| + \| L^{-1} \| \| \epsilon_K \| + \| \epsilon_L \| \| \epsilon_K \| \\ &= O(\alpha) \cdot O(1) + O(1) \cdot O(\alpha) + O(\alpha^2) \\ &= o(1). \end{align*} We conclude that $\| \hat{L}^{-1} \hat{K} \|_2 \leq \| L^{-1} K \|_2 + o(1)$, so $\widehat{\eta^*} \geq \frac{\eta^*}{1 + (o(1) / \| L^{-1} K \|)} \geq (1 - o(1)) \eta^*$. \Halmos \endproof \begin{lemma} Suppose for $i \in [n]$, we have $\delta_i \vcentcolon=\| \hat{\Sigma}_i - \Sigma\|=o(1)$. Then $\|\hat{\Sigma}_i^{-1} - \Sigma^{-1}\|=o(1)$. \label{lemma:sigma-inv-reduction} \end{lemma} \proof{Proof.} Weyl's inequality implies that $\lambda_{min}(\hat{\Sigma}_i) \geq \lambda_{min}(\Sigma) - \| \hat{\Sigma}_i - \Sigma\|$. Therefore, \begin{align*} \|\hat{\Sigma}_i^{-1}\| &= \frac{1}{\lambda_{min}(\hat{\Sigma}_i)}\\ &\leq \frac{1}{\lambda_{min}(\Sigma)- \delta_i} \\ &=\frac{1}{\lambda_{min}(\Sigma)} \Big(1+\frac{\delta_i}{\lambda_{min}(\Sigma)} + O\Big( \Big(\frac{\delta_i}{\lambda_{min}(\Sigma)}\Big)^2 \Big) \Big)\\ &=\|\Sigma^{-1}\|(1+o(1))\\ \Rightarrow \|\hat{\Sigma}_i^{-1} - \Sigma^{-1}\| &= \|\Sigma^{-1}(\Sigma-\hat{\Sigma}_i)\hat{\Sigma}_i^{-1}\|\\ &\leq (1 + o(1)) \|\Sigma^{-1}\|^2 \delta_i \\ &= o(1) \end{align*} The last step follows from the fact that $\| \Sigma^{-1} \| = O(1)$. \Halmos \endproof The hypothesis of Lemma \ref{lemma:sigma-inv-reduction} follows from standard concentration bounds for random covariance matrices. \begin{theorem}[\cite{vershynin-book}, 4.7.3] Suppose $\Sigma \in \mathbb{R}^{n \times n}$ is positive definite and $\hat{\Sigma} \in \mathbb{R}^{n \times n}$ is a covariance estimate generated via $m$ samples from a $\mathcal{N}(\bm{0}, \Sigma)$ distribution, as in Theorem \ref{thm:asymp:random}. There exists an absolute constant $c > 0$ such that for all $u > 0$, with probability $\geq 1 -2e^{-u}$, \begin{align*} \| \Sigma - \hat\Sigma \|_2 \leq c \| \Sigma \|_2 \bigg(\sqrt{\frac{n + u}{m}} + \frac{n + u}{m}\bigg) \end{align*} \label{thrm:wishart-vershynin} \end{theorem} Theorem \ref{thm:asymp:random} follows easily. \proof{Proof of Theorem \ref{thm:asymp:random}.} Suppose $m = \ceil*{n\log n}$. We will first consider the case of all edges permitted; the proof for prohibited edges is almost identical. Applying Theorem \ref{thrm:wishart-vershynin} with $u = n$ to each $\hat{\Sigma}_i$, a union bound gives $\| \Sigma - \hat{\Sigma}_i \| \leq 2 \sqrt{2} c \| \Sigma \| \sqrt{\frac{n}{m}}$ for all $i \in [n]$ with probability at least $1 - 2ne^{-n} = 1 - e^{- n + O(\log n)} \geq 1 - e^{-\Omega(n)}$. Since $\| \Sigma \| = O(1)$ and $\frac{n}{m} = o(1)$, this simplifies to $\| \Sigma - \hat{\Sigma}_i \| = o(1)$ for all $i$.
By Lemmas \ref{lemma:sigma-inv-sufficient} and \ref{lemma:sigma-inv-reduction}, we have $\widehat{\eta^*} \geq \eta^* (1 - o(1))$ with probability at least $1 - e^{-\Omega(n)}$. If there are prohibited edges, then we must use matrix concentration to bound $\max\sigma(\hat{L}^\dagger \hat K)$ instead of $\max\sigma(\hat{L}^{-1} \hat K)$. Notice that prohibited edges have the effect of simply zeroing out certain rows and columns of $Q_i$, so that $Q_i \vcentcolon= \Psi_i^T (2 \gamma_i \Psi_i \Sigma_i \Psi_i^T)^{-1} \Psi_i$, rather than $(2 \gamma_i \Sigma_i)^{-1}$. Therefore, we can use Theorem \ref{thrm:wishart-vershynin} to bound $\| \Psi_i \hat{\Sigma}_i \Psi_i^T - \Psi_i \Sigma \Psi_i^T \|$ for all $i$, and then prove the appropriate analogue of Lemma \ref{lemma:sigma-inv-sufficient}. In particular, the sample size requirement remains the same. \Halmos \endproof \subsection{Proof of Proposition \ref{sdp-sigma-recovery}} \label{appendix:inference} \begin{repeatproposition}[Restatement of Proposition \ref{sdp-sigma-recovery}.] Finding the maximum likelihood estimator of $\Sigma$ under Assumption~\ref{assume:SDP} is equivalent to the following SDP: \begin{align*} \min\limits_{\Sigma} \; &\sum\limits_{t = 1}^{T - 1} \| \Sigma (W(t + 1) - W(t)) + (W(t + 1) - W(t)) \Sigma \|_F^2 \\ \text{s.t. } \; &\Sigma \succeq 0, \quad \textrm{tr}(\Sigma) = 1. \end{align*} \end{repeatproposition} Recall that in Assumption \ref{assume:SDP} we assumed that $M_{ij}(t)$ varies independently according to a Brownian motion with the same parameters for all $(i,j)$. To avoid ambiguity, we recall the definition of a standard Brownian motion as follows. \begin{definition}[Brownian Motion] For $d \geq 1$, a $d$-dimensional Brownian motion with scale parameter $\sigma > 0$ is a stochastic process $\{\bm{X}_t: t \geq 0\}$ such that $\bm{X}_t \in \mathbb{R}^d$ for all $t$, the components of $\bm{X}_t$ are independent, and for all $j \in [d]$: \begin{enumerate} \item The process $\{(\bm{X}_t)_j: t \geq 0\}$ has independent increments. \item For $r > 0$, the increment $(\bm{X}_{t+r})_j - (\bm{X}_{t})_j$ is distributed as $N(0, r \sigma^2)$. \item With probability $1$, the function $t \mapsto \bm{X}_t$ is continuous on $[0, \infty)$. \end{enumerate} \end{definition} We can derive the SDP of Proposition \ref{sdp-sigma-recovery} as follows; as an aside, we first note how the resulting SDP can be solved numerically.
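The SDP is directly implementable with off-the-shelf convex optimization tools. The following is a minimal sketch, assuming Python with the \texttt{cvxpy} package (the function name and setup are ours, purely for illustration; any SDP-capable solver would do):
\begin{verbatim}
import cvxpy as cp

def infer_sigma(W_list):
    # Maximum likelihood covariance estimate from a sequence of
    # observed stable networks W(1), ..., W(T), via the SDP of
    # Proposition [sdp-sigma-recovery].
    n = W_list[0].shape[0]
    Sigma = cp.Variable((n, n), symmetric=True)
    objective = 0
    for W_t, W_t1 in zip(W_list[:-1], W_list[1:]):
        D = W_t1 - W_t  # network increment (a constant matrix)
        objective += cp.sum_squares(Sigma @ D + D @ Sigma)
    constraints = [Sigma >> 0, cp.trace(Sigma) == 1]
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return Sigma.value
\end{verbatim}
Here \texttt{cp.sum\_squares} computes the squared Frobenius norm of its (affine) matrix argument, so the program matches the objective and constraints of the proposition. We now derive the SDP.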
\begin{proposition}\label{small-mean-shift-prop} Under Assumption~\ref{assume:SDP}, the maximum likelihood estimator for $\Sigma$ is the unique $\Sigma \succ 0$ such that $\textrm{tr}(\Sigma) = 1$ and \begin{itemize} \item {\bf Consistency}: There exist $M(1), \ldots, M(T)$ such that for all $t \in [T]$, \begin{align*} W(t)\Sigma + \Sigma W(t) &= \frac 1 2 (M(t) + M(t)^T). \end{align*} \item {\bf Minimum mean shift}: The resulting $M(1), \dots, M(T)$ minimize the objective \begin{align*} \sum\limits_{t=1}^{T-1} \| M(t+1) - M(t) \|_F^2 \end{align*} \end{itemize} \end{proposition} \proof{Proof of Proposition \ref{small-mean-shift-prop}.} \begin{align*} &P(M(1), \ldots, M(T) \mid W(1), \ldots, W(T), \Sigma) \\ &\quad \propto P(W(1), \ldots, W(T) \mid M(1), \ldots, M(T), \Sigma)\cdot P(M(1), \ldots, M(T)\mid \Sigma)\\ &\quad = \left(\prod_{t=1}^T \mathds{1}_{W(t)\Sigma + \Sigma W(t) = 0.5(M(t) + M(t)^T)}\right) \left( \prod_{t=1}^{T-1} P(M(t+1)-M(t))\right)\\ &\quad = \left(\prod_{t=1}^T \mathds{1}_{(\Sigma\otimes I + I\otimes \Sigma)\textrm{vec}(W(t)) = 0.5\,\textrm{vec}(M(t) + M(t)^T)}\right) \left( \prod_{t=1}^{T-1} \exp\Big(-\frac{\|\textrm{vec}(M(t+1)-M(t))\|^2}{2\sigma^2}\Big)\right), \end{align*} where the first step follows from Bayes' Rule, the second step from Corollary~\ref{cor:stability:sharedSigma}, and the third from Assumption~\ref{assume:SDP}. The proposition follows from the observation that for any matrix $X$, we have $\|\textrm{vec}(X)\|^2 = \|X\|_F^2$. \Halmos \endproof The proof of Proposition \ref{sdp-sigma-recovery} follows easily. \proof{Proof of Proposition \ref{sdp-sigma-recovery}.} By Proposition \ref{small-mean-shift-prop}, we obtain the SDP \begin{align*} \min\limits_{\Sigma} \; &\sum\limits_{t=1}^{T-1} \| M(t+1) - M(t) \|_F^2 \\ \text{s.t. } \; &\forall t \in [T]: \; W(t)\Sigma + \Sigma W(t) = \frac 1 2 (M(t) + M(t)^T) \end{align*} under the constraints $\Sigma \succ 0$ and $\textrm{tr}(\Sigma) = 1$. Since the Frobenius norm is invariant under transposes (and the skew-symmetric parts of the $M(t)$ are unconstrained, so at the minimum their increments vanish), we have $$\sum\limits_{t=1}^{T-1} \| M(t+1) - M(t) \|_F^2 \propto \sum\limits_{t=1}^{T-1} \| (M(t+1) + M(t+1)^T) - (M(t) + M(t)^T) \|_F^2.$$ We can replace $M(t) + M(t)^T$ with $2 W(t)\Sigma + 2 \Sigma W(t)$ for all $t \in [T]$ to obtain the equivalent objective function $\sum\limits_{t=1}^{T-1} \| (W(t + 1) - W(t))\Sigma + \Sigma(W(t + 1) - W(t)) \|_F^2$ (up to a constant). This substitution enforces the fixed point equation $W(t)\Sigma + \Sigma W(t) = \frac 1 2 (M(t) + M(t)^T)$ for all $t \in [T]$, so the conclusion follows. \Halmos \endproof \begin{remark}[The prohibited edges setting.] \label{rem:inference_prohibited} Proposition \ref{sdp-sigma-recovery} generalizes straightforwardly to the setting of prohibited edges. Let $E$ denote the set of permitted edges. Then the minimum mean shift assumption is equivalent to minimizing $\sum\limits_{t=1}^{T - 1} \sum\limits_{\{i, j\} \in E} \big(M(t+1) + M(t+1)^T - M(t) - M(t)^T\big)_{ij}^2$. In words, the objective just zeroes out prohibited edges, since mean estimates for prohibited edges have no effect on the network. For a network setting $(\bm{\mu}_j, \Sigma, \gamma_j, \Psi_j)_{j \in [n]}$, some algebra gives $M(t)_{ij} = \bm{e}_i^T 2 \gamma_j (\Psi_j^T \Psi_j) \Sigma (\Psi_j^T \Psi_j) W(t) \bm{e}_j$. Notice $\Psi_j^T \Psi_j \in \mathbb{R}^{n \times n}$ is a diagonal matrix with $(\Psi_j^T \Psi_j)_{ii} = 1$ if $\{i, j\} \in E$ and zero otherwise. Therefore, it is clear that upon substitution, the objective is an SDP in $\Sigma$ with the same constraints.
\end{remark} \subsection{Proof of Theorem \ref{thm:friction-no-equilibrium}} \begin{repeattheorem}[Restatement of Theorem \ref{thm:friction-no-equilibrium}.] Suppose that all firms use Eq.~\ref{eq:utilityFriction} as their utility function. Consider the setting where $\Sigma_i=I_n$, $\Gamma = I$. If there exists a pair of nodes $(k, \ell)$ such that $\mu_{k\ell}\neq -\mu_{\ell k}$ and $|\mu_{k\ell}+\mu_{\ell k}|<2\lambda$, then there is no stable equilibrium. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:friction-no-equilibrium}.} First, we analyze equilibria under the modified utility function $g_i(\bm{w})$. Notice that the Hessian of $g_i(\bm{w})$ behaves as though $\lambda = 0$, since the second derivatives of $\| \bm{w} \|_1$ are zero wherever they are defined. So the Hessian is indeed negative definite, and so the optimal contracts of agent $i$ are given by solving $\nabla_{\bm{w}}g_i(\bm{w}) = \bm{0}$. Let $sgn: \mathbb{R} \to \mathbb{R}$ be the sign function \begin{align*} sgn(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \\ -1 & x < 0 \end{cases} \end{align*} Then $\nabla_{\bm{w}}g_i(\bm{w}) = \bm{0}$ iff $\bm{w} = \frac 1 2 \big( (M - P)\bm{e}_i - \lambda\cdot sgn(\bm{w}) \big)$, where we apply $sgn(.)$ elementwise. Thus, $W = \frac{1}{2}\big( (M-P) - \lambda\cdot sgn(W) \big)$. Taking transposes and noting that $W=W^T$ and $P=-P^T$, we have $W = \frac{1}{4}(M+M^T) - \frac{\lambda}{2}\cdot sgn(W)$. Let $H = \frac{M + M^T}{2}$. Then, for all $k, \ell \in [n]$: \begin{align*} H_{k\ell} &= 2 W_{k\ell} + \lambda\cdot sgn(W)_{k\ell} \end{align*} Observe that the absolute value of the right hand side is either $0$ (for $W_{k\ell}=0$) or greater than $\lambda$. Hence, there is no stable network if there exists $(k,\ell)$ such that $H_{k\ell}\neq 0$ and $|H_{k\ell}|<\lambda$. Since $H_{k\ell} = \frac{1}{2}(\mu_{k\ell} + \mu_{\ell k})$, this is precisely the condition in the theorem statement. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:perturb-sigma-scalar}.} \begin{repeattheorem}[Restatement of Theorem \ref{thm:perturb-sigma-scalar}.] Consider a network setting with $\Sigma_i=\Sigma$ for all firms. Let $W$ be the corresponding stable network. If the perceived covariance changes as $\Sigma\mapsto c\Sigma$ for some $c>0$, the stable point changes to $W\mapsto (1/c)W$. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:perturb-sigma-scalar}.} The theorem follows from Corollary~\ref{cor:stability:sharedSigma}: replacing $\Sigma$ by $c\Sigma$ scales the matrix $(\Gamma\otimes \Sigma + \Sigma\otimes \Gamma)$ by $c$, so $\textrm{vec}(W) = \frac 1 2 (\Gamma\otimes \Sigma + \Sigma\otimes \Gamma)^{-1}\textrm{vec}(M+M^T)$ is scaled by $1/c$. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:perturb-sigma-nonscalar}.} \begin{repeattheorem}[Restatement of Theorem \ref{thm:perturb-sigma-nonscalar}.] Consider a network setting with $\Sigma_i=\Sigma$ and $\Gamma = \gamma I_n$. Suppose the covariance changes as $\Sigma\mapsto \Sigma^\prime\succ \Sigma$. Let $W$ and $W^\prime$ be the stable points under $\Sigma$ and $\Sigma^\prime$ respectively. Then, \begin{align*} \textrm{tr}(M^T (W^\prime - W)) < 0, \end{align*} where $M$ is the matrix of expected returns. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:perturb-sigma-nonscalar}.} By Corollary~\ref{cor:stability:sharedSigma}, $\textrm{vec}(W) = \gamma^{-1} (\Sigma \otimes I + I \otimes \Sigma)^{-1} \textrm{vec}(\frac{M + M^T}{2})$. Let $K = \gamma (\Sigma \otimes I + I \otimes \Sigma)$ and $K^\prime = \gamma (\Sigma^\prime \otimes I + I \otimes \Sigma^\prime)$. Since $\Sigma^\prime \succ \Sigma$ it follows that $K^\prime \succ K$. Therefore $K^{-1} \succ (K^\prime)^{-1}$. So, since $\textrm{vec}(W^\prime - W) = ((K^\prime)^{-1} - K^{-1}) \textrm{vec}(\frac{M + M^T}{2})$, we immediately obtain $\frac 1 2 \textrm{vec}(M + M^T)^T \textrm{vec}(W^\prime - W) < 0$ (assuming $M + M^T \neq 0$; otherwise $W = W^\prime = 0$).
Since $W, W^\prime$ are symmetric it follows that $\textrm{vec}(M^T)^T \textrm{vec}(W^\prime - W) = \textrm{vec}(M)^T \textrm{vec}(W^\prime - W)$. So we have $\textrm{vec}(M)^T \textrm{vec}(W^\prime - W) < 0$. Since $\textrm{vec}(M)^T \textrm{vec}(W^\prime - W) = \textrm{tr}(M^T (W^\prime - W))$, the conclusion follows. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:w_monotonic_mu}} \begin{repeattheorem}[Restatement of Theorem \ref{thm:w_monotonic_mu}.] Suppose $\Sigma_i=\Sigma$ for all firms. Then, for any $i,j \in [n]$, the value of $W_{ij}$ is monotonic with respect to $M_{ij}$ (the $i^{th}$ component of $\bm{\mu}_j$). \end{repeattheorem} \proof{Proof of Theorem \ref{thm:w_monotonic_mu}.} Let $(\lambda_i, \bm{v}_i)$ denote the $i^{th}$ eigenvalue and eigenvector of $\Gamma^{-1/2}\Sigma\Gamma^{-1/2}$, and let $V_{ij}=\bm{e}_i^T \bm{v}_j$. By Corollary~\ref{cor:stability:sharedSigma}, \begin{align*} W &= \Gamma^{-1/2}\left( \sum_{r=1}^n \sum_{s=1}^n \frac{\bm{v}_r^T \Gamma^{-1/2}(M+M^T)\Gamma^{-1/2} \bm{v}_s}{2 (\lambda_r + \lambda_s)} \bm{v}_r \bm{v}_s^T \right) \Gamma^{-1/2}\\ \Rightarrow \frac{\partial W_{ij}}{\partial M_{ij}} &= \bm{e}_i^T \Gamma^{-1/2}\left( \sum_{r=1}^n \sum_{s=1}^n \frac{\bm{v}_r^T \Gamma^{-1/2}(\bm{e}_i\bm{e}_j^T+\bm{e}_j\bm{e}_i^T)\Gamma^{-1/2} \bm{v}_s}{2 (\lambda_r + \lambda_s)} \bm{v}_r \bm{v}_s^T \right) \Gamma^{-1/2}\bm{e}_j\\ &= (\gamma_i\gamma_j)^{-1}\left( \sum_{r=1}^n \sum_{s=1}^n \frac{\bm{v}_r^T (\bm{e}_i\bm{e}_j^T+\bm{e}_j\bm{e}_i^T) \bm{v}_s}{2 (\lambda_r + \lambda_s)} (\bm{e}_i^T\bm{v}_r) (\bm{v}_s^T \bm{e}_j) \right)\\ &= (\gamma_i\gamma_j)^{-1}\left( \sum_{r=1}^n \sum_{s=1}^n \underbrace{\frac{V_{ir}V_{js} + V_{jr}V_{is}}{2 (\lambda_r + \lambda_s)} (V_{ir}V_{js})}_{Z_{rs}} \right)\\ &= (2\gamma_i\gamma_j)^{-1}\left( \sum_{r=1}^n \sum_{s=1}^n Z_{rs} + \sum_{s=1}^n\sum_{r=1}^n Z_{sr}\right)\\ &= (2\gamma_i\gamma_j)^{-1} \sum_{r=1}^n \sum_{s=1}^n \left(Z_{rs} + Z_{sr}\right)\\ &= (2\gamma_i\gamma_j)^{-1} \sum_{r=1}^n \sum_{s=1}^n \frac{\left(V_{ir}V_{js} + V_{jr}V_{is}\right)^2}{2 (\lambda_r + \lambda_s)} > 0. \Halmos \end{align*} \endproof \subsection{Proof of Proposition \ref{prop:network-gradient-mean-shock}} \begin{repeatproposition}[Restatement of Proposition \ref{prop:network-gradient-mean-shock}] Let $\Sigma$ have the eigendecomposition $\Sigma = V \Lambda V^T$. Then for $i, j, k, \ell \in [n]$, \begin{align} \frac{\partial W_{ij}}{\partial M_{k\ell}} = \frac 1 2 \sum\limits_{s, t \in [n]} \frac{V_{is} V_{ks} V_{jt} V_{\ell t} + V_{is}V_{\ell s}V_{jt}V_{kt}}{\lambda_s + \lambda_{t}}. \end{align} \end{repeatproposition} \proof{Proof.} This can be proved using the same steps as in the proof of Theorem~\ref{thm:w_monotonic_mu}. \Halmos \endproof \subsection{Hardness of Source Detection}\label{appendix:dwdm:approx} We begin by defining \begin{align} \label{eq:delWdelMapprox} \left|\frac{\partial W_{ij}}{\partial M_{k\ell}}\right|_{approx} &\vcentcolon= \frac{\left|V_{in} V_{kn} V_{jn} V_{\ell n}\right|}{2 \lambda_n}. \end{align} This approximates the right hand side of Eq.~\ref{eq:delWdelM} when the term corresponding to the smallest eigenvalue $\lambda_n$ dominates the sum. We now show that if the corresponding eigenvector $\bm{v}_n$ is random, source detection succeeds only with vanishing probability. \begin{proposition}[Hardness of Source Detection] Suppose $\bm{v}_n$ is Haar-distributed, that is, $\bm{v}_n$ is distributed uniformly on the unit sphere $S^{n-1}$.
Then, if $\Sigma = V \Lambda V^T$ and $\Gamma = I$, \begin{align*} \mathop{\mathbb{P}}\left[\max\limits_{i, j \in [n]: (i, j) \neq (k, \ell)} \left|\frac{\partial W_{ij}}{\partial M_{k \ell}}\right|_{approx} < \left|\frac{\partial W_{k \ell}}{\partial M_{k \ell}}\right|_{approx} \right] \leq O\left(\frac{1}{n}\right). \end{align*} \label{prop:mean-shock-single-column} \end{proposition} \proof{Proof of Proposition \ref{prop:mean-shock-single-column}.} Without loss of generality we can set $k = 1, \ell = 2$ (the analysis of $k = \ell$ is identical). Notice that $\left|\frac{\partial W_{ij}}{\partial M_{k \ell}}\right|_{approx}$ is maximized at the $(i, j)$ that maximizes $\abs{V_{in} V_{jn}}$. Now, consider $(i, j) \in \{(1, 2), (3, 4), \dots, (n-1, n)\}$. Notice the distribution of $\bm{v}_n$ is permutation-invariant by assumption. Hence the joint distribution of $(V_{in}, V_{jn})$ is the same for all such pairs $(i, j)$. Hence the distribution of $\abs{V_{in} V_{jn}}$ is also the same for all such $(i, j)$. Therefore, \begin{align*} \mathop{\mathbb{P}} \left[ \arg\max\limits_{(i, j) \in \{(1, 2), (3, 4), \dots, (n-1, n)\}} \left|\frac{\partial W_{ij}}{\partial M_{12}}\right|_{approx} = (1, 2) \right] \leq O(1/n) \qquad\Halmos \end{align*} \endproof \subsection{Proof of Proposition \ref{prop:firms-exchangeable-stronger}} \label{appendix:exchangeable} \begin{repeatproposition}[Restatement of Proposition \ref{prop:firms-exchangeable-stronger}] Suppose $M, \Sigma, \Gamma$ exhibit community structure (Eq.~\ref{eq:community}), and all the error terms $(\epsilon_i)_{i\in [n]}$ and $(\epsilon^\prime_{\theta_i, j})_{i,j\in [n]}$ are independent and identically distributed. Let $\pi: [n] \to [n]$ be any intra-community permutation, and let $\Pi: \mathbb{R}^n \to \mathbb{R}^n$ be the corresponding column-permutation matrix: $\Pi(\bm{e}_i) = \bm{e}_{\pi(i)}$. Then, $W$ and $\Pi^T W \Pi$ are identically distributed. \end{repeatproposition} \proof{Proof.} Let $H = \frac 1 2 (M + M^T)$. The fixed point equation for $W$ is given by Corollary \ref{cor:stability:sharedSigma} as $\Sigma W \Gamma + \Gamma W \Sigma = H$. Vectorization implies $(\Gamma \otimes \Sigma + \Sigma \otimes \Gamma) \textrm{vec}(W) = \textrm{vec}(H)$. Let $X \sim Y$ denote that a pair of random variables $X, Y$ are identically distributed. We want to show $\Pi^T W \Pi \sim W$. Vectorization gives $\textrm{vec}(\Pi^T W \Pi) = (\Pi^T \otimes \Pi^T) \textrm{vec}(W)$. Let $P = (\Pi^T \otimes \Pi^T)$ and $K = (\Gamma \otimes \Sigma + \Sigma \otimes \Gamma)$ for shorthand. In this notation, we want to show that $P K^{-1} \textrm{vec}(H) \sim K^{-1} \textrm{vec}(H)$. Since $P$ is a permutation, we have $P K^{-1} \textrm{vec}(H) = P K^{-1} P^T P \textrm{vec}(H) = (PKP^T)^{-1} P \textrm{vec}(H)$. Since the collections of random variables $\{\epsilon_i\}_i$ and $\{\epsilon_{\theta_i, j}^\prime \}_{i, j}$ are independent, we know $\textrm{vec}(H)$ and $K$ are independent. So to show $(PKP^T)^{-1} P \textrm{vec}(H) \sim K^{-1} \textrm{vec}(H)$ it suffices to show that $P \textrm{vec}(H) \sim \textrm{vec}(H)$ and $PKP^T \sim K$. Notice $P \textrm{vec}(H) = \textrm{vec}(\Pi^T H \Pi)$. Hence, we want to show $\Pi^T H \Pi \sim H$, which holds iff $\Pi^T (M + M^T) \Pi \sim M + M^T$. Notice that $\Pi^T M^T \Pi = (\Pi^T M \Pi)^T$, so if $\Pi^T M \Pi \sim M$ then we obtain $\Pi^T M^T \Pi \sim M^T$ as well. It suffices to show $\Pi^T M \Pi \sim M$. Similarly, we can simplify $PKP^T = \Pi^T \Sigma \Pi \otimes \Pi^T \Gamma \Pi + \Pi^T \Gamma \Pi \otimes \Pi^T \Sigma \Pi$.
It suffices to show $\Pi^T \Gamma \Pi \sim \Gamma$ and $\Pi^T \Sigma \Pi = \Sigma$. Altogether, we are left to show that $\Pi^T \Sigma \Pi = \Sigma$ and $\Pi^T A \Pi \sim A$ for $A \in \{\Gamma, M\}$. {\em Proof of $\Pi^T \Sigma \Pi = \Sigma$.} Let $i, j \in [n]$. Then $(\Pi^T \Sigma \Pi)_{ij} = \Sigma_{\pi(i), \pi(j)} = g(\theta_{\pi(i)}, \theta_{\pi(j)})$. Since $\pi$ only permutes members within communities, $g(\theta_{\pi(i)}, \theta_{\pi(j)}) = g(\theta_i, \theta_j) = \Sigma_{ij}$. So $\Pi^T \Sigma \Pi = \Sigma$. {\em Proof of $\Pi^T \Gamma \Pi \sim \Gamma$.} Notice $\Pi^T \Gamma \Pi$ and $\Gamma$ are both diagonal. Let $i \in [n]$. Then $(\Pi^T \Gamma \Pi)_{ii} = \Gamma_{\pi(i), \pi(i)} = h(\theta_{\pi(i)}) + \epsilon_{\pi(i)} = h(\theta_{i}) + \epsilon_{\pi(i)}$. Since $\theta_{i} = \theta_{\pi(i)}$, we know $\epsilon_i\sim \epsilon_{\pi(i)}$. The conclusion follows. {\em Proof of $\Pi^T M \Pi \sim M$.} Let $i, j \in [n]$. Then $(\Pi^T M \Pi)_{ij} = M_{\pi(i), \pi(j)} = f(\theta_{\pi(i)}, \theta_{\pi(j)}) + \epsilon_{\theta_{\pi(i)}, \pi(j)}^\prime = f(\theta_{i}, \theta_{j}) + \epsilon_{\theta_{i}, \pi(j)}^\prime$. Since $\theta_{j} = \theta_{\pi(j)}$, we know that $\epsilon_{\theta_{i}, \pi(j)}^\prime\sim \epsilon_{\theta_{i}, j}^\prime$, and the conclusion follows. \Halmos \endproof \subsection{Proof of Theorem \ref{thm:nonidentGammaM}} \begin{repeattheorem}[Restatement of Theorem \ref{thm:nonidentGammaM}] Consider two network settings $S=(\bm{\mu}_i, \Sigma, \gamma_i)_{i\in[n]}$ and $S^\prime=(\bm{\mu}_i, \Sigma, \gamma_i^\prime)_{i\in[n]}$ which differ only in the risk-aversions of firms $J=\{j\mid \gamma_j\neq \gamma^\prime_j\}\subseteq [n]$. Then, there exists a setting $S^\dagger=(\bm{\mu}_i^\dagger, \Sigma, \gamma_i)_{i\in[n]}$ such that $\bm{\mu}_i=\bm{\mu}^\dagger_i$ for all $i\notin J$ and the stable networks under $S^\dagger$ and $S^\prime$ are identical. \end{repeattheorem} \proof{Proof of Theorem \ref{thm:nonidentGammaM}.} First, consider the network settings $S$ and $S^\prime$. Let $\Gamma \in \mathbb{R}^{n \times n}$ be a diagonal matrix with $\Gamma_{i, i} = \gamma_i$; define $\Gamma^\prime$ similarly under $S^\prime$. Let the corresponding networks be $W$ and $W^\prime$, and let $\Delta_W=W^\prime-W$ and $\Delta_\Gamma=\Gamma^\prime-\Gamma$. By Corollary~\ref{cor:stability:sharedSigma}, we have \begin{align} \Sigma W \Gamma + \Gamma W \Sigma &= \frac{M + M^T}{2} = \Sigma W^\prime \Gamma^\prime + \Gamma^\prime W^\prime \Sigma\nonumber\\ \Rightarrow \frac{M + M^T}{2} &= \Sigma (W + \Delta_W)(\Gamma + \Delta_\Gamma) + (\Gamma + \Delta_\Gamma) (W + \Delta_W) \Sigma \nonumber \\ &= \Sigma W \Gamma + \Gamma W \Sigma + \Sigma \Delta_W \Gamma + \Gamma \Delta_W \Sigma + \Sigma W \Delta_\Gamma + \Delta_\Gamma W \Sigma + \Sigma \Delta_W \Delta_\Gamma + \Delta_\Gamma \Delta_W \Sigma \nonumber \\ \Rightarrow \Sigma \Delta_W \Gamma + \Gamma \Delta_W \Sigma &= - (\Sigma W \Delta_\Gamma + \Delta_\Gamma W \Sigma + \Sigma \Delta_W \Delta_\Gamma + \Delta_\Gamma \Delta_W \Sigma)\nonumber\\ &= - (\Sigma W^\prime \Delta_\Gamma + \Delta_\Gamma W^\prime \Sigma)\label{eq:dG} \end{align} Next, consider $S$ versus $S^\dagger$. Suppose that $M^\dagger$ has columns $\bm{\mu}_1^\dagger, \dots, \bm{\mu}_n^\dagger$ and let $\Delta_M = M^\dagger - M$. Let $W^\dagger$ be the fixed point network under $S^\dagger$, given by $\Sigma W^\dagger \Gamma + \Gamma W^\dagger \Sigma = \frac{M^\dagger + (M^\dagger)^T}{2}$. Let $\Delta_W^\dagger = W^\dagger - W$.
Then a similar argument gives: \begin{align} \frac{\Delta_M + \Delta_M^T}{2} = \Sigma \Delta_W^\dagger \Gamma + \Gamma \Delta_W^\dagger \Sigma \label{eq:dM} \end{align} Therefore, from Eqs.~\ref{eq:dG} and \ref{eq:dM}, it follows that $W^\prime = W^\dagger$ if $$\frac{\Delta_M + \Delta_M^T}{2} = - (\Sigma W^\prime \Delta_\Gamma + \Delta_\Gamma W^\prime \Sigma).$$ Hence, $W^\prime=W^\dagger$ if we set $\Delta_M = - 2 \Sigma W^\prime \Delta_\Gamma$, whose symmetric part is exactly the right hand side above. It remains to show that $M^\dagger$ differs from $M$ only in columns corresponding to $J$. Suppose that $i \not \in J$. Then $\gamma_i = \gamma_i^\prime$, so $\Delta_\Gamma \bm{e}_i = \bm{0}$. We conclude that $\Delta_M \bm{e}_i = \bm{0}$ and hence $M \bm{e}_i = M^\dagger \bm{e}_i$. \Halmos \endproof \section{Experimental Details}\label{appendix-datasets} \subsection{Fama-French Stock Market Data} We use the Fama-French value-weighted asset returns dataset of 96 assets over 625 months \citep{fama-french-2015}. \subsection{OECD International Trade Data} We use international trade statistics from the OECD to get quarterly measurements of bilateral trade between 46 large economies, including the top 15 world nations by GDP \citep{oecd-stats}. The data are available at the OECD Statistics webpage (\url{https://stats.oecd.org/}). The data are measured quarterly from Q1 2010 to Q2 2022. We take the sum of trade flows $i \to j$ and $j \to i$ to measure the weight of an edge $\{i, j\}$. To obtain the corresponding $\Sigma$, we run our inference procedure (Section~\ref{sec:model:inference}). Since there is no data for within-country trade, the network has no self-loops ($W_{ii}=0$). So we modify the inference according to Remark~\ref{rem:inference_prohibited} in Appendix~\ref{appendix:inference}. \subsection{Outlier Detection Simulation}\label{appendix-gamma-exps} The experiments in Figure \ref{fig:deviator-gamma-detection} proceed as follows. Fix a number of communities $k$ and a number of firms $n$. Fix a value of $\sigma > 0$. For us, $k = 2$, $n \in \{20, 100, 300\}$, and $\sigma \in \{\sigma_1, \dots, \sigma_{10}\}$, where the $\sigma_i$ are logarithmically spaced on the interval $[0.1, 1]$, so that \begin{align*} \sigma \in \{0.1 , 0.12915497, 0.16681005, 0.21544347, 0.27825594, \\ 0.35938137, 0.46415888, 0.59948425, 0.77426368, 1.0\} \end{align*} For a setting of $n, k, \sigma$, we perform the following simulation $m = 500$ times. {\em Generate communities.} Generate the community membership matrix $\Theta \in \{0, 1\}^{n \times k}$ with rows independently and uniformly at random from $\{\bm{e}_1, \dots, \bm{e}_k\}$. {\em Generate the network setting.} The deterministic functions $f, g, h$ for $M, \Sigma, \Gamma$ respectively are as follows. First, writing the two community labels as $1$ and $2$, we set $f(1, 2) = f(2, 1) = 1$ and $f = 0$ otherwise. Next, let $G \in \mathbb{R}^{k \times k}$ be the matrix with $G_{ab} = g(a, b)$ for community labels $a, b \in [k]$. Then $G$ is generated from a normalized Wishart distribution centered at $I_k$ and with $5$ degrees of freedom. Finally, $h(\theta_i) = 1$ for all $i$. The noise variables for agent beliefs are as follows. Sample i.i.d. $\epsilon_i$ according to a $N(0, \sigma^2)$ distribution truncated to $[-0.5, 0.5]$ for all $i$. Sample $\epsilon_{\theta_{i}, j}^\prime \overset{\mathrm{iid}}{\sim} N(0, \sigma^2)$ for all $i, j$. {\em Designate an outlier.} Set the noise parameter $\epsilon_1 = -0.5$ for firm $1$ (the risk-seeker), so as $\sigma \to 0$, $\gamma_1$ gets further separated from all other $\gamma_i$.
{\em Outlier detection simulation.} Then for a random firm $i$ such that $\theta_i \neq \theta_1$, we test whether the outlier $\hat j \vcentcolon= \arg\max\limits_{j: \theta_j = \theta_1} \abs{W_{i, j}}$ is equal to the true outlier firm $1$. {\em Collate results.} Once the $m = 500$ runs are completed for a single setting of $n, k, \sigma$, we obtain an estimate $\hat p$ for the probability of successful outlier detection at this setting of parameters. We plot a confidence interval $[\hat p - 2 \sqrt{\frac{\hat p (1 - \hat p)}{m}}, \hat p + 2 \sqrt{\frac{\hat p (1 - \hat p)}{m}}]$. This is plotted on the $y$-axis. The $x$-axis quantifies how much $\gamma_1$ deviates from the mean, in terms of the number of standard deviations of the truncated normal distribution $\epsilon_i$. \section{Conclusions} \label{sec:conc} We have proposed a model of a weighted undirected financial network of contracts. The network emerges from the beliefs of the participant firms. The link between the two is utility maximization coupled with pricing. For almost all belief settings, our approach yields a unique network. This network satisfies a strong Higher-Order Nash Stability property. Furthermore, the firms can converge to this stable network via iterative pairwise negotiations. The model yields two insights. First, a regulator is unable to reliably identify the causes of a change in network structure, or engage in targeted interventions. The reason is that firms seek to diversify risk by exploiting correlations. We find that in realistic settings, there are often combinations of trades that offer seemingly low risk. Hence, all firms aim to use such trades. The over-dependence on a few such combinations leads to a pattern of connections between firms that thwarts targeted regulatory interventions. The second insight is that firms can use the network to update their beliefs. For instance, they can identify counterparties that behave very differently from their peers. However, the cause of the outlierness remains hidden. If all firms in one line of business become more risk-seeking, the result is indistinguishable from that business becoming more profitable. Innocuous events (such as a news story) may cause beliefs to change suddenly, leading to drastic changes in the network. While our work focuses on mean-variance utility, many of our results hold more generally. This is because the mean-variance utility is a good local approximation of many other utility functions around the stable point. Some of our results for pairwise negotiations and targeted interventions are based on such perturbation arguments. Hence, these generalize to other choices of utility functions. \subsection{Inferring Beliefs from the Network Structure} \label{sec:model:inference} Suppose we are given a network that lies at a unique stable point as defined in Theorem~\ref{thm:stable}. How can we infer the beliefs of the agents? \paragraph{Non-identifiability of beliefs.} Suppose we are given a network $W$ that is generated using a single covariance $\Sigma_i=\Sigma\succ 0$. We want to infer the agents' beliefs $(M, \Gamma, \Sigma)$. By Corollary~\ref{cor:stability:sharedSigma}, \begin{align*} \frac 1 2 \textrm{vec}(M+M^T) &= (\Gamma\otimes\Sigma + \Sigma\otimes \Gamma)\textrm{vec}(W). \end{align*} Clearly, the agents' beliefs can only be specified up to an appropriate scaling of $M$, $\Gamma$, and $\Sigma$.
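For instance, if the triple $(M, \Gamma, \Sigma)$ is consistent with the observed $W$, then so is $(cM, \Gamma, c\Sigma)$ for any $c>0$: replacing $\Sigma$ by $c\Sigma$ scales the matrix $(\Gamma\otimes\Sigma + \Sigma\otimes\Gamma)$ by $c$, which is exactly offset by scaling $M$ by $c$. The same holds for $(cM, c\Gamma, \Sigma)$.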
But even if we specify a scale (e.g., $\textrm{tr}(\Gamma)=\textrm{tr}(\Sigma)=1$), for any valid choice of $\Gamma$ and $\Sigma$ we can find a corresponding $M$. Thus, even in the simple setting of identical covariance and fixed scale, the network $W$ cannot be used to select a unique combination of the parameters $(M, \Gamma, \Sigma)$. By a similar argument, we cannot identify the underlying beliefs even if we observe multiple networks generated using the same $\Sigma$ and $\Gamma$ (but different $M$). Thus, we need further assumptions in order to infer beliefs. \begin{assumption} \label{assume:SDP} We consider a sequence of networks $W(t)$ generated over successive timesteps $t=1, \ldots, T$. We assume that (a) $\Gamma(t)=I$ and $\Sigma_i(t)=\Sigma$ for all timesteps, (b) for all $i,j\in[n]$, $M_{ij}(t)$ varies independently according to a Brownian motion with the same parameters for all $(i,j)$, and (c) $\textrm{tr}\Sigma = 1$. \end{assumption} The first assumption is motivated by the observations in portfolio theory that errors in mean estimation are far more significant than covariance estimation errors~\citep{chopra-2013}. So, accounting for variations in $\Sigma$ may be less important than variations in $M$ (but see Remark~\ref{rem:timevaryingSigma} below). The homogeneity of risk aversion was noted in Section~\ref{sec:model}, and this justifies setting $\Gamma=I$. The second assumption is common in the literature on pricing models~\citep{geman-2001,bianchi-2013}. The third assumption fixes the scale, as discussed above. Under these assumptions, we can infer $\Sigma$ (and hence $M(t)$) via a computationally tractable Semidefinite Program (SDP), as shown next. \begin{proposition}\label{sdp-sigma-recovery} Finding the maximum likelihood estimator of $\Sigma$ under Assumption~\ref{assume:SDP} is equivalent to the following SDP: \begin{align*} \min\limits_{\Sigma} \sum\limits_{t = 1}^{T - 1} \| \Sigma (W(t + 1) - W(t)) + (W(t + 1) - W(t)) \Sigma \|_F^2 \quad \text{s.t. } \Sigma \succeq 0, \; \textrm{tr}(\Sigma) = 1. \end{align*} \end{proposition} \begin{remark}[Generalization to time-varying $\Sigma$] \label{rem:timevaryingSigma} We can extend our formulation from a constant covariance $\Sigma$ to one where the covariance updates at times $0<T_1 <T_2< \ldots< T_m<T$. Denote the covariance in each interval $j$ as $\Sigma_{(j)}$. Then, we can modify the objective of the SDP above with an additional term that penalizes differences between successive covariances: $\nu \cdot \sum_j \|\Sigma_{(j+1)}-\Sigma_{(j)}\|$, for some chosen matrix norm $\|.\|$ and regularization parameter $\nu>0$. This allows the covariance to change over time while penalizing abrupt changes. Note that the objective function remains convex. The choice of update times $T_1, \ldots, T_m$ can be tuned based on heuristics or prior information. \end{remark} \section{Insights for Firms} \label{sec:firms} Until now, we have treated the beliefs of firms as fixed and exogenous. In this section, we consider how a firm can use its contracts to gain insights into other firms and update its beliefs. For instance, suppose a firm $j$ faces a crisis, e.g., a looming debt payment that may make it insolvent. The firm may then become risk-seeking (i.e., lower its $\gamma_j$), hoping that the risks pay off. Another firm $i$ may be unaware of the crisis, so $i$'s risk perceptions (perhaps based on historical data) would be outdated.
Can firm $i$ {\em infer} the lower $\gamma_j$, solely from $i$'s contracts $\bm{w}_i$ with all firms? What if a group of firms become risk-seeking, and not just one firm? \subsection{Detecting Outlier Firms} \label{sec:firms:single} Intuitively, firm $i$ will try to answer these questions by comparing the behavior of firm $j$ against other similar firms. We formalize this by assuming that each firm $j$ belongs to a community $\theta_j$, e.g., banking, real estate, or insurance. The community of each firm is publicly known. Firms in the same community are perceived to have similar return distributions: \begin{align} M_{ij} &= f(\theta_i, \theta_j) + \epsilon^\prime_{\theta_i,j}, & \Sigma_{ij} &= g(\theta_i, \theta_j), & \gamma_i &= h(\theta_i) + \epsilon_i \label{eq:community} \end{align} for some unknown deterministic functions $f(.)$, $g(.)$, and $h(.)$ and random error terms $\epsilon_i$ and $\epsilon^\prime_{\theta_i,j}$. We also assume that all firms use the same covariance $\Sigma$. Now, suppose one firm $j$ is an outlier, with very different beliefs from other firms in its community. For firm $i$ to detect the outlier firm $j$, the contract size $W_{ij}$ should deviate from a cluster of contracts $\{W_{ij'}\mid \theta_{j'} = \theta_j\}$ of other firms from the same community as firm $j$. Now, outlier detection methods often assume independent datapoints. In our model, all contracts are dependent. But we can still do outlier detection if the contracts are appropriately exchangeable. We prove below that this is the case. \begin{definition} An intra-community permutation is a permutation $\pi: [n] \to [n]$ such that $\pi(i) = j$ implies that $\theta_i = \theta_j$. \end{definition} \begin{proposition} \label{prop:firms-exchangeable-stronger} Suppose $M, \Sigma, \Gamma$ exhibit community structure (Eq.~\ref{eq:community}), and all the error terms $(\epsilon_i)_{i\in [n]}$ and $(\epsilon^\prime_{\theta_i, j})_{i,j\in [n]}$ are independent and identically distributed. Let $\pi: [n] \to [n]$ be any intra-community permutation, and let $\Pi: \mathbb{R}^n \to \mathbb{R}^n$ be the corresponding column-permutation matrix: $\Pi(\bm{e}_i) = \bm{e}_{\pi(i)}$. Then, $W$ and $\Pi^T W \Pi$ are identically distributed. \end{proposition} \begin{corollary}\label{cor:firms-exchangeable} Let $j_1, \dots, j_m \in [n]$ belong to the same community: $\theta_{j_1} = \dots = \theta_{j_m}$. Suppose the conditions of Proposition~\ref{prop:firms-exchangeable-stronger} hold. Then, for any $i \in [n]$, the joint distribution of $(W_{i, j_1}, \ldots, W_{i, j_m})$ is exchangeable. \end{corollary} \begin{figure}[htb] \begin{center} \includegraphics[width=.8\linewidth]{figs/new-121222/dev_detection_subset_500_runs-eps-converted-to.pdf} \caption{ Success rate for detecting outlier risk-seeking firms. Detection is more successful when there are fewer firms and when the risk-seeking firm's $\gamma_{\textrm{outlier}}$ lies more standard deviations away from the $\gamma$ of the normal firms. } \label{fig:deviator-gamma-detection} \end{center} \end{figure} {\bf Empirical Results for Outlier Detection.} We generate community-based networks (Eq.~\ref{eq:community}) such that $\gamma_i\sim N(1,\sigma^2)$ truncated to $[0.5, 1.5]$. The smaller the $\sigma$, the more closely the $\gamma_i$ values cluster around $1$. For the outlier risk-seeking firm, we set $\gamma_{\textrm{outlier}}=0.5$. For clarity of exposition, we do not add heterogeneity in expected returns: $\epsilon^\prime=0$ everywhere.
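The sampling of risk aversions in these experiments can be sketched as follows (a minimal illustration assuming NumPy and SciPy; the function name and the convention of planting the outlier at index $0$ are ours):
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def draw_gammas(n, sigma, gamma_outlier=0.5, seed=0):
    # gamma_i ~ N(1, sigma^2) truncated to [0.5, 1.5], with one planted
    # outlier risk-seeking firm (here, firm 0).
    a, b = (0.5 - 1.0) / sigma, (1.5 - 1.0) / sigma  # standardized bounds
    gammas = truncnorm.rvs(a, b, loc=1.0, scale=sigma,
                           size=n, random_state=seed)
    gammas[0] = gamma_outlier
    return gammas
\end{verbatim}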
To detect outliers under exchangeability (Corollary~\ref{cor:firms-exchangeable}), we can use methods based on conformal prediction~\citep{guan-2022}. Here, we use a simpler approach: pick the firm $j$ with the largest contract size as the outlier; $\hat j \vcentcolon= \arg\max\limits_{j \in \{j_1, \dots, j_m\}} \abs{W_{i, j}}$. We run $500$ experiments for each choice of $\sigma$, and count the frequency with which the outlier firm is detected via its contract size. Further details are presented in Appendix~\ref{appendix-gamma-exps}. Figure \ref{fig:deviator-gamma-detection} shows the results. We characterize the degree of outlierness by how many standard deviations away $\gamma_{\textrm{outlier}}$ is from the baseline of $1$. The smaller the $\sigma$, the greater the outlierness. The success rate increases with increasing outlierness, as expected. It also increases when the number of firms $n$ is reduced. This is because contract sizes depend on the $\gamma$ values of all firms; with fewer firms, there is less chance that some firm attains a large contract size purely through randomness. \subsection{Risk-Aversion versus Expected Returns} The discussion above showed that a firm can detect outlier counterparties. However, the firm cannot determine {\em why} the counterparty is an outlier, as the following theorem shows. \begin{theorem}[Non-identifiability of risk-aversion versus expected returns] \label{thm:nonidentGammaM} Consider two network settings $S=(\bm{\mu}_i, \Sigma, \gamma_i)_{i\in[n]}$ and $S^\prime=(\bm{\mu}_i, \Sigma, \gamma_i^\prime)_{i\in[n]}$ which differ only in the risk-aversions of firms $J=\{j\mid \gamma_j\neq \gamma^\prime_j\}\subseteq [n]$. Then, there exists a setting $S^\dagger=(\bm{\mu}_i^\dagger, \Sigma, \gamma_i)_{i\in[n]}$ such that $\bm{\mu}_i=\bm{\mu}^\dagger_i$ for all $i\notin J$ and the stable networks under $S^\dagger$ and $S^\prime$ are identical. \end{theorem} Thus, one cannot determine if an outlier is more risk-seeking than its community or expects higher profits. But risk-seeking behavior may be indicative of stress, while persistently higher profits than those of similar firms are unlikely. Hence, in either case, the firm detecting the outlier may choose to reduce its exposure to the outlier. However, this approach fails if an entire community shifts its behavior. The following example illustrates the problem. \begin{example} Consider two communities numbered $1$ and $2$, with $n_1$ and $n_2$ firms respectively. Let the setting $S$ of Theorem~\ref{thm:nonidentGammaM} correspond to \begin{align*} M_{ij} &= \left\{\begin{array}{cl} a & \text{if $\theta_i=\theta_j=1$}\\ b & \text{if $\theta_i=\theta_j=2$}\\ c/2 & \text{otherwise}\end{array} \right. & \Sigma_{ij} &= \left\{\begin{array}{cl} 1 & \text{if $\theta_i=\theta_j=1$}\\ 1 & \text{if $\theta_i=\theta_j=2$}\\ 0 & \text{otherwise}\end{array} \right. & \gamma_i &= 1. \end{align*} Now, suppose that under setting $S^\prime$, $\gamma_i\mapsto\gamma_i+\delta$ for some small $\delta$ for all nodes $i$ in community $1$. The change in the network would be the same if we had updated the columns corresponding to community~$1$ in the $M$ matrix instead (setting $S^\dagger$): \begin{align*} M^\dagger_{ij} &= M_{ij} + \Delta(\theta_i, \theta_j) & \Delta(\theta_i, \theta_j) + O(\delta^2) &= \left\{\begin{array}{cl} -\delta a/2 & \text{if $\theta_i=\theta_j=1$}\\ -\delta b\cdot n_2/(n_1+n_2) & \text{if $\theta_i=2, \theta_j=1$}\\ 0 & \text{if $\theta_j=2$}\end{array} \right.
\end{align*} Thus, a firm from community $2$ cannot determine if the network change was due to a change in $(\gamma_i)_{\theta_i=1}$ or $(\bm{\mu}_i)_{\theta_i=1}$. For instance, when $b>0$, an increase in risk-seeking ($\delta<0$) looks the same as an increase in trading benefits ($\Delta(2,1)>0$). In the former case, firms in community~$2$ should {\em reduce} their exposure to community~$1$ firms. But in the latter case, they should {\em increase} exposure. Since the data cannot be used to choose the appropriate action, the behaviors of firms may be guided by their prior beliefs or inertia. When such beliefs change due to external events (e.g., due to news about one firm in community~$1$), the resulting change in the network may be drastic.$\hfill\Box$ \end{example} \section{Insights for Regulators} \label{sec:regulators} A financial regulator can observe the network but does not know the firms' beliefs. The regulator may ask: what changes in beliefs caused recently observed changes in the network? What are the side effects of different regulatory interventions? To answer these questions, we need to know how changes in firms' beliefs or utility functions affect the network. That is the subject of this section. \subsection{Effect of Friction in Contract Formation} \label{sec:regulators:friction} Our model imposes no costs for contract formation. This is reasonable for large firms where the fixed costs associated with contract negotiations may be small relative to the contract sizes. But in an overheating market, a regulator may impose frictions by penalizing large contracts, for example by increasing margin requirements. We show next that such frictions may backfire, by ruling out all stable equilibria. Specifically, we model frictions via an additional $\ell_1$ penalty to the utility of Eq.~\ref{eq:utility}: \begin{align} \label{eq:utilityFriction} g_i(\bm{w}) = \bm{w}^T (\bm{\mu}_i - P \bm{e}_i) - \gamma_i \bm{w}^T \Sigma_i \bm{w} - \lambda \| \bm{w} \|_1, \end{align} where $\lambda$ is a parameter chosen by the regulator. Larger contract-related costs correspond to higher values of $\lambda$. \begin{theorem}[Friction Can Lead to Loss of Equilibrium] Consider the simple setting where $\Sigma_i=I_n$, $\Gamma = I$, and there exists a pair of firms $(i, j)$ such that $\mu_{ij} \neq -\mu_{ji}$. Suppose that all firms use Eq.~\ref{eq:utilityFriction} as their utility function. Then, if $\lambda > \min\limits_{k, \ell \in [n]} \frac{\mu_{k \ell} + \mu_{\ell k}}{2}$, there is no stable equilibrium. \label{thm:friction-no-equilibrium} \end{theorem} Thus, the addition of frictions may destabilize the system, which is the very outcome a regulator aims to prevent by adding such frictions. Furthermore, the destabilizing level of friction is not known a priori to the regulator, since it depends on the (hidden) beliefs of firms. \subsection{Effect of Changes in the Perceived Risk} Regulatory actions can change the risk perceptions of firms, who then update their covariances accordingly. If the risk scales upwards, we find smaller contract sizes, as expected. \begin{theorem}[Stable Point Scales Inversely with Risk] Consider a network setting with $\Sigma_i=\Sigma$ for all firms. Let $W$ be the corresponding stable network. If the covariance changes from $\Sigma$ to $c\Sigma$ for some $c>0$, the stable point changes from $W$ to $(1/c)W$. \label{thm:perturb-sigma-scalar} \end{theorem} But this does not generalize to non-uniform increases in $\Sigma$, as we show next.
\begin{theorem}[Non-uniform Increases in Risk] Consider a network setting with $\Sigma_i=\Sigma$ and $\gamma_i = \gamma$ for all $i$. Suppose the covariance changes as $\Sigma\mapsto \Sigma^\prime\succ \Sigma$. Let $W$ and $W^\prime$ be the stable points under $\Sigma$ and $\Sigma^\prime$ respectively. Then, \begin{align*} \textrm{tr}(M^T (W^\prime - W)) < 0, \end{align*} where $M$ is the matrix of expected returns. \label{thm:perturb-sigma-nonscalar} \end{theorem} This shows that an increase in risk leads to a decrease in the weighted average of the contract sizes. The weights are given by the expected return beliefs of the firms. However, individual contracts between firms can increase, as can the norm $\|W\|_F$. This is because increases in the covariance $\Sigma$ may also increase correlations, which can offer better hedging opportunities. By hedging some risks, larger contract sizes can be supported. \subsection{Effect of Changes in Perceived Expected Returns} Now, we consider shifts in perceived rewards of contracts, as measured by the matrix $M$ of expected returns from unit contracts. We find that contract sizes increase if the perceived reward increases, as expected. \begin{theorem}[$W$ is monotonic with $M$] \label{thm:w_monotonic_mu} Suppose $\Sigma_i=\Sigma$ for all firms. Then, for any $i,j \in [n]$, the value of $W_{ij}$ is monotonic with respect to $M_{ij}$ (the $i^{th}$ component of $\bm{\mu}_j$). \end{theorem} Now, a change in $M_{ij}$ may affect not only $W_{ij}$ but also all other contracts. Can we trace the changes in $W$ back to the underlying changes in $M$? \begin{definition}[Source Detection Problem] Suppose that a financial regulator observes two networks $W$ and $W^\prime$, with the only difference being a small change in a single entry of $M$ (say, $M_{ij}$). Can the regulator identify the pair $(i, j)$? \end{definition} One approach is to try to infer all beliefs of all firms, and then identify the changed belief. But, as discussed in Section~\ref{sec:model:inference}, the beliefs are only identifiable under extra assumptions and more data. An alternative approach for the source detection problem is to find the entry $(i,j)$ with the largest change $|W_{ij}-W^\prime_{ij}|$. The intuition is that a change in $M_{ij}$ has a direct effect on $W_{ij}$ and (hopefully weaker) indirect effects on other contracts. Thus, the source detection problem is closely tied to the following: \begin{definition}[Targeted Intervention Problem] Can a regulator induce a small change in a single entry of $M$ (say, $M_{ij}$) such that the change in $W_{ij}$ is significantly larger than changes in other entries of $W$? \end{definition} \begin{proposition}[Network gradient over expected returns] \label{prop:network-gradient-mean-shock} Let $\Gamma=I_n$, $\Sigma_i=\Sigma$ for all $i$, and let $\Sigma$ have the eigendecomposition $\Sigma = V \Lambda V^T$. Then for $i, j, k, \ell \in [n]$, \begin{align} \label{eq:delWdelM} \frac{\partial W_{ij}}{\partial M_{k\ell}} = \frac 1 2 \sum\limits_{s, t \in [n]} \frac{V_{is} V_{ks} V_{jt} V_{\ell t} + V_{is}V_{\ell s}V_{jt}V_{kt}}{\lambda_s + \lambda_{t}}. \end{align} \end{proposition} Proposition~\ref{prop:network-gradient-mean-shock} shows how the network $W$ changes in response to a change in the $(k,\ell)$ entry of the expected returns matrix $M$. When all eigenvalues of $\Sigma$ are equal (that is, $\Sigma\propto I_n$), a change in $M_{k\ell}$ only affects $W_{k\ell} (=W_{\ell k})$, as can be seen from Corollary~\ref{cor:stability:sharedSigma}.
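For reference, Eq.~\ref{eq:delWdelM} is cheap to evaluate numerically. A minimal sketch (ours, assuming NumPy, $\Gamma=I_n$, and a shared covariance $\Sigma$):
\begin{verbatim}
import numpy as np

def dW_dM(Sigma, i, j, k, l):
    # dW_ij / dM_kl for Gamma = I_n and shared covariance Sigma,
    # using the eigendecomposition Sigma = V diag(lam) V^T.
    lam, V = np.linalg.eigh(Sigma)
    num = (np.outer(V[i] * V[k], V[j] * V[l])     # V_is V_ks V_jt V_lt
           + np.outer(V[i] * V[l], V[j] * V[k]))  # V_is V_ls V_jt V_kt
    denom = lam[:, None] + lam[None, :]           # lambda_s + lambda_t
    return 0.5 * np.sum(num / denom)
\end{verbatim}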
But when the eigenvalues are skewed, the terms in Eq.~\ref{eq:delWdelM} corresponding to the smallest eigenvalues have greater weight. In such circumstances, the indirect effect of a change in $M_{k\ell}$ on other $W_{ij}$ can be significant. The following empirical results show that this is indeed the case. \begin{figure}[htbp] \centering \includegraphics[width=0.7 \textwidth]{figs/new-121222/success_rate_equicorr_d_scaled_121222-eps-converted-to.pdf} \caption{The success rate for the Source Detection Problem goes to zero as $\alpha$ and $\epsilon$ increase in a noisy scaled equi-correlation model of $\Sigma$. We set $n = 50$ and $\rho = 0.1$. } \label{fig:equicorr} \end{figure} \smallskip\noindent {\bf Empirical Results for the Source Detection Problem (Simulated Data).} Here, we set the covariance $\Sigma=D^{1/2} (R + \mathcal{E}) D^{1/2}$, where $D$ is a diagonal matrix, $R$ a correlation matrix, and $\mathcal{E}$ a noise matrix. If $\mathcal{E}=0$, then $D_{ii}$ would be the variance of firm $i$. We set $D_{ii}$ according to a power law: $D_{ii} = i^{-\alpha}$ for an $\alpha > 0$. Larger values of $\alpha$ correspond to greater skew in the variances. We choose $R$ to be an equi-correlation matrix with $1$ along the diagonal and $\rho \in (0, 1)$ everywhere else. We draw the error matrix $\mathcal{E}$ from a scaled Wishart distribution: $\mathcal{E} = \| R\|_2 \cdot \mathcal{W}(\sqrt{\epsilon}\cdot I_n, n) / n$ for some chosen noise level $\epsilon$. As $\epsilon$ increases, the noise $\mathcal{E}$ dominates $R$. Figure \ref{fig:equicorr} shows the success rate of source detection over $1000$ experiments for various values of $(\epsilon, \alpha)$ with $\rho = 0.1$. As $\alpha$ increases, the variances become more skewed and source detection can fail even with $\epsilon=0$ noise. When $\epsilon$ grows, the success rate for the source detection problem goes to zero. This suggests that skew combined with noise makes source detection difficult. We observe similar results for real-world choices of $\Sigma$, as we show next. \smallskip\noindent {\bf Empirical Results for the Source Detection Problem (Real-World Data).} We consider two datasets: (a) a trade network between $46$ large economies~\citep{oecd-stats}, and (b) a simulated network between $96$ portfolio managers following various Fama-French strategies~\citep{fama-french-2015}. For each dataset, we construct a ``ground-truth'' covariance $\Sigma$ using all available data (the details are in Appendix~\ref{appendix-datasets}). Then, using $m$ independent samples $\bm{x}_i\sim\mathcal{N}(0, \Sigma)$, we build a ``data-driven'' covariance $\hat{\Sigma} = (1/m)\sum_{i=1}^m \bm{x}_i\bm{x}_i^T$. We use this $\hat{\Sigma}$ to construct the financial network. \begin{figure}[tbp] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figs/new-121222/recovery_prob_500_runs_ff-eps-converted-to.pdf} \caption{Simulated network of $96$ portfolio managers.} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figs/new-121222/oecd_recovery_rate_1000_runs-eps-converted-to.pdf} \caption{$46$-country (OECD) trade network.} \end{subfigure} \caption{The success rate for the Source Detection Problem scales monotonically with the number of samples used to construct the data-driven covariance matrix $\hat{\Sigma}$. } \label{fig:meanShock} \end{figure} Figure~\ref{fig:meanShock} shows the success rate over $500$ experiments for various choices of the sample size $m$.
The success rate increases monotonically with $m$. The reason for this behavior lies in the spectra of $\Sigma$ and $\hat{\Sigma}$. We find that in both datasets, the largest and smallest eigenvalues of $\Sigma$ are separated by several orders of magnitude. This gap becomes even more extreme in the data-driven $\hat{\Sigma}$; the fewer the samples $m$, the greater the gap (see Figure \ref{fig:spectra}). In fact, we observe that the smallest eigenvalue of $\hat{\Sigma}$ is much smaller than the second-smallest eigenvalue: $\lambda_n\ll \lambda_{n-1}$. \cite{zhao-2019-bounded-noise-portfolio} make similar observations. \begin{figure}[tbp] \centering \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figs/new-121222/spectra_with_num_samples_100_to_300-eps-converted-to.pdf} \caption{Simulated network of $96$ portfolio managers} \end{subfigure}% \begin{subfigure}{0.5\textwidth} \includegraphics[width=\textwidth]{figs/oecd/spectra_with_num_samples.pdf} \caption{$46$-country (OECD) trade network} \end{subfigure} \caption{ The eigenvalues of estimated covariance matrices are skewed, and the degree of skew depends on the number of samples. As the sample size decreases, so does the smallest eigenvalue $\lambda_n$ and the ratio $\lambda_n/\lambda_{n-1}$. } \label{fig:spectra} \end{figure} In summary, the experiments on both simulated and real-world datasets highlight the difficulty of source detection and targeted intervention in realistic networks. The reason is the skew in the eigenvalues coupled with noise, which affects the eigenvectors. Skewed eigenvalues correspond to trade combinations (eigenvectors) that are seemingly low-risk. Hence, firms use such trades to diversify. This implies that these eigenvectors have an outsized effect on the network and on how it responds to local changes. Intuitively, if these eigenvectors are ``random,'' a changed belief $M_{k\ell}$ affects the rest of the network randomly. Hence, the direct effects on $W_{k\ell}$ may be less than the indirect effects on other $W_{ij}$. We explore this theoretically in Appendix~\ref{appendix:dwdm:approx}. \section{Introduction} The financial crisis of $2008$ showed the need for mitigating systemic risks in the financial system. There has been much recent work on categorizing such risks~\citep{glasserman-young-2015, glasserman-young-2016, birge-2021}. While the causes of systemic risk are varied, they often share one feature: the network of interconnections between firms, via which problems at one firm spread to others. One example is the weighted directed network of debt between firms. If one firm defaults on its debt, its creditor firms suffer losses. Some creditors may be forced into default, triggering a default cascade~\citep{eiseinberg-noe-2001}. Another example is the network implicitly formed by firms holding similar assets. Here, sales by one firm can cause mark-to-market valuation losses at other firms. This can lead to fire sales of assets~\citep{caballero-simsek-2013, cont16credit, feinstein-2020, feinstein-sojmark-2021}. The structure of inter-firm networks plays a vital role in the financial system. Small changes in network structure can lead to jumps in credit spreads in Over-The-Counter (OTC) markets~\citep{eisfeldt18otc}. Network density and diversification affect how robust the networks are to shocks and how such shocks propagate~\citep{elliot-2014, acemoglu-2015}.
The network structure also affects the design of regulatory interventions~\citep{papachristou-kleinberg-2022,calafiore-2022}. Despite its importance, many prior works use simplistic descriptions of the network structure. For instance, they often assume that the network is fixed and observable. But only regulators may have access to the full network. Furthermore, shocks or regulatory interventions can change the network. Others assume that the network belongs to a general class, such as ring networks or core-periphery networks~\citep{caballero-simsek-2013,amini-2015-control}. But financial networks exhibit complex structure~\citep{peltonen14network, eisfeldt18otc}. Leverage levels, size heterogeneity, and other factors can affect the network topology~\citep{glasserman-young-2016}. Hence, there is a need for models to help reason about financial networks. In this paper, we design a model for a weighted network of contracts between agents, such as firms, countries, or individuals. The contracts can be arbitrary, and the edge weights denote contract sizes. In designing the model, we have two main desiderata. First, the model must account for heterogeneity between firms. This follows from empirical observations that differences in dealer characteristics lead to different trade risk exposures in OTC markets~\citep{eisfeldt18otc}. Second, each firm seeks to maximize its utility and selects its contract sizes accordingly. In effect, each firm tries to optimize its portfolio of contracts~\citep{markowitz-1952}. The model must reflect this behavior. From this starting point, we ask the following questions: \smallskip \begin{center} \begin{enumerate} \item How does a network emerge from interactions between heterogeneous utility-maximizing firms? \item How does the network respond to regulatory interventions? \item How can the network structure inform the beliefs that firms hold about each other? \end{enumerate} \end{center} \noindent Next, we review the relevant literature. \smallskip\noindent {\bf Imputing financial networks.} We often have only partial information about the structure of a financial network. For example, we may know the total liability of each bank in a network. From this, we want to reconstruct all the inter-bank liabilities~\citep{squartini-2018-reconstruction}. One approach is to pick the network that minimizes KL divergence from a given input matrix~\citep{upper04estimating}. \cite{mastromatteo12reconstruction} use message-passing algorithms, while \cite{gandy-verart-2017} use a Bayesian approach. But such random graph models often do not reflect the sparsity and power-law degree distributions of financial networks~\citep{upper-2011-simulation}. Furthermore, these models do not account for the utility-maximizing behavior of firms. \smallskip\noindent {\bf General-purpose network models.} The simplest and most well-explored network model is the random graph model~\citep{gilbert_random_1959, erdos_random_1959}. Here, every pair of nodes is linked independently with probability $p$. Generalizations of this model allow for different degree distributions and edge directionality~\citep{aiello00random, duijn_p2:_2004}. Exponential random graph models remove the need for independence, but parameter estimation is costly~\citep{frank_markov_1986, wasserman_logit_1996, hunter_inference_2006, caimo_bayesian_2011}. Several models add node-specific latent variables to model the heterogeneity of nodes. 
For example, in the Stochastic Blockmodel and its variants, nodes are members of various latent communities. The community affiliations of two nodes determine their probability of linkage~\citep{holland_stochastic_1983, wang87stochastic, chakrabarti04rmat, airoldi08mixed, mao_overlapping_2018}. Instead of latent communities, \cite{hoff02latent} assign a latent location to each node. Here, the probability of an edge depends on the distance between their locations. All the latent variable models assume conditional independence of edges given the latent variables. But in financial networks, contracts between firms are not independent. Two firms will sign a contract only if the marginal benefit of the new contract is higher than the cost. This cost/benefit tradeoff depends on all other contracts signed with other firms. Unlike our model, existing general-purpose models do not account for such utility-maximization behavior. \smallskip\noindent {\bf Network games.} Games have been widely studied for dynamic routing, path planning, and so on~\citep{wu-2021, wu-2022, peters-2021}. In network games, the payoffs of nodes are dependent on the actions of their neighbors \citep{tardos-2004}. One well-studied class of network games is linear-quadratic games. Such games have linear dynamics and quadratic payoff functions. Prior work has explored the stability of Nash equilibria~\citep{guo-2021-lq-games} and algorithms to learn the agents' payoff functions~\citep{leng-2020-learning}. But our model does not yield a linear-quadratic game except in exceptional cases. Instead, our process involves non-linear rational functions of the beliefs of firms. Thus, our setting differs from linear-quadratic games. Recently, network games have been extended to settings where the number of players tends to infinity \citep{carmona-2022-graphon}. However, we only consider finite networks. \smallskip\noindent {\bf Strategic network formation.} There is recent work on games designed to build networks. Prior works prove variants of Nash stability for such games~\citep{calvo-2009-pairwise, hellmann-2021-pairwise}. We also show that our model achieves a strong form of Nash stability. But prior works typically consider unweighted networks. In contrast, we model weighted networks, with weights denoting contract sizes. Also, we analyze how the network adapts to changing beliefs of the participant firms. The linkage between firms' utilities and their beliefs, and its effect on stability, is not considered in prior work. There is also work combining network formation processes with network-based games~\citep{golub-sadler-2021}. The authors seek a Nash equilibrium over the network and the agents' actions. In other words, no agent wants to modify the network or change her actions given the network. We show similar results for our model. However, our work has several differences. Ours is for a weighted network, while they only consider unweighted networks. Also, they primarily focus on the case of ``separable games.'' In our setting, this corresponds to the case where all firms are uncorrelated. This is an unrealistic assumption since firms often face correlated counterparties. The presence of correlations also lets firms diversify their contracts. \subsection{Our Contributions} We develop a new network model of contracts between heterogeneous agents, such as firms, countries, or individuals. Each agent aims to maximize a mean-variance utility parametrized by its beliefs. 
But for two agents to sign a contract, both must agree to the contract size. For a stable network, all agents must agree to all their contracts. We show that such constraints are solvable by allowing agents to pay each other. By choosing prices appropriately, every agent maximizes its utility in a stable network. \smallskip\noindent {\bf Characterization of stable networks (Section~\ref{sec:model}):} We show that unique stable networks exist for almost all choices of agents' beliefs. These networks are robust against actions by cartels, a condition that we call Higher-Order Nash Stability. The agents can also converge to the stable network via iterative pairwise negotiations. The convergence is exponential in the number of iterations. Hence, the stable network can be found quickly. Finally, we show how to infer the agents' beliefs by observing network snapshots over time, under certain conditions. \smallskip\noindent {\bf The limits of regulation (Section~\ref{sec:regulators}):} A financial regulator can observe the entire network but not the agents' beliefs. Suppose firm $i$ changes its beliefs about firm $j$. Then the contract size between $i$ and $j$ will change. Indirectly, other contracts will change too. We show empirically that in realistic settings, the indirect effects can be as significant as the direct effects. In such cases, the regulator cannot infer the underlying cause of changes in the network. Similarly, suppose the regulator intervenes with one firm, affecting its beliefs. The resulting network changes need not be localized to that firm's neighborhood in the network. Thus, targeted interventions can have strong ripple effects. Broad-based interventions aimed at increasing stability can also have adverse effects. For instance, increasing margin requirements may make stable networks impossible. \smallskip\noindent {\bf Outlier detection by firms (Section~\ref{sec:firms}):} A firm $i$ can observe its contracts with counterparties but not the entire network. Suppose another firm $j$ (say, a real-estate firm) has beliefs that are very different from its peers. Then, we prove that under certain conditions, $j$'s contract size with~$i$ is also an outlier compared to other real-estate firms. So, firm $i$ can use the network to detect outliers and update its beliefs. But suppose all real-estate firms change their beliefs. This changes all their contract sizes without creating outliers. We show that $i$ cannot determine the cause of this change. For example, firm $i$ would observe the same change whether all real-estate firms had become more risk-seeking or profitable. However, firm $i$ may want to increase its exposure if they are more profitable but reduce exposure if they are more risk-seeking. Since the data cannot identify the proper action, firm $i$ remains uncertain. Exogenous, seemingly insignificant information may persuade firm $i$ one way or another. Thus, minor news may trigger drastic changes in the network. \smallskip\noindent {\bf Notation.} We use lowercase letters, with or without subscripts, to denote scalars (e.g., $c, \gamma_i$). Lowercase bold letters denote vectors ($\bm{\mu}_i, \bm{w}$), and uppercase letters denote matrices ($W, P, \Sigma_i$). We use $\bm{\mu}_{i;j}$ to refer to the $j^{th}$ component of the vector $\bm{\mu}_i$, and $\Sigma_{i;jk}$ for the $(j,k)$ cell of matrix $\Sigma_i$. We use $\bm{v}^T$ to denote the transpose of a vector $\bm{v}$, and $\|\cdot\|_p$ to denote the $\ell_p$ norm of a vector or matrix. 
We say $A\succeq 0$ if $A$ is positive semidefinite, $A\succ 0$ if it is positive definite, and $A\succeq B$ if $A-B\succeq 0$. The vectors $\bm{e}_1, \ldots, \bm{e}_n$ denote the standard basis in $\mathbb{R}^n$, and $I_n$ is the $n\times n$ identity matrix. If $A \in \mathbb{R}^{m \times n}, B \in \mathbb{R}^{p \times q}$ then $A \otimes B \in \mathbb{R}^{mp \times nq}$ denotes their tensor product: $(A \otimes B)_{ij, k\ell} = A_{ik}B_{j \ell}$. For an appropriate matrix $M$, $\textrm{tr}(M)$ calculates its trace, $\textrm{vec}(M)$ vectorizes $M$ by stacking its columns into a single vector, and $\textrm{uvec}(M)$ vectorizes the upper-triangular off-diagonal entries of $M$. For an integer $r \geq 1$, we use $[r]$ to denote the set of integers $[r] \vcentcolon= \{1, 2, \dots, r\}$. \section{The Proposed Model} \label{sec:model} We consider a {\em weighted} network $W\in\mathbb{R}^{n\times n}$ between $n$ agents (such as firms, countries, or individuals). The element $W_{ij}$ represents the size of a contract between agents $i$ and $j$. We make no assumptions about the content of the contract. For instance, the contract could be a credit default swap, a stock swap, or an insurance contract. Since contracts need mutual agreement, $W_{ij}=W_{ji}$. We take $W_{ii}$ to represent $i$'s investment in itself. Note that a negative contract ($W_{ij}=W_{ji}<0$) is a valid contract that reverses the content of a positive contract. For example, if a positive contract is a derivative trade between two firms, the negative contract swaps the roles of the two firms. Let $\bm{w}_i$ denote the $i^{th}$ column of $W$ (i.e., $\bm{w}_{i;j}=W_{ji}$ for all $j$). Each agent $i$ would prefer to set its contract sizes $\bm{w}_i$ to maximize its utility. But other agents will typically have different preferences. So, to achieve an agreement about the contract size, agents can pay each other. We denote by $P_{ji}$ the price per unit contract that $i$ pays to $j$, for a total payment of $P_{ji}\cdot W_{ji}$. If $P_{ji}<0$, then $j$ pays~$i$. Since payments are zero-sum and $W_{ji} = W_{ij}$, we must have $P_{ji}=-P_{ij}$. Each contract yields a stochastic payout, and agents have beliefs about these payouts. We represent agent $i$'s beliefs by a vector ${\bm\mu}_i$ of expected returns and a covariance matrix $\Sigma_i\succ 0$. Specifically, let $X_{ij}$ be a random variable representing the payout obtained by agent $i$ from a unit-sized contract with agent $j$. Then, agent $i$ believes that $E[X_{ij}] = {\bm \mu}_{i;j}$ and $Cov(X_{ij}, X_{ik}) = \Sigma_{i;jk}$. Note that we do {\em not} assume that the contracts are zero-sum or that the beliefs are correct, even approximately. Thus, the overall expected return from all contracts of $i$ is $\bm{w}_i^T(\bm{\mu}_i-P\bm{e}_i)$, and the variance of the overall return is $\bm{w}_i^T \Sigma_i \bm{w}_i$. We assume that each agent has a mean-variance utility~\citep{markowitz-1952}: \begin{align} \text{agent $i$'s utility } g_i(W, P) &:= \bm{w}_i^T (\bm{\mu}_i - P \bm{e}_i) - \gamma_i \cdot \bm{w}_i^T \Sigma_i \bm{w}_i, \label{eq:utility} \end{align} where $\gamma_i > 0$ is a risk-aversion parameter. Homogeneity of risk aversion has been observed in game settings~\citep{metrick-1995}, surveys~\citep{kimball-2008}, and investment portfolios~\citep{paravisini-2017} (see also~\cite{ang-2014}). Hence, we expect all $(\gamma_i)_{i\in[n]}$ to be similar. 
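To fix ideas, the utility of Eq.~\ref{eq:utility} is a one-line computation given the model primitives. A minimal sketch (ours, assuming NumPy; $\bm{\mu}_i$ is stored as the $i^{th}$ column of \texttt{M}, while \texttt{Sigmas[i]} and \texttt{gammas[i]} hold $\Sigma_i$ and $\gamma_i$):
\begin{verbatim}
import numpy as np

def utility(i, W, P, M, Sigmas, gammas):
    # Mean-variance utility of agent i:
    #   w_i^T (mu_i - P e_i) - gamma_i * w_i^T Sigma_i w_i,
    # where w_i and mu_i are the i-th columns of W and M.
    w_i = W[:, i]
    net = M[:, i] - P[:, i]   # expected returns net of prices paid
    return w_i @ net - gammas[i] * (w_i @ Sigmas[i] @ w_i)
\end{verbatim}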
Note that Eq.~\ref{eq:utility} ignores costs for contract formation; we will consider these in Section~\ref{sec:regulators:friction}. The model above allows contracts between all pairs of agents. But some edges may be prohibited due to logistical or legal reasons. For each agent $i$, let $J_i\subseteq [n]$ denote the ordered set of agents with whom $i$ can form an edge. So, if $k\notin J_i$ (and hence $i\notin J_k$), we have $W_{ik}=W_{ki}=P_{ik}=P_{ki}=0$. Similarly, if $i\notin J_i$, then self-loops are prohibited ($W_{ii}=P_{ii}=0$). We will encode these constraints in the binary matrix $\Psi_i\in\mathbb{R}^{|J_i|\times n}$ where $\Psi_{i;jk} = 1$ if $k$ is the $j^{th}$ element of $J_i$, and $\Psi_{i;jk}=0$ otherwise. In other words, $\Psi_i$ is obtained from $I_n$ by deleting the rows corresponding to the prohibited counterparties of $i$. Thus, for any $\bm{v}\in\mathbb{R}^n$, $\Psi_i\bm{v}$ selects the elements of $\bm{v}$ corresponding to $J_i$. If all edges are allowed, we have $\Psi_i=I_n$ for all $i$. \begin{definition}[Network Setting] A {\em network setting} $(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$ captures the beliefs and constraints of $n$ agents. When there are no constraints (i.e., all edges are allowed), we drop the $\Psi_i=I_n$ terms to simplify the exposition. Finally, we will use $M\in \mathbb{R}^{n \times n}$ to denote a matrix whose $i^{th}$ column is $\bm{\mu}_i$, and $\Gamma$ to denote a diagonal matrix with $\Gamma_{ii}=\gamma_i$. \end{definition} \subsection{Finding the Stable Point via Pairwise Negotiations} \label{sec:model:pairwise} To compute the stable point in Theorem~\ref{thm:stable}, we must know the beliefs of all agents. But in practice, contracts are set iteratively by negotiations among pairs of agents. We will now formalize the process of pairwise negotiations and characterize the conditions under which such negotiations can converge to the stable point. We propose a multi-round pairwise negotiation process. In round $t+1$, every pair of agents $i$ and $j$ update the price $P_{ij}(t)$ to $P_{ij}(t+1)$ (and hence $P_{ji}(t)$ to $P_{ji}(t+1)$) as follows. First, they agree to a price $P^\prime_{ij}$ between themselves, {\em assuming optimal contract sizes with all other agents at the current prices $P(t)$.} In other words, we assume that the other agents will accept the prices in $P(t)$ and the contract sizes preferred by $i$ and $j$. Under this condition, $P^\prime_{ij}$ is the price at which $i$'s optimal contract size with $j$ is also $j$'s optimal size with $i$. All pairs of agents calculate these prices {\em simultaneously}. We create a new price matrix $P^\prime$ from these prices. Then, we set $P(t+1)= (1-\eta)P(t) + \eta P^\prime$, where $\eta\in (0, 1)$ is a dampening factor chosen to achieve convergence. Algorithm~\ref{alg:pairwise} shows the details. Next, we show how the price $P^\prime_{ij}$ can be computed. Slightly abusing notation, we use $P^\prime$ in the next proposition to denote a matrix that is identical to $P(t)$ except for the pair $(i,j)$. \begin{proposition}[Price after Pairwise Negotiation]\label{price-update-rule} Consider a network setting $(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$. Let $Q_i$ be as in Theorem \ref{thm:stable}. 
Given a price matrix $P=-P^T$ and a pair of firms $(i,j)$ that are permitted to trade, let $P^\prime$ be another skew-symmetric price matrix such that (a) $P^\prime$ differs from $P$ only in the cells $(i,j)$ and $(j,i)$, (b) $i$ and $j$ both maximize their utility at the same contract size under $P^\prime$, and (c) $i$ and $j$ can choose their optimal contract sizes with all other agents given these prices. Then, \begin{align*} P^\prime_{ij} &= \frac{1}{Q_{i;j,j} + Q_{j;i,i}} \Big(\bm{e}_i^T Q_j (M - P) \bm{e}_j - \bm{e}_j^T Q_i (M - P) \bm{e}_i \Big) + P_{ij} \end{align*} \end{proposition} \begin{algorithm} \caption{Pairwise Negotiations} \label{alg:pairwise} \begin{algorithmic}[1] \Procedure{Pairwise}{$\eta\in(0,1)$} \State $t\gets 0$ \State $P(0)\gets\text{ any skew-symmetric matrix}$ \While{$P(t)$ has not converged} \State $\forall i,j\in[n], P^\prime_{ij}\gets$ pairwise-negotiated price for $(i,j)$ (Prop.~\ref{price-update-rule}) \State $P(t+1)\gets (1-\eta) P(t) + \eta P^\prime$ \State $t\gets t+1$ \EndWhile \EndProcedure \end{algorithmic} \end{algorithm} Now, we will show that Algorithm~\ref{alg:pairwise} converges. First, we define {\em global asymptotic stability} (following \citet{callier-desoer}). \begin{definition}[Global Asymptotic Stability] The pairwise negotiation process is globally asymptotically stable for a given network setting and dampening factor $\eta$ if, for any initial price matrix $P(0)$, there exists a matrix $P^\star$ such that the sequence of price matrices $P(t)$ converges to $P^\star$ in Frobenius norm: $\lim\limits_{t\to\infty} \|P(t) - P^\star\|_F = 0$. \end{definition} When pairwise negotiations are globally asymptotically stable, the limiting matrix $P^\star$ must be skew-symmetric since each $P(t)$ is skew-symmetric. Also, since prices are updated whenever two agents disagree on the size of the contract between them, all agents agree on their contract sizes at $P^\star$. Hence, $P^\star$ must be a stable point for the given network setting. Most network settings have a unique stable point (Theorem~\ref{thm:stable-common}). Hence, we typically expect a single $P^\star$ for all starting points $P(0)$. Now, we show that for a range of $\eta$, pairwise negotiations are globally asymptotically stable. \begin{theorem}[Convergence Conditions and Rate] \label{thm:asymp} Let $Q_i$ be defined as in Theorem \ref{thm:stable}. Define the following $n^2\times n^2$ matrices: \begin{align*} K &\vcentcolon= \sum\limits_{r=1}^{n} \bm{e}_r \bm{e}_r^T \otimes Q_r + Q_r \otimes \bm{e}_r \bm{e}_r^T & \\ L_{(i - 1) n + j, (i - 1) n + j} &= Q_{i;j,j} + Q_{j; i,i} \quad\forall i,j\in[n] & \text{($L$ is diagonal).} \end{align*} Let $(L^\dagger K)\mid_R$ denote the principal submatrix of $L^\dagger K$ containing the rows/columns $(i-1)n+j$ such that the edge $(i,j)$ is not prohibited. Let $\lambda_{\textrm{max}}, \lambda_{\textrm{min}}$ be the largest and smallest eigenvalues of the matrix $(L^\dagger K)\mid_R$ respectively. Let $\eta^* = \frac{2}{\lambda_{\textrm{max}}}$. Then, we have: \begin{enumerate} \item For all $\eta \in (0, \eta^*)$, pairwise negotiations with $\eta$ are globally asymptotically stable. \item For such an $\eta$, the convergence is exponential in the number of rounds $t$: \begin{align*} \|P(t)-P^\star\|_F &\leq \frac{\alpha^t}{1-\alpha} \cdot \|P(1)-P(0)\|_F, &\text{where }\alpha = \max\{ \abs{1-\eta\lambda_{\textrm{min}}}, \abs{1-\eta\lambda_{\textrm{max}}} \}. 
\end{align*} \end{enumerate} Here, $P^\star$ is the stable point to which the negotiation converges. \end{theorem} \subsection{Pairwise Negotiations under Random Covariances} \label{sec:model:randomcov} So far we have made no assumptions about agents' beliefs. In this section, we analyze the convergence of pairwise negotiations for ``data-driven'' agents. Specifically, each agent $i$ now {\em estimates} its covariance matrix. For this section only, we will call the covariance matrix $\hat{\Sigma}_i$ instead of $\Sigma_i$ to emphasize that it is an estimated quantity. Suppose each agent $i$ observes $m$ independent data samples. Each sample is a vector of the returns of unit contracts with all $n$ agents. The samples for agent $i$ are collected in a matrix $X_i\in\mathbb{R}^{n\times m}$, with one column per sample. The sample covariance of this data is $\hat{\Sigma}_i$. We assume that all agents observe samples from the same return distribution, which has covariance $\Sigma$. Under a wide range of conditions, $\|\hat{\Sigma}_i - \Sigma\|\to 0$ in probability~\citep{vershynin-book}. Hence, at convergence, the maximum allowed dampening rate $\eta^\star$ in Theorem~\ref{thm:asymp} would be a function of $\Sigma$. But for finite sample sizes, each agent's $\hat{\Sigma}_i$ can be different. Hence, the maximum dampening $\hat{\eta}^\star$ may be less than $\eta^\star$. The smaller the $\hat{\eta}^\star$, the worse the rate of convergence of pairwise negotiations. However, even with a few samples, $\hat{\eta}^\star$ is close to $\eta^\star$, as the next theorem shows. \begin{theorem}[Small Sample Sizes are Sufficient for Fast Convergence] \label{thm:asymp:random} Suppose that $\| \Sigma \|, \| \Sigma^{-1} \|, \| \Gamma \|,$ and $\| \Gamma^{-1} \|$ are $O(1)$ and all edges are allowed. Also, suppose that each sample column of $X_i$ is drawn independently from a $\mathcal{N}(\bm{0}, \Sigma)$ distribution, and ${\hat{\Sigma}_i \vcentcolon= (1/m) X_i X_i^T}$. Let $\hat{\eta}^\star$ be the maximum dampening factor using $(\hat{\Sigma}_i)_{i\in[n]}$ as defined in Theorem~\ref{thm:asymp}. Let $\eta^\star$ be the dampening factor if $\hat{\Sigma}_i$ were replaced by $\Sigma$ for all $i$. If $m = \ceil*{n \log n}$, then for large enough $n$, ${\hat{\eta}^\star \geq (1 - o(1)) \eta^\star}$ with probability at least $1 - \exp(-\Omega(n))$. \end{theorem} Theorem~\ref{thm:asymp:random} shows that data-driven agents using a broad range of dampening factors are still likely to find the stable point via pairwise negotiations. Furthermore, the amount of data they need is comparable to the number of agents (up to a logarithmic factor). \section{Acknowledgments} The authors thank Stathis Tompaidis, Marios Papachristou, Kshitij Kulkarni, and David Fridovich-Keil for valuable discussions and suggestions. This work was supported by NSF grant 2217069 and a Dell Faculty Award. \bibliographystyle{informs2014} \subsection{Characterizing Stable Points} \label{sec:model:stable} In the above model, every agent tries to optimize its own utility (Eq.~\ref{eq:utility}). We now characterize the conditions under which selfish utility-maximization leads to a stable network. \begin{definition}[Feasibility] A tuple $(W, P)$ is feasible if $W=W^T$, $P=-P^T$, and $W$ and $P$ obey the constraints encoded in $(\Psi_i)_{i\in[n]}$. 
\end{definition} \begin{definition}[Stable point] A feasible $(W, P)$ is stable if each agent achieves its maximum possible utility given prices $P$: \begin{align*} g_i(W, P) = \max_{\text{feasible } (W', P) \text{ under } \{\Psi_i\}} g_i(W', P) \quad \forall i\in [n]. \end{align*} \end{definition} \begin{figure}[tbp] \centering \begin{subfigure}{0.39\textwidth} \includegraphics[width=\textwidth]{figs/noprice-eps-converted-to.pdf} \caption{No price payments allowed} \label{fig:example:noprice} \end{subfigure} \begin{subfigure}{0.39\textwidth} \includegraphics[width=\textwidth]{figs/yesprice-eps-converted-to.pdf} \caption{Firm~$2$ pays $5/3$ per contract} \label{fig:example:yesprice} \end{subfigure} \begin{subfigure}{0.19\textwidth} \raisebox{1.5em}{ \includegraphics[width=\textwidth]{figs/pict3}} \caption{Network} \label{fig:example:network} \end{subfigure}% \caption{{\em Example of a stable point for two firms:} (a) When firms are not allowed to pay each other, they may be unable to agree to a contract, even if trading improves their utilities. (b) By allowing for prices, both firms can agree on a contract size. In effect, firm~$2$ shares its utility with firm~$1$ to achieve agreement. (c) The stable network is shown.} \label{fig:example} \end{figure} \begin{example} Suppose we only have two firms with the following setting: \begin{align*} \text{mean beliefs }M &= \begin{bmatrix}0 & 3\\ 1 & 4\end{bmatrix} & \text{covariance }\Sigma_1=\Sigma_2 &=\begin{bmatrix}1 & 0\\0 & 2\end{bmatrix} & \text{risk aversion }\gamma_1=\gamma_2&=1. \end{align*} So, both firms perceive a benefit from trading ($M_{12}>0, M_{21}>0$). If trading is disallowed, the optimum $W$ is diagonal with $W_{11}=0$ and $W_{22}=1$ (and $P$ is the zero matrix). The corresponding utilities are $0$ for firm~$1$ and $2$ for firm~$2$. Suppose we allow trading but do not allow pricing (Figure~\ref{fig:example:noprice}). Then, the two firms can each improve their utility by trading, but achieve their optimum utilities at different contract sizes. Hence, they may be unable to agree to a contract. In Figure~\ref{fig:example:yesprice}, firm~$2$ pays firm~$1$ a specially chosen price of $5/3$ per unit contract. At this price, both firms achieve their optimum utilities at the same contract size $W_{12}=W_{21}=2/3$. Hence, they can agree to a contract. By paying the price, firm~$2$ shares some of its utility with firm~$1$ to achieve agreement on the contract. This choice of $W$ and $P$ is a stable point (Figure~\ref{fig:example:network}). The following results show that this is the {\em only} stable point.$\hfill\Box$ \end{example} Define $Q_i = \Psi_i^T (2 \gamma_i \Psi_i \Sigma_i \Psi_i^T)^{-1} \Psi_i$. When all edges are allowed, $\Psi_i=I_n$ and $Q_i=(2\gamma_i\Sigma_i)^{-1}$. Let $F = \{(i, j): 1 \leq i < j \leq n, \Psi_i \bm{e}_j \neq \bm{0}\}$ denote the ordered pairs $i<j$ where $P_{ij}$ is allowed to be non-zero. Note that $|F|\leq n(n-1)/2$. For any $n\times n$ matrix $X$, let $\textrm{uvec}(X)_F\in\mathbb{R}^{|F|}$ be a vector whose entries are the ordered set $\{X_{ij}\mid (i,j)\in F\}$. \begin{theorem}[Existence and Uniqueness of Stable Point] \label{thm:stable} Define $n\times n$ matrices $A$, $B_{(i,j)}$, and $C_{(i,j)}$ as follows: \begin{align*} A_{ij} &=\bm{e}_i^T Q_j M \bm{e}_j, & B_{(i,j)} &= \bm{e}_i\bm{e}_j^T Q_i, & C_{(i,j)} &= (B_{(i,j)} - B_{(j,i)}) - (B_{(i,j)} - B_{(j,i)})^T.
\end{align*} Let $Z_F$ be the $|F|\times |F|$ matrix whose rows are the ordered sets $\{\textrm{uvec}(C_{(i,j)})_F \mid (i,j)\in F\}$. Then, we have the following: \begin{enumerate} \item A stable point $(W, P)$ under $\{\Psi_i\}$ exists if $\textrm{uvec}(A-A^T)_F$ lies in the column space of $Z_F$. \item A unique stable point always exists if $Z_F$ is full rank. \end{enumerate} \end{theorem} A special case of the above is when all agents have the same covariance ($\Sigma_i=\Sigma$ for all $i\in[n]$). One example is when the risk of a contract is primarily counterparty risk (so $\Sigma_{i;jk}$ depends on $j$ and $k$, not $i$) and there is reliable public data on such risks (say, via credit rating agencies). We can then characterize the stable network as follows. \begin{corollary}[Shared $\Sigma$, all edges allowed] \label{cor:stability:sharedSigma} Suppose $\Sigma_i=\Sigma$ and $\Psi_i=I_n$ for all $i\in[n]$. Let $(\lambda_i, \bm{v}_i)$ denote the $i^{th}$ eigenvalue and eigenvector of $\Gamma^{-1/2}\Sigma\Gamma^{-1/2}$. Then, the network $W$ can be written in two equivalent ways: \begin{align*} \textrm{vec}(W) &= \frac 1 2 (\Gamma\otimes \Sigma + \Sigma \otimes \Gamma)^{-1}\textrm{vec}(M+M^T),\\ W &= \Gamma^{-1/2}\left( \sum_{i=1}^n \sum_{j=1}^n \frac{\bm{v}_i^T \Gamma^{-1/2}(M+M^T)\Gamma^{-1/2} \bm{v}_j}{2 (\lambda_i + \lambda_j)} \bm{v}_i \bm{v}_j^T \right) \Gamma^{-1/2}. \end{align*} \end{corollary} Next, we show that most settings have unique stable points, and such stable networks are robust. \begin{theorem}[Unique Stable Points are Common] For any network setting $S=(\bm{\mu}_i, \gamma_i, \Sigma_i, \Psi_i)_{i \in [n]}$ and any $\epsilon>0$, there exist uncountably many network settings $S^\prime=(\bm{\mu}_i, \gamma_i, \Sigma_i^\prime, \Psi_i)_{i \in [n]}$ such that $\|\Sigma_i^\prime-\Sigma_i\|<\epsilon$ for all $i\in[n]$, and $S^\prime$ has a unique stable point. \label{thm:stable-common} \end{theorem} For two feasible tuples $(W_1, P_1)$ and $(W_2, P_2)$, let $(W_2, P_2)$ {\em dominate} $(W_1, P_1)$ if for all $i\in[n], g_i(W_1, P_1)\leq g_i(W_2, P_2)$, with at least one inequality being strict. \begin{theorem}[Stable points cannot be dominated] \label{thm:domination} Suppose a stable point $(W, P)$ exists. Then, there is no feasible $(W', P')$ that dominates $(W, P)$. \end{theorem} The stable point obeys a strong form of robustness that we call {\em Higher-Order Nash Stability}. This strengthens the notions of {\em pairwise stability}~\citep{hellmann-2013} and {\em pairwise Nash}~\citep{calvo-2009-pairwise, golub-sadler-2021} by allowing for agent coalitions, instead of just considering pairs of agents. It is also closely related to the concept of {\em Strong Nash equilibrium}, which strengthens Nash equilibrium by requiring that no subset of agents can deviate at equilibrium without at least one agent being worse off~\citep{mazalov-2019-book}. \begin{definition}[Agent Action] At a given feasible point $(W, P)$, an ``action'' by agent $i$ is the ordered set $(w_{i, j}^\prime, p_{i, j}^\prime)_{j\in J_i}$, where $J_i\subseteq[n]$ is the set of permissible edges for agent $i$. The action represents a set of proposed changes to $i$'s existing contracts. Each agent $j\in J_i$ responds as follows: \begin{enumerate} \item If the new $(w_{ij}^\prime, p_{ij}^\prime)$ raises $j$'s utility, then $j$ agrees to the revised contract and price. \item Otherwise, $i$ must either keep the existing contract or cancel it ($w_{ij}=p_{ij}=0$). 
We assume that $i$ cancels the contract if and only if this strictly increases $i$'s utility. \end{enumerate} We call the shifted $(W^\prime, P^\prime)$ the {\em resulting network}. \end{definition} \begin{definition}[Higher-Order Nash Stability] A feasible $(W, P)$ is Higher-Order Nash Stable if: \begin{enumerate} \item {\em Nash equilibrium}: No agent $i$ has an action such that the resulting network $(W^\prime, P^\prime)$ is strictly better for $i$. \item {\em Cartel robustness}: For any proper subset $S \subset [n]$ of agents, there is no feasible point $(W^\prime, P^\prime)$ that differs from $(W, P)$ only for indices $\{i, j\}$ with $i\in S, j\in S$ such that all agents in $S$ have higher utility under $(W^\prime, P^\prime)$ than $(W, P)$. \end{enumerate} \end{definition} \begin{theorem}[Higher-Order Nash Stability] \label{thm:pairwise-nash} Any stable point $(W, P)$ is Higher-Order Nash Stable. \end{theorem}
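As a closing illustration, the pairwise negotiations of Algorithm~\ref{alg:pairwise} are short to implement when all edges are allowed. A minimal sketch (ours, assuming NumPy), with $Q_i = (2\gamma_i\Sigma_i)^{-1}$ and the price update of Proposition~\ref{price-update-rule}:
\begin{verbatim}
import numpy as np

def pairwise_negotiation(M, Sigmas, gammas, eta=0.1,
                         tol=1e-10, max_rounds=100000):
    # Q_i = (2 gamma_i Sigma_i)^{-1}; all pairs negotiate
    # simultaneously, then prices are dampened by eta.
    n = M.shape[0]
    Q = [np.linalg.inv(2.0 * g * S) for g, S in zip(gammas, Sigmas)]
    P = np.zeros((n, n))               # any skew-symmetric start
    for _ in range(max_rounds):
        R = M - P
        Pp = np.zeros_like(P)
        for i in range(n):
            for j in range(i + 1, n):
                num = Q[j][i] @ R[:, j] - Q[i][j] @ R[:, i]
                Pp[i, j] = num / (Q[i][j, j] + Q[j][i, i]) + P[i, j]
                Pp[j, i] = -Pp[i, j]
        P_new = (1.0 - eta) * P + eta * Pp
        if np.linalg.norm(P_new - P) < tol:
            return P_new               # approximately stable prices
        P = P_new
    return P
\end{verbatim}
For $\eta$ below the threshold $\eta^\star$ of Theorem~\ref{thm:asymp}, this iteration converges exponentially to the stable prices $P^\star$.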
\section{Introduction \label{sec:intr}} We have reported results of $B_K$ calculated using improved staggered fermions with $N_f=2+1$ flavors in Refs.~\cite{wlee-10-3,Bae:2011ff}. We refer to Refs.~\cite{wlee-10-3} and \cite{Bae:2011ff} as SW-1 and SW-2, respectively, hereafter. In SW-1, we use three different lattice spacings to control the discretization errors. The dominant error in this result comes from uncertainty in the matching factors. One hidden uncertainty is that we use the diagonal approximation (uncorrelated fitting) instead of the full covariance fitting in SW-1. In fact, one of the most frequently asked questions about SW-1 is why we use the uncorrelated fitting (i.e., the diagonal approximation) instead of the full covariance fitting. Here, we would like to address this issue of the covariance fitting and the diagonal approximation. A significant difficulty in fitting highly correlated data has been pointed out in the literature, such as Refs.~\cite{Thacker:1990bm,Drummond:1992pg,kilcup-1994-1,michael-1994-1, Michael:1994sz}. A number of prescriptions have been proposed, such as the diagonal approximation \cite{michael-1994-1}, modifying the covariance matrix \cite{Michael:1994sz}, and the cut-off method, popularly known as singular value decomposition (SVD) \cite{Thacker:1990bm,Drummond:1992pg,kilcup-1994-1}. The weakness of these approaches is that all of them modify the covariance matrix one way or another. Hence, the true meaning of $\chi^2$ is lost, and the quality of the fit cannot be assessed. Therefore, we propose a new method, the eigenmode shift (ES) method, which does not modify the covariance matrix but only uses our freedom to modify the fitting functional form, based on the Bayesian method. It turns out that the ES method allows for a probability interpretation of the quality of the fit based on the Bayesian $\chi^2$ distribution.\footnote{This is different from the normal $\chi^2$ distribution, which assumes uniform prior information. We will address this issue when we discuss the Bayesian method.} An alternative approach is the orthodox Bayesian method. In this approach, we add higher order terms to the fitting function with proper constraints until it fits the data. This turns out to be another good solution to the problem. The paper is organized as follows. In Sec.~\ref{sec:cov-fit}, we review the covariance fitting process and give a physical meaning to the covariance matrix. In Sec.~\ref{sec:bk-cov-mat}, we address the problem with small eigenvalues of the covariance matrix. In Sec.~\ref{sec:pres}, we list the possible solutions to the problem and discuss the pros and cons; here, we explain the ES method. In Sec.~\ref{sec:err-cov-mat}, we give a theoretical background for the pdf (probability density function) of the eigenvalues of the covariance matrix. In Sec.~\ref{sec:conclude}, we close with some concluding remarks. \section{Review of covariance fitting} \label{sec:cov-fit} Let us consider $N$ samples of unbiased estimates of a quantity $y_i$ with $i=1,2,3,\ldots,D$. Here, the data set is $\{y_i(n) | n=1,2,3,\ldots,N \}$. Let us assume that the samples $y_i(n)$ are statistically independent in $n$ for fixed $i$ but are substantially correlated in $i$.
For example, a similar situation occurs in lattice gauge theory calculations, where there are $N$ independent gauge configurations and $B_K$ is measured for multiple quark mass pairs $(m_x, m_y)$, with $m_x$ the valence down quark mass and $m_y$ the valence strange quark mass; this corresponds to $D$ Green functions measured over the gauge configurations. An introduction to this subject is given in Refs.~\cite{milc-1988-1,toussaint-1990-1,anderson-2003,johnson-2007}. The fitting functional form suggested by the SU(2) staggered chiral perturbation theory (SChPT) is linear, as follows, \begin{equation} f_\text{th} (X) = \sum_{a=1}^{P} c_a F_a(X) \label{eq:fit-func-1} \end{equation} where $c_a$ are the low energy constants (LECs) and $F_a$ is a function of $X$, where $X$ collectively represents $X_P$ (the squared pion mass of $\bar{x}x$), $Y_P$ (the squared pion mass of $\bar{y}y$), and so on. The details on $F_a$ and $X$ are given in SW-1. Here, we focus on the X-fit of the 4X3Y-NNLO fitting of the SU(2) SChPT, which is explained in great detail in SW-1. In this fit, we have three LECs and so $P=3$. We are interested in the probability distribution of the average $\bar{y}_i$ of the data $y_i(n)$. \begin{equation} \bar{y}_i = \frac{1}{N} \sum_{n=1}^{N} y_i(n) \end{equation} We assume that the measured values of $\bar{y}_i$ have a normal distribution $P(\bar{y})$, by the central limit theorem for multivariate statistical analysis, as follows, \begin{equation} P(\bar{y}) = \frac{1}{Z} \exp[ -\frac{1}{2} \sum_{i,j=1}^{D} (\bar{y}_i - \mu_i) (N \ \Gamma^{-1}_{ij}) (\bar{y}_j - \mu_j) ] \,, \end{equation} where $\mu_i$ represents the true mean value of $y_i$, which is, in general, unknown and is recovered only in the limit $N \rightarrow \infty$, and \[ Z = \int [d\bar{y}] \exp[ -\frac{1}{2} \sum_{i,j=1}^{D} (\bar{y}_i - \mu_i) (N \ \Gamma^{-1}_{ij}) (\bar{y}_j - \mu_j) ] \,. \] Here, $\Gamma_{ij}$ is the true covariance matrix, which is, in general, unknown in our problems. The standard unbiased estimator of $\Gamma_{ij}$ is the sample covariance matrix $S_{ij}$, defined as follows, \begin{eqnarray} S_{ij} &=& \frac{1}{N-1} \sum_{n=1}^{N} [y_i(n) - \bar{y}_i] [y_j(n) - \bar{y}_j] \\ C_{ij} &=& \frac{1}{N} S_{ij}, \label{eq:cov_mat} \end{eqnarray} where $C_{ij}$ is the normalized sample covariance matrix. Here, note that the covariance matrix is a symmetric and positive semi-definite matrix which has real and non-negative eigenvalues\footnote{In other words, some of the eigenvalues may be zero and the rest are positive.}. We assume that our theory\footnote{Here, this means the SU(2) SChPT.} must describe the data well. Then, \[ \mu_i \rightarrow \nu_i = f_\text{th}(X_i) = \sum_{a=1}^{P} c_a F_a(X_i) \,. \] In other words, we want to test whether $\nu_i$ describes the data reliably from the standpoint of statistics. In this procedure, we want to determine the $c_a$ (LECs) to give the best fit. Here, the best fit is defined by minimizing $T^2$ for the numerical results $\{\bar{y}_i\}$, where $T^2$ is defined by \begin{eqnarray} T^2 = \sum_{i,j=1}^{D} [\bar{y}_i - \nu_i] [N \ S^{-1}_{ij}] [\bar{y}_j - \nu_j] \,. \end{eqnarray} We notice that $Y_i = \sqrt{N} [\bar{y}_i - \nu_i]$ is distributed according to $\mathcal{N}(\rho, \Gamma)$ with $\rho_i = \sqrt{N} [\mu_i - \nu_i]$. Here, we use the same notation as in Ref.~\cite{anderson-2003}.
Then, note that $(N-1) S_{ij}$ is independently distributed as \[ \sum_{n=1}^{N-1} Z_i(n) Z_j(n) \] where $Z(n)$ is distributed according to $\mathcal{N}(0,\Gamma)$. In this case, $[T^2/(N-1)] [(N-d)/d]$ is distributed as a non-central $F$ distribution $F_{d, N-d}$, which is defined in Ref.~\cite{anderson-2003}, and its non-centrality parameter is \[ \sum_{i,j} \rho_i \Gamma_{ij}^{-1} \rho_{j} =\sum_{i,j} (\mu_i - \nu_i) ( N \ \Gamma_{ij}^{-1} ) (\mu_j - \nu_j) \,. \] Here, $d$ is the number of degrees of freedom of the fit, defined by $d = D-P$. In Ref.~\cite{anderson-2003}, it is proved that the limiting distribution of $T^2$ as $N \rightarrow \infty$ is the $\chi^2$-distribution with $d$ degrees of freedom if $\mu_i = \nu_i$. At this point, we have to minimize $T^2$ in order to determine the LECs $\{c_a\}$. Hence, we need to solve the following equation: \begin{equation} \frac{\partial T^2}{\partial c_a} = 0 \end{equation} We can rewrite this equation as follows: \begin{eqnarray} & & \frac{\partial T^2}{\partial c_a} = 2 \sum_{i,j=1}^{D} \frac{\partial f_\text{th}(X_i)}{\partial c_a} C^{-1}_{ij} [f_\text{th}(X_j) - \bar{y}_j ] = 0 \\ & & \sum_{b=1}^{P} A_{ab} \ c_b = h_a \end{eqnarray} where \begin{subequations} \begin{eqnarray} A_{ab} &=& \sum_{i,j=1}^{D} F_a (X_i) C^{-1}_{ij} F_b(X_j) \\ h_a &=& \sum_{i,j=1}^{D} F_a (X_i) C^{-1}_{ij} \bar{y}_j \,. \end{eqnarray} \label{eq:mat:vec} \end{subequations} Note that the matrix $A_{ab}$ is symmetric. The solution can be obtained by simple linear algebra: \begin{equation} \hat{A} \vec{c} = \vec{h} \quad \rightarrow \quad \vec{c} = \hat{A}^{-1} \vec{h} \label{eq:sol} \end{equation} So far, so good. One caveat is that the solution of Eq.~\eqref{eq:sol} exists only if the covariance matrix $C$ is non-singular. In practice, even when the covariance matrix is non-singular, the presence of very small eigenvalues is enough to cause a very poor fit. We will address this issue in the following sections. \section{Trouble with covariance matrix for $B_K$} \label{sec:bk-cov-mat} First, we address the issue of the quality of the fitting functional form suggested by SChPT. Second, we would like to address a typical difficulty with a general covariance fitting of highly correlated data. To demonstrate the problem, we choose the $B_K$ data on a coarse (C3) ensemble of the MILC asqtad lattices, using the notation of SW-1 and SW-2. This ensemble is a particularly good sample because it has relatively high statistics. The input parameters for the C3 ensemble are summarized in Table \ref{tab:para-C3}. \begin{table}[tbp] \centering \begin{tabular}{|l | l|} \hline parameter & value \\ \hline sea quarks & asqtad staggered fermions \\ valence quarks & HYP staggered fermions \\ gluons & Symanzik improved gluon action \\ geometry & $20^3 \times 64$ \\ number of confs & 671 \\ number of meas & 9 per conf \\ $am_l/am_s$ & $0.01/0.05$ \\ $1/a$ & 1662 MeV \\ $\alpha_s$ & 0.3286 at $\mu=1/a$\\ $am_x, am_y$ & $0.005 \times n$ $(n=1,2,3,\ldots,10)$ \\ \hline \end{tabular} \caption{ Parameters for the numerical study on the coarse (C3) ensemble. $m_l$ is the light sea quark mass, $m_s$ the strange sea quark mass, $m_x$ the light valence quark mass, and $m_y$ the strange valence quark mass. Here, ``conf'' and ``meas'' represent gauge configuration and measurement, respectively.} \label{tab:para-C3} \end{table} \subsection{Quality of the fitting function} \label{subsec:qual-fit-func} The fitting function given in Eq.~\eqref{eq:fit-func-1} is derived from SChPT.
Hence, its reliability is directly related to the validity of SChPT. It is beyond the scope of this paper to discuss the validity of SChPT, which has been demonstrated through extensive numerical studies in Refs.~\cite{wlee-99,bernard-03,milc-rmp-09,steve-06,wlee-10-3}. Here, we take a different approach to the same issue. Basically, we take trial fitting functions which do not have any solid theoretical background such as SChPT but are purely empirical. We choose two empirical fitting functions: one is linear and the other is quadratic in \[ X = \frac{m_\pi^2(\bar{x}x;\xi_5)}{\Lambda^2} \,, \] where $\Lambda = 1.0$ GeV (a generic scale of chiral perturbation theory). The fitting results are summarized in Table \ref{tab:fit-func}. \begin{table}[tbp] \centering \begin{tabular}{| c | c | r r |} \hline $f(X)$ & memo & $\chi^2_\text{diag}$ & $\chi^2$ \\ \hline $c_1 + c_2 X$ & trial & 1.47(73) & 12.6(50) \\ $c_1 + c_2 X + c_3 X^2$ & trial & 0.19(13) & 8.5(59) \\ $f_\text{th}(X)$ & SChPT & 0.16(12) & 7.2(54) \\ \hline \end{tabular} \caption{ List of fitting functions and their quality. $f(X)$ is the fitting function. The ``memo'' column indicates the theoretical origin of the fitting function. $\chi^2_\text{diag}$ represents $T^2$ with the diagonal approximation and $\chi^2$ represents $T^2$ with the full covariance matrix.} \label{tab:fit-func} \end{table} From the table, we observe that the $\chi^2$ value for $f_\text{th}$ is consistently the smallest, which indicates that our choice of the fitting function is close to optimal. We will come back to this issue when we discuss the full covariance fitting and its trouble. \subsection{Full covariance fitting} \label{subsec:full-cov-fit} As one can see in Table \ref{tab:para-C3}, we have 55 combinations of $m_x$ and $m_y$ (10 degenerate pairs ($m_x=m_y$) and 45 non-degenerate pairs ($m_x \ne m_y$)), and so $D=55$ for this example. Since the number of gauge configurations is 671, $N=671$. The fit type is the X-fit of the 4X3Y-NNLO fitting of the SU(2) SChPT as explained in SW-1, and so $P=3$. In a single X-fit, we use only 4 data points, and so we may say that $D=4$. Now, let us consider the $D \times D$ covariance matrix. It has only 10 ($=D(D+1)/2$) components, which can be determined completely in a linearly independent way using the 671 independent configurations. Here, we focus on the troublesome small eigenvalues of the covariance matrix. To be concrete, let us walk through a specific example of $B_K$ to demonstrate the problem and its consequences. In the X-fit, we fix $am_y = 0.05$ and select the 4 data points with $am_x =$ 0.005, 0.010, 0.015, 0.020 to fit to the functional form suggested by the SU(2) SChPT as in SW-1. Hence, the covariance matrix $C_{ij}$ is a $4 \times 4$ matrix. \begin{equation} C_{ij} = \left[ \begin{array}{c c c c} 1.42, & 0.661, & 0.398, & 0.274 \\ 0.661, & 0.392, & 0.271, & 0.204 \\ 0.398, & 0.271, & 0.205, & 0.165 \\ 0.274, & 0.204, & 0.165, & 0.138 \end{array} \right] \times 10^{-5} \end{equation} Its eigenvalues are \begin{eqnarray} \lambda_i = \{\ 1.95\times 10^{-5}, \ 1.92\times 10^{-6}, \ 7.58\times 10^{-8},\ 1.11\times 10^{-9} \} \,. \end{eqnarray} The components of the matrix $C_{ij}$ lie between $1.38 \times 10^{-6}$ and $1.42 \times 10^{-5}$. Meanwhile, the smallest eigenvalue is smaller than the components by about three orders of magnitude.
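These numbers are easy to check numerically. A short sketch (Python with \texttt{numpy}) is given below; since the matrix entries are transcribed from the rounded values quoted above, the smallest eigenvalue is reproduced only approximately.
\begin{verbatim}
import numpy as np

# Covariance matrix of the X-fit, transcribed (rounded) from the text.
C = 1e-5 * np.array([
    [1.42,  0.661, 0.398, 0.274],
    [0.661, 0.392, 0.271, 0.204],
    [0.398, 0.271, 0.205, 0.165],
    [0.274, 0.204, 0.165, 0.138],
])

lam, v = np.linalg.eigh(C)  # eigh returns eigenvalues in ascending order
print(lam[::-1])            # ~ [1.95e-05, 1.92e-06, 7.6e-08, 1.1e-09]
\end{verbatim}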
Now let us look into the eigenvectors: \begin{eqnarray} v_1 = \left[ \begin{array}{r} 0.837 \\ 0.429 \\ 0.276 \\ 0.200 \end{array} \right]\,, \quad v_2 = \left[ \begin{array}{r} -0.508 \\ 0.387 \\ 0.542 \\ 0.546 \end{array} \right]\,, \quad v_3 = \left[ \begin{array}{r} 0.202 \\ -0.739 \\ 0.0725 \\ 0.639 \end{array} \right]\,, \quad v_4 = \left[ \begin{array}{r} -0.0378 \\ 0.347 \\ -0.790 \\ 0.503 \end{array} \right]\,. \end{eqnarray} The eigenvector $v_4$ corresponds to the smallest eigenvalue. This eigenmode dominates the $\chi^2$ fitting completely. Let us expand $\bar{y}$ in terms of the eigenvectors as follows, \begin{equation} \bar{y} = \sum_{i=1}^4 a_i v_i \,, \end{equation} where $a_i$ is the eigenmode projection coefficient. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $\alpha_i$ & 0.7589(18) & 0.2328(18) & 0.008190(69) & 0.0001513(12) \\ \hline \end{tabular} \caption{ Eigenmode projection coefficients for $\bar{y}$.} \label{tab:y-eigen} \end{table} Let us define $\alpha_i$ as follows, \[ \alpha_i = \frac{|a_i|^2}{\sum_j |a_j|^2 } \,. \] Here, note that $\alpha_i$ represents the probability of the specific eigenmode in the data $\bar{y}$. In Table \ref{tab:y-eigen}, we show $a_i$ and $\alpha_i$ for the data $\bar{y}$. The eigenmodes $v_1$ and $v_2$ describe more than 99\% of the data $\bar{y}$. Of the remainder, about 0.8\% is occupied by $v_3$ and only 0.015\% comes from $v_4$. We can rewrite the inverse covariance matrix as follows, \begin{equation} [C_{ij}^{-1}] = \sum_{k=1}^4 \frac{1}{\lambda_k} | v_k \rangle \langle v_k | \,. \end{equation} Hence, we can also express the $T^2$ as follows, \begin{equation} T^2 = \sum_{i=1}^4 \frac{1}{\lambda_i} \langle \Delta y | v_i \rangle^2 \,, \end{equation} where $\Delta y_j \equiv [\bar{y}_j - \nu_j]$. Note that the least $T^2$ fitting is completely dominated by $v_4$, which is an inconvenient truth and causes serious trouble in the covariance fitting. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_NNLO_Xfit_cov_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. The fit type is 4X3Y-NNLO in the SU(2) analysis. We fix $am_y = 0.05$. The red line represents the results of fitting with the full covariance matrix. The red diamond corresponds to the $B_K$ value obtained by extrapolating $m_x$ to the physical light valence quark mass after setting all the pion multiplet splittings to zero.} \label{fig:cov-C3} \end{figure} In Figure \ref{fig:cov-C3}, we show the fitting results with the full covariance matrix. As one can see, the fitting curve does not pass through the data points. Hence, the quality of fitting looks poor to our eyes. The $T^2$ value is \[ T^2 = 7.2 \pm 5.4 \,. \] Multivariate statistical theory predicts the following \cite{schervish-1995}: \begin{eqnarray} {\cal E} (T^2) &=& (d + \kappa) \left[ 1 + \frac{d+1}{N} + {\cal O}(\frac{1}{N^2}) \right] \\ {\cal V} (T^2) &=& 2 (d + 2\kappa) \biggl[ 1 + \frac{1}{N} \Big(2 d + 4 + \frac{(d+\kappa)^2}{d + 2 \kappa} \Big) + {\cal O}(\frac{1}{N^2} ) \biggr] \end{eqnarray} where ${\cal E} (T^2)$ and ${\cal V} (T^2)$ represent the expectation value and variance of the $T^2$ distribution. Here, $d$ is the number of degrees of freedom and $\kappa$ is the non-centrality parameter.
If the number of degrees of freedom is comparable to the number of samples ($d \approx N$), the leading deviation of the $T^2$ distribution from the $\chi^2$ distribution becomes of order ${\cal O}(1)$, and so we cannot use the $\chi^2$ distribution in this case; this is also pointed out in Ref.~\cite{michael-1994-1} in the context of a distorted normal distribution. In Ref.~\cite{toussaint-08}, the sample size effect is systematically explained in terms of a $1/N$ expansion. However, in our example of $B_K$, $d=1$ and $N=671$. Hence, the leading deviation of the $T^2$ distribution from the $\chi^2$ distribution will be negligibly small ($\approx 0.15$\%). In our example, the non-centrality parameter can be estimated as $\kappa \approx T^2 - d = 6.2$. Then, we can obtain ${\cal V}(T^2)$ as follows, \begin{eqnarray} {\cal V}(T^2) &\approx& 2 (d + 2\kappa) = 26.8 \end{eqnarray} Hence, the error of $T^2$ is expected to be $\sqrt{26.8} = 5.2$, which is reasonably consistent with the measured value 5.4. The non-centrality parameter $\kappa$ represents how much the fitting function deviates from the true mean values. In our example, $\kappa = 6.2$ turns out to be rather large. This comes from the fact that a small deviation of the fitting function from the true value, due to the truncation of higher order terms in the series expansion of the SU(2) SChPT, can be amplified dramatically if the covariance matrix has small eigenvalues. Let us decompose the fitting function in terms of eigenmodes as follows, \begin{eqnarray} \nu &=& f_\text{th} = \sum_{i=1}^4 b_i v_i \\ \beta_i &=& \frac{|b_i|^2}{\sum_j |b_j|^2 } \,. \end{eqnarray} To see how much the fitting function deviates from the data in a specific eigenmode, we define the difference, $\delta_i$, as follows, \begin{equation} \delta_i = |a_i - b_i|. \label{eq:delta} \end{equation} As summarized in Table \ref{tab:f-eigen-cov}, the difference, $\delta_i$, is $7.22\times10^{-3}$ for $v_1$, $2.40\times10^{-3}$ for $v_2$, $3.28\times10^{-4}$ for $v_3$, whereas it is only $9.69\times10^{-6}$ for $v_4$. Hence, the procedure of the least $\chi^2$ fitting works hard for the coefficient of $v_4$ but works less precisely for the coefficients of $v_1$ and $v_2$, mainly because the eigenvalue $\lambda_4$ is significantly smaller than $\lambda_1$ and $\lambda_2$. The irony is that the data have only a 0.015\% overlap with $v_4$, while more than 99\% of the data is dominated by $v_1$ and $v_2$. In this sense, the failure of the full covariance fitting is obviously due to the fact that the least $\chi^2$ fitting tries to determine the coefficient of the $v_4$ component of the data very precisely but loses precision in determining the coefficients of the $v_1$ and $v_2$ components. As a consequence, the fitting curve misses the data points and the quality of fitting looks poor to our eyes. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.014(4) & 0.5679(11) & 0.1058(3) & 0.01443(3) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 722(270) & 240(90) & 32.8(123) & 0.969(362) \\ $\beta_i$ & 0.7548(10) & 0.2368(9) & 0.008212(69) & 0.0001528(11) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}$ for the full covariance fitting.} \label{tab:f-eigen-cov} \end{table} How can we improve the situation? We will address this issue in the next section.
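Before moving on, the eigenmode bookkeeping used in this section (and in the tables above) can be condensed into a few lines. This is a heuristic sketch assuming the arrays \texttt{ybar}, \texttt{nu}, and \texttt{C} from the earlier sketches; note that \texttt{eigh} orders the eigenvalues in ascending order, so the first column corresponds to $v_4$ and the last to $v_1$.
\begin{verbatim}
import numpy as np

lam, v = np.linalg.eigh(C)     # columns of v are the eigenvectors
a = v.T @ ybar                 # data projections a_i
b = v.T @ nu                   # fit projections  b_i
delta = np.abs(a - b)          # delta_i = |a_i - b_i|
alpha = a**2 / np.sum(a**2)    # eigenmode probabilities alpha_i
T2 = np.sum((a - b)**2 / lam)  # T^2 decomposed over eigenmodes
\end{verbatim}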
\section{Prescription} \label{sec:pres} Here, we present a number of potential solutions to the problem raised in the previous section. Some of the solutions are well known but vulnerable; others are new and of noteworthy merit. We will go through them one by one and discuss their pros and cons. \subsection{Diagonal approximation} \label{subsec:diag} One simple solution to the problem is to use the diagonal approximation (uncorrelated fitting) \cite{michael-1994-1}. In this method, we neglect the off-diagonal covariance as follows: \begin{eqnarray} C_{ij} = 0 \qquad \text{ if } i \ne j \,. \end{eqnarray} In this way, the small-eigenvalue problem disappears; the fitting results are shown in Figure \ref{fig:diag-C3}. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_NNLO_Xfit_diag_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. All the parameters are the same as in Figure \protect\ref{fig:cov-C3}. Here, the red line represents the results of the uncorrelated fitting using the diagonal approximation. } \label{fig:diag-C3} \end{figure} The fitting function $f_\text{th}$ obtained in the diagonal approximation is decomposed into eigenmodes of the full covariance matrix in Table \ref{tab:f-eigen-diag}. In this fit, the difference, $\delta_i$, is $3.80\times10^{-4}$ for $v_1$, $4.26\times10^{-4}$ for $v_2$, $5.23\times10^{-4}$ for $v_3$ and $4.85\times10^{-4}$ for $v_4$. Here, note that the diagonal approximation removes the small eigenvalues and so treats all the eigenmodes equally. As a result, the differences in all directions are less than or equal to $5.23\times10^{-4}$. Hence, the fitting looks quite reasonable to our eyes, as one can see in Figure \ref{fig:diag-C3}. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.021(4) & 0.5659(13) & 0.1056(3) & 0.01490(18) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 38.0(142) & 42.6(159) & 52.3(195) & 48.5(181) \\ $\beta_i$ & 0.7586(17) & 0.2332(16) & 0.008112(77) & 0.0001617(35) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}$ for the fitting with the diagonal approximation.} \label{tab:f-eigen-diag} \end{table} In the diagonal approximation, we lose the physical meaning of $\chi^2$, because we underestimate $\chi^2$ by neglecting the off-diagonal terms of the covariance matrix. This is a major drawback of the diagonal approximation. \subsection{Cutoff method} \label{subsec:cutoff} Another solution is to exclude the $v_4$ eigenmode from the fitting. Since we know that the $v_4$ eigenmode makes the least significant contribution to the data, we may adopt the philosophy of removing it by hand, as suggested in Refs.~\cite{NR-2007-1,kilcup-1994-1}. A popular and systematic way of chopping away the $v_4$ eigenmode is to set up a cutoff such that the eigenvectors whose eigenvalues are smaller than the cutoff are projected into the null space of the inverse covariance matrix $C^{-1}_{ij}$. We call this the cutoff method. A number of lattice QCD groups \cite{ref:fnal-2010-1,ref:fnal-2010-2,ref:lanl-1999-1} use this method under the popular name of the SVD (singular value decomposition) method. Now let us walk through an example to demonstrate how it works. In our example of $B_K$, we have three parameters to determine from the fit. Hence, it is possible to remove only one eigenmode, $v_4$, out of the four by setting $1/\lambda_4 = 0$.
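In matrix form, this amounts to replacing $C^{-1}$ by a pseudo-inverse restricted to the kept eigenmodes. A minimal sketch (the function name is ours and purely illustrative) reads:
\begin{verbatim}
import numpy as np

def cutoff_inverse(C, n_remove=1):
    """C^{-1} with 1/lambda set to zero for the n_remove smallest modes."""
    lam, v = np.linalg.eigh(C)   # ascending eigenvalues
    inv_lam = 1.0 / lam
    inv_lam[:n_remove] = 0.0     # project small modes into the null space
    return (v * inv_lam) @ v.T   # sum_k (1/lambda_k) |v_k><v_k|
\end{verbatim}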
In Figure \ref{fig:cutoff-C3}, we show the results of the covariance fitting using the cutoff method. It is amusing to see how well it works. The results are consistent with those in Figure \ref{fig:diag-C3}. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_NNLO_Xfit_cutoff_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. All the parameters are the same as in Figure \protect\ref{fig:diag-C3}. Here, the red line represents the results of the covariance fitting after removing the smallest eigenvalue using the cutoff method. } \label{fig:cutoff-C3} \end{figure} \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01524(30) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 0(0) & 0(0) & 0(0) & 82.1(307) \\ $\beta_i$ & 0.7589(18) & 0.2328(18) & 0.008190(69) & 0.0001690(63) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}$ for the fitting with the cutoff method.} \label{tab:f-eigen-cutoff} \end{table} In Table \ref{tab:f-eigen-cutoff}, we decompose the fitting function $f_\text{th}$ obtained using the cutoff method into eigenmodes of the full covariance matrix. In this case, the difference, $\delta_i$, is zero for $i=1,2,3$ and is $8.21\times 10^{-4}$ for $v_4$. In this method, we do not care about the $v_4$ eigenmode at all. As a result, the fitting quality looks quite good to our eyes and is quite consistent with that of the diagonal approximation. However, a major drawback of the cutoff method is that we cannot give a physical meaning to the quality of fit, which is normally reflected in the minimized value of $\chi^2$. In Appendix~\ref{app:subsec:dist_chisq_cutoff}, we show that the probability distribution of the $\chi^2$ defined in the cutoff method is the $\chi^2$ distribution with the number of degrees of freedom equal to $D-P-R$. Here, $D$ is the number of data points, $P$ is the number of fitting parameters, and $R$ is the number of removed eigenmodes. However, even though we know the distribution of the $\chi^2$ in the cutoff method, we cannot measure the quality of fit through the minimized value of $\chi^2$. This is because we remove some eigenmodes, and the $\chi^2$ of the cutoff method is orthogonal to the removed eigenmodes. As noted above, the physical $\chi^2$ has $D-P$ degrees of freedom, while the $\chi^2$ of the cutoff method possesses $D-P-R$ degrees of freedom. Unfortunately, the $R$ missing degrees of freedom are physical. The situation becomes even worse when we use a resampling method, such as the jackknife or the bootstrap. When the size of the covariance matrix is large, we might lose control over the number of small eigenvalues that we remove. In other words, in one jackknife sample we may remove two small eigenvalues, and in another jackknife sample we may remove three of them. During the procedure, the definition of $\chi^2$ shifts from one sample to another. \subsection{Modified covariance matrix} \label{subsec:mod-cov-mat} One may take another approach to handle the small eigenmodes, as in Ref.~\cite{milc-2002-1}. We first define the correlation matrix $\widetilde{C}$ as follows, \begin{eqnarray} \sigma_i &=& \sqrt{C_{ii}} \\ \widetilde{C}_{ij} &=& \frac{C_{ij}}{\sigma_i \sigma_j} \end{eqnarray} such that the diagonal components of $\widetilde{C}_{ij}$ are 1.
In our example of $B_K$, it is \begin{equation} \widetilde{C}_{ij} = \left[ \begin{array} {c c c c} 1.000 & 0.888 & 0.738 & 0.619 \\ 0.888 & 1.000 & 0.955 & 0.877 \\ 0.738 & 0.955 & 1.000 & 0.978 \\ 0.619 & 0.877 & 0.978 & 1.000 \end{array} \right] \end{equation} Hence, we can say that the correlation matrix is a normalized covariance matrix. In our example, the off-diagonal terms are quite large, between 0.6 and 1.0, which indicates that the data are highly correlated and move together in the same direction. The eigenvalues of the correlation matrix are \begin{eqnarray} \widetilde{\lambda}_i &=& \{ \ 3.54, \ 0.437, \ 0.0249, \ 0.000521 \} \end{eqnarray} One may choose the cutoff as $\lambda_\text{cut} = 1$, as in Ref.~\cite{milc-2002-1}. We remove by hand all the eigenmodes whose eigenvalues are smaller than $\lambda_\text{cut}$. In our example, this corresponds to removing the three eigenmodes of $\widetilde{\lambda}_2$, $\widetilde{\lambda}_3$, and $\widetilde{\lambda}_4$. It is obvious that the remaining correlation matrix is highly singular. In order to avoid the singular behavior, we restore the diagonal components back to 1 by hand, as in Ref.~\cite{milc-2002-1}. Let us call this modified correlation matrix $\overline{C}_{ij}$. Then, let us define the modified covariance matrix $M_{ij}$ as \begin{equation} M_{ij} = \overline{C}_{ij} \times \sigma_i \times \sigma_j \end{equation} In our example of $B_K$, it is \begin{equation} M_{ij} = \left[ \begin{array} {c c c c} 1.42 & 0.632 & 0.453 & 0.353 \\ 0.632 & 0.392 & 0.275 & 0.214 \\ 0.453 & 0.275 & 0.205 & 0.153 \\ 0.353 & 0.214 & 0.153 & 0.138 \end{array} \right] \times 10^{-5} \end{equation} Then, we use $M_{ij}$ as the covariance matrix and fit the data. We call this the modified covariance matrix (MCM) method.\footnote{ MILC does not use this method anymore in their fitting \cite{milc-private-2011}.} In Figure \ref{fig:mod-C3}, we show the results of the covariance fitting using the modified covariance matrix. The fitting quality is somewhere between the diagonal approximation and the full covariance fitting. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_NNLO_Xfit_mod_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. All the parameters are the same as in Figure \protect\ref{fig:diag-C3}. Here, the red line represents the results of the covariance fitting using the modified covariance matrix. } \label{fig:mod-C3} \end{figure} \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.018(4) & 0.5665(12) & 0.1056(3) & 0.01473(12) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 289(108) & 103(38) & 48.9(182) & 30.9(116) \\ $\beta_i$ & 0.7573(13) & 0.2344(13) & 0.008143(73) & 0.0001584(24) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}$ for the fitting with the MCM method.} \label{tab:f-eigen-mod} \end{table} In Table \ref{tab:f-eigen-mod}, we decompose the fitting function $f_\text{th}$ obtained using the MCM method into eigenmodes of the full covariance matrix. In this case, the difference, $\delta_i$, is $2.89\times10^{-3}$ for $v_1$, $1.03\times10^{-3}$ for $v_2$, $4.89\times10^{-4}$ for $v_3$ and $3.09\times10^{-4}$ for $v_4$. As a result, the fitting quality looks mediocre to our eyes and worse than that of the diagonal approximation.
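For completeness, the MCM construction described above can be transcribed as follows. This is a heuristic sketch with our own function name, not the code of Ref.~\cite{milc-2002-1}.
\begin{verbatim}
import numpy as np

def modified_covariance(C, lam_cut=1.0):
    """Keep correlation eigenmodes above lam_cut, restore unit diagonal."""
    sigma = np.sqrt(np.diag(C))
    Ct = C / np.outer(sigma, sigma)           # correlation matrix
    lam, v = np.linalg.eigh(Ct)
    keep = lam > lam_cut                      # drop modes below the cutoff
    Cbar = (v[:, keep] * lam[keep]) @ v[:, keep].T
    np.fill_diagonal(Cbar, 1.0)               # restore diagonal to 1 by hand
    return Cbar * np.outer(sigma, sigma)      # modified covariance M_ij
\end{verbatim}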
The main disadvantage of the MCM method is that the $\chi^2$ loses its physical meaning completely, because the modified covariance matrix is, by construction, significantly different from the full covariance matrix. However, one may view the diagonal approximation as a special case of the MCM method in which all the eigenmodes are removed. In this sense, the MCM method is as bad as the diagonal approximation. \subsection{Eigenmode shift method} \label{subsec:shift} So far, all the methods are based on the philosophy that we may manipulate or modify the covariance matrix. This philosophy is dangerous in the sense that the modification discards information, and the discarded information is highly physical. Hence, it is not a good idea to modify the covariance matrix. The only freedom that we have is to modify the fitting function, not the covariance matrix. We know that the whole trouble comes from the inexact fitting function, which has a small error in the $v_4$ eigenmode direction. Hence, we can think of a new fitting function $f_\text{th}'$ defined as follows, \begin{equation} f_\text{th}'(X) = f_\text{th} (X) + \eta v_4 \end{equation} Here, $\eta$ is a tiny parameter which can be determined using the Bayesian method. The Bayes theorem \cite{sivia-2006} says that \begin{eqnarray} P(A|\mathbb{D},I) &\propto& P(\mathbb{D} | A,I) \times P(A|I) \end{eqnarray} Here, $A$ represents our theoretical hypothesis, $\mathbb{D}$ is the data, and $I$ corresponds to the background information. Note that $P(\mathbb{D} | A,I)$ means the probability that we obtain the data set $\mathbb{D}$ given $A$ and $I$. We know the conditional likelihood function $P(\mathbb{D} | A,I)$, which is nothing but \begin{eqnarray} P(\mathbb{D} | A,I) &\propto& \exp\left(-\frac{\chi^2}{2}\right) \\ \label{eq:chisq_ES} \chi^2 &=& \sum_{i,j} [\bar{y}_i - \nu_i'] C^{-1}_{ij} [\bar{y}_j - \nu_j'] \\ \nu_i' &=& f_\text{th}'(X_i)\,, \end{eqnarray} as explained in Ref.~\cite{sivia-2006}. In addition, if we impose the maximum entropy principle \cite{sivia-2006} on the prior condition that $ a_\eta - \sigma_\eta \lesssim \eta \lesssim a_\eta + \sigma_\eta$, then the prior becomes the following: \begin{eqnarray} P(A | I) &\propto& \exp\left( - \frac{\chi^2_\textrm{prior}}{2} \right) \\ \chi^2_\textrm{prior} &=& \frac{(\eta-a_\eta)^2}{\sigma_\eta^2} \end{eqnarray} Then, we obtain the posterior pdf as follows: \begin{eqnarray} P(A|\mathbb{D},I) &\propto& \exp\left( - \frac{\chi^2_\textrm{aug}}{2}\right) \\ \label{eq:aug_chisq_ES} \chi^2_\text{aug} &=& \chi^2 + \chi^2_\textrm{prior} \end{eqnarray} The Bayesian principle \cite{sivia-2006} is to determine the fitting parameters by maximizing the posterior pdf $P(A|\mathbb{D},I)$. This is equivalent to minimizing $\chi^2_\textrm{aug}$. Let us now turn to our choice of the prior condition. From the SChPT, the highest order term neglected in $f_\text{th} (X)$ is \[ X^2 (\ln(X))^2 \approx 0.006 \] where $X = X_P/\Lambda^2 \approx 0.02$. The only constraint on the coefficient $c_4$ of this term is that $c_4 = 0 \pm 1$; we do not know the sign of $c_4$. Hence, we set $a_\eta = 0$ and $\sigma_\eta = 0.006$. Then we perform the full covariance fitting with the extra fitting parameter $\eta$ by minimizing $\chi^2_\text{aug}$ (the Bayesian principle). The obtained $\eta$ is \[ \eta = -0.00082(31) \,.
\] When we do the extrapolation to the physical pion mass, we use only the $f_\text{th}(X)$ function, dropping the $\eta$ term, which is in any case too small to make any difference. We call this the eigenmode shift (ES) method. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_NNLO_Xfit_es_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. All the parameters are the same as in Figure \protect\ref{fig:diag-C3}. Here, the red line represents the results of the eigenmode shift method.} \label{fig:shift-C3} \end{figure} In Figure \ref{fig:shift-C3}, we show the fitting results obtained using the ES method. In our example, the tiny correction proportional to $\eta$ makes the fitting curve pass through the data points. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01524(30) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 1.88(70) & 0.624(233) & 0.0855(319) & 81.9(306) \\ $\beta_i$ & 0.7589(18) & 0.2328(18) & 0.008190(69) & 0.0001690(63) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}$ for the fitting with the ES method.} \label{tab:f-es} \end{table} In Table \ref{tab:f-es}, we show the eigenmode decomposition of $f_\textrm{th}$ when we use the ES method in the fitting. As one can see in the table, the $\delta_i$ for $i=1,2,3$ are smaller by orders of magnitude than those of the diagonal approximation. The only non-trivial component is $\delta_4$, which is taken care of by the shift parameter $\eta$. As a result, as shown in Table \ref{tab:f'-es}, the $\delta_4$ for $f_\textrm{th}'$ becomes negligibly small thanks to the shift parameter $\eta$. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 1.88(70) & 0.624(233) & 0.0855(319) & 0.00252(94) \\ $\beta_i$ & 0.7589(18) & 0.2328(18) & 0.008190(69) & 0.0001513(12) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}'$ for the fitting with the ES method.} \label{tab:f'-es} \end{table} The success of the fitting can be checked against the hypothesis that $|\eta| \lesssim 0.006$. The fit gives $\eta = -0.00082(31)$, which is highly consistent with the hypothesis. In this sense, the Bayesian prior is quite reasonable and self-consistent. Let us turn to the issue of the quality of fitting and the physical interpretation of $\chi_\textrm{aug}^2$. In a naive sense of physical interpretation, we may count the prior condition as one of the data points, and we consider the shift parameter $\eta$ as a new parameter of the fit. Hence, in this interpretation, the effective number of data points is $\tilde{D} = D + 1$, and the effective number of unknown parameters of the fitting function is $\tilde{P} = P + 1$. Accordingly, the effective number of degrees of freedom becomes $\tilde{d} = \tilde{D} - \tilde{P} = D - P = d$.
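In practice, the ES fit amounts to minimizing $\chi^2_\textrm{aug}$ of Eq.~\eqref{eq:aug_chisq_ES} over the LECs and $\eta$ simultaneously. A heuristic sketch is given below, assuming a generic minimizer from \texttt{scipy}; the names and the design-matrix layout are ours, not our production analysis code.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_aug(params, F, ybar, Cinv, v4, a_eta=0.0, sig_eta=0.006):
    """Augmented chi^2 of the ES method: full chi^2 plus the eta prior."""
    c, eta = params[:-1], params[-1]
    nu = F @ c + eta * v4            # shifted fit function f'_th
    r = ybar - nu
    return r @ Cinv @ r + ((eta - a_eta) / sig_eta) ** 2

# F: (D, P) design matrix with columns F_a(X_i); v4: smallest-lambda mode.
# res = minimize(chi2_aug, np.zeros(P + 1), args=(F, ybar, Cinv, v4))
\end{verbatim}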
Let us redefine the notation for the eigenmodes as follows: \begin{eqnarray} \tilde{v}_i &=& \begin{bmatrix} v_i \\ 0 \end{bmatrix} \quad \text{for} \quad i= \{1,2,3,4\} \\ \tilde{v}_5 &=& \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \end{eqnarray} and \begin{eqnarray} \tilde{\lambda}_i &=& \lambda_i \quad \text{for} \quad i= \{1,2,3,4\} \\ \tilde{\lambda}_5 &=& \sigma_\eta^2 \end{eqnarray} Then, we can rewrite the $\chi^2_\textrm{aug}$ as follows: \begin{eqnarray} \chi^2_\textrm{aug} = \sum_{a=1}^5 \frac{\delta_a^2}{\tilde{\lambda}_a} \label{eq:chisq-dof} \end{eqnarray} where $\delta_a$ is defined similarly as in Eq.~\eqref{eq:delta}. We can prove with ease that all the eigenvectors are orthogonal to each other. If we count the prior condition as one of the data points, we see that the effective number of degrees of freedom has been increased by one, as we observe in Eq.~\eqref{eq:chisq-dof}. The number of unknown parameters in the fit is $\tilde{P}$. Hence, even in the strict sense of physical interpretation, the effective number of degrees of freedom is $\tilde{d} = \tilde{D}-\tilde{P}$. Therefore, the $\chi^2_\textrm{aug}$ must follow the normal $\chi^2$ distribution with $\chi^2_\textrm{aug} \approx \tilde{d} \pm \sqrt{2 \tilde{d}} = d \pm \sqrt{2 d}$, as explained in Appendix \ref{app:subsec:dist_chisq_ES}. However, for our Bayesian prior information, we did not use statistical information on $a_\eta$ and $\sigma_\eta$; instead, we used the expected range of possible values of $\eta$ to set $a_\eta$ and $\sigma_\eta$ ({i.e.} $a_\eta \ne \mathcal{E}(\eta)$ and $\sigma_\eta \ne \sqrt{\mathcal{V}(\eta)}$). Hence, the choice of $a_\eta$ and $\sigma_\eta$ could be larger or smaller (overestimated or underestimated) than their statistical values. As a consequence, our estimate of $\chi^2_\textrm{aug}$ could be smaller or larger than the normal value of $\chi^2_\textrm{aug} \approx \tilde{d}$. Hence, in this approach of the Bayesian method, we cannot rely on the strict statistical interpretation of $\chi^2_\textrm{aug}$. One piece of good news is that the probability interpretation is still possible with $\chi^2_\textrm{aug}$, which allows for an assessment of the quality of fitting\footnote{Here, the quality of fitting means that we can compare two different fitting procedures and determine which fitting is more reliable based on the Bayesian method. For example, we can compare the full covariance fitting and the ES method for $B_K$, since both of these methods allow for the probability interpretation.} and for model selection on the basis of Bayesian statistics \cite{sivia-2006}. Hence, we can, at least, tell from $\chi^2_\textrm{aug}$ which model or hypothesis is more probable. This is an important point, because it justifies the fitting procedure that finds the most probable fitting parameters by minimizing $\chi^2_\textrm{aug}$. In our example of $B_K$, the $\chi^2_\text{aug} / \text{dof}$ for the ES method is $0.019(14)$. In the limit $\sigma_\eta \rightarrow \infty$, we remove the prior condition completely, which we call the unconstrained ES method. In Appendix~\ref{app:sec:equiv_cutoff_ES}, we prove that the unconstrained ES method is equivalent to the cutoff method. However, the original ES method is quite different from the cutoff method in the following sense. The effective number of degrees of freedom for the ES method is $\tilde{d} = \tilde{D} - \tilde{P} = 1$, while that for the cutoff method is $\tilde{d} = (D-1) - P = 0$.
In addition, the ES method is rigorously based on the Bayesian method and is subject to the probability interpretation. The cutoff method, however, does not allow for the probability interpretation, mainly because it modifies the Hilbert space of the covariance matrix. In addition, in the ES method, we modify the fitting function by the shift parameter $\eta$, which we can monitor and which gives us an estimate of how much we are changing the fitting function. In the case of the cutoff method, we do not know how much of the fitting function we are dumping into the null space of the covariance matrix. In order to illustrate the difference between the ES method and the cutoff method, we provide a pedagogical and heuristic example in Appendix \ref{app:sec:example}, in which the ES method works well but the cutoff method and the diagonal approximation fail manifestly. \subsection{Bayesian method} \label{subsec:bayes} When we obtain the fitting function, we use the staggered chiral perturbation theory to expand it in powers of $p^2$, $a^2$, and $m_q$. In the series expansion, we must truncate the higher order terms, because we cannot include an arbitrary number of terms in the fitting function. One constraint is that we have only 4 data points of $B_K$ for the SU(2) analysis. Hence, the fitting function can have at most 3 unknown parameters if we want to perform the normal least $\chi^2$ fitting based on multivariate statistics theory. This means that we can include all the next-to-leading order (NLO) terms and one additional term at the next-to-next-to-leading order (NNLO). It is the fitting function of Eq.~\eqref{eq:fit-func-1} that we obtain following this premature logical path. This looks fine as long as the truncated higher order terms are under control, so that the full covariance fitting works well. However, in our case of the SU(2) analysis of $B_K$, the full covariance fitting fails manifestly, because the data have such high precision that the truncated higher order terms are required to fit the data. We cannot add higher order terms to the fitting function in the normal sense of multivariate statistical theory. Hence, the situation is checkmate as it is. The question is how we can get out of this trap. A natural answer is the Bayesian method \cite{Lepage-2001,sivia-2006}. In the Bayesian method, the prior condition behaves in the fitting as if it were one of the data points, as explained in the previous subsection. Hence, it is possible to add $n$ higher order terms as long as we impose $m$ prior conditions on the fitting with $n \le m$. In practice, we impose the same number of prior conditions as the number of higher order terms added to the fitting function, as follows. The fitting function has three additional terms at higher order: \begin{eqnarray} f_\text{th}^{\text{B}}(X) &=& f_\text{th}(X) + c_4^b ~ X^2 \left(\ln X \right)^2 + c_5^b ~ X^2 \left(\ln X \right) + c_6^b ~ X^3 \,. \label{eq:f_th-bayes} \end{eqnarray} We impose the prior conditions on the fitting through the prior probability, as in the previous subsection: \begin{equation} \chi^2_\textrm{prior} = \sum_{k=4}^{6} \frac{({c_k^b} - a_{k;B})^2}{\sigma_{k;B}^2} \end{equation} Since we know that $c_k^b = 0 \pm 1$, we may choose $a_{k;B} =0$ and $\sigma_{k;B} = 1$.
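Numerically, this is a small extension of the ES sketch above: the design matrix gains three columns for the higher order terms, each constrained by a Gaussian prior. Again, this is a heuristic transcription with our own names, not our production analysis code.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def chi2_aug_bayes(c, F_ext, ybar, Cinv, a_B=0.0, sig_B=1.0):
    """Augmented chi^2 with Gaussian priors c_k = 0 +- 1 on added LECs."""
    r = ybar - F_ext @ c                          # model with all six LECs
    prior = np.sum(((c[3:] - a_B) / sig_B) ** 2)  # priors on c4, c5, c6
    return r @ Cinv @ r + prior

# F_ext: (D, 6) design matrix; the last three columns are the higher order
# terms X^2 (ln X)^2, X^2 ln X, and X^3 of the equation above.
# res = minimize(chi2_aug_bayes, np.zeros(6), args=(F_ext, ybar, Cinv))
\end{verbatim}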
In the Bayesian method, we use $\chi^2_\textrm{aug}$ in the analysis instead of $\chi^2$; it is defined as \begin{eqnarray} L &\equiv& \log(P(A|\mathbb{D},I)) \\ \chi^2_\textrm{aug} &=& (-2) L = \chi^2 + \chi^2_\textrm{prior} \end{eqnarray} where $P(A|\mathbb{D},I)$ is the posterior pdf \cite{sivia-2006}. The Bayesian principle is that we determine the fitting parameters such that they maximize the posterior pdf or, equivalently, minimize $\chi^2_\textrm{aug}$. The main advantage of the Bayesian method is that it allows for a probability interpretation and model selection, as explained in Ref.~\cite{sivia-2006}. \begin{figure}[tbp] \centering \includegraphics[width=20pc] {bk_sxpt_su2_4X3Y_BAYES_Xfit_MILC_2064f21b676m010m050} \caption{ $B_K(1/a)$ vs. $X_P$ on the C3 ensemble. All the parameters are the same as in Figure \protect\ref{fig:diag-C3}. Here, the red line represents the results of the Bayesian method. } \label{fig:bayes-C3} \end{figure} In Figure \ref{fig:bayes-C3}, we show the fitting results obtained using the Bayesian method. The fitting has $\chi^2_\textrm{aug} = 1.09(81)$. The effective number of data points is $\tilde{D} = 4 + 3 = 7$ and the number of unknown parameters is $\tilde{P} = 6$. Hence, the effective number of degrees of freedom is $\tilde{d} = \tilde{D} - \tilde{P} = 1 = d$, the same as for the full covariance fitting. \begin{table}[tbp] \centering \begin{tabular}{| c | c c c c |} \hline $i$ & 1 & 2 & 3 & 4 \\ \hline $b_i$ & 1.020(5) & 0.5659(13) & 0.1060(3) & 0.01442(3) \\ $a_i$ & 1.021(4) & 0.5655(14) & 0.1061(3) & 0.01442(3) \\ $10^5 \cdot \delta_i$ & 110(41) & 36.5(136) & 5.00(187) & 0.147(55) \\ $\beta_i$ & 0.7583(16) & 0.2334(16) & 0.008194(69) & 0.0001515(12) \\ \hline \end{tabular} \caption{ Eigenmode decomposition of $f_\text{th}^\text{B}$ for the fitting with the Bayesian method.} \label{tab:f-bayes} \end{table} In Table \ref{tab:f-bayes}, we decompose the fitting function $f_\text{th}^\text{B}$ obtained using the Bayesian method in terms of the eigenmodes of the full covariance matrix. The fitting looks fine to our eyes. A major advantage of the Bayesian method is that it does NOT touch the covariance matrix at all, unlike the cutoff method and the diagonal approximation. In the Bayesian method, we need to gauge the sensitivity of the fitting to the prior condition. In our case, we change the prior condition as follows: \begin{equation} \sigma_{k;B} = 1 \rightarrow 2 \end{equation} while we keep $a_{k;B}$ unchanged. The fitting results change as follows: \begin{eqnarray} B_K &=& 0.5757(53) \rightarrow 0.5772(58) \\ \chi^2_\textrm{aug} &=& 1.09(81) \rightarrow 0.31(23) \end{eqnarray} The mean value of $B_K$ is shifted by $0.28\sigma$, while the error bar increases by 9\%. In comparison, the difference between the ES method and the Bayesian method is $0.18\sigma$, which is smaller than the systematic error due to the ambiguity in the prior condition. Since both the ES and Bayesian methods are based on the Bayes theorem, they are equivalent to each other in that sense. When we obtain the higher order terms in Eq.~\eqref{eq:f_th-bayes}, we use the continuum chiral perturbation theory, not the staggered chiral perturbation theory. Hence, the functional form of each higher order term is approximate and not exact. From this standpoint, we cannot claim that the Bayesian method is better than the ES method.
Therefore, we decided to quote the difference between the results of the ES and Bayesian methods as our systematic error due to the ambiguity in the covariance fitting in SW-2, and to choose the results of the Bayesian method as the central value. \section{Error analysis of the covariance matrix} \label{sec:err-cov-mat} It is true that the elements of the covariance matrix may, in general, have significantly larger errors than the small eigenvalues. However, what makes the significant difference in the fitting is the small eigenvalues and the corresponding eigenmodes. Hence, we focus on the error analysis of the small eigenvalues. The probability distribution of the small eigenvalues is the gamma ($\Gamma$) distribution, which is proved in Appendix \ref{app:sec:stat-anal}. Since the $\Gamma$ distribution is different from the normal distribution, it is important to check whether our data respect the $\Gamma$ distribution or not. In Figure \ref{fig:hist-l4}, we show the probability density of the $\lambda_4$ eigenvalue as a histogram. The blue curve in the plot corresponds to the $\Gamma$ distribution function. We find that the data respect the $\Gamma$ distribution very well. \begin{figure}[tbp] \centering \includegraphics[width=20pc]{dist_eig4} \caption{ Histogram of the $\lambda_4$ eigenvalue. The histogram has been obtained using the bootstrap method. The number of the bootstrap Monte Carlo samples is 10,000. The normalization of the histogram is adjusted such that the total probability is one. The blue curve represents the $\Gamma$ distribution. The details on the $\Gamma$ distribution function are explained in Appendix \ref{app:sec:stat-anal}. } \label{fig:hist-l4} \end{figure} In Table \ref{tab:err-anal-eigen}, we present the errors of the eigenvalues. The jackknife method is not very sensitive to the left-right asymmetry of the distribution and provides only a rough estimate of the errors. The bootstrap method, however, is by nature sensitive to the asymmetry. In the table, we show the jackknife error as well as the left and right errors of the bootstrap method. As one can see, the left errors are consistently smaller than the right errors, which indicates that the distribution is asymmetric, as expected for the $\Gamma$ distribution. \begin{table}[tbp] \centering \begin{tabular}{| c | c | c c c c |} \hline $i$ & scale & $\lambda_i$ & $\sigma_i(\text{jk})$ & $\sigma_i^L(\text{bs})$ & $\sigma_i^R(\text{bs})$ \\ \hline 1 & $10^{-6}$ & $19.51$ & 1.13(10) & 1.08(9) & 1.16(11) \\ 2 & $10^{-7}$ & $19.24$ & 1.08(7) & 1.05(7) & 1.11(8) \\ 3 & $10^{-9}$ & $75.79$ & 4.35(35) & 4.18(33) & 4.47(37) \\ 4 & $10^{-11}$ & $110.9$ & 5.97(39) & 5.79(38) & 6.11(42) \\ \hline \end{tabular} \caption{ Error analysis of the eigenvalues. The scale represents the overall multiplication factor. $\sigma_i$ is the error of $\lambda_i$. The indices ``jk'' and ``bs'' represent the jackknife method and the bootstrap method, respectively. $\sigma_i^L$ and $\sigma_i^R$ represent the left and right errors.} \label{tab:err-anal-eigen} \end{table} \section{Conclusion \label{sec:conclude}} Here, we have addressed an issue of covariance fitting of highly correlated data: a general question frequently asked in the lattice QCD community. As an example, we have chosen the 4X3Y-NNLO fit of the $B_K$ data based on the SU(2) staggered chiral perturbation theory explained in SW-1. It turns out that the smallest eigenvalue of the covariance matrix leads to an extremely poor fit.
If very tiny eigenvalues exist in the full covariance matrix, a small discrepancy between the fitting function and the data can be dramatically amplified, so that the fitting curve fails to pass through the data points within the statistical uncertainty. In order to get around this trouble, the lattice community has been applying the diagonal approximation, the cutoff method, and the modified covariance matrix method to the data analysis. All of these poor prescriptions modify the covariance matrix in one way or another and so lose the physical interpretation of $\chi^2$ completely. Hence, we have been searching for a method which does not touch the covariance matrix and allows for the physical interpretation of the $\chi^2$. A natural prescription which satisfies our requirements turns out to be the Bayesian method and its variations. In this paper, we suggest a new proposal: the eigenmode shift (ES) method. In this method, we shift the fitting function by a negligibly tiny amount, which makes the full covariance fitting possible. Note that we do not need to modify the covariance matrix at all in the ES method and, in addition, $\chi^2_\textrm{aug}$ has a physical meaning based on the Bayesian probability interpretation. Another good approach is the Bayesian method, in which we add as many higher order terms to the fitting function, with corresponding prior conditions, as are needed for the fitting to work well ({i.e.} the $\chi^2_\textrm{aug}$ has a reasonable value and the fitting looks fine to our eyes). One ambiguity in this method is our choice of higher order terms. In our example of $B_K$, we use the continuum chiral perturbation theory to obtain the functional form of each higher order term. Since this is a kind of approximation and not exact, we cannot claim that the Bayesian method is better than the ES method. Our final suggestion is that it may be a good choice to try both the ES and Bayesian methods in the data analysis and to quote the difference as the systematic uncertainty due to the ambiguity in the covariance fitting. We apply this approach to the error analysis in SW-2. In order to help readers digest the main points of this paper, in Appendix \ref{app:sec:example} we provide a pedagogical example of data analysis in which the diagonal approximation and the cutoff method fail manifestly, but the ES method and the Bayesian method work very well. This exemplifies the odd truth that the conventional wisdom behind the diagonal approximation and the cutoff method can fall apart in some cases. \section*{Acknowledgements} C.~Jung is supported by the US DOE under contract DE-AC02-98CH10886. The research of W.~Lee is supported by the Creative Research Initiatives program (No.~2012-0000241) of the NRF grant funded by the Korean government (MEST). Computations for this work were carried out in part on QCDOC computers of the USQCD Collaboration at Brookhaven National Laboratory, and in part on the DAVID GPU clusters at Seoul National University. The USQCD Collaboration is funded by the Office of Science of the U.S. Department of Energy. W.~Lee acknowledges support from the KISTI supercomputing center through the strategic support program (No.~KSC-2011-G2-06).
1,314,259,994,028
arxiv
\section{\def\@secnumfont{\mdseries}\@startsection{section}{1}% \z@{.7\linespacing\@plus\linespacing}{.5\linespacing}% {\normalfont\scshape\centering}} \def\subsection{\def\@secnumfont{\bfseries}\@startsection{subsection}{2}% {\parindent}{.5\linespacing\@plus.7\linespacing}{-.5em}% {\normalfont\bfseries}} \makeatother \def\subl#1{\subsection{}\label{#1}} \newcommand{\Hom}{\operatorname{Hom}} \newcommand{\mode}{\operatorname{mod}} \newcommand{\End}{\operatorname{End}} \newcommand{\wh}[1]{\widehat{#1}} \newcommand{\Ext}{\operatorname{Ext}} \newcommand{\ch}{\text{ch}} \newcommand{\ev}{\operatorname{ev}} \newcommand{\Ob}{\operatorname{Ob}} \newcommand{\soc}{\operatorname{soc}} \newcommand{\rad}{\operatorname{rad}} \newcommand{\head}{\operatorname{head}} \def\operatorname{spec}{\operatorname{spec}} \def\operatorname{Im}{\operatorname{Im}} \def\operatorname{gr}{\operatorname{gr}} \def\operatorname{mult}{\operatorname{mult}} \def\operatorname{Max}{\operatorname{Max}} \def\operatorname{Ann}{\operatorname{Ann}} \def\operatorname{sym}{\operatorname{sym}} \def\operatorname{loc}{\operatorname{loc}} \def\operatorname{\br^\lambda_A}{\operatorname{\br^\lambda_A}} \def\underline{\underline} \def$A_k(\lie{g})(\bsigma,r)${$A_k(\lie{g})(\bsigma,r)$} \newcommand{\Cal}{\cal} \newcommand{\Xp}[1]{X^+(#1)} \newcommand{\Xm}[1]{X^-(#1)} \newcommand{\on}{\operatorname} \newcommand{\Z}{{\bold Z}} \newcommand{\J}{{\cal J}} \newcommand{\Q}{{\bold Q}} \renewcommand{\P}{{\cal P}} \newcommand{\N}{{\bold N}} \newcommand\boa{\bold a} \newcommand\bob{\bold b} \newcommand\boc{\bold c} \newcommand\bod{\bold d} \newcommand\boe{\bold e} \newcommand\bof{\bold f} \newcommand\bog{\bold g} \newcommand\boh{\bold h} \newcommand\boi{\bold i} \newcommand\boj{\bold j} \newcommand\bok{\bold k} \newcommand\bol{\bold l} \newcommand\bom{\bold m} \newcommand\bon{\mathbb n} \newcommand\boo{\bold o} \newcommand\bop{\bold p} \newcommand\boq{\bold q} \newcommand\bor{\bold r} \newcommand\bos{\bold s} \newcommand\boT{\bold t} \newcommand\boF{\bold F} \newcommand\bou{\bold u} \newcommand\bov{\bold v} \newcommand\bow{\bold w} \newcommand\boz{\bold z}\newcommand\ba{\bold A} \newcommand\bb{\bold B} \newcommand\bc{\mathbb C} \newcommand\bd{\bold D} \newcommand\be{\bold E} \newcommand\bg{\bold G} \newcommand\bh{\bold H} \newcommand\bi{\bold I} \newcommand\bj{\bold J} \newcommand\bk{\bold K} \newcommand\bl{\bold L} \newcommand\bm{\bold M} \newcommand\bn{\mathbb N} \newcommand\bo{\bold O} \newcommand\bp{\bold P} \newcommand\bq{\bold Q} \newcommand\br{\bold R} \newcommand\bs{\bold S} \newcommand\bt{\bold T} \newcommand\bu{\bold U} \newcommand\bv{\bold V} \newcommand\bw{\bold W} \newcommand\bz{\mathbb Z} \newcommand\bx{\bold x} \newcommand\KR{\bold{KR}} \newcommand\rk{\bold{rk}} \newcommand\het{\text{ht }} \newcommand\toa{\tilde a} \newcommand\tob{\tilde b} \newcommand\toc{\tilde c} \newcommand\tod{\tilde d} \newcommand\toe{\tilde e} \newcommand\tof{\tilde f} \newcommand\tog{\tilde g} \newcommand\toh{\tilde h} \newcommand\toi{\tilde i} \newcommand\toj{\tilde j} \newcommand\tok{\tilde k} \newcommand\tol{\tilde l} \newcommand\tom{\tilde m} \newcommand\ton{\tilde n} \newcommand\too{\tilde o} \newcommand\toq{\tilde q} \newcommand\tor{\tilde r} \newcommand\tos{\tilde s} \newcommand\toT{\tilde t} \newcommand\tou{\tilde u} \newcommand\tov{\tilde v} \newcommand\tow{\tilde w} \newcommand\toz{\tilde z} \newcommand\woi{w_{\omega_i}} \newcommand\wrapped[1]% {\renewcommand\arraystretch{1}% \begin{array}{@{}l@{}}#1\end{array}% } \newcommand{\diag}{\operatorname{diag}} 
\begin{document} \title{A generalization of Fiedler's lemma and the spectra of $H$-join of graphs} \author{M. Saravanan} \address{Madurai Kamaraj University Constituent College, Sattur, India.} \email{dr.msaravanan8187@gmail.com.} \author{S. P. Murugan} \address{Indian Institute of Science Education and Research, Mohali, India.} \email{spmath000@gmail.com.} \author{G. Arunkumar} \address{Indian Institute of Science, Bangalore, India.} \email{arun.maths123@gmail.com, garunkumar@iisc.ac.in.} \thanks{The authors would like to thank M. Rajesh Kannan, Department of Mathematics, Indian Institute of Technology, Kharagpur, for his valuable comments and suggestions on this work. The first author would like to thank him for the support and the fruitful discussions during his visit to IIT Kharagpur, which is a motivation for this work. The second author acknowledges the institute postdoctoral fellowship of IISER, Mohali. The third author is grateful to Apoorva Khare, Department of Mathematics, Indian Institute of Science, Bangalore, for his constant support and encouragement. The third author also acknowledges the NBHM grant (0204/7/2019/R\&D-II/6831).} \subjclass[2010]{05C50, 05C76} \keywords{Graph operations, Graph eigenvalues, Universal adjacency matrix} \begin{abstract} A new generalization of Fiedler's lemma is obtained by introducing the concept of the main function of a matrix. As applications, the universal spectra of the $H$-join, the spectra of the $H$-generalized join and the spectra of the generalized corona of any graphs (possibly non-regular) are obtained. \end{abstract} \maketitle \section{Introduction} All the graphs considered in this paper are finite and simple. The eigenvalues of a graph $G$ are the eigenvalues of its adjacency matrix $A(G)$. The set of all eigenvalues of $G$ is called the spectrum of $G$, denoted by $\operatorname{spec}(G)$. For more on graphs and their eigenvalues we refer \cite{cvet,cvet2}. Let $H$ be a graph with vertex set $\{v_1,\dots,v_k\}$ and let $\mathcal{F}=\{G_1,G_2, \dots, G_k\}$ be a family of graphs. In \cite{hjn}, the $H$-join operation of the graphs $G_1, G_2, \dots, G_k,$ denoted by $\displaystyle \bigvee_{H}\mathcal{F}$, is obtained by replacing the vertex $v_i$ of $H$ by the graph $G_i$ for $1 \le i \le k$ and every vertex of $G_i$ is made adjacent with every vertex of $G_j$, whenever $v_i$ is adjacent to $v_j$ in $H$. Precisely, $\displaystyle \bigvee_{H}\mathcal{F}$ is the graph with vertex set $V\big(\displaystyle \bigvee_{H} \mathcal{F} \big) = \displaystyle \bigcup_{i=1}^k V(G_i)$ and edge set $E\big(\displaystyle \bigvee_{H} \mathcal{F} \big) = \big(\displaystyle \bigcup_{i=1}^k E(G_i)\big) \cup ( \displaystyle \bigcup_{ v_iv_j \in E(H)} \{ xy : x \in V(G_i), y \in V(G_j) \}).$ In addition, by considering a family of vertex subsets $\mathcal{S}=\{S_1,S_2, \dots, S_k\}$ where $S_i \subset V(G_i)$ for each $1 \le i \le k$, a generalization of $H$-join operation, known as $H$-generalized join operation constrained by vertex sets, $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ is introduced in \cite{cdo13} as follows: $V\big(\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F} \big) = \displaystyle \bigcup_{i=1}^k V(G_i)$ and $E\big(\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F} \big) = \big(\displaystyle \bigcup_{i=1}^k E(G_i)\big) \cup ( \displaystyle \bigcup_{ v_iv_j \in E(H)} \{ xy : x \in S_i, y \in S_j \}).$ For instance consider the examples in Section \ref{examples}. 
If we take $S_i = V(G_i)$ for each $1 \le i \le k$, then the $H$-generalized join operation $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ coincides with the $H$-join operation of the graphs $G_1,G_2\dots,G_k$. In \cite{sch}, the $H$-join operation of the graphs was initially introduced as generalized composition by Schwenk, denoted by $H[G_1,G_2,\dots,G_k]$. Also, the same operation is studied in some other names as generalized lexicographic product and joined union in \cite{wng,nupa,stev}. When all $G_i$'s are equal to the same graph $G$, it is called the lexicographic product\cite{gdry}, denoted by $H[G]$. The following lemma \cite[Lemma 2.2]{fid} is proved by M. Fiedler and effectively used in the study of finding sufficient conditions for $k$ arbitrary real numbers to be eigenvalues of a non-negative $k \times k$ symmetric matrix. \begin{lem}\cite{fid}\label{clasc} Let $A$ be a symmetric $m \times m$ matrix with eigenvalues $\alpha_1, \alpha_2,\dots,\alpha_m$ and $B$ be a symmetric $n \times n$ matrix with eigenvalues $\beta_1,\beta_2, \dots,\beta_n$. let $u$ be an eigenvector of $A$ corresponding to $\alpha_1$ and $v$ be an eigenvector of $B$ corresponding to $\beta_1$ such that $\Vert u \Vert= \Vert v \Vert = 1$. Then for any constant $\rho$ the matrix $$C = \begin{bmatrix} A & \rho u v^t \\ \rho v u^t & B \end{bmatrix}$$ has eigenvalues $\alpha_2,\dots,\alpha_m,\beta_2,\dots,\beta_n,\gamma_1,\gamma_2$ where $\gamma_1$ and $\gamma_2$ are the eigenvalues of the matrix $$\widehat{C} = \begin{bmatrix} \alpha_1 & \rho \\ \rho & \beta_1 \end{bmatrix}.$$ \end{lem} In \cite{cdo11, hjn, cdo13}, the above lemma is called Fiedler's lemma and it has been used to obtain the eigenvalues of some graphs. In \cite{cdo11}, a generalization of the Fiedler's Lemma is obtained \cite[Lemma 2]{cdo11} by Cardoso et al. which can be applied in the $H$-join of regular graphs when $H=P_k$, path on $k$ vertices. Then in \cite{hjn} Cardoso et al. obtained another generalization of Fiedler's lemma \cite[Theorem 3]{hjn} as follows, which can be applied in the $H$-join of regular graphs for any $H$. \begin{thm}\cite{hjn}\label{gfiedler} Let $M_i$ be a symmetric matrix of order $n_i$ and $u_i$ be an eigenvector of $M_i$ corresponding to the eigenvalue $\alpha_i$, such that $\Vert u_i \Vert=1$ for $1 \le i \le k$. Let $\rho_{ij}$ be a collection of arbitrary scalars such that $\rho_{ij} = \rho_{ji}$ for $1 \le i < j \le k$. 
Considering $$\bold M=(M_1, M_2, \dots, M_k), \quad \bold u=(u_1, u_2, \dots, u_k)$$ as $k$-tuples, and $$ \rho=(\rho_{12}, \dots, \rho_{1k},\rho_{23}, \dots, \rho_{2k}, \dots, \rho_{k-1,k})$$ as a $\frac{k(k-1)}{2}$-tuple, the following matrices are defined.\\ $A(\bold M, \bold u, \rho):=$ $\begin{bmatrix} M_1 & \rho_{12} u_1u_2^t & \cdots & \rho_{1k} u_1u_k^t \\ \rho_{21} u_2u_1^t & M_2 & \cdots & \rho_{2k} u_2u_k^t \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} u_ku_1^t & \rho_{k2} u_ku_2^t & \cdots & M_k \end{bmatrix}$ and $\widetilde A(\bold M, \bold u, \rho):=$ $\begin{bmatrix} \alpha_{1} & \rho_{12} & \cdots & \rho_{1k} \\ \rho_{21} & \alpha_{2} & \cdots & \rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} & \rho_{k2} & \cdots &\alpha_{k} \end{bmatrix}.$ Then $\operatorname{spec}(A(\bold M, \bold u, \rho))= \Bigg(\bigcup_{i=1}^k \left( \operatorname{spec}(M_i)\backslash\{\alpha_{i}\} \right)\Bigg) \cup \operatorname{spec}(\widetilde A(\bold M, \bold u, \rho))$. \end{thm} In \cite{hjn}, this result is used extensively to compute the eigenvalues of the $H$-join of graphs when the graphs $G_i$ are regular, and in \cite{cdo13} to compute the eigenvalues of the $H$-generalized join operation when the subsets $S_i$ are $(k,\tau)$-regular. In this paper, we obtain a new generalization of Fiedler's lemma, in terms of characteristic polynomials. The main difference is that we do not restrict $\bold u$ to be a $k$-tuple of eigenvectors or $\bold M$ to be a $k$-tuple of symmetric matrices; instead, we consider $\bold u$ as a $k$-tuple of arbitrary complex vectors and $\bold M$ as a $k$-tuple of arbitrary complex square matrices of appropriate sizes. We accomplish this task by introducing the concept of the main function of a matrix in Section \ref{secmf}. Also, as an application of our result, we obtain the characteristic polynomial of the $H$-join of graphs when the graphs $G_i$ are arbitrary (possibly non-regular). In \cite{sch}, Schwenk remarked that ``In general, it does not appear likely that the characteristic polynomial of the generalized composition can always be expressed in terms of the characteristic polynomials of $H,G_1,G_2,\dots,G_k$''. In this paper, we prove that it is possible to express the characteristic polynomial of the $H$-join of graphs (i.e., the generalized composition) in terms of the characteristic polynomials and main functions of $G_1,G_2,\dots,G_k$, and another function obtained from the adjacency matrix of $H$. Moreover, for the $H$-join of arbitrary graphs, we obtain the characteristic polynomial of its universal adjacency matrix. The universal adjacency matrix of a graph $G$ is defined as follows: Let $A(G)$, $I$, $J$, and $D(G)$ be the adjacency matrix of $G$, the identity matrix, the all-one matrix, and the degree matrix of $G$, respectively. Any matrix of the form $U(G) = \alpha A + \beta I + \gamma J + \delta D$, where $\alpha,\beta,\gamma,\delta \in \mathbb R$ and $\alpha \ne 0$, is called the universal adjacency matrix of $G$. Many interesting and important matrices associated with a graph can be obtained as special cases of $U(G)$. For example, from the universal adjacency matrix $U(G)$, we get the adjacency matrix $A(G)$, the Laplacian matrix $L(G)=D(G)-A(G)$, the signless Laplacian matrix $Q(G)=D(G)+A(G)$, and the Seidel matrix $S(G)=J-I-2A(G)$ by taking appropriate values for $\alpha, \beta, \gamma$, and $\delta$. In \cite{gerb}, the Laplacian spectrum of the $H$-join of arbitrary graphs is obtained.
In \cite{chen19}, the characteristic polynomial of the matrix $A(G)-tD(G)$ is obtained for the $H$-join of regular graphs. In \cite{wng}, the characteristic polynomial of the adjacency matrix of the lexicographic product of arbitrary graphs is obtained. Recently, in \cite{hmob}, the universal adjacency spectrum of the disjoint union of regular graphs was obtained. In this paper, we obtain the characteristic polynomial and eigenvalues of the universal adjacency matrix of the $H$-join of arbitrary graphs $G_1, G_2, \dots, G_k$. Then we obtain the characteristic polynomial and eigenvalues of the adjacency matrix of the $H$-generalized join of graphs $G_1, G_2, \dots, G_k$, where the subsets $S_i \subset V(G_i)$ are arbitrary for $1 \le i \le k$. Also, we deduce the characteristic polynomial of the generalized corona of graphs by viewing the corona as an $H$-join of graphs. Hence many results obtained (mostly for regular graphs) in \cite{hjn, cdo13, chen19, lsk, hmob, wng, wu14} are generalized here to arbitrary graphs. Throughout this paper, we denote the identity matrix of order $n$ by $I_{n}$, the all-one matrix of order $n$ by $J_{n}$ and the all-one vector of size $n \times 1$ by $\textbf{1}_{n}$. \section{The main function of a matrix} \label{secmf} Consider a graph $G$ on $n$ vertices with adjacency matrix $A(G)$. Suppose $A(G)$ has the spectral decomposition $A(G)=\sum_{i=1}^k \theta_i E_i$, where the $\theta_i$'s are the distinct eigenvalues of $G$ and $E_i$ is the orthogonal projection onto the eigenspace of $\theta_i$. An eigenvalue $\theta_i$ is called a main eigenvalue~\cite{rowl} if the corresponding eigenspace $\mathcal{E}(\theta_i)$ is not orthogonal to $\textbf{1}_n$. The cosines of the angles between $\textbf{1}_n$ and the eigenspaces of $A$ are known as the main angles of $G$, given by $\beta_i=\dfrac{1}{\sqrt{n}}\Vert E_i \textbf{1}_n\Vert$, for $1\le i \le k$. So $\theta_i$ is a main eigenvalue if and only if $\beta_i \ne 0$. Consider the field of rational functions $\mathbb C(\lambda)$. The determinant $\det(\lambda I -A)$ is a non-zero element of $\mathbb C(\lambda)$ and hence the matrix $\lambda I - A$ is invertible over $\mathbb C(\lambda).$ In \cite{mcln}, the function $\textbf{1}_n^T(\lambda I_n-A(G))^{-1}\textbf{1}_n$ is introduced under the name of the coronal of $G$ and is used to find the characteristic polynomial of the corona of two graphs. Since $E_i^2=E_i$, it is easy to see that $$\textbf{1}_n^T(\lambda I_n-A(G))^{-1}\textbf{1}_n=\sum_{i=1}^k \dfrac{\textbf{1}_n^T E_i \textbf{1}_n}{\lambda-\theta_i} =\sum_{i=1}^k \dfrac{ \Vert E_i \textbf{1}_n \Vert^2}{\lambda-\theta_i}= \sum_{i=1}^k \dfrac{n\beta_i^2}{\lambda-\theta_i},$$ in which the only non-vanishing terms are those corresponding to the main eigenvalues. Also in \cite{mcln}, the authors remarked that graphs with different eigenvalues can have the same coronals, whereas cospectral graphs can have different coronals. This is because the main function of a graph depends not only on the eigenvalues but also on the main angles of the graph. For more on the main angles and main eigenvalues, we refer to \cite{rowl} and the references therein. Because of these relationships with the main eigenvalues and main angles of the graph $G$, in this paper we call the function $\textbf{1}_n^T(\lambda I_n-A(G))^{-1}\textbf{1}_n$ the main function of the graph $G$. Moreover, for a matrix $M$ and vectors $u$ and $v$ of compatible dimensions, we introduce the following notions. \begin{defn} Let $M$ be an $n \times n$ complex matrix, and let $u$ and $v$ be $ n \times 1$ complex vectors.
The main function associated to the matrix $M$ corresponding to the vectors $u$ and $v$, denoted by $\Gamma_M(u,v)$, is defined to be $\Gamma_M(u,v;\lambda) = v^{t}(\lambda I - M)^{-1}u \in \mathbb C(\lambda)$. When $u=v$, we write $\Gamma_M(u,v;\lambda)=\Gamma_M(u;\lambda).$ \end{defn} \begin{defn} Let $M$ be an $n \times n$ normal matrix over $\mathbb{C}$ and let $u$ be an $n \times 1$ complex vector. An eigenvalue $\lambda$ of $M$ is called a $u$-main eigenvalue if $u$ is not orthogonal to the eigenspace $\mathcal E_M(\lambda)$. When $u=\textbf{1}_n$, the all-one vector, we do not specify the vector and simply call $\lambda$ a main eigenvalue of $M$. \end{defn} \begin{lem}\label{evmain} Let $M$ be a matrix of order $n$ with an eigenvector $u$ corresponding to the eigenvalue $\mu$. Then $\Gamma_M(u;\lambda) = \dfrac{\Vert u \Vert^2}{\lambda-\mu}$. \end{lem} \begin{pf} We have $(\lambda I_n - M)u=(\lambda-\mu)u$. Applying $(\lambda I_n - M)^{-1}$ to both sides, we get $(\lambda I_n - M)^{-1}u=\dfrac{u}{\lambda-\mu}$, which implies $u^T (\lambda I_n - M)^{-1}u=\dfrac{u^T u}{\lambda-\mu}=\dfrac{\Vert u \Vert^2}{\lambda-\mu}$. \end{pf} \begin{lem}\label{polymain} Let $M$ be an $n \times n$ normal matrix over $\mathbb{C}$, $u$ be an $ n \times 1$ complex vector and $p(M)$ be a polynomial in $M$ with complex coefficients. Then an eigenvalue $\mu$ is a $u$-main eigenvalue of $M$ if and only if $p(\mu)$ is a $u$-main eigenvalue of $p(M)$. \end{lem} \begin{pf} For any eigenvalue of $M$ and the corresponding eigenvalue of $p(M)$, the eigenvectors are the same. So the eigenspaces are the same and hence the result follows. \end{pf} Now we can state our main result, a new generalization of Fiedler's lemma. \begin{thm}\label{mainthm} Let $M_i$ be a complex matrix of order $n_i$, and let $u_i$ and $v_i$ be arbitrary complex vectors of size $n_i \times 1$ for $1 \le i \le k$. Let $\rho_{ij}$ be arbitrary complex numbers for $1 \le i,j \le k$ and $i \ne j$. For each $1 \le i \le k$, let $\phi_i(\lambda)=\det(\lambda I_{n_i}-M_i)$ be the characteristic polynomial of the matrix $M_i$ and $\Gamma_i(\lambda) = \Gamma_{M_i}(u_i,v_i;\lambda) = v_i^t (\lambda I - M_i)^{-1} u_i$. Considering \begin{center} the $k$-tuple $\bold M=(M_1, M_2, \dots, M_k)$, the $2k$-tuple $\bold u=(u_1,v_1, u_2,v_2, \dots, u_k,v_k)$ \\ and the $k(k-1)$-tuple $\rho=(\rho_{12}, \rho_{13}, \dots, \rho_{1k},\rho_{21}, \rho_{23}, \dots, \rho_{2k}, \dots, \rho_{k1}, \rho_{k2}, \dots, \rho_{k,k-1}),$ \end{center} the following matrices are defined: $$A(\bold M, \bold u, \rho) := \begin{bmatrix} M_1 & \rho_{12} u_1v_2^t & \cdots & \rho_{1k} u_1v_k^t \\ \rho_{21} u_2v_1^t & M_2 & \cdots & \rho_{2k} u_2v_k^t \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} u_kv_1^t & \rho_{k2} u_kv_2^t & \cdots &M_k \end{bmatrix}$$ $$\text{ and } \widetilde{A}(\bold M, \bold u, \rho) := \begin{bmatrix} \frac{1}{\Gamma_1} & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \frac{1}{\Gamma_2} & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots &\frac{1}{\Gamma_{k}} \end{bmatrix}.$$ Then the characteristic polynomial of $A(\bold M, \bold u, \rho)$ is given as \begin{equation}\label{maineqn} \det(\lambda I - A(\bold M, \bold u, \rho)) = \Bigg( \prod_{i=1}^k \phi_i(\lambda) \Gamma_i(\lambda) \Bigg) \det(\widetilde{A}(\bold M, \bold u, \rho)). \end{equation} \end{thm} The proof of this theorem is given in Section \ref{subsecpfofmainthm}.
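Before turning to the proof, we note that Equation \eqref{maineqn} is an identity of rational functions in $\lambda$, so it can be sanity-checked numerically by evaluating both sides at a generic point. The following minimal Python/NumPy sketch (an illustration only, not part of the formal development; the matrices, vectors, scalars $\rho_{ij}$ and the evaluation point are arbitrary random choices, and the $\rho_{ij}$ need not be symmetric) performs this check for $k=3$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sizes = [2, 3, 4]                        # n_1, n_2, n_3
k = len(sizes)
M = [rng.standard_normal((n, n)) for n in sizes]
u = [rng.standard_normal((n, 1)) for n in sizes]
v = [rng.standard_normal((n, 1)) for n in sizes]
rho = rng.standard_normal((k, k))        # rho[i, j] plays the role of rho_{ij}

lam = 2.7                                # a generic evaluation point

# Assemble A(M, u, rho): M_i on the diagonal, rho_ij u_i v_j^t off the diagonal.
A = np.block([[M[i] if i == j else rho[i, j] * (u[i] @ v[j].T)
               for j in range(k)] for i in range(k)])
lhs = np.linalg.det(lam * np.eye(sum(sizes)) - A)

# phi_i(lam) and Gamma_i(lam) = v_i^t (lam I - M_i)^{-1} u_i.
phi = [np.linalg.det(lam * np.eye(n) - M[i]) for i, n in enumerate(sizes)]
Gam = [(v[i].T @ np.linalg.solve(lam * np.eye(n) - M[i], u[i])).item()
       for i, n in enumerate(sizes)]

# tilde A: 1/Gamma_i on the diagonal, -rho_ij off the diagonal.
Atil = -rho.copy()
np.fill_diagonal(Atil, [1.0 / g for g in Gam])
rhs = np.prod(phi) * np.prod(Gam) * np.linalg.det(Atil)

assert np.isclose(lhs, rhs)              # both sides of (maineqn) agree
\end{verbatim}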
First, we deduce Theorem \ref{gfiedler} in terms of characteristic polynomials as a corollary of Theorem \ref{mainthm}. \begin{cor} Consider the notations defined in Theorem \ref{mainthm}. Suppose $u_i=v_i$ is an eigenvector of $M_i$ corresponding to an eigenvalue $\alpha_i$ with $\Vert u_i \Vert=1$. Then the characteristic polynomial of $A(\bold M, \bold u, \rho)$ is \begin{center} $\phi(A(\bold M, \bold u, \rho))=\frac{\phi_1}{\lambda-\alpha_1} \frac{\phi_2}{\lambda-\alpha_2} \dots \frac{\phi_k}{\lambda-\alpha_k} \det(\widetilde A (\bold M, \bold u, \rho)),$ where $\widetilde A (\bold M, \bold u, \rho)= \begin{bmatrix} \lambda-\alpha_1 & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \lambda-\alpha_2 & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots &\lambda-\alpha_k \end{bmatrix}$. \end{center} \begin{proof} Since $\Vert u_i \Vert=1$, by Lemma \ref{evmain} we get $\Gamma_i=\dfrac{1}{\lambda-\alpha_i}.$ Now the proof follows from Theorem \ref{mainthm}. \end{proof} \end{cor} In \cite[Theorem 2.3]{chen19}, another generalization of Fiedler's lemma, similar to Theorem \ref{gfiedler}, is given for matrices with fixed row sums, and the result is used to find the generalized characteristic polynomial of the $H$-join of regular graphs. We observe that any such matrix has the all-one vector as an eigenvector. By taking $u_i$ to be the all-one vector of appropriate size, we can deduce \cite[Theorem 2.3]{chen19} from Theorem \ref{mainthm}. \section{Proof of the main result} In this section, we prove Theorem \ref{mainthm}. We start with the following essential lemmas. \subsection{Some important lemmas} \begin{lem}\label{lemd}\cite{cvet} Let $A, B, C$ and $D$ be matrices such that $M= \begin{bmatrix} A & B\\ C & D \end{bmatrix} $. If $D$ is invertible, then $$\det(M) = \det(D) \det(A - BD^{-1}C).$$ \end{lem} \begin{lem}\cite{bart,dizh}\label{lemid} Let $A$ be an $n \times n$ invertible matrix, and let $u$ and $v$ be any two $n \times 1$ vectors such that $1 + v^{t}A^{-1}u \ne 0$. Then \begin{enumerate} \item $\det (A+uv^t) = (1+ v^t A^{-1} u) \det (A).$ \item $(A+uv^t)^{-1} = A^{-1} - \dfrac{A^{-1} u v^t A^{-1}}{1+v^tA^{-1}u}.$ \end{enumerate} \end{lem} \begin{lem}\label{lemid2} Let $A$ be an $n \times n$ complex matrix, and let $u$ and $v$ be any $n \times 1$ complex vectors. Also, let $\Gamma = v^t (\lambda I - A)^{-1} u$. Then \begin{enumerate} \item $\det (\lambda I - A + \alpha uv^t) = (1 + \alpha \Gamma) \det(\lambda I - A) = (1 + \alpha \Gamma) \phi_A(\lambda). $ \item $v^t(\lambda I - A + \alpha uv^t)^{-1}u= \dfrac{\Gamma}{1 +\alpha \Gamma}.$ \end{enumerate} \end{lem} \begin{proof} The proof of (1) follows directly from Lemma \ref{lemid}(1), as $\det (\lambda I - A + \alpha uv^t)=(1+\alpha v^t(\lambda I - A)^{-1}u)\det(\lambda I - A)$. So we prove (2).\\ By Lemma \ref{lemid}(2), $$(\lambda I - A + \alpha uv^t)^{-1}=(\lambda I - A)^{-1}-\alpha \dfrac{(\lambda I - A)^{-1}uv^t(\lambda I - A)^{-1}}{1+\alpha v^t(\lambda I - A)^{-1}u},$$ which implies $$v^t(\lambda I - A + \alpha uv^t)^{-1}u=\Gamma-\alpha \dfrac{\Gamma^2}{1+\alpha \Gamma}=\dfrac{\Gamma}{1 +\alpha \Gamma}.$$ \end{proof} Motivated by \cite[Theorem 8.13.3]{gdry}, we prove the following lemma. \begin{lem}\label{uev} Let $M$ be a complex normal matrix of order $n$ and let $u$ be any $n \times 1$ vector. Then the poles of $u^T(\lambda I - M)^{-1}u$ are the $u$-main eigenvalues of $M$ and they are simple.
\end{lem} \begin{pf} Let $\{\theta_1, \theta_2, \dots, \theta_{k}\}$ be the distinct eigenvalues of $M$ and let $\{\theta_1,\theta_2,\dots,\theta_{m}\}$ be the set of its $u$-main eigenvalues. Suppose the spectral decomposition of $M$ is $M=\sum_{j=1}^{k} \theta_j E_{\theta_j}$, where $E_{\theta_j}$ is the orthogonal projection onto the eigenspace of $\theta_j$. Then $(\lambda I - M)^{-1}=\sum_{j=1}^{k} \frac{E_{\theta_j}}{\lambda-\theta_j}$, and $\Gamma_{M}(u) = u^T(\lambda I - M)^{-1}u=\sum_{j=1}^{k} \frac{u^TE_{\theta_j}u}{\lambda-\theta_j}.$ Now, $u^T E_{\theta_j} u \ne 0$ if and only if $\theta_j$ is a $u$-main eigenvalue of $M$. So, $\Gamma_{M}(u)=\sum_{j=1}^{m} \frac{u^TE_{\theta_j}u}{\lambda-\theta_j}$ and the result follows. \end{pf} \subsection{Proof of Theorem \ref{mainthm}}\label{subsecpfofmainthm} \begin{proof} We prove the result by induction on $k$. The base case $k=1$ is clear. For the sake of clarity, we also write out the case $k=2$. By Lemma \ref{lemd}, we have \begin{align*} \begin{vmatrix} \lambda I_{n_1}-M_1 & -\rho_{12}u_1v_2^t \\ -\rho_{21}u_2v_1^t & \lambda I_{n_2}- M_2 \end{vmatrix} &= \det(\lambda I_{n_2}- M_2) \det (\lambda I_{n_1} - M_1 - \rho_{12}\rho_{21} \Gamma_2 u_1 v_1^t)\\ &= \phi_1 \phi_2 (1-\rho_{12}\rho_{21}\Gamma_2\Gamma_1), \quad \text{by Lemma \ref{lemid2}(1)}\\ &= \phi_1 \phi_2 \begin{vmatrix} 1 & -\rho_{12}\Gamma_1 \\ -\rho_{21}\Gamma_2 & 1 \end{vmatrix}. \end{align*} This proves the result for the case $k=2$. Now assume the result holds for $k-1$. Let $$M=\begin{bmatrix} \lambda I_{n_1} - M_1 & -\rho_{12} u_1v_2^t & \cdots & -\rho_{1k} u_1v_k^t \\ -\rho_{21} u_2v_1^t & \lambda I_{n_2}-M_2 & \cdots & -\rho_{2k} u_2v_k^t \\ \vdots & \vdots & \ddots & \vdots\\ -\rho_{k1} u_kv_1^t & -\rho_{k2} u_kv_2^t & \cdots &\lambda I_{n_k}-M_k \end{bmatrix}.$$ Then, by Lemma \ref{lemd}, we have \begin{equation} \label{pfind} \det(M)=\det(\lambda I_{n_k} - M_k)\det(S), \end{equation} $\text{where } S=\begin{bmatrix} \lambda I_{n_1} - M_1 & -\rho_{12} u_1v_2^t & \cdots & -\rho_{1,k-1} u_1v_{k-1}^t \\ -\rho_{21} u_2v_1^t & \lambda I_{n_2}-M_2 & \cdots & -\rho_{2,k-1} u_2v_{k-1}^t \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k-1,1} u_{k-1}v_1^t & -\rho_{k-1,2} u_{k-1}v_2^t & \cdots & \lambda I_{n_{k-1}}-M_{k-1} \end{bmatrix}\\ -\begin{bmatrix} -\rho_{1k} u_1v_k^t \\ -\rho_{2k} u_2v_k^t \\ \vdots \\ -\rho_{k-1,k} u_{k-1}v_k^t \end{bmatrix} (\lambda I_{n_k} - M_k)^{-1} \begin{bmatrix} -\rho_{k1} u_kv_1^t & -\rho_{k2} u_kv_2^t & \cdots & -\rho_{k,k-1} u_{k}v_{k-1}^t \end{bmatrix}$\\ $=[s_{ij}]$ given by $s_{ij}=\begin{cases} \lambda I_{n_i}-M_i - \Gamma_k \rho_{ik} \rho_{ki} u_iv_i^t & \text{if } i=j,\\ -(\rho_{ij} + \Gamma_k \rho_{ik} \rho_{kj}) u_iv_j^t & \text{if } i\ne j. \end{cases}$ By Lemma \ref{lemid2}, $\det( \lambda I_{n_i}-M_i - \Gamma_k \rho_{ik} \rho_{ki} u_iv_i^t)=\det( \lambda I_{n_i}-M_i)(1-(\Gamma_k \rho_{ik} \rho_{ki})\Gamma_i)$ and\\ $v_i^t ( \lambda I_{n_i}-M_i - \Gamma_k \rho_{ik} \rho_{ki} u_iv_i^t)^{-1} u_i = \dfrac{\Gamma_i}{1 - (\Gamma_k\rho_{ik} \rho_{ki})\Gamma_i}.$ Applying these results together with the induction hypothesis to $S$, we get \begin{align*} \det(S)&=\prod_{i=1}^{k-1}\Bigg(\det( \lambda I_{n_i}-M_i - \Gamma_k \rho_{ik} \rho_{ki} u_iv_i^t )\dfrac{\Gamma_i}{1 - (\Gamma_k\rho_{ik} \rho_{ki})\Gamma_i}\Bigg)\det(\widetilde{S})\\ &=\prod_{i=1}^{k-1}\Bigg(\det( \lambda I_{n_i}-M_i)\Gamma_i \Bigg)\det(\widetilde{S}), \end{align*} where $\widetilde{S}=[\widetilde{s}_{ij}]$ is given by $\widetilde{s}_{ij}=\begin{cases} \dfrac{1 - \rho_{ik} \rho_{ki} \Gamma_k\Gamma_i}{\Gamma_i} & \text{if } i=j,\\ -(\rho_{ij} + \Gamma_k \rho_{ik} \rho_{kj}) & \text{if } i\ne j. \end{cases}$ Therefore $$\det(S)=\phi_1 \phi_2 \cdots \phi_{k-1} \times \begin{vmatrix} 1-(\rho_{1k}\rho_{k1}\Gamma_k \Gamma_1) & -(\rho_{12} + \Gamma_k \rho_{1k} \rho_{k2}) \Gamma_1 &\cdots& -(\rho_{1,k-1}+ \Gamma_k \rho_{1k}\rho_{k,k-1}) \Gamma_1 \\ -(\rho_{21} + \Gamma_k \rho_{2k} \rho_{k1}) \Gamma_2 & 1-(\rho_{2k}\rho_{k2}\Gamma_k \Gamma_2) &\cdots& -(\rho_{2,k-1} + \Gamma_k \rho_{2k}\rho_{k,k-1}) \Gamma_2 \\ \vdots & \vdots & \ddots & \vdots \\ -(\rho_{k-1,1} + \Gamma_k \rho_{k-1,k}\rho_{k1}) \Gamma_{k-1} & -(\rho_{k-1,2} + \Gamma_k \rho_{k-1,k}\rho_{k2}) \Gamma_{k-1}&\cdots & 1-(\rho_{k-1,k}\rho_{k,k-1}\Gamma_k \Gamma_{k-1}) \end{vmatrix}.$$ The matrix in this determinant is precisely the Schur complement of the $(k,k)$ entry $1$ in the $k \times k$ matrix below, so, again by Lemma \ref{lemd}, $$\det(S)=\phi_1 \phi_2 \cdots \phi_{k-1} \times \begin{vmatrix} 1 & -\rho_{12} \Gamma_1 &\cdots & -\rho_{1,k-1} \Gamma_1 & -\rho_{1k} \Gamma_1 \\ -\rho_{21} \Gamma_2 & 1 &\cdots& -\rho_{2,k-1} \Gamma_2 & -\rho_{2k} \Gamma_2 \\ \vdots & \vdots & \ddots & \vdots & \vdots\\ -\rho_{k-1,1} \Gamma_{k-1} & -\rho_{k-1,2} \Gamma_{k-1}&\cdots & 1 & -\rho_{k-1,k} \Gamma_{k-1} \\ -\rho_{k1} \Gamma_{k} & -\rho_{k2} \Gamma_{k}&\cdots & -\rho_{k,k-1} \Gamma_k & 1 \end{vmatrix}.$$ By substituting this value of $\det(S)$ in Equation \eqref{pfind} and factoring $\Gamma_i$ out of the $i^{th}$ row of the determinant for $1 \le i \le k$, we get the required result for $k$. This completes the proof of Theorem \ref{mainthm}. \end{proof} Suppose the matrices $M_i$ are normal and $\{\theta_1,\theta_2,\dots,\theta_{m_i}\}$ is the set of distinct $u_i$-main eigenvalues of $M_i$, for $1 \le i \le k$. Then, as discussed in the proof of Lemma \ref{uev}, we can write \begin{equation}\label{defnfg} \Gamma_i=\dfrac{f_i}{g_i} \text{ where } g_i=\prod_{j=1}^{m_i}(\lambda - \theta_j). \end{equation} Hence, by Theorem \ref{mainthm}, \begin{equation}\label{maineqn1} \det(\lambda I - A(\bold M, \bold u, \rho)) = \Big(\frac{\phi_1}{g_1}\Big) \dots \Big(\frac{\phi_k}{g_k}\Big) \Phi(\lambda), \end{equation} where $\Phi(\lambda)=\begin{vmatrix} g_1(\lambda) & -\rho_{12}f_1(\lambda) & \cdots & -\rho_{1k}f_1(\lambda) \\ -\rho_{21}f_2(\lambda) & g_2(\lambda) & \cdots & -\rho_{2k}f_2(\lambda) \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1}f_k(\lambda) & -\rho_{k2}f_k(\lambda) & \cdots &g_k(\lambda) \end{vmatrix}. $\\ So we can describe the spectrum of $A(\bold M, \bold u, \rho) $ as follows. \begin{thm}\label{mainthm2} Consider the notations defined above. Suppose the matrices $M_i$ are normal. Then: \begin{itemize} \item Every eigenvalue $\lambda$ of $M_i$ with multiplicity $m(\lambda)$, which is not a $u_i$-main eigenvalue of $M_i$, is an eigenvalue of $A(\bold M, \bold u, \rho)$ with multiplicity $m(\lambda)$.
\item Every $u_i$-main eigenvalue $\lambda$ of $M_i$ with multiplicity $m(\lambda)$ is an eigenvalue of $A(\bold M, \bold u, \rho)$ with multiplicity $m(\lambda)-1$. \item The remaining eigenvalues are the roots of the polynomial $\Phi(\lambda).$ \end{itemize} \end{thm} \begin{proof} By Lemma \ref{uev}, the poles of $\Gamma_i$ are the $u_i$-main eigenvalues of $M_i$ and they are simple. Now the proof follows easily from Equation \eqref{maineqn1}. \end{proof} \section{Universal spectra of the $H$-join of graphs}\label{secappuni} In this section, by applying Theorem \ref{mainthm}, we obtain results on the characteristic polynomial and the spectrum of the universal adjacency matrix of the $H$-join of graphs. Consider a graph $H$ on $k$ vertices and a family of graphs $\mathcal{F}=\{G_1,G_2, \dots, G_k\}$. Let $G=\displaystyle \bigvee_{H}\mathcal{F}$ be the $H$-join of the graphs in $\mathcal{F}$, and let $n_i$, $A_i$ and $D_i$ be the number of vertices, the adjacency matrix and the degree matrix of the graph $G_i$, respectively, for $1\le i \le k$. Also, let $\rho_{ij}$ be the scalars defined by $\rho_{ij} = \rho_{ji}= 1$ if $v_iv_j \in E(H)$ and $0$ otherwise, for $1\le i,j \le k$ and $i \ne j$. Once and for all, we fix an ordering of the vertices of $G$ such that the adjacency matrix of the graph $G$ is given as \begin{equation} \label{AofG} A(G) = \begin{bmatrix} A_1 & \rho_{12} \textbf{1}_{n_1}\textbf{1}_{n_2}^t & \cdots & \rho_{1k} \textbf{1}_{n_1} \textbf{1}_{n_k}^t \\ \rho_{21} \textbf{1}_{n_2} \textbf{1}_{n_1}^t & A_2 & \cdots & \rho_{2k} \textbf{1}_{n_2} \textbf{1}_{n_k}^t \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} \textbf{1}_{n_k} \textbf{1}_{n_1}^t & \rho_{k2} \textbf{1}_{n_k} \textbf{1}_{n_2}^t & \cdots & A_k \end{bmatrix}. \end{equation} \subsection{Universal spectra of the $H$-join of graphs}\label{subsecushjoin} The proof of the following lemma is immediate from the definition of the $H$-join of graphs. \begin{lem} Let $H$ be a graph with vertex set $\{v_1,\dots,v_k\}$ and $\mathcal{F}=\{G_1,G_2, \dots, G_k\}$ be a family of $k$ graphs such that $V(G_i)=\{v_{1}^{(i)},\dots,v_{n_i}^{(i)}\}$ for $1 \le i \le k$. Then the degree of the vertex $v_{j}^{(i)}$ in $G$ is given by $$\deg_{G}(v_{j}^{(i)}) = \deg_{G_i}(v_{j}^{(i)}) + w_i, \quad 1 \le i \le k,\ 1 \le j \le n_i,$$ where $w_i = \displaystyle \sum_{v_l \in N_H(v_i)}n_l$. \end{lem} Let $U(G) = \alpha A(G) + \beta I_n + \gamma J_n + \delta D(G)$ with $\alpha \ne 0$ be the universal adjacency matrix of the graph $G$, where $n=\sum_{i=1}^k n_i$. Let $U_i = \alpha A_i + \beta I_{n_i} + \gamma J_{n_i} + \delta D_i$ be the universal adjacency matrix of the graph $G_i$, for $1 \le i \le k$. Therefore, by Equation \eqref{AofG}, the universal adjacency matrix of $G$ can be written as follows: \begin{equation} \label{UAofG} U(G)=\begin{bmatrix} U_1 + \delta w_1 I_{n_1} & (\rho_{12} \alpha +\gamma)\textbf{1}_{n_1}\textbf{1}_{n_2}^t & \cdots & (\rho_{1k}\alpha +\gamma)\textbf{1}_{n_1}\textbf{1}_{n_k}^t\\ (\rho_{21}\alpha +\gamma)\textbf{1}_{n_2}\textbf{1}_{n_1}^t & U_2 + \delta w_2 I_{n_2} & \cdots & (\rho_{2k}\alpha +\gamma)\textbf{1}_{n_2}\textbf{1}_{n_k}^t\\ \vdots & \vdots & \ddots & \vdots \\ (\rho_{k1}\alpha +\gamma)\textbf{1}_{n_k}\textbf{1}_{n_1}^t& (\rho_{k2}\alpha +\gamma)\textbf{1}_{n_k}\textbf{1}_{n_2}^t& \cdots & U_k + \delta w_k I_{n_k} \end{bmatrix}. \end{equation} In the following theorem, we obtain the characteristic polynomial of the universal adjacency matrix $U(G)$.
\begin{thm}\label{chUofG} Let $H$ be a graph on $k$ vertices and $\mathcal{F}=\{G_1,G_2, \dots, G_k\}$ be a family of $k$ arbitrary graphs. Consider the graph $G=\displaystyle \bigvee_{H}\mathcal{F}$. Let $\phi_{i}(\lambda)$ be the characteristic polynomial of $U_i$ and $\Gamma_{i}(\lambda)=\Gamma_{U_i}(\textbf{1}_{n_i};\lambda)$. Then we have the following. i) The characteristic polynomial of the universal adjacency matrix $U(G)$ given in Equation \eqref{UAofG} is $$\phi_{U(G)}(\lambda) = \displaystyle \prod_{i=1}^k \phi_i(\lambda-\delta w_i) \Gamma_i(\lambda-\delta w_i) \det(\widetilde{U}(G)),$$ \begin{equation}\label{UAdofG} \text{where}\,\, \widetilde{U}(G) = \begin{bmatrix} \frac{1}{\Gamma_1(\lambda-\delta w_1)} & -(\rho_{12} \alpha +\gamma) & \cdots & -(\rho_{1k} \alpha +\gamma) \\ -(\rho_{21} \alpha +\gamma) & \frac{1}{\Gamma_2(\lambda-\delta w_2)} & \cdots & -(\rho_{2k} \alpha +\gamma) \\ \vdots & \vdots & \ddots & \vdots \\ -(\rho_{k1} \alpha +\gamma) & -(\rho_{k2} \alpha +\gamma) & \cdots &\frac{1}{\Gamma_{k}(\lambda-\delta w_k)} \end{bmatrix}. \end{equation} ii) Analogously to Equations \eqref{defnfg} and \eqref{maineqn1}, we define $f_i, g_i$, and $\Phi(\lambda)$ corresponding to the main eigenvalues of $U_i$ for $1 \le i \le k$. Then the spectrum of $U(G)$ is given as below. \begin{itemize} \item For every eigenvalue $\mu$ of $U_i$ with multiplicity $m(\mu)$ which is not a main eigenvalue, $\mu+\delta w_i$ is an eigenvalue of $U(G)$ with multiplicity $m(\mu)$. \item For every main eigenvalue $\mu$ of $U_i$ with multiplicity $m(\mu)$, $\mu+\delta w_i$ is an eigenvalue of $U(G)$ with multiplicity $m(\mu)-1$. \item The remaining eigenvalues are the roots of the polynomial $\Phi(\lambda)$. \end{itemize} \end{thm} \begin{pf} For each $1\le i \le k$, let $P_i=U_i + \delta w_i I_{n_i}$. Then we have the following relations: $$\phi_{P_i}(\lambda)=\det(\lambda I_{n_i}-(U_i + \delta w_i I_{n_i}))=\phi_{U_i}(\lambda-\delta w_i) \,\,\text{and}$$ $$\Gamma_{P_i}(\lambda)=\textbf{1}_{n_i}^T(\lambda I_{n_i}-(U_i + \delta w_i I_{n_i}))^{-1}\textbf{1}_{n_i}=\Gamma_{U_i}(\lambda-\delta w_i).$$ Let $\widehat{\rho}_{ij}=\widehat{\rho}_{ji}=\rho_{ij} \alpha +\gamma$ for $1\le i<j \le k$. Considering the triplet $(\bold M, \bold u, \widehat{\rho} )$, given by $$\bold M = (P_1,P_2,\dots,P_k), \quad \bold u = ( \bold 1_{n_1}, \bold 1_{n_2},\dots, \bold 1_{n_k}) \text{ (taking } u_i=v_i=\bold 1_{n_i} \text{ in Theorem \ref{mainthm}) and}$$ $$\widehat{\rho}=(\widehat{\rho}_{12},\dots,\widehat{\rho}_{1k},\widehat{\rho}_{23},\dots,\widehat{\rho}_{2k},\dots,\widehat{\rho}_{k-1,k} ),$$ we can write the matrices in Equations \eqref{UAofG} and \eqref{UAdofG} as $U(G)=A(\bold M, \bold u, \widehat{\rho})$ and $\widetilde{U}(G)=\widetilde{A}(\bold M, \bold u, \widehat{\rho})$. Now, using Theorem \ref{mainthm}, the proof of (i) follows. By Lemma \ref{polymain}, $\mu$ is not a main eigenvalue of $U_i$ if and only if $\mu + \delta w_i$ is not a main eigenvalue of $P_i$. Now, by Theorem \ref{mainthm2}, the proof of (ii) follows. \end{pf} \begin{cor} \label{specUofG} Consider the notations defined in Theorem \ref{chUofG}. Suppose the graph $G_i$ is $r_i$-regular for $1 \le i \le k$.
Then $p_i=\alpha r_i + \beta + \gamma n_i + \delta(r_i+w_i)$ is an eigenvalue of $P_i=U_i + \delta w_i I_{n_i}$ and $$\operatorname{spec}(U(G)) = \bigg(\bigcup_{i=1}^k \big( \operatorname{spec}(P_i)\backslash \{p_i\} \big) \bigg)\cup \operatorname{spec}(\widetilde{U'}(G)),$$ where $\widetilde{U'}(G)= \begin{bmatrix} p_1 & \sqrt{n_1n_2}\widehat{\rho}_{12} & \cdots & \sqrt{n_1n_k}\widehat{\rho}_{1k} \\ \sqrt{n_2n_1}\widehat{\rho}_{21} & p_2 & \cdots & \sqrt{n_2n_k}\widehat{\rho}_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{n_kn_1}\widehat{\rho}_{k1} & \sqrt{n_kn_2}\widehat{\rho}_{k2} & \cdots &p_k \end{bmatrix}. $ \end{cor} \begin{proof} Clearly $\bold 1_{n_i}$ is an eigenvector of $P_i$ corresponding to the eigenvalue $p_i=\alpha r_i + \beta + \gamma n_i + \delta(r_i+w_i).$ Now, by Lemma \ref{evmain}, we have $\Gamma_i = \dfrac{n_i}{\lambda - p_i}$ and so, by Theorem \ref{chUofG}, we get $$\phi_{U(G)}(\lambda) = \dfrac{\phi_{1}}{\lambda - p_1} \dfrac{\phi_{2}}{\lambda - p_2} \cdots \dfrac{\phi_{k}}{\lambda - p_k} n_1 n_2 \cdots n_{k} \det(\widetilde{U}(G)). $$ Distributing $n_i$ inside the determinant of $\widetilde{U}(G)$, as $\sqrt{n_i}$ into the $i^{th}$ row and $\sqrt{n_i}$ into the $i^{th}$ column, we can write \begin{align*} n_1 n_2 \cdots n_{k} \det(\widetilde{U}(G))&= \det \begin{bmatrix} (\lambda - p_1) & -\sqrt{n_1n_2}\widehat{\rho}_{12} & \cdots & -\sqrt{n_1n_k}\widehat{\rho}_{1k} \\ -\sqrt{n_2n_1}\widehat{\rho}_{21} & (\lambda - p_2) & \cdots & -\sqrt{n_2n_k}\widehat{\rho}_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\sqrt{n_kn_1}\widehat{\rho}_{k1} & -\sqrt{n_kn_2}\widehat{\rho}_{k2} & \cdots &(\lambda - p_k) \end{bmatrix}\\ &=\det (\lambda I_k - \widetilde{U'}(G)). \end{align*} Now the proof follows. \end{proof} \begin{cor} \label{specUofG2} Consider the notations defined in Theorem \ref{chUofG}. Suppose $\alpha+\delta=0$. Then $p_i=\beta + \gamma n_i + \delta w_i$ is an eigenvalue of $P_i=U_i + \delta w_i I_{n_i}$ and $$\operatorname{spec}(U(G)) = \bigg(\bigcup_{i=1}^k \big( \operatorname{spec}(P_i)\backslash \{p_i\} \big) \bigg)\cup \operatorname{spec}(\widetilde{U'}(G)),$$ where $\widetilde{U'}(G)= \begin{bmatrix} p_1 & \sqrt{n_1n_2}\widehat{\rho}_{12} & \cdots & \sqrt{n_1n_k}\widehat{\rho}_{1k} \\ \sqrt{n_2n_1}\widehat{\rho}_{21} & p_2 & \cdots & \sqrt{n_2n_k}\widehat{\rho}_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{n_kn_1}\widehat{\rho}_{k1} & \sqrt{n_kn_2}\widehat{\rho}_{k2} & \cdots &p_k \end{bmatrix}. $ \end{cor} \begin{proof} It is easy to see that $$(\alpha A_i + \beta I_{n_i} + \gamma J_{n_i} + \delta D_i)\bold 1_{n_i}=(\alpha+\delta)\begin{bmatrix} \deg_{G_i}(v_1^{(i)})\\ \deg_{G_i}(v_2^{(i)}) \\ \vdots \\ \deg_{G_i}(v_{n_i}^{(i)}) \end{bmatrix} + (\beta + \gamma n_i)\bold 1_{n_i}.$$ So $\bold 1_{n_i}$ is an eigenvector of $P_i$ corresponding to the eigenvalue $p_i=\beta + \gamma n_i + \delta w_i.$ Then, by the same argument as in the previous corollary, the proof follows. \end{proof} \begin{rem} For any graph $G$, the Laplacian matrix $L(G)$ is obtained from the universal adjacency matrix $U(G)$ by taking $(\alpha, \beta, \gamma, \delta)=(-1,0,0,1)$. So, we can find the Laplacian spectrum of the $H$-join of arbitrary graphs from Corollary \ref{specUofG2}. \end{rem} Let $H$ be a graph on $k$ vertices and $G'$ be any graph. We recall that the lexicographic product of the graphs $H$ and $G'$, denoted by $H[G']$, is obtained as the $H$-join of the graphs in $\mathcal{F}=\{G_1,G_2, \dots, G_k\},$ where $G_i=G'$ for $1 \le i \le k$. In \cite{wng}, the authors obtained the characteristic polynomial of $H[G']$ \cite[Theorem 2.4]{wng} and investigated the spectrum in various cases.
Now we generalize \cite[Theorem 2.4]{wng} by obtaining the characteristic polynomial of the universal adjacency matrix of $H[G']$ when $\delta=0$. \begin{thm} \label{lexthm} Let $H$ be a graph on $k$ vertices and $G'$ be a graph on $n'$ vertices. Consider the graph $G=H[G']$, the lexicographic product of $H$ and $G'$. Suppose $\operatorname{spec}(H)=\{\lambda_1,\lambda_2, \dots, \lambda_k \}.$ Then the characteristic polynomial of the universal adjacency matrix $U(G)$ with $\delta=0$ is $$\phi_{U(G)}(\lambda) = \phi^k(\lambda)\Bigg( \displaystyle \prod_{i=1}^k (1-\lambda_i \Gamma(\lambda)) \Bigg),$$ where $\phi(\lambda)$ is the characteristic polynomial of $U(G')$ and $\Gamma(\lambda)=\Gamma_{U(G')}(\bold 1_{n'};\lambda)$, both with $\delta=0$. \end{thm} \begin{proof} By Theorem \ref{chUofG}, \begin{align*} \phi_{U(G)}(\lambda)&= \phi^k(\lambda)\Gamma^k(\lambda)\det(\frac{1}{\Gamma(\lambda)}I_k-A(H))\\ &=\phi^k(\lambda)\Gamma^k(\lambda)\Bigg(\prod_{i=1}^k (\frac{1}{\Gamma(\lambda)}-\lambda_i) \Bigg)\\ &=\phi^k(\lambda)\Bigg(\prod_{i=1}^k (1- \lambda_i \Gamma(\lambda)) \Bigg). \end{align*} \end{proof} \subsection{The generalized characteristic polynomial of the $H$-join of graphs}\label{subsecgench} The generalized characteristic polynomial of a graph $G$ is introduced in \cite{cvet} as the bivariate polynomial defined by $\phi_G(\lambda,t) = \det(\lambda I - (A(G) - t D(G)))$, where $A(G)$ and $D(G)$ are the adjacency and the degree matrices associated with the graph $G$. As mentioned earlier, in \cite[Theorem 3.1]{chen19}, the authors obtained a generalization of Fiedler's lemma for matrices with fixed row sums and, as an application, they obtained the generalized characteristic polynomial of the $H$-join of regular graphs. In the following theorem, we obtain the generalized characteristic polynomial of the $H$-join of arbitrary graphs. \begin{thm}\label{gchUofG} Let $H$ be any graph and $\mathcal{F}=\{G_1,G_2, \dots, G_k\}$ be a family of $k$ arbitrary graphs. Consider the graph $G=\displaystyle \bigvee_{H}\mathcal{F}$. Let $M(G)=A(G) - t D(G)$ and $M_i=A_i-t D_i$ for $1 \le i \le k$. Let $\phi_{i}$ be the characteristic polynomial of $M_i$ and $\Gamma_{i}=\Gamma_{M_i}(\textbf{1}_{n_i};\lambda)$. Then: i) The generalized characteristic polynomial of the graph $G$ is $$\phi_{M(G)}(\lambda) = \displaystyle \prod_{i=1}^k \phi_i(\lambda+t w_i) \Gamma_i(\lambda+t w_i) \det(\widetilde{M}(G)),$$ where $ \widetilde{M}(G) = \begin{bmatrix} \frac{1}{\Gamma_1(\lambda+t w_1)} & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \frac{1}{\Gamma_2(\lambda+t w_2)} & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots &\frac{1}{\Gamma_{k}(\lambda+t w_k)} \end{bmatrix}.$ ii) Analogously to Equations \eqref{defnfg} and \eqref{maineqn1}, we define $f_i, g_i$, and $\Phi(\lambda)$ corresponding to the main eigenvalues of $M_i$ for $1 \le i \le k$. Then the spectrum of $M(G)$ is given as below. \begin{itemize} \item For every eigenvalue $\mu$ of $M_i$ with multiplicity $m(\mu)$ which is not a main eigenvalue, $\mu-t w_i$ is an eigenvalue of $M(G)$ with multiplicity $m(\mu)$. \item For every main eigenvalue $\mu$ of $M_i$ with multiplicity $m(\mu)$, $\mu-t w_i$ is an eigenvalue of $M(G)$ with multiplicity $m(\mu)-1$. \item The remaining eigenvalues are the roots of the polynomial $\Phi(\lambda)$.
\end{itemize} \end{thm} \begin{proof} The proof is direct from Theorem \ref{chUofG}, by taking $(\alpha, \beta, \gamma, \delta)=(1,0,0,-t)$ in the universal adjacency matrix $U(G).$ \end{proof} \begin{cor} \cite{chen19} \label{specGAofG} Suppose the graph $G_i$ is $r_i$-regular for each $1 \le i \le k$. Then $p_i= r_i-t(r_i+w_i)$ is an eigenvalue of $P_i=M_i -t w_i I_{n_i}$ and $$\operatorname{spec}(M(G)) = \bigg(\bigcup_{i=1}^k \big( \operatorname{spec}(P_i)\backslash \{p_i\} \big) \bigg)\cup \operatorname{spec}(\widetilde{M'}(G)),$$ where $\widetilde{M'}(G)= \begin{bmatrix} p_1 & \sqrt{n_1n_2}{\rho}_{12} & \cdots & \sqrt{n_1n_k}{\rho}_{1k} \\ \sqrt{n_2n_1}{\rho}_{21} & p_2 & \cdots & \sqrt{n_2n_k}{\rho}_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ \sqrt{n_kn_1}{\rho}_{k1} & \sqrt{n_kn_2}{\rho}_{k2} & \cdots &p_k \end{bmatrix}. $ \end{cor} \begin{proof} The proof is obtained from Theorem \ref{gchUofG} by arguments similar to those used in Corollary \ref{specUofG} of Theorem \ref{chUofG}. \end{proof} \begin{rem} In \cite{chen19}, from the generalized characteristic polynomial, the spectra of the signless Laplacian and the normalized Laplacian are deduced for the $H$-join of regular graphs. Similarly, we can deduce them for the $H$-join of arbitrary graphs from Theorem \ref{gchUofG}. Also, we can deduce the Seidel spectrum of the $H$-join of arbitrary graphs by taking $(\alpha, \beta, \gamma, \delta)=(-2,-1,1,0)$ in the universal adjacency matrix $U(G)$ in Theorem \ref{chUofG}. \end{rem} \section{Spectra of the $H$-generalized join of graphs}\label{subsechgenjoin} In this section, we obtain the characteristic polynomial of the $H$-generalized join of graphs $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ introduced in \cite{cdo13}. \begin{defn} Let $G$ be any graph. A vertex subset $S$ of a graph $G$ is said to be $(k,\tau)$-regular if $S$ induces a $k$-regular graph in $G$ and every vertex outside $S$ has $\tau$ neighbours in $S$. When $G$ is a regular graph, for convenience, $S = V(G)$ is considered as $(k,0)$-regular. \end{defn} \begin{defn} Let $G$ be any graph with vertex set $\{v_1,v_2,\dots,v_n\}$. For any subset $S\subset V(G)$, the characteristic vector of $S$, denoted by $\chi_S$, is defined as the 0-1 vector whose $i^{th}$ entry is 1 if and only if the vertex $v_i \in S.$ \end{defn} \begin{lem}\cite{cdo13}\label{mainlem} Let $G$ be a graph with a $(k,\tau)$-regular set $S$, where $\tau > 0$, and let $\lambda \in \sigma(A(G))$. Then, $\lambda$ is not a main eigenvalue if and only if $\lambda = k - \tau$ or $\chi_S \in (\mathcal E_G(\lambda))^{\perp}$. \end{lem} Fix a $(k,\tau)$-regular subset $S$ of $V(G)$. An eigenvalue $\lambda \in \sigma(G)$ is said to be a \textit{special eigenvalue} if $\lambda \ne k -\tau $ and $\lambda$ is not a main eigenvalue. Then, by Lemma \ref{mainlem}, if $\lambda$ is a special eigenvalue of $G$ then $\lambda$ is not a $\chi_S$-main eigenvalue. In \cite{cdo13}, the authors obtained all the eigenvalues of $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ when the graphs $G_i$ are regular and the subsets $S_i \in \mathcal{S}$ are such that $S_i=V(G_i)$ for $1 \le i \le k$, in which case $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ coincides with the $H$-join of regular graphs $\displaystyle \bigvee_{H}\mathcal{F}$. In the other cases, it is proved that every special eigenvalue corresponding to a $(k_i,\tau_i)$-regular subset $S_i$ is an eigenvalue of $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$, and thus a subset of the eigenvalues of $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ is obtained.
In the following theorem, we obtain the characteristic polynomial of $\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ for any family of subsets $\mathcal{S}$ and obtain the complete set of eigenvalues. \begin{thm}\label{thmghjoin} Consider a graph $H$ of order $k$ and a family of graphs $\mathcal F = \{G_1,\dots,G_k\}$. Consider also a family of vertex subsets $\mathcal S = \{S_1, \dots, S_k\}$ such that $S_i \subset V(G_i)$ for $1 \le i \le k$. Let $G =\displaystyle \bigvee_{H,\mathcal S} \mathcal F$. Let $n_i$ and $A_i$ be the number of vertices and the adjacency matrix of the graph $G_i$, respectively, for $1\le i \le k$. For $1 \le i,j \le k$ with $i \ne j$, let $\rho_{ij}$ be the scalars defined by $\rho_{ij} = 1$ if $v_iv_j \in E(H)$ and $0$ otherwise. Then we have the following. \\ i) The characteristic polynomial of $G$ is $$\phi_{G}(\lambda) = \displaystyle \prod_{i=1}^k \phi_i(\lambda) \Gamma_i(\lambda) \det(\widetilde{A}(G)),$$ \begin{equation}\label{gjoinAdofG} \text{where}\,\, \widetilde{A}(G) = \begin{bmatrix} \frac{1}{\Gamma_1} & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \frac{1}{\Gamma_2} & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots &\frac{1}{\Gamma_{k}} \end{bmatrix}, \end{equation} with $\phi_i(\lambda)=\det(\lambda I_{n_i}-A(G_i))$ and $\Gamma_i(\lambda)=\Gamma_{A_i}(\chi_{S_{i}};\lambda)$. \noindent ii) Analogously to Equations \eqref{defnfg} and \eqref{maineqn1}, we define $f_i, g_i$ and $\Phi(\lambda)$ corresponding to the $\chi_{S_{i}}$-main eigenvalues of $G_i$ for $1 \le i \le k$. Then the spectrum of $G$ is given as below. \begin{itemize} \item Every eigenvalue $\mu$ of $A_i$ with multiplicity $m(\mu)$, which is not a $\chi_{S_{i}}$-main eigenvalue, is an eigenvalue of $G$ with multiplicity $m(\mu)$. \item Every $\chi_{S_{i}}$-main eigenvalue $\mu$ of $A_i$ with multiplicity $m(\mu)$ is an eigenvalue of $G$ with multiplicity $m(\mu)-1$. \item The remaining eigenvalues are the roots of the polynomial $\Phi(\lambda).$ \end{itemize} \end{thm} \begin{proof} By the definition of $\bigvee_{H,\mathcal S} \mathcal F$, the adjacency matrix of $G$ is given as $$ A(G) = \begin{bmatrix} A_1 & \rho_{12} \chi_{S_{1}}\chi_{S_{2}}^t & \cdots & \rho_{1k} \chi_{S_{1}}\chi_{S_{k}}^t \\ \rho_{21} \chi_{S_{2}}\chi_{S_{1}}^t & A_2 & \cdots & \rho_{2k} \chi_{S_{2}}\chi_{S_{k}}^t \\ \vdots & \vdots & \ddots & \vdots \\ \rho_{k1} \chi_{S_{k}}\chi_{S_{1}}^t & \rho_{k2} \chi_{S_{k}}\chi_{S_{2}}^t & \cdots & A_k \end{bmatrix}.$$ Then, by direct application of Theorems \ref{mainthm} and \ref{mainthm2} to $A(G)$, the proofs of (i) and (ii) follow immediately. \end{proof} \section{Spectra of generalized corona of graphs} In \cite[Theorem 3.1]{lsk}, the generalized corona product is defined as below and its characteristic polynomial is calculated. In this section, we deduce this result as a corollary of Theorem \ref{mainthm}. This is done by viewing the corona product as an $H$-join of suitably chosen graphs. \begin{defn} Let $H'$ be a graph on $k$ vertices. Let $G_1,G_2,\dots,G_k$ be graphs of order $n_1,n_2,\dots,n_k$, respectively. The generalized corona product of $H'$ with $G_1,G_2,\dots,G_k$, denoted by $H' \tilde{\circ} \Lambda_{i=1}^k G_i$, is obtained by taking one copy of each of the graphs $H',G_1,G_2,\dots,G_k$ and joining the $i^{th} $ vertex of $H'$ to every vertex of $G_i$. \end{defn} When $G_i=G'$ for all $i$, the graph $H' \tilde{\circ} \Lambda_{i=1}^k G_i$ is simply called the corona of $H'$ and $G'$, denoted by $H' \circ G'$.
\begin{thm}\label{scorona} Let $H'$ be a graph with vertex set $V(H')=\{v_1,v_2,\dots,v_k\}$. Let $G_1,G_2,\dots,G_k$ be arbitrary graphs, and let $\rho_{ij} = 1$ if $v_iv_j \in E(H')$ and $0$ otherwise. The characteristic polynomial of the generalized corona product $G = H' \tilde{\circ} \Lambda_{i=1}^k G_i$ is given by $$\phi_G(\lambda)= \prod_{i=1}^k \phi_{G_i}(\lambda) \det(\widetilde A(H')), $$ where $\widetilde A(H')=\begin{bmatrix} \lambda-\Gamma_{G_1}(\lambda) & -\rho_{12} & \cdots & -\rho_{1k} \\ -\rho_{21} & \lambda-\Gamma_{G_2}(\lambda) & \cdots & -\rho_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ -\rho_{k1} & -\rho_{k2} & \cdots & \lambda-\Gamma_{G_k}(\lambda) \end{bmatrix}.$ \end{thm} \begin{proof} Let $H=H' \circ K_1.$ Let $v_{k+i}$ be the new vertex in $H$ attached with the vertex $v_i$ in the copy of $H'$, for $1 \le i \le k$. Let $\mathcal{F}=\{K_1, K_1, \dots, K_1, G_1, G_2, \dots, G_k\}$. Then we get the following visualization of the generalized corona as the $H$-join of the graphs in $\mathcal{F}$: $$H' \tilde{\circ} \Lambda_{i=1}^k G_i=\displaystyle \bigvee_{H}\mathcal{F}. $$ That is, each $v_i$ is replaced by $K_1$ and each $v_{k+i}$ is replaced by $G_i$ in $H$, to form the $H$-join. Now $A(H)= \begin{bmatrix} A(H') & I_k \\ I_k & 0_k \end{bmatrix}$. Since $\phi_{K_1}(\lambda)=\lambda$ and $\Gamma_{K_1}(\lambda)=\dfrac{1}{\lambda}$, by letting $\alpha=1$, $\beta=\gamma=\delta=0$ in Theorem \ref{chUofG}, we get $$\phi_G(\lambda) =\Bigg(\prod_{i=1}^k (\phi_{K_1}(\lambda) \phi_{G_i}(\lambda) \Gamma_{K_1}(\lambda) \Gamma_{G_i}(\lambda))\Bigg) \det(\widetilde A(H)),$$ which implies \begin{equation} \label{pfcorona} \phi_G(\lambda)=\Bigg(\prod_{i=1}^k (\phi_{G_i}(\lambda)\Gamma_{G_i}(\lambda))\Bigg)\det(\widetilde A(H)), \end{equation} where $\widetilde A(H)=\begin{bmatrix} \lambda I_k-A(H') & -I_k \\ -I_k & \operatorname{diag}(\dfrac{1}{\Gamma_{G_1}(\lambda)},\dfrac{1}{\Gamma_{G_2}(\lambda)},\dots,\dfrac{1}{\Gamma_{G_k}(\lambda)}) \end{bmatrix}$.\\ Now, by Lemma \ref{lemd}, $\det(\widetilde A(H))$ is given as\\ $\det\begin{bmatrix} \frac{1}{\Gamma_{G_1}(\lambda)} & 0 & \cdots & 0\\ 0 & \frac{1}{\Gamma_{G_2}(\lambda)} & \cdots & 0\\ \vdots& \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \frac{1}{\Gamma_{G_k}(\lambda)} \end{bmatrix} \det \Bigg(\lambda I_k-A(H')- \begin{bmatrix} \Gamma_{G_1}(\lambda) & 0 & \cdots & 0\\ 0 & \Gamma_{G_2}(\lambda) & \cdots & 0\\ \vdots& \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \Gamma_{G_k}(\lambda) \end{bmatrix} \Bigg) $ $=\Bigg(\prod_{i=1}^k \dfrac{1}{\Gamma_{G_i}(\lambda)}\Bigg) \det\bigg(\begin{bmatrix} \lambda-\Gamma_{G_1}(\lambda) & 0 & \cdots & 0\\ 0 & \lambda-\Gamma_{G_2}(\lambda) & \cdots & 0\\ \vdots& \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \lambda-\Gamma_{G_k}(\lambda) \end{bmatrix}-A(H')\bigg).$ Now, by substituting $\det(\widetilde A(H))$ in Equation \eqref{pfcorona}, we get the required result. \end{proof} \begin{rem} Similarly, we can obtain the other variants of the spectra of the generalized corona of arbitrary graphs. The same approach applies to other variants of the corona by a suitable choice of $H$. \end{rem} \section{Examples}\label{examples} To illustrate our results, we compute the characteristic polynomials in two particular examples: an $H$-join of graphs and an $H$-generalized join of graphs constrained by vertex subsets. The other results can be applied similarly. \begin{example}\label{ex1} Consider the graphs $H=P_3$, $G_1=P_3$, $G_2=K_{1,3}$ and $G_3=K_2 \cup K_1$ as follows.
\begin{center} \scalebox{0.7} { \begin{pspicture}(0,-0.59375)(5.64375,0.59375) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(1.7846875,-0.08875)(3.4246874,-0.06875) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(3.4246874,-0.06875)(5.0646877,-0.04875) \usefont{T1}{ptm}{m}{n} \rput(1.7692188,-0.35875){$v_1$} \usefont{T1}{ptm}{m}{n} \rput(3.4092188,-0.35875){$v_2$} \usefont{T1}{ptm}{m}{n} \rput(5.0692186,-0.35875){$v_3$} \usefont{T1}{ptm}{m}{n} \rput(0.12671874,0.40125){$H:$} \end{pspicture} } \scalebox{0.7} { \begin{pspicture}(0,-2.6092188)(13.309063,2.6092188) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(1.9699908,1.4957812)(1.99,-0.14421874) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(1.99,-0.14421874)(2.010009,-1.7842187) \usefont{T1}{ptm}{m}{n} \rput(0.56453127,0.32578126){$G_1:$} \rput{90.43272}(8.840372,-7.1823015){\psarc[linewidth=0.02,dotsize=0.07055555cm 2.0]{*-*}(7.9843173,0.7957781){1.73}{24.734303}{154.47658}} \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(7.2170687,0.86002195)(7.246196,-0.7798413) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(7.246196,-0.7798413)(7.275323,-2.4197047) \psline[linewidth=0.02cm,dotsize=0.07055555cm 2.0]{*-*}(11.469991,1.4757811)(11.49,-0.16421875) \psdots[dotsize=0.136](11.51,-1.8242188) \usefont{T1}{ptm}{m}{n} \rput(2.5745313,1.5457813){$v_1^{(1)}$} \usefont{T1}{ptm}{m}{n} \rput(2.5945313,-0.09421875){$v_2^{(1)}$} \usefont{T1}{ptm}{m}{n} \rput(2.5945313,-1.7142187){$v_3^{(1)}$} \usefont{T1}{ptm}{m}{n} \rput(7.8545313,2.4057813){$v_1^{(2)}$} \usefont{T1}{ptm}{m}{n} \rput(5.304531,0.30578125){$G_2:$} \usefont{T1}{ptm}{m}{n} \rput(10.324532,0.30578125){$G_3:$} \usefont{T1}{ptm}{m}{n} \rput(7.8145313,0.90578127){$v_2^{(2)}$} \usefont{T1}{ptm}{m}{n} \rput(7.8545313,-0.7342188){$v_3^{(2)}$} \usefont{T1}{ptm}{m}{n} \rput(7.8745313,-2.3742187){$v_4^{(2)}$} \usefont{T1}{ptm}{m}{n} \rput(12.074532,1.5257813){$v_1^{(3)}$} \usefont{T1}{ptm}{m}{n} \rput(12.094531,-0.11421875){$v_2^{(3)}$} \usefont{T1}{ptm}{m}{n} \rput(12.1145315,-1.7742188){$v_3^{(3)}$} \end{pspicture} } \end{center} Let $\mathcal{F}=\{G_1,G_2,G_3 \}$. 
Then the $H$-join graph $G=\displaystyle \bigvee_{H}\mathcal{F}$ is given as \begin{center} \scalebox{0.9} { \begin{pspicture}(0,-2.455)(8.867687,2.455) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(3.4196784,1.5299999)(3.4396875,-0.11) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(3.4396875,-0.11)(3.4596968,-1.7499999) \usefont{T1}{ppl}{m}{n} \rput(0.14921875,0.36){$G:$} \rput{90.43272}(7.8163457,-6.097542){\psarc[linewidth=0.02,dotsize=0.07055555cm 2.0]{*-*}(6.934005,0.8299969){1.73}{24.734303}{154.47658}} \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(6.166756,0.89424074)(6.1958833,-0.7456226) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(6.1958833,-0.7456226)(6.2250104,-2.385486) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(8.739678,1.5099999)(8.759687,-0.13) \psdots[dotsize=0.136](8.779688,-1.79) \psline[linewidth=0.02cm](3.3796875,1.59)(6.1996875,2.41) \psline[linewidth=0.02cm](3.3796875,1.55)(6.1196876,0.89) \psline[linewidth=0.02cm](3.4396875,1.53)(6.2196875,-2.37) \psline[linewidth=0.02cm](3.4396875,-0.11)(6.1996875,2.43) \psline[linewidth=0.02cm](3.4196875,-0.17)(6.1996875,0.91) \psline[linewidth=0.02cm](3.4396875,-0.11)(6.2196875,-0.73) \psline[linewidth=0.02cm](3.4196875,-0.13)(6.2596874,-2.43) \psline[linewidth=0.02cm](3.4596875,-1.75)(6.2196875,2.43) \psline[linewidth=0.02cm](3.4596875,-1.75)(6.1596875,0.89) \psline[linewidth=0.02cm](3.4796875,-1.73)(6.2396874,-0.73) \psline[linewidth=0.02cm](3.4396875,-1.79)(6.2596874,-2.35) \psline[linewidth=0.02cm](6.2396874,2.37)(8.739688,1.47) \psline[linewidth=0.02cm](6.1796875,2.37)(8.719687,-0.13) \psline[linewidth=0.02cm](6.2196875,2.33)(8.739688,-1.77) \psline[linewidth=0.02cm](6.1796875,0.87)(8.719687,1.47) \psline[linewidth=0.02cm](6.2196875,0.85)(8.719687,-0.13) \psline[linewidth=0.02cm](6.1796875,0.95)(8.719687,-1.73) \psline[linewidth=0.02cm](6.2396874,-0.75)(8.779688,1.51) \psline[linewidth=0.02cm](6.2396874,-0.71)(8.739688,-0.15) \psline[linewidth=0.02cm](6.1796875,-0.77)(8.699688,-1.77) \psline[linewidth=0.02cm](6.2396874,-2.39)(8.779688,-1.81) \psline[linewidth=0.02cm](3.4396875,1.53)(6.2196875,-0.73) \end{pspicture} } \end{center} We see that $\phi_1(\lambda)=\lambda^3-2\lambda $, $\phi_2(\lambda)=\lambda^4-3\lambda^2$, $\phi_3(\lambda)=\lambda^3-\lambda $, $\Gamma_1(\lambda)=\dfrac{3\lambda+4}{\lambda^2-2}$, $\Gamma_2(\lambda)=\dfrac{4\lambda+6}{\lambda^2-3}$ and $\Gamma_3(\lambda)=\dfrac{3\lambda-1}{\lambda^2-\lambda}$. The characteristic polynomial of $G$ is $\lambda^3 (\lambda^3+4\lambda^2-\lambda-6)(\lambda^3-5\lambda^2-8\lambda+2)(\lambda+1)$ which is equal to $$\phi_1(\lambda)\phi_2(\lambda)\phi_3(\lambda)\Gamma_1(\lambda)\Gamma_2(\lambda)\Gamma_3(\lambda) \det\begin{bmatrix} \dfrac{1}{\Gamma_1(\lambda)}&-1&0\\ -1&\dfrac{1}{\Gamma_2(\lambda)}&-1\\ 0& -1 &\dfrac{1}{\Gamma_3(\lambda)} \end{bmatrix}.$$ \end{example} \begin{example}\label{ex2} Consider $H$ and $\mathcal{F}$ as in Example \ref{ex1}. Let $S_1=\{v_1^{(1)},v_2^{(1)}\}$, $S_2=\{v_1^{(2)},v_2^{(2)},v_4^{(2)}\}$ and $S_3=\{v_2^{(3)},v_3^{(3)}\}$. 
Then the $H$-generalized join graph $G=\displaystyle \bigvee_{H,\mathcal{S}}\mathcal{F}$ is given as \begin{center} \scalebox{0.9} { \begin{pspicture}(0,-2.455)(8.858,2.455) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(3.4099908,1.5299999)(3.43,-0.11) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(3.43,-0.11)(3.450009,-1.7499999) \usefont{T1}{ppl}{m}{n} \rput(0.36453125,0.36){$G:$} \rput{90.43272}(7.806585,-6.087855){\psarc[linewidth=0.02,dotsize=0.07055555cm 2.0]{*-*}(6.9243174,0.8299969){1.73}{24.734303}{154.47658}} \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(6.1570687,0.89424074)(6.186196,-0.7456226) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(6.186196,-0.7456226)(6.215323,-2.385486) \psline[linewidth=0.02cm,fillcolor=black,dotsize=0.07055555cm 2.0]{*-*}(8.729991,1.5099999)(8.75,-0.13) \psdots[dotsize=0.136](8.77,-1.79) \psline[linewidth=0.02cm](3.37,1.59)(6.19,2.41) \psline[linewidth=0.02cm](3.37,1.55)(6.11,0.89) \psline[linewidth=0.02cm](3.43,1.53)(6.21,-2.37) \psline[linewidth=0.02cm](3.43,-0.11)(6.19,2.43) \psline[linewidth=0.02cm](3.41,-0.17)(6.19,0.91) \psline[linewidth=0.02cm](3.41,-0.13)(6.25,-2.43) \psline[linewidth=0.02cm](6.17,2.37)(8.71,-0.13) \psline[linewidth=0.02cm](6.21,2.33)(8.73,-1.77) \psline[linewidth=0.02cm](6.21,0.85)(8.71,-0.13) \psline[linewidth=0.02cm](6.17,0.95)(8.71,-1.73) \psline[linewidth=0.02cm](6.23,-2.39)(8.77,-1.81) \psline[linewidth=0.02cm](6.25,-2.37)(8.73,-0.09) \end{pspicture} } \end{center} Here, $\phi_1(\lambda)=\lambda^3-2\lambda $, $\phi_2(\lambda)=\lambda^4-3\lambda^2$ and $\phi_3(\lambda)=\lambda^3-\lambda $. Based on the choices of $S_1$, $S_2$ and $S_3$, we get $\Gamma_1(\chi_{S_1};\lambda)=\dfrac{2\lambda^2+2\lambda-1}{\lambda^3-2\lambda}$, $\Gamma_2(\chi_{S_2};\lambda)=\dfrac{3\lambda}{\lambda^2-3}$ and $\Gamma_3(\chi_{S_3};\lambda)=\dfrac{2\lambda^2-1}{\lambda^3-\lambda}$. The characteristic polynomial of $G$ is $\lambda^4(\lambda^6-18\lambda^4-6\lambda^3+35\lambda^2+6\lambda-15)$ which is equal to $$\phi_1(\lambda)\phi_2(\lambda)\phi_3(\lambda)\Gamma_1(\chi_{S_1};\lambda)\Gamma_2(\chi_{S_2};\lambda)\Gamma_3(\chi_{S_3};\lambda) \det\begin{bmatrix} \dfrac{1}{\Gamma_1(\chi_{S_1};\lambda)}&-1&0\\ -1&\dfrac{1}{\Gamma_2(\chi_{S_2};\lambda)}&-1\\ 0& -1 &\dfrac{1}{\Gamma_3(\chi_{S_3};\lambda)} \end{bmatrix}.$$ \end{example}
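Example \ref{ex1} is also easy to confirm by brute force. The following minimal Python/NumPy sketch (an illustration only; the vertex ordering follows the figures, with the centre of $K_{1,3}$ listed first) assembles $A(G)$ and compares its spectrum with the roots of the characteristic polynomial computed above:
\begin{verbatim}
import numpy as np

# Adjacency matrices of G1 = P3, G2 = K_{1,3} (centre first), G3 = K2 u K1.
A1 = np.array([[0,1,0],[1,0,1],[0,1,0]])
A2 = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,0],[1,0,0,0]])
A3 = np.array([[0,1,0],[1,0,0],[0,0,0]])

blocks = [A1, A2, A3]
off = np.cumsum([0] + [len(b) for b in blocks])   # block offsets 0, 3, 7, 10
A = np.zeros((off[-1], off[-1]))
for i, b in enumerate(blocks):
    A[off[i]:off[i+1], off[i]:off[i+1]] = b
for i, j in [(0, 1), (1, 2)]:                     # H = P3: v1 ~ v2 and v2 ~ v3
    A[off[i]:off[i+1], off[j]:off[j+1]] = 1
    A[off[j]:off[j+1], off[i]:off[i+1]] = 1

spec = np.sort(np.linalg.eigvalsh(A))
expected = np.sort(np.concatenate([
    np.zeros(3),                                  # lambda^3
    np.roots([1, 4, -1, -6]).real,                # lambda^3+4lambda^2-lambda-6
    np.roots([1, -5, -8, 2]).real,                # lambda^3-5lambda^2-8lambda+2
    [-1.0],                                       # lambda+1
]))
assert np.allclose(spec, expected, atol=1e-8)
\end{verbatim}
Replacing the all-one off-diagonal blocks by the outer products $\chi_{S_i}\chi_{S_j}^t$ gives the corresponding check for Example \ref{ex2}.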
\section{Introduction} Despite the success of deep neural networks (DNNs) in computer vision~\cite{alexnet,vgg,resnet}, natural language processing~\cite{lstm,attention,bert} and speech recognition~\cite{asr1,asr2}, DNNs are notoriously vulnerable to adversarial attacks~\cite{advattack} that inject carefully crafted imperceptible perturbations into the input and are able to deceive the model with a high success rate. There are three main orthogonal approaches for combating adversarial attacks: (i) using adversarial attacks as a data augmentation mechanism by including adversarially perturbed samples in the training data to induce robustness in the trained model~\cite{AT,TRADES,FAT,GAIRAT,RST,MART,CAT,DAT}; (ii) preprocessing the input data with a denoising function or deep network~\cite{denoise1,denoise2,denoise3,denoise4,denoise5} to counteract the effect of adversarial perturbations; and (iii) training an auxiliary network to \emph{detect} adversarial examples and deny providing inference on adversarial samples~\cite{metzen2017detecting,li2017adversarial,rouhani2017curtail,rouhani2018towards,grosse2017statistical,safetynet,lid,mahalanobis,nnif,spectraldefense,libre,lng}. Our work falls under adversarial example detection, as it neither requires retraining the main network (as in adversarial training) nor degrades the input quality (as in preprocessing defenses). Existing adversarial example detection methods~\cite{safetynet,lid,mahalanobis,nnif,spectraldefense} need to train auxiliary networks in a binary classification manner (\textit{e.g.}\ benign versus adversarial attack(s)). The shortcoming of this strategy is that the detector is trained on the specific attack(s) that are available and known at training time. To ensure good detection performance at inference time, the detection network needs to be trained on a large number of attacks. Otherwise, the detection network will perform poorly on attacks unseen during training (\textit{i.e.}\ out of domain) or even on attacks seen during training, due to overfitting~\cite{rice2020overfitting}. We argue that a good adversarial detection method should be able to detect any adversarial attack, even if the defender is unaware of the type of adversarial attack. To this end, we propose to frame adversarial sample detection as an anomaly detection problem, in which only one detection model is constructed and trained on benign samples only, such that the detection model is \emph{attack-agnostic}. We propose an anomaly detection framework for identifying adversarial examples by measuring the statistical deviations caused by adversarial perturbations. We consider the deviation of two complementary features that reflect the interaction of adversarial perturbations with the data and the models. The first feature is the Least Significant Component Feature\ (LSCF), which maps data to a subspace where the distribution of benign images is compact, while the distribution of adversarial images is spread out. The second feature is the Hessian Feature\ (HF), which uses the second-order derivatives as a measure of the distortion caused by adversarial perturbations to the model's loss landscape. Our results underscore the utility of each of the two features and their complementary nature.
The contributions of this paper are: \begin{enumerate}[leftmargin=*, noitemsep, topsep=0pt] \item An anomaly detection framework for adversarial detection that measures the statistical deviation caused by adversarial perturbations on two proposed features, LSCF\ and HF, which are theoretically justified and capture the interaction of adversarial perturbations with the data and the model. \item Empirical analysis demonstrating the effectiveness of our method in detecting eight different adversarial attacks on three datasets. Our method achieves 94.9\% AUC on CIFAR10, 89.7\% AUC on CIFAR100 and 94.6\% AUC on SVHN. \item Comprehensive evaluation showing the computational efficiency, sensitivity to hyperparameters, cross-model generalization, and closeness to the binary classification upper bound of the proposed anomaly detection method. \end{enumerate} \section{Related Work}\label{sec.related_work} \noindent \textbf{Adversarial Attacks} deceive DNNs by adding carefully crafted perturbations that are imperceptible to humans. Attacks can be classified according to their perturbation constraints. Typical attacks such as the Fast Gradient Sign Method (FGSM)~\cite{fgsm}, the Basic Iterative Method (BIM)~\cite{bim} and Projected Gradient Descent (PGD)~\cite{pgd} are $l_{\infty}$ attacks, which allow perturbation of all pixels but limit the maximum deviation a pixel can have. $l_2$ attacks, such as Carlini \& Wagner (CW)~\cite{cw} and DeepFool~\cite{deepfool}, limit the total deviation over all pixels that an example can have. $l_0$ attacks try to change as few pixels as possible but allow a larger perturbation budget for the modified pixels; they include the one-pixel attack (OnePixel)~\cite{onepixel} and SparseFool~\cite{sparsefool}. More recently, AutoAttack~\cite{autoattack} deceives DNN models more effectively by combining AutoPGD~\cite{autoattack}, the Fast Adaptive Boundary Attack (FAB)~\cite{fab} and the Square attack~\cite{square}. \noindent \textbf{Detecting Adversarial Examples} The general approach for detecting adversarial attacks is to train an auxiliary model using benign and adversarial examples. Various network architectures and features have been used. Metzen \textit{et al.}~\cite{metzen2017detecting} use the activations to train subnetwork detectors. Li \textit{et al.}~\cite{li2017adversarial} use PCA-projected features to train cascaded detectors. Statistics of the examples are used in~\cite{rouhani2017curtail,rouhani2018towards}. Grosse \textit{et al.}~\cite{grosse2017statistical} use Bayesian uncertainty and train a logistic regression detector. Lu \textit{et al.}~\cite{safetynet} train a quantized RBF-SVM classifier on top of the penultimate ReLU features. Ma \textit{et al.}~\cite{lid} use the feature distances of an example to its nearest neighbours as the image's fingerprint. Lee \textit{et al.}~\cite{mahalanobis} train a classifier with confidence scores computed from the Mahalanobis distance under Gaussian discriminant analysis. Cohen \textit{et al.}~\cite{nnif} leverage the influence function~\cite{influencefunction} and fit a k-NN model to detect adversarial examples. Harder \textit{et al.}~\cite{spectraldefense} convert examples into the frequency domain and detect adversarial examples using Fourier features. Deng \textit{et al.}~\cite{libre} detect adversarial examples by converting models into Bayesian neural networks. Abusnaina \textit{et al.}~\cite{lng} use neighborhood connectivity and a graph neural network to detect adversarial examples.
The approaches mentioned above, while performing well, are all trained with supervision, which limits their generalization to unseen attacks. In contrast, our method is trained in an anomaly detection fashion and thus generalizes better to unseen attacks. \noindent \textbf{Anomaly Detection} aims to detect unusual samples in data. Classic approaches include One-class SVM~\cite{ocsvm}, Random Forest~\cite{randomforest}, Kernel Density Estimation~\cite{kde}, Local Outlier Factor~\cite{lof} and Elliptic Envelope~\cite{ellipticenvelope}. Deep anomaly detection~\cite{dsvdd,salehi2021multiresolution,reiss2021panda,li2021cutpaste,mohseni2020self,beggel2019robust,yi2020patch,pang2019deep,bergmann2020uninformed,deecke2018image,park2020learning,perera2019ocgan,perera2019learning,golan2018deep} takes advantage of DNNs to achieve better scalability and performance on high-dimensional data. In this work, we apply classic anomaly detection approaches for detecting adversarial examples, as we find that deep learning-based anomaly detectors are not adequate for detecting adversarial attacks: they learn image-level semantic representations that cannot capture the local image subtleties introduced by adversarial attacks. \section{Attack-Agnostic Adversarial Detection}\label{sec:h3ad} We present our Hessian and Eigen-decomposition-based Adversarial Detection\ (HEAD) by first motivating adversarial detection as an anomaly detection problem. Then, we introduce Least Significant Component Feature\ and Hessian Feature\ and explain the rationale for using them to detect adversarial attacks. \subsection{Challenges And Rationale}\label{sec:rationale} A fundamental assumption of existing adversarial attack detection~\cite{metzen2017detecting,li2017adversarial,rouhani2017curtail,rouhani2018towards,grosse2017statistical,safetynet,lid,mahalanobis,nnif,spectraldefense,libre,lng} as well as adversarial augmentation methods~\cite{safetynet,lid,mahalanobis,nnif,spectraldefense} is that adversarial attacks are known and samples can easily be generated using these attacks to train the detector or augment the main model being defended. This assumption, however, is not realistic, since more often than not the defender does not know the attacks \emph{a priori}, and therefore samples cannot be easily generated to train a supervised detector or an adversarially augmented model. The absence of attack (\textit{i.e.}\ negative) training examples and the need to be attack-agnostic both motivate framing adversarial detection as an anomaly detection problem. More formally, the task of the defender is to protect the model trained on \emph{only benign examples} $X$ against adversarial examples $\widehat{X}$ that are unknown during training. The detector $D$ will be trained only on benign examples and will give a score $s(x)=D(f(\mathbf{x}))$ for each testing sample $\mathbf{x}$, indicating the likelihood that $\mathbf{x}$ is an adversarial attack, where $f: \mathbb{R}^m \to \mathbb{R}^n$ is a feature extraction function and $m$ and $n$ are the dimensions of the input and feature spaces, respectively. The feature extractor could be any hand-crafted function, such as principal component analysis (PCA), or a method crafted specifically for adversarial detection such as LID~\cite{lid} and Mahalanobis~\cite{mahalanobis}. Similarly, $D$ can be any arbitrary anomaly detector, \textit{e.g.}, classic approaches such as the kernel density estimator~\cite{kde} and One-Class SVM~\cite{ocsvm}, or DNN-based methods like DSVDD~\cite{dsvdd}.
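To make this framing concrete, the following is a minimal sketch of the attack-agnostic pipeline (our illustration only: \texttt{f} is a stand-in for any feature extractor discussed above, and the arrays are random placeholders rather than real data):
\begin{verbatim}
# Sketch: fit the detector D on benign features only, then score
# test samples with s(x) = D(f(x)); higher score = more anomalous.
import numpy as np
from sklearn.neighbors import KernelDensity

def f(images):
    # Placeholder feature extractor f: R^m -> R^n (e.g., LSCF, HF).
    return images.reshape(len(images), -1)

X_benign = np.random.rand(1000, 3072)   # stand-in for benign images
D = KernelDensity(kernel="gaussian", bandwidth=5.0).fit(f(X_benign))

X_test = np.random.rand(10, 3072)       # unknown: benign or adversarial
s = -D.score_samples(f(X_test))         # low density => likely attack
\end{verbatim}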
We propose to detect adversarial examples using two complementary features of the image that reflect the interaction between the adversarial perturbation and the dataset and the DNN model, respectively. The first feature, namely Least Significant Component Feature, quantifies the deviation of adversarial examples from the statistics of benign samples by eigen-decomposing the dataset and mapping the data to the eigenvectors that have the \textit{smallest} eigenvalues. The second feature, namely Hessian Feature, distinguishes adversarial from benign images by inspecting the curvature of the loss landscape locally at the geometric optima of the model, which can be measured by the Hessian matrix of the loss \textit{w.r.t.}\ the inputs (or intermediate layer outputs). \subsection{Least Significant Component Feature\ (LSCF)}\label{sec.pca} As mentioned in \Cref{sec.related_work}, we hypothesize that extracting image features that capture global context will not work well, since such features tend to miss the small subtleties introduced by adversarial perturbations. Therefore, we hypothesize that we need to extract image features that are sensitive to small imperceptible image noise. To extract the LSCF, we use principal component analysis (PCA) to project the raw benign images to a new space with an orthonormal basis (\textit{i.e.}\ eigenvectors) in which different dimensions are linearly uncorrelated. Rather than retaining the projections that correspond to the largest eigenvalues (\textit{i.e.}\ eigenfaces~\cite{139758}), we retain the projections onto the directions with the smallest eigenvalues. Hence, the features consist of the least significant components of the data. Suppose that the training data $\mathbf{D} \in \mathbb{R}^{N \times m}$ has $N$ samples and $m$ input dimensions. Its covariance matrix $\mathbf{C} = \mathbf{D}^{\top} \mathbf{D} / (N-1)$ can be decomposed into $\mathbf{C} = \mathbf{V}\mathbf{L}\mathbf{V}^\top$, where the columns of $\mathbf{V}=[\mathbf{v}_1 \mathbf{v}_2 ... \mathbf{v}_m]$ are the eigenvectors of $\mathbf{C}$, and $\mathbf{L}$ is a diagonal matrix having the eigenvalues of $\mathbf{C}$ in descending order on its diagonal (\textit{i.e.}, $\mathrm{diag}(\mathbf{L})=[\lambda_1 \lambda_2 ... \lambda_m], \lambda_i \geq \lambda_{i+1} \forall i \in \{1 .. m-1\}$). The LSCF\ of an image $\mathbf{x}$ is its projection onto the eigenvectors $\mathbf{v}$ that have the smallest eigenvalues: \begin{equation} f_{LSCF}(\mathbf{x}) = \mathbf{x}^\top \mathbf{v} \in \mathbb{R}^{1 \times d} \end{equation} where $d$ is the dimension of LSCF\ and $\mathbf{v}=[\mathbf{v}_m \mathbf{v}_{m-1}... \mathbf{v}_{m-d+1}] \in \mathbb{R}^{m \times d}$ consists of the last $d$ columns of $\mathbf{V}$. We explain the reason for mapping images onto the least significant eigenvectors by estimating an upper bound on the expected deviation caused by perturbation for different eigenvectors. Let $p_i=\mathbf{x}^\top \mathbf{v}_i$ and $p'_i=(\mathbf{x}+\Delta\mathbf{x})^\top \mathbf{v}_i$ be the projections of the image $\mathbf{x}$ and of the adversarially perturbed image onto the $i$th eigenvector, respectively. Since the transformation is linear, the change of $p_i$ caused by $\Delta \mathbf{x}$ is $\Delta p_i=\Delta \mathbf{x}^\top \mathbf{v}_i$.
The variance of $p_i$ measures the spread of the data in the direction of $\mathbf{v}_i$ and hence satisfies $\mathrm{Var}(p_i) = \lambda_i$, while the variance of $p'_i$ has an upper bound of $\lambda_i + \mathbf{E}(\Delta p_i^2)$, as shown in \Cref{eq.upperbound}, \begin{align}\label{eq.upperbound} \mathrm{Var}(p'_i) &= \mathrm{Var}(p_i) + \mathrm{Var}(\Delta p_i) \nonumber \\ &= \lambda_i + \mathbf{E}(\Delta p_i^2) - \mathbf{E}(\Delta p_i)^2 \nonumber \\ &\leq \lambda_i + \mathbf{E}(\Delta p_i^2) \end{align} assuming that the adversarial perturbation is independent of the data. Further, the expected deviation caused by the perturbation, $\mathbf{E}(\Delta p_i^2)$, can be no larger than the squared norm of the perturbation $\|\Delta\mathbf{x}\|^2$, as shown in \Cref{eq.upperbound_of_derivation} \begin{alignat}{3}\label{eq.upperbound_of_derivation} \mathbf{E}(\Delta p_i^2) &= \mathbf{E}((\Delta \mathbf{x}^\top \mathbf{v}_i)^2) && \overnumber{1}{\leq} \mathbf{E}(\|\Delta\mathbf{x}\|^2 \|\mathbf{v}_i\|^2) \nonumber \\ &\overnumber{2}{=} \mathbf{E}(\|\Delta\mathbf{x}\|^2) && \overnumber{3}{=} \|\Delta\mathbf{x}\|^2 \end{alignat} where $\circlednumber{1}$ is due to the Cauchy–Schwarz inequality, $\circlednumber{2}$ holds as $\mathbf{v}_i$ is an eigenvector with $\|\mathbf{v}_i\|=1$, and $\circlednumber{3}$ holds since for adversarial attacks, the injected perturbation budget $\|\Delta \mathbf{x}\|$ is the same for all images (if the maximum budget is always achieved). By combining \Cref{eq.upperbound,eq.upperbound_of_derivation}, we obtain that $\mathrm{Var}(p'_i)$ can be no larger than $\lambda_i + \|\Delta\mathbf{x}\|^2$. The empirical analysis in \Cref{fig:tightness} suggests that the actual variance of the projections of adversarially perturbed images onto the least significant eigenvector is much closer to this upper bound than that of randomly perturbed images. The perturbation budget in the figure varies from 1/255 to 8/255, and for each budget, the result is the average over 1,000 CIFAR10~\cite{cifar} images. When the difference between $\mathrm{Var}(p'_i)$ and $\mathrm{Var}(p_i)$ is large, we can easily distinguish benign images from adversarial images by mapping them onto eigenvector $\mathbf{v}_i$. We quantify this difference by the ratio $\mathrm{Var}(p'_i) / \mathrm{Var}(p_i)$, which has an upper bound of $1 + \|\Delta \mathbf{x}\|^2 / \lambda_i$ from \Cref{eq.upperbound,eq.upperbound_of_derivation}. Since the perturbation budget $\|\Delta\mathbf{x}\|$ is predefined before the attack, the value of $\lambda_i$ determines the distinguishability between $p'_i$ and $p_i$. The smaller the value of $\lambda_i$, the easier it is to distinguish adversarial from benign images. As a result, mapping data onto the least significant components gives the highest distinguishability of attacks. \Cref{fig:pca_feat_perturb} visualizes the distribution of projected values for 1,000 adversarial and benign images on major principal components and least significant components. We can see that the distributions for the two types of images are indistinguishable in the major PCA components, but are clearly distinguishable in the least significant components.
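As an illustration, LSCF\ extraction can be implemented in a few lines of NumPy (a sketch following the covariance formula above; the array shapes are placeholder assumptions, not the paper's exact preprocessing):
\begin{verbatim}
# Sketch of LSCF: project images onto the d eigenvectors of the benign
# data covariance C = D^T D / (N-1) with the smallest eigenvalues.
import numpy as np

def fit_lscf(D, d=32):
    # D: (N, m) matrix of flattened benign training images.
    C = D.T @ D / (len(D) - 1)
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: ascending eigenvalues
    return eigvecs[:, :d]                 # d least significant directions

def lscf(x, V):
    return x @ V                          # f_LSCF(x) = x^T v, in R^{1 x d}

V = fit_lscf(np.random.rand(1000, 3072))  # placeholder data
feat = lscf(np.random.rand(3072), V)      # 32-dimensional LSCF
\end{verbatim}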
\begin{figure}[!t] \centering \begin{minipage}[t]{0.32\textwidth} \centering \includegraphics[height=0.9\linewidth,trim=3px 7px 5px 3px]{figures/tightness.pdf} \caption{The gap between the upper bound in \Cref{eq.upperbound_of_derivation} and the deviations of $p_i$ caused by adversarial (FGSM), Gaussian and uniform perturbations on CIFAR10.} \label{fig:tightness} \end{minipage}\hfill \begin{minipage}[t]{0.32\textwidth} \centering \includegraphics[height=0.9\linewidth,trim=2px 5px 5px 2px]{figures/gn_hessian_similarity.pdf} \caption{The matrix moduli of the GGN and the Hessian are strongly correlated, indicating that the GGN is a good approximation of the Hessian.} \label{fig:GGN_Hessian_approximation} \end{minipage}\hfill \begin{minipage}[t]{0.32\textwidth} \centering \includegraphics[height=0.9\linewidth,trim=2px 5px 5px 3px]{figures/gn_hessian_time.pdf} \caption{The computation time of the Hessian and the GGN for different image sizes.} \label{fig:GGN_Hessian_time} \end{minipage} \end{figure} \begin{figure}[!t] \centering \begin{minipage}[t]{0.58\textwidth} \includegraphics[width=\linewidth]{figures/ablation/pca_and_lscf_histogram.pdf} \caption{Distributions of the projections of images onto three principal components (top) and three least significant components (bottom) with PGD10 ($\epsilon$=8/255) on CIFAR10.} \label{fig:pca_feat_perturb} \end{minipage}\hfill \begin{minipage}[t]{0.39\textwidth} \includegraphics[width=\linewidth]{figures/ablation/hessian_histogram.pdf} \caption{The Hessian modulus distributions of benign and PGD10 ($\epsilon$=8/255) CIFAR10 images. $bx\_ry$ denotes block $x$ ReLU $y$ in VGG16.} \label{fig:modulus_distribution} \end{minipage} \end{figure} \subsection{Hessian Feature}\label{sec.hessfeat} When studying model optimization, \cite{jastrzkebski2017three,zhu2018anisotropic} observed that perturbations of model weights can improve generalization, and \cite{wang2020assessing,tsuzuku2020normalized,wang2018identifying} later proved that such improvement happens because the perturbation changes the smoothness of the loss function's landscape, which can be measured by the Hessian matrix of the loss. Motivated by this observation, we hypothesize that the Hessian can be used to characterize the loss landscape and find locations that are exploited by adversarial perturbations. In fact, the problem of crafting adversarial attacks is closely related to model optimization, in the sense that the two are connected by Lagrange duality. More formally, model optimization can be expressed by \Cref{eq.model_optim} \begin{align}\label{eq.model_optim} \underset{\mathbf{W}}{\mathrm{minimize}} \quad & L[Y, f(\mathbf{W}, \mathbf{X})] \qquad \mathrm{s.t.} \quad \mathbf{X} = \mathbf{D} \end{align} while the objective of an adversarial attack can be written as \Cref{eq.adv_atk} \begin{align} \label{eq.adv_atk} \underset{\mathbf{X}}{\mathrm{maximize}} \quad & L[Y, f(\mathbf{W^*}, \mathbf{X})] + \sum_{i} u_i \|\mathbf{X}_i - \mathbf{D}_i\|_p \\ & \mathrm{s.t.} \quad u_i \leq 0, \quad \forall i \in [1, N] \nonumber \end{align} where $L$ is the loss function, $\mathbf{W}$ represents the parameters of the model $f$, $\mathbf{D}$ is the training data, $Y$ is the corresponding target, and $\mathbf{W^*}$ is the optimized (\textit{i.e.}\ trained) model weights that achieve $\underset{\mathbf{W}}{\mathrm{inf}}(L[Y, f(\mathbf{W}, \mathbf{X})])$.
If we regard the Lagrange regularizers as the adversarial perturbation constraints (\textit{i.e.}, $l_\infty$, $l_2$ or $l_0$), \Cref{eq.adv_atk} can be seen as the adversarial attack against the dataset, where $L[Y, f(\mathbf{W^*}, \mathbf{X})]$ corresponds to maximizing the prediction error and $\sum_{i} u_i \|\mathbf{X}_i - \mathbf{D}_i\|_p$ corresponds to limiting the perturbation budget under an $l_p$ constraint.\footnote{We slightly abuse the name of Lagrange regularizer as the norm $\|\cdot\|_p$ is not required in Lagrange duality.} Such correspondence in duality motivates us to measure the statistical deviation of the Hessian matrix to detect adversarial examples. The Hessian we use is the second-order derivative of the loss \textit{w.r.t.}\ the input or the outputs of the intermediate layers, \textit{i.e.} \begin{equation} \mathrm{H} \equiv \frac{\partial^2 L(\mathbf{x})}{\partial \mathbf{x}^2} \quad \text{where} \quad \mathrm{H}[i, j] = \frac{\partial^2 L(\mathbf{x})}{\partial x_i \partial x_j}, \end{equation} $\mathbf{x}$ is the input or the output of an intermediate layer of the model (\textit{e.g.}, the output of a ReLU layer), and $x_i$ and $x_j$ are the $i$th and $j$th entries (\textit{e.g.}, pixels of an image) of $\mathbf{x}$, respectively. The size of the Hessian matrix is proportional to the square of the input dimension, which can cause computational and performance problems for anomaly detection models due to the curse of high dimensionality~\cite{curseofhighdim}. For example, the Hessian \textit{w.r.t.}\ the input of a CIFAR10 image has $3072 \times 3072 = 9,437,184$ entries. This dimensionality is prohibitive for any existing anomaly detector. To handle this challenge, we use the modulus of the Hessian as a compact scalar summary. Specifically, we use the $l_1$ norm $\| \mathrm{H} \|_1 = \sum_{i, j} | \mathrm{H}[i, j] |$ of the matrix, as we empirically found that the results of using the $l_1$ and $l_2$ norms differ little. The modulus operation reduces the Hessian matrix to a single scalar. \Cref{fig:modulus_distribution} shows the Hessian modulus distribution of benign and PGD10 images. The distributions suggest that the modulus of the Hessian can be used to separate benign and adversarial samples. Our final Hessian feature nonetheless includes multiple dimensions, obtained from the moduli of the Hessian matrices of multiple network layers along with that of the input. \subsection{Generalized Gauss-Newton Matrix for Approximating Hessian Matrix} To significantly speed up the calculation, we compute the Generalized Gauss-Newton (GGN) matrix~\cite{schraudolph2002fast,martens2010deep,martens2014new} instead of the Hessian. Let $L$ be the loss, $\mathbf{x}$ be the variable of the loss (\textit{e.g.}\ an image) and $\mathbf{z}$ be the input of the penultimate layer (\textit{e.g.}\ the Softmax layer in DNNs). The GGN can be computed as \begin{equation} \mathrm{G} = (\mathrm{J}^{\mathbf{z}}_{\mathbf{x}})^\top \otimes \mathrm{H}^L_{\mathbf{z}} \otimes \mathrm{J}^{\mathbf{z}}_{\mathbf{x}} \end{equation} where $\otimes$ denotes matrix multiplication, $\mathrm{J}^{\mathbf{z}}_{\mathbf{x}}$ is the Jacobian of the penultimate layer $\mathbf{z}$ \textit{w.r.t.}\ the input $\mathbf{x}$ and $\mathrm{H}^L_{\mathbf{z}}$ is the Hessian of the loss $L$ \textit{w.r.t.}\ the penultimate layer $\mathbf{z}$.
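For concreteness, the following PyTorch sketch (our illustration, not the authors' released code) computes the GGN modulus for a classifier trained with softmax cross-entropy, for which $\mathrm{H}^L_{\mathbf{z}} = \mathrm{diag}(\mathbf{p}) - \mathbf{p}\mathbf{p}^\top$ with $\mathbf{p}$ the softmax probabilities; \texttt{model} and \texttt{x} are assumed to be a PyTorch classifier and a single input tensor:
\begin{verbatim}
# Sketch of the GGN modulus: G = J^T H_z J, with J the Jacobian of the
# logits z w.r.t. the input x, and H_z the Hessian of the loss w.r.t. z.
import torch
from torch.autograd.functional import jacobian

def ggn_modulus(model, x):
    z = model(x.unsqueeze(0)).squeeze(0)       # logits, shape (K,)
    p = torch.softmax(z, dim=0)
    H_z = torch.diag(p) - torch.outer(p, p)    # label-independent, (K, K)
    g = lambda inp: model(inp.unsqueeze(0)).squeeze(0)
    J = jacobian(g, x).reshape(z.numel(), -1)  # (K, m)
    G = J.T @ H_z @ J                          # (m, m) GGN matrix
    return G.abs().sum()                       # l1 modulus ||G||_1
\end{verbatim}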
Please note that the ground-truth label is not required during computation, and any choice of label gives the same result, since the GGN/Hessian modulus only reflects the curvature of the loss landscape. Though the GGN approximates the Hessian well~\cite{schraudolph2002fast,martens2010deep,martens2014new}, it is unclear how well the modulus of the GGN approximates the modulus of the Hessian. We empirically show the approximation accuracy by randomly picking 1,000 samples from CIFAR10 and computing their Hessians and GGNs. \Cref{fig:GGN_Hessian_approximation} shows the matrix moduli of the Hessian and the GGN, while \Cref{fig:GGN_Hessian_time} summarizes the computation time of the Hessian and the GGN for different image sizes. We notice that the GGN is strongly correlated with the Hessian, while being much more computationally efficient to calculate. Therefore, we use the GGN as a substitute for the Hessian~\cite{schraudolph2002fast}. \section{Experimental Evaluation} {{ \subsection{Benchmarks and Baselines}\label{sec.expt_setting} To evaluate the performance of HEAD, we conduct a series of experiments on the CIFAR10~\cite{cifar}, CIFAR100~\cite{cifar}, and SVHN~\cite{svhn} datasets and compare HEAD's performance using several anomaly detection methods. As described in Section~\ref{sec:h3ad}, HEAD-based anomaly detectors are trained on benign images only. We base our experiments on the VGG16~\cite{vgg} model. There is no specific reason for this choice of model beyond convenience and (relatively lower) computational requirements; the methodology itself is model-agnostic. All results are obtained with \textit{Nvidia} 1080Ti GPUs. \noindent \textbf{Baseline features:} To provide a broad set of comparative baseline features, we compare HEAD\ against one naive image feature (PCA), two hand-crafted features (LID~\cite{lid} and Mahalanobis~\cite{mahalanobis}), and one self-trained deep feature (DSVDD~\cite{dsvdd}). We extract 32-dimensional principal components for PCA. Our choice of LID and Mahalanobis is driven by the fact that they do not require supervision to compute features and have lower complexity than other methods such as~\cite{libre}, which requires the modification of the underlying model followed by finetuning. For both LID and Mahalanobis, we follow the original papers but change the target network to VGG16~\cite{vgg} to provide a fair comparison to the HEAD\ features. DSVDD integrates both the feature extractor and the anomaly detector. We train the feature extractor for 100 epochs and tune the anomaly detector for 50 epochs, following \cite{dsvdd}. \noindent \textbf{HEAD\ features:} As explained in \Cref{sec:h3ad}, HEAD\ features consist of two parts. First, we extract a 32-dimensional LSCF\ feature, as described in \Cref{sec.pca}. We then compute the Hessian feature by evaluating the Hessian of the loss \textit{w.r.t.}\ the input and the intermediate features from the ReLU layers to form a 13-dimensional feature, as described in \Cref{sec.hessfeat}. The LSCF\ and Hessian features are concatenated to create a 45-dimensional HEAD\ feature for each image. \noindent \textbf{Anomaly Detectors:} We train both kernel density estimator (KDE) and One-Class SVM (OCSVM) based anomaly detectors on each set of features. For KDE, we evaluate using Gaussian, Epanechnikov, exponential, linear, and uniform kernels. For OCSVM, we evaluate using RBF, Sigmoid, linear and polynomial kernels. We also conduct a grid search for hyperparameters and report the best performance; the corresponding ablation studies can be found in the Appendix.
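As a rough illustration of this setup, the detectors can be instantiated with \textit{scikit-learn} as follows (\texttt{feats\_benign} and \texttt{feats\_test} are hypothetical HEAD\ feature arrays, and scikit-learn names the uniform kernel \texttt{tophat}):
\begin{verbatim}
# Sketch of the detector variants: KDE and OCSVM fit on benign HEAD
# features only, scored on test features (higher = more anomalous).
import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.svm import OneClassSVM

feats_benign = np.random.rand(1000, 45)   # 32-dim LSCF + 13-dim HF
feats_test = np.random.rand(100, 45)

for k in ["gaussian", "epanechnikov", "exponential", "linear", "tophat"]:
    kde = KernelDensity(kernel=k, bandwidth=5.0).fit(feats_benign)
    kde_scores = -kde.score_samples(feats_test)

for k in ["rbf", "sigmoid", "linear", "poly"]:
    ocsvm = OneClassSVM(kernel=k, nu=0.5).fit(feats_benign)
    ocsvm_scores = -ocsvm.decision_function(feats_test)
\end{verbatim}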
\noindent \textbf{Adversarial Attacks:} Each anomaly detector is evaluated across eight standard attacks. For $l_\infty$ attacks with maximum perturbation 8/255, we use (1) PGD10~\cite{pgd}, (2) FGSM~\cite{fgsm} and (3) BIM~\cite{bim}. For $l_2$ attacks with a total perturbation budget of 1, we use (4) DeepFool~\cite{deepfool} and (5) CW~\cite{cw}. For $l_0$ attacks, we use (6) OnePixel~\cite{onepixel} and (7) SparseFool~\cite{sparsefool} with the hyperparameter \texttt{lam} set to 3. For combined attacks, we use (8) AutoAttack~\cite{autoattack} with a perturbation budget of 0.3 under $l_\infty$. We use the \textit{torchattacks}~\cite{kim2020torchattacks} implementation for all attacks. \subsection{Evaluation Results}\label{sec.baseline} Each anomaly detector is evaluated using the area under the receiver operating characteristic curve (ROC AUC) on all adversarial attacks. We report both the performance on each attack variant and the overall performance across all eight attacks. The results are summarized in \Cref{tab:compare_to_baseline}. Note that, unlike supervised training, where the detector is trained and evaluated on a specific attack, here the same anomaly detector is used to detect \textbf{any} of the eight attacks. We observe that, with very few exceptions, across all anomaly detection variants, HEAD-based anomaly detectors demonstrate the best performance. In general, features that capture holistic image properties, such as PCA and DSVDD, do not perform well. The subtle and localized adversarial perturbations are likely overwhelmed by these global image features. HEAD\ features, in particular, perform well against both $l_\infty$ attacks and AutoAttack. We find that AutoAttack is easy to detect for all but the PCA-based anomaly detectors. We speculate that the reason for this behavior is that ensemble attacks leave more traces of tampering and are therefore easier to detect. HEAD\ features appear to be particularly robust to $l_\infty$ attacks vis-\`a-vis the other approaches. Even on $l_2$ and $l_0$ attacks, HEAD\ features perform better than most of the compared features. Across all attacks, HEAD\ features achieve almost 95\% AUC on CIFAR10 and SVHN, and almost 90\% AUC on CIFAR100.
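For reference, one iteration of this evaluation can be sketched as follows (a sketch only: \texttt{model}, \texttt{images}, \texttt{labels} and the scoring function \texttt{detector\_score} are assumed to be defined elsewhere, and adversarial samples are labeled 1):
\begin{verbatim}
# Sketch: craft one attack with torchattacks, then measure how well the
# (already fitted) detector separates benign from adversarial samples.
import numpy as np
import torchattacks
from sklearn.metrics import roc_auc_score

atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)  # PGD10
adv_images = atk(images, labels)

s_benign = detector_score(images)      # hypothetical scoring function
s_adv = detector_score(adv_images)
y_true = np.concatenate([np.zeros(len(s_benign)), np.ones(len(s_adv))])
auc = roc_auc_score(y_true, np.concatenate([s_benign, s_adv]))
\end{verbatim}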
}} \begin{table}[!t] \centering \scriptsize \setlength{\tabcolsep}{2.5pt} \begin{tabular}{llccccccccc} \toprule & & \multicolumn{3}{|c|}{$l_\infty$ Attacks} & \multicolumn{2}{c|}{$l_2$ Attacks} & \multicolumn{2}{c|}{$l_0$ Attacks} & \multicolumn{1}{c|}{Combined} & \multirow{2}{*}{Overall} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-10} Dataset & Method & PGD10 & FGSM & BIM & DeepFool & CW & SparseFool & OnePixel & AutoAttack & \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \multirow{9}{*}{CIFAR10} & PCA+OCSVM & 0.497 & 0.498 & 0.497 & 0.500 & 0.500 & 0.109 & 0.497 & 0.293 & 0.424 \\ & PCA+KDE & 0.501 & 0.498 & 0.500 & 0.501 & 0.500 & 0.502 & 0.500 & 0.501 & 0.500 \\ & DSVDD~\cite{dsvdd} & 0.569 & 0.614 & 0.566 & 0.505 & 0.507 & 0.901 & 0.505 & 0.958 & 0.641 \\ & LID+OCSVM & 0.551 & 0.596 & 0.575 & 0.583 & 0.585 & 0.914 & 0.559 & 0.968 & 0.666\\ & LID~\cite{lid}+KDE & 0.610 & 0.702 & 0.639 & 0.654 & 0.656 & 0.924 & 0.615 & 0.971 & 0.721 \\ & Mah.+OCSVM & 0.880 & 0.787 & 0.898 & \textbf{0.852} & 0.837 & 0.963 & 0.668 & \textbf{0.989} & 0.859 \\ & Mah.~\cite{mahalanobis}+KDE & 0.896 & 0.887 & 0.893 & 0.603 & 0.587 & 0.899 & 0.578 & 0.966 & 0.789 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \underline{0.999} & \textbf{0.999} & \underline{0.999} & 0.841 & \underline{0.941} & \underline{0.985} & \underline{0.821} & 0.988 & \underline{0.947} \\ & HEAD+KDE (Ours) & \textbf{1.000} & \textbf{0.999} & \textbf{1.000} & \underline{0.846} & \textbf{0.943} & \textbf{0.986} & \textbf{0.825} & \textbf{0.989} & \textbf{0.949} \\ \cmidrule(lr){1-11} \multirow{9}{*}{CIFAR100} & PCA+OCSVM & 0.497 & 0.497 & 0.497 & 0.500 & 0.500 & 0.221 & 0.497 & 0.353 & 0.445 \\ & PCA+KDE & 0.498 & 0.501 & 0.499 & 0.500 & 0.500 & 0.501 & 0.500 & 0.497 & 0.500\\ & DSVDD & 0.568 & 0.629 & 0.564 & 0.502 & 0.504 & 0.777 & 0.501 & 0.852 & 0.612 \\ & LID+OCSVM & 0.570 & 0.579 & 0.581 & 0.504 & 0.501 & 0.758 & 0.520 & 0.845 & 0.607\\ & LID+KDE & 0.642 & 0.655 & 0.654 & 0.511 & 0.515 & 0.768 & 0.549 & 0.849 & 0.643 \\ & Mah.+OCSVM & 0.708 & 0.719 & 0.709 & \textbf{0.816} & 0.811 & 0.772 & \textbf{0.883} & \textbf{0.916} & 0.792\\ & Mah.+KDE & 0.845 & 0.926 & 0.848 & 0.535 & 0.541 & 0.760 & 0.530 & 0.798 & 0.723 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \textbf{0.999} & \underline{0.999} & \textbf{0.998} & 0.728 & \underline{0.814} & \underline{0.898} & 0.819 & 0.906 & \underline{0.895} \\ & HEAD+KDE (Ours) & \textbf{0.999} & \textbf{1.000} & \textbf{0.998} & \underline{0.733} & \textbf{0.816} & \textbf{0.901} & \underline{0.820} & \underline{0.908} & \textbf{0.897} \\ \cmidrule(lr){1-11} \multirow{9}{*}{SVHN} & PCA+OCSVM & 0.499 & 0.501 & 0.499 & 0.500 & 0.500 & 0.242 & 0.497 & 0.342 & 0.448 \\ & PCA+KDE & 0.500 & 0.499 & 0.499 & 0.499 & 0.499 & 0.501 & 0.500 & 0.495 & 0.499 \\ & DSVDD & 0.717 & 0.812 & 0.714 & 0.524 & 0.527 & 0.911 & 0.521 & 0.981 & 0.713 \\ & LID+OCSVM & 0.680 & 0.640 & 0.693 & 0.654 & 0.680 & 0.927 & 0.525 & 0.984 & 0.723\\ & LID+KDE & 0.761 & 0.747 & 0.772 & 0.726 & 0.749 & 0.938 & 0.560 & 0.986 & 0.780 \\ & Mah.+OCSVM & 0.747 & 0.699 & 0.766 & \textbf{0.917} & 0.941 & 0.966 & 0.663 & \textbf{0.994} & 0.837 \\ & Mah.+KDE & 0.833 & 0.748 & 0.848 & 0.904 & 0.914 & 0.909 & 0.638 & 0.971 & 0.846 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \textbf{1.000} & 
\textbf{1.000} & \textbf{1.000} & 0.868 & \underline{0.975} & \underline{0.992} & \textbf{0.954} & 0.993 & \underline{0.934} \\ & HEAD+KDE (Ours) & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & \textbf{0.917} & \textbf{0.979} & \textbf{0.994} & \underline{0.946} & \textbf{0.994} & \textbf{0.946}\\ \bottomrule \end{tabular} \caption{The ROC AUC performance on detecting eight adversarial attacks. The best performance is reported in \textbf{bold} and the second best with \underline{underline}.} \label{tab:compare_to_baseline} \end{table} \subsection{Cross-model Adversarial Detection}\label{sec.cross_model} Adversarial examples generated by one model are known to be transferable in that they can deceive a trained model with a different architecture~\cite{transferattack}. In such cases, the defender attempts to detect adversarial images generated by an unknown model. To evaluate this scenario, we generate adversarial images with a ResNet18~\cite{resnet} model, and the defender's task is to protect a VGG16~\cite{vgg} model. For cross-model adversarial detection with LID~\cite{lid} and Mahalanobis~\cite{mahalanobis}, we find that the baseline anomaly detectors perform quite poorly. To provide a stronger comparison, we instead compare against the LID and Mahalanobis supervised models. (Note that the supervised models are trained on adversarial images of VGG16 but evaluated on adversarial images of ResNet18.) The supervised model is a binary classifier consisting of four fully connected layers with output dimensions of 64, 32, 8, and 1. ReLU layers and batch normalization layers~\cite{batchnorm} are attached after the first three fully connected layers, and a Sigmoid layer after the last one. We optimize this model with SGD~\cite{sgd}, with a learning rate of 0.001, for 100 epochs using the binary cross-entropy loss. For HEAD, however, we use the same anomaly detection models as in \Cref{sec.expt_setting}. As shown in \Cref{tab:cross_model}, across all datasets and attacks, HEAD-based anomaly detectors significantly outperform the supervised LID and Mahalanobis feature-based models. Only on the $l_{2}$ DeepFool attack does the Mahalanobis-based supervised model slightly outperform the HEAD-based anomaly detector. To reiterate, a HEAD-based anomaly detector, trained on benign images only, outperforms supervised models, trained on LID or Mahalanobis features, on the cross-model adversarial detection task. \begin{table}[!t] \centering \scriptsize \setlength{\tabcolsep}{2.5pt} \begin{tabular}{llccccccccc} \toprule & & \multicolumn{3}{|c|}{$l_\infty$ Attacks} & \multicolumn{2}{c|}{$l_2$ Attacks} & \multicolumn{2}{c|}{$l_0$ Attacks} & \multicolumn{1}{c|}{Combined} & \multirow{2}{*}{Overall} \\ \cmidrule(lr){3-5} \cmidrule(lr){6-7} \cmidrule(lr){8-9} \cmidrule(lr){10-10} Dataset & Method & PGD10 & FGSM & BIM & DeepFool & CW & SparseFool & OnePixel & AutoAttack & \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} \cmidrule(lr){6-6} \cmidrule(lr){7-7} \cmidrule(lr){8-8} \cmidrule(lr){9-9} \cmidrule(lr){10-10} \cmidrule(lr){11-11} \multirow{4}{*}{CIFAR10} & LID (Binary Classification) & 0.594 & 0.901 & 0.588 & \underline{0.617} & 0.652 & 0.830 & 0.677 & 0.876 & 0.717 \\ & Mah.
(Binary Classification) & 0.702 & 0.991 & 0.704 & \textbf{0.628} & 0.658 & 0.838 & 0.674 & 0.740 & 0.742 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \textbf{1.000} & \textbf{1.000} & \textbf{0.999} & 0.589 & \underline{0.880} & \underline{0.969} & \textbf{0.882} & \textbf{0.988} & \underline{0.913} \\ & HEAD+KDE (Ours) & \textbf{1.000} & \textbf{1.000} & \textbf{0.999} & 0.590 & \textbf{0.883} & \textbf{0.970} & \underline{0.881} & \textbf{0.988} & \textbf{0.914} \\ \cmidrule{1-11} \multirow{4}{*}{CIFAR100} & LID (Binary Classification) & 0.650 & 0.831 & 0.636 & 0.499 & 0.504 & 0.810 & 0.596 & 0.842 & 0.671 \\ & Mah. (Binary Classification) & 0.737 & 0.985 & 0.713 & \textbf{0.531} & 0.556 & 0.839 & 0.662 & 0.752 & 0.722 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \textbf{0.999} & \textbf{0.999} & \textbf{0.998} & 0.527 & \underline{0.762} & \underline{0.901} & \underline{0.814} & \underline{0.917} & \underline{0.861} \\ & HEAD+KDE (Ours) & \textbf{0.999} & \textbf{0.999} & \textbf{0.998} & \underline{0.530} & \textbf{0.765} & \textbf{0.905} & \textbf{0.815} & \textbf{0.919} & \textbf{0.866} \\ \cmidrule{1-11} \multirow{4}{*}{SVHN} & LID (Binary Classification) & 0.776 & 0.777 & 0.788 & 0.593 & 0.609 & 0.931 & 0.583 & 0.890 & 0.743 \\ & Mah. (Binary Classification) & 0.797 & 0.847 & 0.808 & \textbf{0.647} & 0.659 & 0.942 & 0.651 & 0.801 & 0.769 \\ \cmidrule(lr){2-2} \cmidrule(lr){3-11} & HEAD+OCSVM (Ours) & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & \underline{0.605} & \underline{0.898} & \textbf{0.993} & \underline{0.928} & \underline{0.994} & \underline{0.927} \\ & HEAD+KDE (Ours) & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} & 0.602 & \textbf{0.901} & \underline{0.992} & \textbf{0.930} & \textbf{0.995} & \textbf{0.928} \\ \bottomrule \end{tabular} \caption{The ROC AUC performance on detecting cross-model adversarial attacks. The best performance is reported in \textbf{bold} and the second best with \underline{underline}.} \label{tab:cross_model} \end{table} \subsection{Sensitivity and Ablation Studies}\label{sec.ablation} To further understand the properties of the HEAD\ features, we conduct experiments on CIFAR10 to evaluate (i) the effectiveness of LSCF\ and HF, (ii) the performance gap between anomaly detection and binary classification, (iii) the method's sensitivity to the anomaly detectors, and (iv) the method's robustness when distinguishing noisy benign images from adversarial images. The results of (iii) and (iv) are provided in the Appendix due to the page limit. \noindent \textbf{Effectiveness of Least Significant Component Feature\ And Hessian Feature\ components of HEAD: } To compare the effectiveness of Least Significant Component Feature\ and Hessian Feature, we ablate on the number of feature components. Specifically, for LSCF, we use 0-, 4-, 16-, 32- and 64-dimensional feature variants. For HF, we use 0-, 1- (only input), 5- (from input to $b2\_r2$), 9- (from input to $b4\_r1$) and 13-dimensional (from input to $b5\_r3$) features. When one feature size (LSCF\ or HF) is changed, we use the best number of feature components for the other feature. Results are detailed in \Cref{tab:ablation_dim}. Both features show improved performance as the number of feature components increases. We observe that LSCF\ and HF\ are complementary, in that the largest performance gains are obtained when LSCF\ and HF\ are concatenated. For LSCF, performance plateaus at 32 dimensions and does not increase when the number of dimensions is doubled.
Based on this ablation study, we choose the 13-dimensional HF\ and the 32-dimensional LSCF\ in the remaining experiments of the paper. \begin{table}[!htbp] \centering \scriptsize \begin{minipage}[t]{0.48\textwidth} \centering \begin{tabular}{r|cc} \toprule \textbf{HF\ Dimension} & \textbf{ROC AUC} & \textbf{Improve} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} 0 & 0.885 & - \\ 1 & 0.936 & +0.051 \\ 5 & 0.946 & +0.010 \\ 9 & 0.948 & +0.002 \\ 13 & 0.949 & +0.001 \\ \bottomrule \end{tabular} \end{minipage}\hfill \centering \begin{minipage}[t]{0.48\textwidth} \centering \begin{tabular}{r|cc} \toprule \textbf{LSCF\ Dimension} & \textbf{ROC AUC} & \textbf{Improve} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} 0 & 0.860 & - \\ 4 & 0.923 & +0.063 \\ 16 & 0.939 & +0.016 \\ 32 & 0.949 & +0.010 \\ 64 & 0.949 & +0.000 \\ \bottomrule \end{tabular} \end{minipage} \caption{The effectiveness of Hessian Feature\ (left) and Least Significant Component Feature\ (right) with different dimensions. The performance is shown in ROC AUC over all attacks. Dimension=0 implies the feature is not used. The right column shows the incremental performance improvement over the prior row.} \label{tab:ablation_dim} \end{table} \noindent \textbf{Binary benign/attack classification vs. anomaly detection:} Anomaly detection, in general, does not require examples of the outliers, \textit{i.e.}\ the adversarial images in this study. An interesting question is what performance improvement, if any, could be gained by incorporating knowledge of the adversarial examples. To answer this question, we use a binary benign/attack classifier to provide an upper bound on the performance, where we train neural networks with benign and adversarial images as inputs and the image class (benign or adversarial) as the output. The binary classifier has the same architecture as the previously described LID and Mahalanobis models in \Cref{sec.cross_model}. \Cref{fig:compare_to_supervision} compares supervised training with anomaly detection using three different input features: LID, Mahalanobis, and HEAD. Across all input features, we find that supervised training provides better performance than the corresponding anomaly detector. However, anomaly detection using HEAD\ features \emph{of only benign samples} shows a much smaller performance gap to supervised training, with quite comparable performance. This reinforces the suitability of HEAD\ features for adversarial image detection, since knowledge of potential attacks and the ability to generate samples from these attacks cannot be assumed in practice. \begin{figure}[!t] \centering \includegraphics[width=\linewidth]{figures/ablation/supervised_upperbound.pdf} \caption{The performance of adversarial detection using anomaly detection and binary classification on CIFAR10. The results of anomaly detection and binary classification are shown in solid color bars and shadowed bars, respectively. The overall performance for binary classification is the average performance over the eight attacks.} \label{fig:compare_to_supervision} \end{figure} \section{Conclusion}\label{sec:conclusion} We frame adversarial detection as an anomaly detection problem to better reflect the challenge of detecting adversarial examples in real life.
We propose Hessian and Eigen-decomposition-based Adversarial Detection, which measures the statistical deviation caused by adversarial perturbation on two complementary features: LSCF, which captures the deviation of adversarial images from the benign data, and HF, which reflects the deformation of the model's loss landscape at adversarially perturbed images. We provide the theoretical rationale behind using LSCF\ and HF. We propose using the Generalized Gauss-Newton matrix as an efficient and faithful approximation to the Hessian matrix in HF. Empirical results demonstrate the effectiveness of HEAD\ and show that anomaly detection can achieve performance comparable to binary-classification-based adversarial detection. Our method does not use any outlier examples when training the anomaly detector, which could be a limitation in cases where outlier examples are easy to obtain. We defer the study of this case to future research. \section{Additional Ablation Studies} \noindent \textbf{Sensitivity to anomaly detector parameters:} KDE requires a choice of kernel and bandwidth, and OCSVM requires a choice of kernel and $\nu$ value. We evaluate KDE using Gaussian, Epanechnikov, exponential, linear, and uniform kernels with bandwidth values from 1 to 25. \Cref{fig:kde_ablation} shows the overall AUCs for these parameter values. The results indicate that the choice of kernel is not critical, since all kernels achieve similar performance with an appropriate bandwidth choice. For OCSVM, we evaluate using RBF, Sigmoid, linear and polynomial kernels with $\nu$ values from 0.1 to 0.9. Results are shown in \Cref{fig:ocsvm_ablation}. Unlike KDE, OCSVM is sensitive to the choice of kernel, with the RBF kernel significantly outperforming all other kernels. That said, with an appropriate choice of hyperparameters, HEAD-based detector performance is insensitive to the choice of anomaly detector. \begin{figure}[!htbp] \centering \begin{minipage}[t]{0.44\textwidth} \centering \includegraphics[width=\linewidth]{figures/ablation/KDE_ablation.pdf} \caption{Ablation study of using different KDE kernels and kernel bandwidths.} \label{fig:kde_ablation} \end{minipage}\hfill \begin{minipage}[t]{0.44\textwidth} \centering \includegraphics[width=\linewidth]{figures/ablation/OCSVM_ablation.pdf} \caption{Ablation study of using different OCSVM kernels and $\nu$ values.
} \label{fig:ocsvm_ablation} \end{minipage} \end{figure} \begin{table}[!h] \centering \begin{tabular}{c|cc|cc} \toprule \textbf{Noise Type} & \multicolumn{2}{c|}{\textbf{Gaussian}} & \multicolumn{2}{c}{\textbf{Uniform}} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-5} Noise Level & AUC & Drop & AUC & Drop \\ \cmidrule(lr){1-1} \cmidrule(lr){2-2} \cmidrule(lr){3-3} \cmidrule(lr){4-4} \cmidrule(lr){5-5} 0 & 0.949 & - & 0.949 & - \\ 1/255 & 0.929 & -0.020 & 0.934 & -0.015 \\ 2/255 & 0.910 & -0.019 & 0.920 & -0.014 \\ 4/255 & 0.886 & -0.024 & 0.900 & -0.020 \\ \hspace{-1px}\cellcolor{gray!25}8/255 & \cellcolor{gray!25}0.867 & \cellcolor{gray!25}-0.019 & \cellcolor{gray!25}0.880 & \cellcolor{gray!25}-0.020 \\ 16/255 & 0.834 & -0.033 & 0.856 & -0.024 \\ 32/255 & 0.784 & -0.050 & 0.813 & -0.043 \\ \bottomrule \end{tabular} \caption{Performance of the adversarial anomaly detector on distinguishing noisy benign images from adversarial images.} \label{tab:against_noise} \end{table} \noindent \textbf{Robustness To Harmless Random Noise:} While random noise can be viewed as a perturbation of clean images, it does not generally result in wrong predictions except at high noise levels. A good adversarial anomaly detector should be able to distinguish noisy benign images from adversarial images. To evaluate this behavior, we train anomaly detectors on benign images (without noise) and test on noisy benign images and adversarial images. As additive noise, we use either zero-mean Gaussian noise with standard deviation set to a specified noise level, or zero-mean uniform noise with maximum value equal to a specified noise level. \Cref{tab:against_noise} details the overall performance under six different noise levels using the KDE detector. The gray band in the table represents the noise level equivalent to the perturbation budget used in the adversarial attacks. We observe that when noise levels are low, the performance of the detectors does not drop significantly, and remains higher than 85\% AUC. Even when the noise level is double that of the adversarial perturbation budget (\textit{i.e.}, noise level=16/255), the performance is still above 80\% AUC. In general, HEAD-based anomaly detectors appear to be robust to random noise no larger than the perturbation budget, while experiencing a larger performance drop under strong noise (\textit{e.g.}, noise level=32/255). \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{} \item Did you describe the limitations of your work? \answerYes{} - limitations are discussed in Section~\ref{sec:conclusion} \item Did you discuss any potential negative societal impacts of your work? \answerNA{} \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{} \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{} \item Did you include complete proofs of all theoretical results? \answerYes{} \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{} \item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{} \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerNo{} \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{} \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{} \item Did you mention the license of the assets? \answerNA{} - the datasets used are available in the public domain \item Did you include any new assets either in the supplemental material or as a URL? \answerYes{} \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{} - the datasets used are available in the public domain \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{} \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{} \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{} \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{} \end{enumerate} \end{enumerate}
\section{Introduction} Let $\mathbb{N}$, $\mathbb{Z}$ and $\mathbb{Q}$ be the sets of non-negative integers, integers and rational numbers respectively. Given a positive integer $n$ and an $n\times n$ matrix $C=[a_{ij}]$ with $a_{ij}\in\mathbb{N}$, let $\Gamma$ be the \textit{quiver} defined by $C$, i.e., $\Gamma$ is the directed graph with $n$ vertices $\{1, 2, \cdots, n\}$ equipped with $a_{ij}$ arrows from $i$ to $j$ ($1\le i,j \le n$). We attach an indeterminate $X_i$ to vertex $i$ ($1\le i\le n$) and $X_{ij}^{(k)}$ to the $k$-th arrow from $i$ to $j$ ($1\le i,j \le n$ and $1\le k \le a_{ij}$). Let $\Omega$ be the set of all arrows in $\Gamma$; thus $\Omega$ can be identified with the set $\{ X_{ij}^{(k)} | \,1\le i,j\le n$, $1\le k \le a_{ij} \text{ and } a_{ij} > 0\}$. The following is an example of a quiver and its defining matrix: \[ \begin{tikzcd}[row sep=large, column sep = large] \arrow[loop left, distance=3em, "{X_{11}^{(1)}}"] \underset{1}{\circ}\arrow[r, bend left, "{X_{12}^{(1)}}"] & \arrow[l, bend left, "{X_{21}^{(1)}}" ] \underset{2}{\circ} \arrow[r, bend left, "{X_{23}^{(1)}}"] \arrow[r, bend right, swap, "{X_{23}^{(2)}}"] & \underset{3}{\circ} \end{tikzcd}, \quad \left[ \begin{array}{ccc} 1 & 1 & 0 \\ 1 & 0 & 2 \\ 0 & 0 & 0 \end{array} \right]. \] Let $\mathbb{F}$ be a field. A monomial in the following form is called a \textit{simple relation} for $\Gamma$: $$X_{i_0i_1}^{(k_1)}X_{i_1i_2}^{(k_2)}X_{i_2i_3}^{(k_3)}\cdots X_{i_{s-1} i_{s}}^{(k_s)},$$ where $i_0$ is called the \textit{starting point} of the relation and $i_{s}$ is called the \textit{ending point}. It is necessary that $a_{i_0i_1}a_{i_1i_2}a_{i_2i_3}\cdots a_{i_{s-1} i_{s}} \ne 0$ and $1\le k_m \le a_{i_{m-1}i_m}$ ($1\le m \le s$). It is evident that every simple relation is induced by a path in $\Gamma$. If $i_0 = i_s$ then the relation is called \textit{cyclic}. So simple cyclic relations are induced by oriented cycles in $\Gamma$. A \textit{relation} for $\Gamma$ is a linear combination over $\mathbb{F}$ of simple relations that share the same starting point and the same ending point. A relation is called \textit{cyclic} if all of its summands are cyclic. For example, the following is a cyclic relation for the quiver mentioned above: $$X_{11}^{(1)}X_{12}^{(1)}X_{21}^{(1)} - X_{12}^{(1)}X_{21}^{(1)}X_{11}^{(1)}.$$ In this paper, we are only interested in cyclic relations. Given $\alpha = (\alpha_1, \cdots, \alpha_n)\in\mathbb{N}^n$, a \textit{representation of quiver} $\Gamma$ of dimension $\alpha$ over a field $\mathbb{F}$ is a function $\sigma: \Omega \to \{ \textit{Matrices over } \mathbb{F} \}$ such that $\sigma(X_{ij}^{(k)})$ has order $\alpha_i \times\alpha_j$ for $1\le i , j\le n$ and $1\le k \le a_{ij}$. Note that matrices with $0$ rows or $0$ columns are permitted. $\alpha$ is called the \textit{dimension vector} of $\sigma$ and denoted by $\dim \sigma$. The following diagram defines a representation of the quiver mentioned above with dimension vector $(1,2,2)$: \[ \begin{tikzcd}[row sep=large, column sep = large, ampersand replacement=\&] \arrow[loop left, distance=3em, "{\begin{bmatrix}-1 \end{bmatrix}}"] \underset{1}{\circ}\arrow[r, bend left, "{\begin{bmatrix}1 & \!\! -1\end{bmatrix}}"] \& \arrow[l, bend left, "{\begin{bmatrix}1 \\ 1\end{bmatrix}}" ] \underset{2}{\circ} \arrow[r, bend left, "{\begin{bmatrix}1 & 1 \\0 & 1\end{bmatrix}}"] \arrow[r, bend right, swap, "{\begin{bmatrix}1 & 0 \\0 & 1\end{bmatrix}}"] \& \underset{3}{\circ} \end{tikzcd}.
\] Given two representations $\sigma$ and $\tau$ with dimension vectors $\alpha$ and $\beta$ respectively, an $n$-tuple $(H_1, \cdots, H_n)$ of matrices over $\mathbb{F}$ is called a \textit{homomorphism} from $\sigma$ to $\tau$ if $H_i$ has order $\alpha_i \times \beta_i$ for $1\le i \le n$ and $\sigma(X_{ij}^{(k)}) H_j=H_i\tau(X_{ij}^{(k)})$ for all $1\le i , j\le n$ and $1\le k \le a_{ij}$. If $\dim \sigma = \dim \tau$ and $H_1, \cdots, H_n$ are all nonsingular, then $(H_1, \cdots, H_n)$ is called an \textit{isomorphism}. The \textit{direct sum} of $\sigma$ and $\tau$, denoted by $\sigma\oplus\tau$, is defined by $(\sigma\oplus\tau)(X_{ij}^{(k)}) = \sigma(X_{ij}^{(k)})\oplus\tau(X_{ij}^{(k)})$ ($1\le i , j\le n$ and $1\le k \le a_{ij}$). A representation is called \textit{decomposable} if it is isomorphic to a direct sum of two representations with non-zero dimension vectors. An indecomposable representation of $\Gamma$ over $\mathbb{F}$ is called \textit{absolutely indecomposable} if it is still indecomposable when it is considered as a representation of $\Gamma$ over $\overline{\mathbb{F}}$, the algebraic closure of $\mathbb{F}$. Let $\text{Mat}(m\times n, \mathbb{F})$ be the set of all $m\times n$ matrices over $\mathbb{F}$ for $m, n\in\mathbb{N}$. Given $\alpha\in\mathbb{N}^n$, let $\text{Rep}(\alpha, \mathbb{F})$ be the set of all representations of $\Gamma$ of dimension $\alpha$ over $\mathbb{F}$. Thus $\text{Rep}(\alpha, \mathbb{F})$ is naturally identified with the following affine variety: $$\bigoplus_{X_{ij}^{(k)} \in \Omega}\text{Mat}(\alpha_i\!\times\!\alpha_j, \mathbb{F}).$$ Let $\text{GL}(m,\mathbb{F})$ be the General Linear Group of order $m$ over $\mathbb{F}$. Given $\alpha\in\mathbb{N}^n$, let $\text{GL}(\alpha,\mathbb{F}) = \prod_{i=1}^n\!\text{GL}(\alpha_i,\mathbb{F})$, then the linear algebraic group $\text{GL}(\alpha,\mathbb{F})$ acts on $\text{Rep}(\alpha, \mathbb{F})$ as follows: \begin{align*} \text{GL}(\alpha,\mathbb{F}) \times \text{Rep}(\alpha, \mathbb{F}) & \to \text{Rep}(\alpha, \mathbb{F}) \\ (g, \sigma) &\mapsto g\sigma, \end{align*} where $g\sigma(X_{ij}^{(k)}) = g_i^{-1}\sigma(X_{ij}^{(k)})g_j$ for $g = (g_1,\cdots, g_n) \in \text{GL}(\alpha,\mathbb{F})$. It is obvious that two representations from $\text{Rep}(\alpha,\mathbb{F})$ are isomorphic if and only if they are in the same orbit. Given a representation $\sigma$ of $\Gamma$ and a simple relation $R$ for $\Gamma$, where $$R=X_{i_0i_1}^{(k_1)}X_{i_1i_2}^{(k_2)}X_{i_2i_3}^{(k_3)}\cdots X_{i_{s-1} i_{s}}^{(k_s)},$$ $\sigma$ acts on $R$ naturally, i.e., $$\sigma(R) =\sigma\!\left(X_{i_0i_1}^{(k_1)}\right)\sigma\!\left(X_{i_1i_2}^{(k_2)}\right)\sigma\!\left(X_{i_2i_3}^{(k_3)}\right)\cdots \sigma\!\left(X_{i_{s-1}i_{s}}^{(k_s)}\right).$$ This action is naturally extended to any relation for $\Gamma$. $\sigma$ is said to be \textit{respecting} $R$ if $\sigma(R)=0$; $\sigma$ is said to be \textit{respecting} $R$ \textit{nilpotently} if $\sigma(R)$ is a nilpotent matrix, i.e., $\sigma(R)^m=0$ for some $m\in\mathbb{N}$. From now on, let $\mathbb{F} = \mathbb{F}_q$, the finite field with $q$ elements where $q$ is a prime power, $\mathcal{R}$ a set of cyclic relations for $\Gamma$. 
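To make the action of a representation on relations concrete, the following is a minimal computational sketch in Python/NumPy (the arrow labels and helper names are illustrative, not from the paper). It evaluates the representation of dimension vector $(1,2,2)$ displayed above on the cyclic relation $X_{11}^{(1)}X_{12}^{(1)}X_{21}^{(1)} - X_{12}^{(1)}X_{21}^{(1)}X_{11}^{(1)}$ and tests whether the result is nilpotent:
\begin{verbatim}
import numpy as np

# sigma(X) for each arrow of the example quiver; integer entries stand
# in for elements of a general field F.
sigma = {
    "X11_1": np.array([[-1]]),            # loop at vertex 1
    "X12_1": np.array([[1, -1]]),         # 1 -> 2
    "X21_1": np.array([[1], [1]]),        # 2 -> 1
    "X23_1": np.array([[1, 1], [0, 1]]),  # 2 -> 3
    "X23_2": np.array([[1, 0], [0, 1]]),  # 2 -> 3
}

def evaluate(path):
    # sigma acts on a simple relation by multiplying the arrow
    # matrices in path order (left to right).
    out = sigma[path[0]]
    for arrow in path[1:]:
        out = out @ sigma[arrow]
    return out

def is_nilpotent(m):
    # A square matrix of order k is nilpotent iff its k-th power is 0.
    return not np.any(np.linalg.matrix_power(m, m.shape[0]))

r = evaluate(["X11_1", "X12_1", "X21_1"]) \
    - evaluate(["X12_1", "X21_1", "X11_1"])
print(r, is_nilpotent(r))  # [[0]] True
\end{verbatim}
Here $\sigma(R)$ is the $1\times 1$ zero matrix, so this representation respects the relation, and in particular respects it nilpotently.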
For $\alpha\in\mathbb{N}^n$, let $\text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}$ be the set of all representations of $\Gamma$ of dimension $\alpha$ over $\mathbb{F}_q$ that respect all relations in $\mathcal{R}$ nilpotently, i.e., $$ \text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R} = \{\sigma\in \text{Rep}(\alpha, \mathbb{F}_q) : \sigma(R) \textit{ is nilpotent for all } R\in\mathcal{R}\}. $$ It is evident that $\text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}$ is closed under the action of $\text{GL}(\alpha,\mathbb{F}_q)$. In what follows, we assume that $|\text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}|$ is a polynomial in $q$ with rational coefficients for any $\alpha\in\mathbb{N}^n$, i.e., there exists $r(\alpha,q) \in\mathbb{Q}[q]$ such that $|\text{Rep}(\alpha, \mathbb{F}_{q^d})_\mathcal{R}| = r(\alpha,q^d)$ for $d\ge 1$. Let $M(\alpha,q)$ ($I(\alpha, q)$, $A(\alpha,q)$) be the number of isomorphism classes of representations (indecomposable representations, absolutely indecomposable representations respectively) of $\Gamma$ of dimension $\alpha$ over $\mathbb{F}_q$ which respect all relations in $\mathcal{R}$. Counting formulae for $M(\alpha,q)$, $I(\alpha, q)$ and $A(\alpha,q)$ exist in Hua \cite{JH 2000} for quivers without relations and in Hua \cite{JH 2021} for the quiver with $1$ vertex and $g$ loops with nilpotent relations. It is widely known that $A(\alpha,q)$'s are of significant importance because of their deep connections with Geometric Invariant Theory, Quantum Group Theory and Representation Theory of Kac-Moody Algebras (Kac \cite{VK 1983}, Ringel \cite{CR 1990} and Hausel \cite{TH 2010}). In case $\Gamma$ has no edge-loops, a theorem of Kac \cite{VK 1983} shows that the dimension vectors of absolutely indecomposable representations of $\Gamma$ over $\mathbb{F}_q$ are precisely the positive roots of the root system of the Kac-Moody algebra associated with $\Gamma$. Kac \cite{VK 1983} also conjectured that the constant term of the polynomial counting isomorphism classes of absolutely indecomposables with a given dimension vector is the same as the root multiplicity of the given dimension vector. This conjecture was proved by Crawley-Boevey and Van den Bergh \cite{C-V 2004} for indivisible dimension vectors and by Hausel \cite{TH 2010} in general. This paper proves results analogous to those of Hua \cite{JH 2000}\cite{JH 2021} for quivers with nilpotent relations, under the assumption above. \section{Numbers of Stabilizers} This paper uses the same methodology as Hua \cite{JH 2000}. A key step is to determine the number of representations that are stabilized by elements of $\text{GL}(\alpha, \mathbb{F}_q)$, whose conjugacy classes are parametrized by $n$-tuples of partitions and monic irreducible polynomials over $\mathbb{F}_q$. A \textit{partition} $\lambda = (\lambda_1,\lambda_2,\cdots,\lambda_s)$ is a finite sequence of positive integers such that $\lambda_{i}\ge\lambda_{i+1}$ for $1\le i \le s-1$. The unique partition of $0$ is $(0)$. $|\lambda| :=\sum_{i\ge 1}\lambda_i$ is called the \textit{weight} of $\lambda$. Let $\mathcal{P}$ be the set of partitions of all non-negative integers. 
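Since the counting formulae in the sequel involve sums over tuples of partitions, a small enumerator for $\mathcal{P}$, degree by degree, is handy for experiments. A minimal sketch (a hypothetical helper, not part of the paper):
\begin{verbatim}
def partitions(n, max_part=None):
    # Yield the partitions of n as nonincreasing tuples; the set P is
    # the union of these over all n >= 0.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

assert all(sum(p) == 4 for p in partitions(4))  # each has weight 4
assert len(list(partitions(5))) == 7            # p(5) = 7
\end{verbatim}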
Let $f(x) = a_0 + a_1x + a_2x^2+ \dots + a_{n-1}x^{n-1} + x^n\in \mathbb{F}_q[x]$ be a polynomial over $\mathbb{F}_q$ and $c(f)$ be its \textit{companion matrix}, i.e., $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} c(f) = \left[ \begin{array}{ccccc} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \\ -a_0 & -a_1 & -a_2 & \dots & -a_{n-1} \end{array} \right]. $$ For any $m\in\mathbb{N}\backslash\{0\}$, let $J_m(f)$ be the \textit{Jordan block matrix} of order $m$ with $c(f)$ on the main diagonal, i.e., $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} J_m(f) = \left[ \begin{array}{ccccc} c(f) & I & 0 & \dots & 0 \\ 0 & c(f) & I & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & I \\ 0 & 0 & 0 & \dots & c(f) \end{array} \right]_{m\times m}, $$ where $I$ is the identity matrix of order $\deg(f)$. For $\lambda=(\lambda_1, \lambda_2, \dots, \lambda_s)\in\mathcal{P}$, let $J_{\lambda}(f)$ be the \textit{direct sum} of $J_{\lambda_i}(f)$ $(i=1,\dots,s)$, i.e., $$J_{\lambda}(f) = J_{\lambda_1}(f) \oplus J_{\lambda_2}(f) \oplus \dots \oplus J_{\lambda_s}(f),$$ which stands for $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} \left[ \begin{array}{cccc} J_{\lambda_1}(f) & 0 & \dots & 0 \\ 0 & J_{\lambda_2}(f) & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & J_{\lambda_s}(f) \end{array} \right]. $$ \begin{dfn} For any matrix of order $m\times n$, the \textit{arm length} of index $(i,j)$ is one plus the number of minimal moves from $(i,j)$ to $(1,n)$, where diagonal moves are not permitted. Thus the arm length distribution is as follows: $$ \left[ \begin{array}{llllll} n & n-1 & \dots & 3 & 2 & 1\\ n+1 & n & \dots & 4 & 3 & 2\\ n+2 & n+1 & \dots & 5 & 4 & 3\\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ m+n & m+n-1 & \dots & m+2 & m+1 & m \end{array} \right]_{m\times n}. $$ The \textit{arm rank} of a matrix $M = [a_{ij}]$ of order $m\times n$, denoted by $ar(M)$, is the largest arm length of indexes of non-zero elements of $M$, i.e., $$ar(M) = \max\left\{\textit{arm length of }(i,j) \,|\, a_{ij} \ne 0 \textit{ where } 1\le i\le m, 1\le j\le n\right\}.$$ \end{dfn} \begin{dfn} A matrix $M = [a_{ij}]$ of order $m\times n$ is of type-U if it satisfies the following conditions: \begin{itemize} \item $a_{ij} = a_{st}$ if $(i,j)$ and $(s,t)$ have the same arm length, \item the arm rank of $M$ is at most $\min\{m,n\}$. \end{itemize} \end{dfn} Thus a type-U matrix of order $m\times n$ has either the following form when $m\ge n$: $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} \left[ \begin{array}{lllll} a_1 & a_2 & \dots & a_{n-1} & a_n \\ 0 & a_1 & \dots & a_{n-2} & a_{n-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \dots & a_1 & a_2 \\ 0 & 0 & \dots & 0 & a_1 \\ \cline{1-5} 0 & 0 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \dots & 0 & 0 \end{array} \right]_{m \times n}, $$ or the following form when $m\le n$: $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} \left[ \begin{array}{lllllllll} 0 & \dots & 0 & \temp & a_1 & a_2 & \dots & a_{m-1} & a_m \\ 0 & \dots & 0 & \temp & 0 & a_1 & \dots & a_{m-2} & a_{m-1} \\ \vdots & \vdots & \vdots & \temp & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & \dots & 0 & \temp & 0 & 0 & \dots & a_1 & a_2 \\ 0 & \dots & 0 & \temp & 0 & 0 & \dots & 0 & a_1 \end{array} \right]_{m \times n}. 
$$ \begin{thm}[Turnbull \& Aitken \cite{T-A 1948}]\label{T-A Thm} Let $\lambda=(\lambda_1, \lambda_2, \dots, \lambda_s)$ and $\mu=(\mu_1, \mu_2, \dots, \mu_t)$ be two partitions and $f(x)=x-a_0$ with $a_0\in\mathbb{F}_q$, then any matrix $U$ over $\mathbb{F}_q$ that satisfies $J_\lambda(f)U= UJ_\mu(f)$ can be written as an $s\times t$ block matrix in the following form: $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} \left[ \begin{array}{cccc} U_{11} & U_{12} & \dots & U_{1t} \\ U_{21} & U_{22} & \dots & U_{2t} \\ \vdots & \vdots & \ddots & \vdots \\ U_{s1} & U_{s2} & \dots & U_{st} \end{array} \right], $$ where each submatrix $U_{ij}$ is a type-U matrix over $\mathbb{F}_q$ of order $\lambda_i\times \mu_j$ for all $(i,j)$ where $1\le i \le s$ and $1\le j \le t$. \end{thm} As an example, let $\lambda = (3, 2, 2), \mu=(3,3,2)$ and $f(x) = x-t\in\mathbb{F}_q[x]$, then $$\newcommand*{\temp}{\multicolumn{1}{|}{}} J_\lambda(f)=\left[ \begin{array}{ccccccccc} t & 1 & 0 & \temp & 0 & 0 & \temp & 0 & 0 \\ 0 & t & 1 & \temp & 0 & 0 & \temp & 0 & 0 \\ 0 & 0 & t & \temp & 0 & 0 & \temp & 0 & 0 \\ \cline{1-9} 0 & 0 & 0 & \temp & t & 1 & \temp & 0 & 0 \\ 0 & 0 & 0 & \temp & 0 & t & \temp & 0 & 0 \\ \cline{1-9} 0 & 0 & 0 & \temp & 0 & 0 & \temp & t & 1 \\ 0 & 0 & 0 & \temp & 0 & 0 & \temp & 0 & t \end{array} \right] , J_\mu(f)=\left[ \begin{array}{cccccccccc} t & 1 & 0 & \temp & 0 & 0 & 0 & \temp & 0 & 0 \\ 0 & t & 1 & \temp & 0 & 0 & 0 & \temp & 0 & 0 \\ 0 & 0 & t & \temp & 0 & 0 & 0 & \temp & 0 & 0 \\ \cline{1-10} 0 & 0 & 0 & \temp & t & 1 & 0 & \temp & 0 & 0 \\ 0 & 0 & 0 & \temp & 0 & t & 1 & \temp & 0 & 0 \\ 0 & 0 & 0 & \temp & 0 & 0 & t & \temp & 0 & 0 \\ \cline{1-10} 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & t & 1 \\ 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & 0 & t \end{array} \right]. $$ Every matrix $U$ which satisfies $J_\lambda(f)\,U = UJ_\mu(f)$ can be written as a block matrix in the following form: $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} U=\left[ \begin{array}{cccccccccc} a & b & c & \temp & h & i & j & \temp & m & n \\ 0 & a & b & \temp & 0 & h & i & \temp & 0 & m \\ 0 & 0 & a & \temp & 0 & 0 & h & \temp & 0 & 0 \\ \cline{1-10} 0 & p & q & \temp & 0 & d & e & \temp & k & l \\ 0 & 0 & p & \temp & 0 & 0 & d & \temp & 0 & k \\ \cline{1-10} 0 & u & v & \temp & 0 & r & s & \temp & f & g \\ 0 & 0 & u & \temp & 0 & 0 & r & \temp & 0 & f \end{array} \right]. $$ For a partition $\lambda\in\mathcal{P}$, let $m_\lambda^{\underline{i}}$ be the \textit{multiplicity} of $i$, i.e., $m_\lambda^{\underline{i}}$ is the number of parts equal to $i$ in $\lambda$, and $\lambda$ can be written in its ``\textit{exponential form}" $(1^{m_1}2^{m_2}3^{m_3}\cdots)$, where $m_i = m_\lambda^{\underline{i}}$. Let $\lambda'=(\lambda_1', \lambda_2', \lambda_3', \dots)$ be the \textit{conjugate partition} of $\lambda$, which means that $\lambda_i'$ is the number of parts in $\lambda$ that are greater than or equal to $i$ for all $i\ge 1$. Let $\lambda$ and $\mu$ be two partitions, $\lambda'=(\lambda_1', \lambda_2', \lambda_3', \dots)$ and $\mu'=(\mu_1', \mu_2', \mu_3', \dots)$ be their conjugate partitions, we define two types of ``\textit{inner product}" of $\lambda$ and $\mu$ as follows: \begin{equation}\label{inner prodt dfn} \langle\lambda, \mu\rangle = \sum_{i\ge 1}\lambda_i'\mu_i' \textit{ and } (|\lambda,\mu|) =\langle\lambda,\mu\rangle - \sum_{s\ge 1}m_\lambda^{\underline{s}}m_\mu^{\underline{s}}. 
\end{equation} $\langle\lambda, \mu\rangle$ can also be expressed in the following forms: \begin{equation}\label{inner prodt dfn alt} \langle\lambda, \mu\rangle = \sum_{i,j\ge 1}\min(i,j)m_\lambda^{\underline{i}}m_\mu^{\underline{j}} = \sum_{i,j\ge 1}\min(\lambda_i,\mu_j). \end{equation} For an $n$-tuple of partitions $\pi = (\pi_1, \cdots, \pi_n)\in\mathcal{P}^n$ and $s\in\mathbb{N}\backslash\{0\}$, we define the \textit{multiplicity vector} of order $s$ by $d_\pi^{\,\underline{s}} := (m_{\pi_1}^{\underline{s}}, \cdots, m_{\pi_n}^{\underline{s}})$. Thus, $(|\pi_1|, \cdots, |\pi_n|) = \sum_{s\ge 1}s d_\pi^{\,\underline{s}}$. \begin{cor}\label{cor 1} Let $\lambda, \mu\in\mathcal{P}$ be two partitions and $f(x)=x-a_0$ with $a_0\in\mathbb{F}_q$, then the number of matrices $U$ over $\mathbb{F}_q$ that satisfy $J_\lambda(f)U = UJ_\mu(f)$ is equal to $q^{\langle\lambda,\mu\rangle}$. \end{cor} \begin{proof} This is a direct consequence of Theorem \ref{T-A Thm} and identity (\ref{inner prodt dfn alt}). An alternative proof is given in Hua \cite{JH 2000}. \end{proof} \begin{cor}\label{full stabilizer} Given $\pi=(\pi_1,\cdots,\pi_n)\in\mathcal{P}^n$ and $f(x)=x-a_0$, let $\alpha=(|\pi_1|,\cdots,|\pi_n|)\in\mathbb{N}^n$ and $g=(J_{\pi_1}\!(f),\cdots,J_{\pi_n}\!(f))\in\textup{GL}(\alpha,\mathbb{F}_q)$. The stabilizer of $g$ in $\textup{Rep}(\alpha, \mathbb{F}_q)$ is defined as: $$X_g=\left\{\sigma\in\textup{Rep}(\alpha,\mathbb{F}_q) : g\sigma = \sigma\right\}.$$ There holds: $$ |X_g| = q^{\sum_{1\le i,j\le n} a_{ij}\langle\pi_i,\pi_j\rangle}.$$ \end{cor} \begin{proof} For $\sigma\in\text{Rep}(\alpha,\mathbb{F}_q)$, $\sigma\in X_g$ if and only if $ J_{\pi_i}\!(f)\sigma(X_{ij}^{(k)}) = \sigma(X_{ij}^{(k)}) J_{\pi_j}\!(f)$ for all $X_{ij}^{(k)} \in \Omega$. Thus \begin{align*} |X_g| &= \prod_{X_{ij}^{(k)}\in \Omega} |\left\{ U\in\text{Mat}(|\pi_i|\!\times\!|\pi_j|, \mathbb{F}_q) : J_{\pi_i}\!(f)U = UJ_{\pi_j}\!(f) \right\}|. \end{align*} Corollary \ref{cor 1} implies that $$ |X_g| = \prod_{X_{ij}^{(k)}\in \Omega} q^{\langle\pi_i,\pi_j\rangle} = \prod_{1\le i,j\le n}q^{a_{ij}\langle\pi_i,\pi_j\rangle} = q^{\sum_{1\le i,j\le n} a_{ij}\langle\pi_i,\pi_j\rangle}. $$ \end{proof} \begin{dfn} Let $U=[u_{ij}]$ be a type-U matrix of order $m\times n$; the \textit{core} of $U$, denoted by $U_0$, is defined as follows: \begin{align*} U_0= \begin{cases} 0, & \text{the zero matrix of order $m\times n$, if $m \ne n$}, \\ u_{11}I, & \text{where $I$ is the identity matrix of order $n$, if $m = n$}. \end{cases} \end{align*} Obviously, the core of a type-U matrix is also a type-U matrix. If $U = [U_{ij}]$ is a block matrix of type-U matrices, then the \textit{core} of $U$, denoted by $U_0$, is the block matrix $[(U_{ij})_0]$. 
\end{dfn} The following is an example of a block matrix of type-U matrices and its core: $$ \newcommand*{\temp}{\multicolumn{1}{|}{}} U=\left[ \begin{array}{cccccccccc} a & b & c & \temp & h & i & j & \temp & m & n \\ 0 & a & b & \temp & 0 & h & i & \temp & 0 & m \\ 0 & 0 & a & \temp & 0 & 0 & h & \temp & 0 & 0 \\ \cline{1-10} 0 & p & q & \temp & 0 & d & e & \temp & k & l \\ 0 & 0 & p & \temp & 0 & 0 & d & \temp & 0 & k \\ \cline{1-10} 0 & u & v & \temp & 0 & r & s & \temp & f & g \\ 0 & 0 & u & \temp & 0 & 0 & r & \temp & 0 & f \end{array} \right], U_0=\left[ \begin{array}{cccccccccc} a & 0 & 0 & \temp & h & 0 & 0 & \temp & 0 & 0 \\ 0 & a & 0 & \temp & 0 & h & 0 & \temp & 0 & 0 \\ 0 & 0 & a & \temp & 0 & 0 & h & \temp & 0 & 0 \\ \cline{1-10} 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & k & 0 \\ 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & 0 & k \\ \cline{1-10} 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & f & 0 \\ 0 & 0 & 0 & \temp & 0 & 0 & 0 & \temp & 0 & f \end{array} \right]. $$ \begin{lem}\label{lemma 1} Let $M$ be an $m\times n$ type-U matrix and $N$ an $n\times k$ type-U matrix, then MN is an $m \times k$ type-U matrix and $(MN)_0 = M_0N_0$. \end{lem} \begin{lem}\label{lemma 2} Let $M=[M_{ij}]$ be an $m\times n$ block matrix of type-U matrices and $N=[N_{ij}]$ an $n\times k$ block matrix of type-U matrices such that $M$ and $N$ have compatible multiplication orders, i.e., the number of columns in $M_{is}$ is equal to the number of rows in $N_{sj}$ for $1\le i \le m, 1 \le j \le k$ and $1\le s \le n$. Then MN is an $m \times k$ block matrix of type-U matrices and $(MN)_0 = M_0N_0$. \end{lem} The details of the proofs of the above two lemmas are left to the reader. \begin{lem}\label{lemma 3} Let $\lambda\in\mathcal{P}$, $f(x)=x-a_0\in\mathbb{F}_q[x]$ and $U$ a block matrix of type-U matrices satisfying $J_\lambda(f) U = UJ_\lambda(f)$. Then $U$ is nilpotent if and only if $U_0$ is nilpotent. \end{lem} \begin{proof} It can be shown by induction on the number of distinct parts in $\lambda$ that $U-U_0$ is always nilpotent. Suppose that $U$ is nilpotent, then $U^m=0$ for some $m\in\mathbb{N}$. Thus Lemma \ref{lemma 2} implies that $(U_0)^m = (U^m)_0 = 0$. Thus $U_0$ is nilpotent. Conversely, suppose that $U_0$ is nilpotent, then $(U_0)^m=0$ for some $m\in\mathbb{N}$. Lemma \ref{lemma 2} implies that $(U^m)_0 = (U_0)^m = 0$. Since $U^m - (U^m)_0$ is always nilpotent, $U^m$ is nilpotent, and hence $U$ is nilpotent. \end{proof} \begin{thm}\label{degree 1} Given $\pi=(\pi_1,\cdots,\pi_n)\in\mathcal{P}^n$ and $f(x)=x-a_0\in\mathbb{F}_q[x]$, let $\alpha=(|\pi_1|,\cdots,|\pi_n|)\in\mathbb{N}^n$ and $g=(J_{\pi_1}\!(f),\cdots,J_{\pi_n}\!(f))\in\textup{GL}(\alpha,\mathbb{F}_q)$, and $X_g=\left\{\sigma\in\textup{Rep}(\alpha,\mathbb{F}_q) : g\sigma = \sigma\right\}$ the stabilizer of $g$ in $\textup{Rep}(\alpha, \mathbb{F}_q)$. There holds: $$ |X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}| = q^{\sum_{1\le i,j\le n} a_{ij}(|\pi_i,\pi_j|)} \prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q),$$ where $d_\pi^{\,\underline{s}} = (m_{\pi_1}^{\underline{s}}, \cdots, m_{\pi_n}^{\underline{s}})$ is the multiplicity vector of order $s$ induced by $\pi$ and $r(d_{\pi}^{\,\underline{s}} , q) = |\textup{Rep}(d_{\pi}^{\,\underline{s}}, \mathbb{F}_{q})_\mathcal{R}|$ for $s\ge 1$. \end{thm} \begin{proof} For any $\sigma\in X_g$, Theorem \ref{T-A Thm} implies that $\sigma(X_{ij}^{(k)})$ is a block matrix of type-U matrices for all $X_{ij}^{(k)}\in \Omega$. 
The \textit{core} of $\sigma$, denoted by $\sigma_0$, is the representation of $\Gamma$ defined by: $\sigma_0(X_{ij}^{(k)}) = \sigma(X_{ij}^{(k)})_0$ for all $1\le i , j\le n$ and $1\le k \le a_{ij}$. Obviously, $\sigma_0\in X_g$. Let $R$ be a simple cyclic relation for $\Gamma$ and assume that the starting point (also the ending point) of $R$ is $e$. Then $\sigma(R)$ is a block matrix of type-U matrices and it satisfies $J_{\pi_e}\!(f) \sigma(R) = \sigma(R) J_{\pi_e}\!(f)$. Lemma \ref{lemma 2} implies that $\sigma(R)_0 = \sigma_0(R)$ and Lemma \ref{lemma 3} implies that $\sigma(R)$ is nilpotent if and only if $\sigma_0(R)$ is nilpotent. This equivalence still holds when $R$ is a relation for $\Gamma$. Thus we have $$\sigma \in X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R} \textit{ if and only if } \sigma_0 \in X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}.$$ Let $\pi^{(s)} = (\pi_1^{(s)}, \cdots, \pi_n^{(s)}) := (s^{m_{\pi_1}^{\underline{s}}}, \cdots, s^{m_{\pi_n}^{\underline{s}}}) \in \mathcal{P}^n$ for $s\ge 1$, where each component $s^{m_{\pi_i}^{\underline{s}}}$ ($i=1,\cdots,n$) is a partition in its ``exponential form'', $\alpha^{(s)} = (sm_{\pi_1}^{\underline{s}}, \cdots, sm_{\pi_n}^{\underline{s}}) \in \mathbb{N}^n$ and $g^{(s)} = (J_{\pi_1^{(s)}}\!(f), \cdots, J_{\pi_n^{(s)}}\!(f)) \in \text{GL}(\alpha^{(s)}, \mathbb{F}_q)$. Since every matrix $\sigma_0(X_{ij}^{(k)})$ for $X_{ij}^{(k)}\in \Omega$ is a block matrix and all of its non-square submatrices are $0$, $\sigma_0$ can be written as a direct sum of representations of $\Gamma$ in the following form: $$\sigma_0 \cong \oplus_{s\ge 1}\tau^{(s)}, $$ where $\tau^{(s)} \in \text{Rep}(\alpha^{(s)}, \mathbb{F}_q)$ and $\tau^{(s)} \in X_{g^{(s)}}$ for all $s\ge 1$. There are only finitely many terms in the above sum because $(|\pi_1|, \cdots, |\pi_n|) = \dim \sigma_0 = \sum_{s\ge 1}\dim \tau^{(s)}$. Treating every matrix $\tau^{(s)}(X_{ij}^{(k)})$ for $X_{ij}^{(k)}\in \Omega$ as a linear transformation between vector spaces, and applying a base change in the underlying vector space for vertex $v$ ($1\le v \le n$), which has dimension $sm_{\pi_v}^{\underline{s}}$, by the following mapping: $$ \left[ \arraycolsep=1.4pt \begin{array}{llll} v_1 & v_2 & \cdots & v_m \\ v_{m+1} & v_{m+2} & \cdots & v_{m+m} \\ \vdots & \vdots & \cdots & \vdots \\ v_{(s-1)m+1} & v_{(s-1)m+2} & \cdots & v_{(s-1)m+m} \end{array} \right] \mapsto \left[ \arraycolsep=3pt \begin{array}{llll} v_1 & v_{s+1} & \cdots & v_{(m-1)s+1} \\ v_2 & v_{s+2} & \cdots & v_{(m-1)s+2} \\ \vdots & \vdots & \cdots & \vdots \\ v_{s} & v_{s+s} & \cdots & v_{(m-1)s+s} \end{array} \right], $$ where $m=m_{\pi_v}^{\underline{s}}$, transforms $\tau^{(s)}$ into $s$ identical copies of a single representation: $$\tau^{(s)} \, \cong \, \underbrace{\delta^{(s)} \oplus \cdots \oplus \delta^{(s)}}_\text{$s$ copies},$$ where $\dim \delta^{(s)} = (m_{\pi_1}^{\underline{s}}, \cdots, m_{\pi_n}^{\underline{s}}) = d_{\pi}^{\,\underline{s}}.$ For example, let $\Gamma = \tilde{A}_3$, the quiver with 4 vertices and 4 arrows which form an oriented cycle, $\pi=((2^23^2),(2^24^1),(2^13^1),(1^22^1) )\in\mathcal{P}^4$, then $\pi^{(2)} = (2^2, 2^2, 2^1, 2^1) \in \mathcal{P}^4$, $g^{(2)} = (J_{(2^2)}\!(f), J_{(2^2)}\!(f), J_{(2^1)}\!(f), J_{(2^1)}\!(f))$, every $\tau^{(2)}\in X_{g^{(2)}}$ should have the form of the left diagram below. 
After bases are changed in the underlying vector spaces as described, the representation on the left can be transformed into the representation on the right: \[ \begin{tikzcd}[row sep=huge, column sep = huge, ampersand replacement=\&] \overset{1}{\circ} \arrow[r, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccccc} a & 0 & \temp & b & 0 \\ 0 & a & \temp & 0 & b \\ \cline{1-5} c & 0 & \temp & d & 0\\ 0 & c & \temp & 0 & d \end{array}\right)}"] \& \overset{2}{\circ} \arrow[d, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{cc} e & 0 \\ 0 & e \\ \cline{1-2} f & 0 \\ 0 & f \end{array}\right)}" ] \\ \underset{4}{\circ} \arrow[u, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccccc} h & 0 & \temp & i & 0 \\ 0 & h & \temp & 0 & i \end{array}\right)}" ] \& \underset{3}{\circ} \arrow[l, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{cc} g & 0 \\ 0 & g \end{array}\right)}"] \end{tikzcd} \mapsto \begin{tikzcd}[row sep=huge, column sep = huge, ampersand replacement=\&] \overset{1}{\circ} \arrow[r, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccccc} a & b & \temp & 0 & 0 \\ c & d & \temp & 0 & 0 \\ \cline{1-5} 0 & 0 & \temp & a & b \\ 0 & 0 & \temp & c & d \end{array}\right)}"] \& \overset{2}{\circ} \arrow[d, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccc} e & \temp & 0 \\ f & \temp & 0 \\ \cline{1-3} 0 & \temp & e \\ 0 & \temp & f \end{array}\right)}" ] \\ \underset{4}{\circ} \arrow[u, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccccc} h & i & \temp & 0 & 0 \\ \cline{1-5} 0 & 0 & \temp & h & i \end{array}\right)}" ] \& \underset{3}{\circ} \arrow[l, "{\newcommand*{\temp}{\multicolumn{1}{|}{}} \left(\arraycolsep=2pt \def\arraystretch{0.8} \begin{array}{ccc} g & \temp & 0 \\ \cline{1-3} 0 & \temp & g \end{array}\right)}"] \end{tikzcd}. \] It follows that $\sigma_0$ respects a relation $R$ for $\Gamma$ nilpotently if and only if $\tau^{(s)}$ respects $R$ nilpotently for all $s\ge 1$, if and only if $\delta^{(s)}$ respects $R$ nilpotently for all $s\ge 1$, i.e., $$ \sigma_0 \in X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R} \textit{ if and only if } \delta^{(s)} \in \textup{Rep}(d_{\pi}^{\,\underline{s}}, \mathbb{F}_q)_\mathcal{R} \textit{ for all } s\ge 1. $$ Since $|\textup{Rep}(d_{\pi}^{\,\underline{s}}, \mathbb{F}_q)_\mathcal{R}| = r(d_{\pi}^{\,\underline{s}} , q)$, Corollary \ref{full stabilizer} implies that \begin{align*} |X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}| & = q^{\sum_{1\le i,j\le n}(a_{ij}\langle\pi_i,\pi_j\rangle - a_{ij}\sum_{s\ge 1}m_{\pi_i}^{\underline{s}}m_{\pi_j}^{\underline{s}})} \prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q)\\ & = q^{\sum_{1\le i,j\le n} a_{ij}(|\pi_i,\pi_j|)} \prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q). 
\end{align*} \end{proof} \begin{thm}\label{stabilizers} Given $\pi=(\pi_1,\cdots,\pi_n)\in\mathcal{P}^n$ an $n$-tuple of partitions and $f(x)\in\mathbb{F}_q[x]$ a monic irreducible polynomial of degree $d$, let $\alpha=d(|\pi_1|,\cdots,|\pi_n|)$, $g=(J_{\pi_1}\!(f),\cdots,J_{\pi_n}\!(f))\in\textup{GL}(\alpha,\mathbb{F}_q)$ and $X_g=\left\{\sigma\in\textup{Rep}(\alpha,\mathbb{F}_q) : g\sigma = \sigma\right\}$ the stabilizer of $g$ in $ \textup{Rep}(\alpha, \mathbb{F}_q)$. There holds: $$ |X_g \cap \textup{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}| = q^{d\sum_{1\le i,j\le n} a_{ij}(|\pi_i,\pi_j|)} \prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q^d) .$$ \end{thm} \begin{proof} Suppose that $d>1$, as the case $d=1$ has been proved in Theorem \ref{degree 1}. Let $c(f)$ be the companion matrix for $f$ and $\langle c(f) \rangle$ be the subalgebra of $\textup{Mat}(d\times d, \mathbb{F}_q)$ generated by $c(f)$. Since $f$ is the characteristic polynomial of $c(f)$, $c(f)$ satisfies $f$, i.e., $f(c(f))=0$. Since $f$ is irreducible, $f$ is the minimal polynomial of $c(f)$. This implies that $I, c(f), c(f)^2, \cdots, c(f)^{d-1}$ form a basis for $\langle c(f) \rangle$ over $\mathbb{F}_q$, i.e., $$\langle c(f) \rangle = \left\{\sum_{i=0}^{d-1}a_ic(f)^i \,|\, a_i\in\mathbb{F}_q, 0\le i \le d-1 \right\}.$$ Thus $\langle c(f) \rangle$ is a commutative subalgebra of $\textup{Mat}(d \times d, \mathbb{F}_q)$ and the following map is an isomorphism: \begin{align*} \mathbb{F}_q[x]/(f(x)) &\to \langle c(f) \rangle \\ x &\mapsto c(f). \end{align*} Since $f$ is irreducible, $\mathbb{F}_q[x]/(f(x))$ is isomorphic to the finite field $\mathbb{F}_{q^d}$, and hence $\langle c(f) \rangle$ is a finite field with $q^d$ elements. When $\deg(f) > 1$, Theorem \ref{T-A Thm} still holds as long as all submatrices $U_{ij}$ have their entries in the finite field $\langle c(f) \rangle$. All arguments in the proof of Theorem \ref{degree 1} still work with $\mathbb{F}_q$ being replaced by $\langle c(f) \rangle$. Thus Theorem \ref{degree 1} implies the desired result. \end{proof} \section{Counting Formulae} Let $\varphi_r(q)=(1-q)(1-q^2)\cdots(1-q^r)$ for $r\ge 1$ and $\varphi_0(q)=1$. For $\lambda = (1^{n_1}2^{n_2}3^{n_3}\cdots)\in\mathcal{P}$ in its ``\textit{exponential form}", we define $b_\lambda(q) = \prod_{i\ge1}\varphi_{n_i}(q)$. Let $\phi_n(q)$ be the number of monic irreducible polynomials of degree $n$ in $\mathbb{F}_q[x]$ with $x$ excluded. It is known that for any positive integer $n$, \begin{equation}\phi_n(q) = \frac{1}{n}\sum_{d\,|\,n}\mu(d)(q^{\frac{n}{d}}-1),\end{equation} where the sum runs over all divisors of $n$ and $\mu$ is the Möbius function. \begin{dfn} For $\pi = (\pi_1, \cdots, \pi_n)\in\mathcal{P}^n$, let $X^{|\pi|} = X_1^{|\pi_1|} \cdots X_n^{|\pi_n|}$ and $\mathbb{Q}(q)$ the field of rational functions in $q$ over the rational field $\mathbb{Q}$. We define a formal power series in $\mathbb{Q}(q)[[X_1,\cdots,X_n]]$ as follows: $$ P(X_1,\cdots,X_n, q) = \sum_{\pi\in\mathcal{P}^n} \! \frac{q^{\sum_{1\le i, j \le n}\!a_{ij}(|\pi_i, \pi_j|)}\!\prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q)} {\prod_{1\le i \le n}\!q^{\langle \pi_i, \pi_i\rangle} b_{\pi_i}\!(q^{-1})}X^{|\pi|}. $$ Note that $((0), \cdots, (0))\in\mathcal{P}^n$ gives rise to a term equal to $1$ in the sum above. \end{dfn} \begin{thm} \label{burnside} For $\alpha=(\alpha_1, \cdots, \alpha_n)\in\mathbb{N}^n$, let $X^\alpha = X_1^{\alpha_1}\cdots X_n^{\alpha_n}$. 
There holds: $$ \sum_{\alpha\in\mathbb{N}^n} M(\alpha, q)X^\alpha = \prod_{d=1}^\infty\left( P(X_1^d,\cdots,X_n^d, q^d) \right) ^{\phi_d(q)}. $$ \end{thm} \begin{proof} The method applied in Theorem 4.3 from Hua \cite{JH 2000} still works here. In the current context, the Burnside orbit counting formula is applied to $\text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}$, and the sizes of the fixed-point sets $X_g \cap \text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R}$ are given by Theorem \ref{stabilizers}. Repeating the arguments there yields the desired result. \end{proof} \begin{dfn} For $\alpha\in\mathbb{N}^n\backslash\{0\}$, let $\bar{\alpha} = \gcd(\alpha_1, \cdots, \alpha_n)$. Define rational functions $H(\alpha,q)$ for all $\alpha\in\mathbb{N}^n\backslash\{0\}$ as follows: $$ \log\left(P(X_1,\cdots,X_n, q) \right) = \sum_{\alpha\in\mathbb{N}^n\backslash\{0\}} \!H(\alpha,q)X^\alpha , $$ where $\log$ is the formal logarithm, i.e., $\log(1+x) = \sum_{i\ge 1} (-1)^{i-1} x^i/i$. \end{dfn} \begin{thm}\label{A poly}The following identity holds for all $\alpha\in\mathbb{N}^n\backslash\{0\}$: $$ A(\alpha,q) = (q-1)\sum_{d\,|\,\bar{\alpha}}\frac{\mu(d)}{d}H\Big(\frac{\alpha}{d}, q^d\Big), $$ where the sum runs over all divisors of $\bar{\alpha}$. \end{thm} \begin{proof} This is the counterpart of Theorem 4.6 from Hua \cite{JH 2000} with a slight adjustment to the definition of $H(\alpha,q)$; the same arguments apply. \end{proof} Analogues of Theorem 4.6 of Hua \cite{JH 2000} have been proved by Bozec, Schiffmann \& Vasserot \cite{B-S-V 2018} for Lusztig nilpotent varieties and their variants using techniques from Algebraic Geometry. Their definition of nilpotency is stronger than the one used here. In the language of $\lambda$-rings and Adams operators, Theorem \ref{A poly} is equivalent to the following identities in the ring of formal power series $\mathbb{Q}(q)[[X_1,\cdots,X_n]]$: \begin{align*} \sum_{\alpha\in\mathbb{N}^n\backslash\{0\}}\!A(\alpha,q)X^\alpha =& \,\,(q-1)\text{Log}\left(P(X_1,\cdots,X_n,q)\right), \\ P(X_1,\cdots,X_n,q) =& \,\,\text{Exp}\left(\frac{1}{q-1} \sum_{\alpha\in\mathbb{N}^n\backslash\{0\}}\!A(\alpha,q)X^\alpha\right). \end{align*} For the definitions of the operators $\text{Log}$ and $\text{Exp}$, we refer to the Appendix in Mozgovoy \cite{SM 2007}. Under the assumption that $r(\alpha, q)$ is a polynomial in $q$ with rational coefficients for all $\alpha\in\mathbb{N}^n$, $H(\alpha,q)$'s must be rational functions in $q$, and so are $A(\alpha,q)$'s. As $A(\alpha,q)$'s take integer values for all prime powers $q$, $A(\alpha,q)$'s must be polynomials in $q$ with rational coefficients. It follows from Lemma 2.9 of Bozec, Schiffmann \& Vasserot \cite{B-S-V 2018} that $A(\alpha,q)\in\mathbb{Z}[q]$. Kac \cite{VK 1983} implies that the degree of $A(\alpha,q)$ is at most $1-\langle\alpha,\alpha\rangle$, where $\langle-,-\rangle$ is the Euler form defined by the quiver $\Gamma$. Theorem \ref{A poly} implies that if $r(\alpha,q)$'s are known for all $\alpha\in\mathbb{N}^n$ then $A(\alpha,q)$'s are known. $I(\alpha,q)$ and $M(\alpha,q)$ can be calculated by the following identities: \begin{equation}\label{count I} I(\alpha,q) = \sum_{d\,|\,\bar{\alpha}}\frac{1}{d}\sum_{r\,|\,d}\mu\Big(\frac{d}{r}\Big)A\Big(\frac{\alpha}{d},q^r\Big), \end{equation} \begin{equation}\label{count M} \sum_{\alpha\in\mathbb{N}^n} M(\alpha,q)X^\alpha = \prod_{\alpha\in\mathbb{N}^n\backslash\{0\}}(1-X^\alpha)^{-I(\alpha,q)}. 
\end{equation} Identity (\ref{count I}) is the counterpart of the first identity of Theorem 4.1 from Hua \cite{JH 2000}, and identity (\ref{count M}) is a consequence of the Krull--Schmidt Theorem from representation theory. It follows that $I(\alpha,q)$ and $M(\alpha,q)$ are polynomials in $q$ with rational coefficients for all $\alpha\in\mathbb{N}^n$. \begin{thm}\label{gwki} Let $\Delta^+ = \{\alpha : A(\alpha,q) \ne 0, \alpha\in\mathbb{N}^n\}$ and $$A(\alpha,q) = \sum_{s= 0}^{1-\langle\alpha,\alpha\rangle}t_{\alpha,s}\,q^s,$$ where $t_{\alpha,s}\in \mathbb{Z}$ and $\langle-,-\rangle$ is the Euler form defined by $\Gamma$. The following identity holds in $\mathbb{Q}(q)[[X_1,\cdots,X_n]]$: \begin{align*} P(X_1,\cdots,X_n, q) = \!\prod_{\alpha\in\Delta^+}\!\!\prod_{s= 0}^{1-\langle\alpha,\alpha\rangle}\prod_{i=0}^\infty(1-q^{s+i}X^\alpha)^{t_{\alpha,s}}. \end{align*} \end{thm} \begin{proof} This is the counterpart of Theorem 4.9 from Hua \cite{JH 2000}; the same arguments apply. \end{proof} The Kac conjecture, now a theorem, confirms that Theorem 4.9 of Hua \cite{JH 2000} is a $q$-deformation of the Weyl-Kac denominator identity; thus Theorem \ref{gwki} here may also be regarded as a $q$-deformation of the Weyl-Kac denominator identity for some generalized Kac-Moody algebra. Defining such algebras would be a very interesting problem. In view of Lemma 2.9 of Bozec, Schiffmann \& Vasserot \cite{B-S-V 2018}, assuming $r(\alpha,q)\in\mathbb{Q}[q]$ is equivalent to assuming $r(\alpha,q)\in\mathbb{Z}[q]$. \begin{cjc}\label{cjc} Under the assumption that $r(\alpha, q)$ exists and $r(\alpha,q)\in\mathbb{Z}[q]$ for all $\alpha\in\mathbb{N}^n$, all coefficients of the polynomial $A(\alpha,q)$ are non-negative integers. \end{cjc} \section{Special Cases} \textbf{Case 1.} Let $\mathcal{R}$ be the empty set. Every representation of $\Gamma$ respects $\mathcal{R}$ nilpotently, thus $\text{Rep}(\alpha, \mathbb{F}_q)_\mathcal{R} =\text{Rep}(\alpha, \mathbb{F}_q)$. Since $r(\alpha, q) = q^{\sum_{1\le i,j \le n}a_{ij}\alpha_i\alpha_j}$ for $\alpha\in\mathbb{N}^n$, $r(d_{\pi}^{\,\underline{s}} , q) =q^{ \sum_{1\le i,j \le n} a_{ij}m_{\pi_i}^{\underline{s}}m_{\pi_j}^{\underline{s}}}$ for $\pi\in\mathcal{P}^n$ and $s\in\mathbb{N}\backslash\{0\}$. Thus, \begin{align*} P(X_1,\cdots,X_n, q) &= \sum_{\pi\in\mathcal{P}^n} \! \frac{q^{\sum_{1\le i, j \le n}\!a_{ij}(|\pi_i, \pi_j|)} \prod_{s\ge 1}\!\!\left(q^{\sum_{1\le i, j \le n} a_{ij}m_{\pi_i}^{\underline{s}}m_{\pi_j}^{\underline{s}} }\right) } {\prod_{1\le i \le n}\!q^{\langle \pi_i, \pi_i\rangle} b_{\pi_i}\!(q^{-1})}X^{|\pi|} \\ &= \sum_{\pi\in\mathcal{P}^n} \! \frac{q^{\sum_{1\le i, j \le n}\!\left(a_{ij}(|\pi_i, \pi_j|) +\sum_{s\ge 1}a_{ij}m_{\pi_i}^{\underline{s}}m_{\pi_j}^{\underline{s}}\right)}} {\prod_{1\le i \le n}\!q^{\langle \pi_i, \pi_i\rangle} b_{\pi_i}\!(q^{-1})}X^{|\pi|} \\ &=\sum_{\pi\in\mathcal{P}^n} \! \frac{q^{\sum_{1\le i, j \le n}\!a_{ij}\langle\pi_i, \pi_j\rangle}} {\prod_{1\le i \le n}\!q^{\langle \pi_i, \pi_i\rangle} b_{\pi_i}\!(q^{-1})}X^{|\pi|}. \end{align*} Thus Theorem \ref{burnside} is equivalent to Theorem 4.9 of Hua \cite{JH 2000}. \textbf{Case 2.} Let $\Gamma$ be the quiver with one vertex and $g$ edge-loops, i.e., the quiver defined by the matrix $[g]$ where $g\ge 1$, and $\mathcal{R} = \{X_{11}^{(i)} : 1 \le i \le g\}$. 
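The computation of $r(n,q)$ below rests on the classical count of nilpotent matrices over $\mathbb{F}_q$ due to Fine \& Herstein; before using it, here is a brute-force spot check of that count (a hypothetical script, not from the paper) for $q=2$, $n=2$:
\begin{verbatim}
import itertools

q, n = 2, 2  # expect q**(n*n - n) = 4 nilpotent matrices

def is_nilpotent(m):
    # Over a field, an n x n matrix is nilpotent iff m^n = 0
    # (by Cayley-Hamilton).
    acc = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
    for _ in range(n):
        acc = [[sum(acc[i][k] * m[k][j] for k in range(n)) % q
                for j in range(n)] for i in range(n)]
    return all(acc[i][j] == 0 for i in range(n) for j in range(n))

count = 0
for entries in itertools.product(range(q), repeat=n * n):
    m = [list(entries[i * n:(i + 1) * n]) for i in range(n)]
    count += is_nilpotent(m)
assert count == q ** (n * n - n)
\end{verbatim}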
The isomorphism classes of representations of $\Gamma$ over $\mathbb{F}_q$ that respect $\mathcal{R}$ nilpotently are in one-to-one correspondence with the orbits of $g$-tuples of nilpotent matrices over $\mathbb{F}_q$ under simultaneous conjugation. Since the number of $n\times n$ nilpotent matrices over $\mathbb{F}_q$ is $q^{n^2-n}$ according to Fine \& Herstein \cite{F-H 1958}, $r(n, q) = q^{g(n^2-n)}$ for $n\in\mathbb{N}$. Thus, \begin{align*} P(X, q) &= \sum_{\pi\in\mathcal{P}} \! \frac{q^{g(|\pi, \pi|)} \prod_{s\ge 1}\!q^{g\left((m_{\pi}^{\underline{s}})^2 - m_{\pi}^{\underline{s}}\right) }} {q^{\langle \pi, \pi\rangle} b_{\pi}(q^{-1})}X^{|\pi|} \\ &= \sum_{\pi\in\mathcal{P}} \! \frac{q^{g(|\pi, \pi|) + \sum_{s\ge 1} g(m_{\pi}^{\underline{s}})^2 - \sum_{s\ge 1} g\left(m_{\pi}^{\underline{s}}\right) }} {q^{\langle \pi, \pi\rangle} b_{\pi}(q^{-1})}X^{|\pi|} \\ &= \sum_{\pi\in\mathcal{P}} \! \frac{q^{g(\langle\pi, \pi\rangle -l(\pi))}} {q^{\langle \pi, \pi\rangle} b_{\pi}(q^{-1})}X^{|\pi|}, \end{align*} where $l(\pi) = \sum_{s\ge 1}m_{\pi}^{\underline{s}}$ is the length of $\pi$. Thus Theorem \ref{burnside} is equivalent to Theorem 4.1 of Hua \cite{JH 2021}. \textbf{Case 3.} Let $\Gamma$ be the quiver defined by the following matrix, where $g$ is an integer greater than 0: $$ \left[ \begin{array}{cc} 0 & 1 \\ g & 0 \end{array} \right], $$ and $\mathcal{R} = \{ X_{12}^{(1)}X_{21}^{(i)} : 1 \le i \le g \}$ a set of cyclic relations for $\Gamma$. Let $[n]_q = \prod_{i=0}^{n-1}(q^n - q^i)$ for $n\ge 1$ and $[0]_q=1$. Thus $|\text{GL}(n,\mathbb{F}_q)| = [n]_q$. Given a dimension vector $(m,n)\in\mathbb{N}^2$ and a non-negative integer $r\le \min(m,n)$, let $D_{(m,n,r)}$ be the $m\times n$ matrix of the following form: $$ \left[ \begin{array}{cc} I & 0 \\ 0 & 0 \end{array} \right], $$ where $I$ is the identity matrix of order $r$. Let $\mathcal{C}_{(m,n,r)}$ be the centralizer of $D_{(m,n,r)}$ in $\text{GL}((m,n),\mathbb{F}_q)$, i.e., $$ \mathcal{C}_{(m,n,r)} = \left\{(M, N)\in\text{GL}((m,n),\mathbb{F}_q) : M^{-1} D_{(m,n,r)} N = D_{(m,n,r)} \right \}, $$ and hence the number of $m\times n$ matrices over $\mathbb{F}_q$ which have rank $r$ is equal to ${|\text{GL}((m,n),\mathbb{F}_q)|}/{|\mathcal{C}_{(m,n,r)}|}$. For any $(M, N)\in\text{GL}((m,n),\mathbb{F}_q)$, $M$ and $N$ can be written as block matrices as follows: $$M = \left[ \begin{array}{ll} A_{r\times r} & B_{r\times (m-r)} \\ C_{(m-r) \times r} & D_{(m-r)\times (m-r)} \end{array} \right], N = \left[ \begin{array}{ll} E_{r\times r} & F_{r\times (n-r)} \\ G_{(n-r) \times r} & H_{(n-r)\times (n-r)} \end{array} \right],$$ where the orders of the submatrices are indicated by their subscripts. Since $$M^{-1} D_{(m,n,r)} N = D_{(m,n,r)} \text{ if and only if $A=E$, $C=0$ and $F=0$}, $$ it follows that \begin{align*} \left|\mathcal{C}_{(m,n,r)} \right| = \, & [r]_q [m-r]_q q^{r(m-r)} [n-r]_q q^{(n-r)r} \\ = \, & [r]_q [m-r]_q [n-r]_q q^{r(m+n) - 2r^2}. \end{align*} Let $$ \mathcal{N}_{(m,n,r)} = \left\{N\in\textup{Mat}(n\times m, \mathbb{F}_q) : D_{(m,n,r)} N \textit{ is nilpotent } \right \}. $$ Any $n\times m$ matrix $N$ can be written as a block matrix as follows: $$ \left[ \begin{array}{ll} A_{r\times r} & B_{r\times (m-r)} \\ C_{(n-r) \times r} & D_{(n-r)\times (m-r)} \end{array} \right], $$ where the orders of the submatrices are indicated by their subscripts. $D_{(m,n,r)} N$ is nilpotent if and only if $A$ is nilpotent; therefore $$ |\mathcal{N}_{(m,n,r)} | = q^{r^2-r}q^{mn-r^2} = q^{mn-r}. 
$$ Let $$\mathcal{E}_{(m,n,r)} = \{\sigma\in\text{Rep}((m,n), \mathbb{F}_q)_{\mathcal{R}}:\sigma(X_{12}^{(1)}) \textit{ has rank } r \}.$$ Since the number of $m\times n$ matrices over $\mathbb{F}_q$ that have rank $r$ is equal to ${|\text{GL}((m,n),\mathbb{F}_q)| }/{|\mathcal{C}_{(m,n,r)}|}$, \begin{align*} |\mathcal{E}_{(m,n,r)}| &= \frac{|\text{GL}((m,n),\mathbb{F}_q)| }{|\mathcal{C}_{(m,n,r)}|} |\mathcal{N}_{(m,n,r)} |^g = \frac{[m]_q [n]_q q^{g(mn-r)}}{[r]_q [m-r]_q [n-r]_q q^{r(m+n) -2r^2}}. \end{align*} Since $\text{Rep}((m,n), \mathbb{F}_q)_{\mathcal{R}}$ is a disjoint union of the $\mathcal{E}_{(m,n,r)}$ where $0\le r \le \min(m,n)$, \begin{equation}\label{gen kronecker} r((m,n), q) = \sum_{r=0}^{\min(m,n)} \frac{[m]_q [n]_q q^{g(mn-r)}}{[r]_q [m-r]_q [n-r]_q q^{r(m+n) -2r^2}}. \end{equation} It follows that $r((m,n), q)$ is a polynomial in $q$ with integral coefficients, and hence $A((m,n),q)$ can be calculated by Theorem \ref{A poly}. When $g=1$, $\Gamma$ is the quiver below, known as the affine Dynkin quiver $\tilde{A}_1$: \[ \begin{tikzcd}[row sep=large, column sep = large] \underset{1}{\circ}\arrow[r, bend left, "{X_{12}^{(1)}}"] & \arrow[l, bend left, "{X_{21}^{(1)}}" ] \underset{2}{\circ} \end{tikzcd}. \] Thus we have $$ P(X_1,X_2, q) = \sum_{\pi\in\mathcal{P}^2} \frac{q^{2(|\pi_1, \pi_2|)}\prod_{s\ge 1}\!r(d_{\pi}^{\,\underline{s}} , q)} {\prod_{1\le i \le 2}q^{\langle \pi_i, \pi_i\rangle} b_{\pi_i}\!(q^{-1})}\,X^{|\pi|}, $$ where $r(d_{\pi}^{\,\underline{s}} , q)$ is given by identity (\ref{gen kronecker}) with $g=1$. For any $(m,n)\in\mathbb{N}^2\backslash\{(0,0)\}$, according to Donovan \& Freislich \cite{D-F 1973} and Dlab \& Ringel \cite{D-R 1976}, $A((m,n),q)$ has the following form: \begin{align*} A((m,n),q) = \begin{cases} 2 & \textit{ if } |m-n| = 0, \\ 1 & \textit{ if } |m-n| = 1, \\ 0 & \textit{ if } |m-n| > 1. \end{cases} \end{align*} Thus Theorem \ref{gwki} amounts to the following identity: $$ P(X_1,X_2, q) = \prod_{n=1}^\infty\prod_{i=0}^\infty(1-q^iX_1^nX_2^{n-1})(1-q^iX_1^{n-1}X_2^n)(1-q^iX_1^nX_2^n)^2. $$ In all cases above, $r(\alpha, q)$'s are known polynomials in $q$ with integral coefficients; thus $A(\alpha, q)$'s are computable by Theorem \ref{A poly}. All sample results given in Hua \cite{JH 2000}\cite{JH 2021} are consistent with the conjecture above. \vspace{0.2cm} \textbf{\large{Acknowledgments}} \vspace{0.1cm} The authors would like to thank Xueqing Chen for his constructive comments and suggestions on the draft of this paper.
1,314,259,994,031
arxiv
\section{Introduction} The idea that Lorentz symmetry might be spontaneously broken began to catch on when it was shown that mechanisms in string theory might lead to this form of symmetry breaking.\cite{ks} Since then, spontaneous Lorentz breaking has been examined in its own right in a number of contexts, including investigating its phenomenological effects and its effects on gravity. However, as soon as a theory allows spontaneous breaking of a symmetry, well-known consequences from particle physics must be considered and addressed. The first is the Goldstone theorem, which states that when a continuous symmetry is spontaneously broken, massless Nambu-Goldstone (NG) modes appear. The second is the possibility of a Higgs mechanism, resulting in massive gauge fields, for the case when the symmetry is local. The third is the possibility that additional massive modes might appear (analogous to the Higgs boson in the case of the electroweak model). Clearly, all three of these can have physical implications and must be accounted for in any theory with spontaneous symmetry breaking. In this work, these processes are examined for the case where it is Lorentz symmetry that is spontaneously broken.\cite{ks,akgrav,rbak,rbffak} First, the fate of the NG modes is examined. Then, since Lorentz symmetry is a local symmetry in the context of gravity, the possibility of a Higgs mechanism is considered. Lastly, the possibility of additional massive modes (analogous to the Higgs particle) is considered as well. An explicit illustration of these processes is given for the case of a bumblebee model, in which a vector field acquires a nonzero vacuum value. \section{Spontaneous Lorentz Breaking} In a gravitational theory, Lorentz symmetry acts in local frames, transforming tensor components with respect to a local basis, e.g., $T_{abc}$ (where Latin indices denote components with respect to a local frame). Similarly, diffeomorphisms act in the spacetime manifold, transforming components with respect to the spacetime coordinate system, e.g., $T_{\lambda\mu\nu}$ (denoted using Greek indices). These local and spacetime tensor components are linked by a vierbein. For example, the spacetime metric and local Minkowski metric are related by \begin{equation} g_{\mu\nu} = e_\mu^{\,\,\, a} e_\nu^{\,\,\, b} \eta_{ab} . \label{vier} \end{equation} With a vierbein formalism, spinors can naturally be incorporated into a theory. A vierbein formalism also parallels gauge theory, with Lorentz symmetry acting as a local symmetry group. The spin connection $\omega_\mu^{\,\,\, ab}$ enters in covariant derivatives that act on local tensor components and plays the role of the gauge field for the Lorentz symmetry. In contrast, the metric excitations, e.g., $h_{\mu\nu} = g_{\mu\nu} - \eta_{\mu\nu}$, act as the gauge fields for the diffeomorphism symmetry. In the context of a vierbein formalism, there are primarily two geometries that can be distinguished. In a Riemannian geometry (with no torsion), the spin connection is nondynamical and does not propagate. However, in a Riemann-Cartan geometry (with nonzero torsion), the spin connection must be treated as independent degrees of freedom that in principle can propagate. Local Lorentz symmetry is spontaneously broken when a local tensor field acquires a nonzero vacuum expectation value (vev), e.g., for the case of a three-index tensor, \begin{equation} <T_{abc}> \, = t_{abc} . 
\label{Tvev} \end{equation} Spontaneous Lorentz breaking can be introduced into a theory dynamically by adding a potential term $V$ to the Lagrangian. For example, a potential of the form \begin{equation} V \sim (T_{\lambda\mu\nu} \, g^{\lambda\alpha} g^{\mu\beta} g^{\nu\gamma} \, T_{\alpha\beta\gamma} \pm \, t^2)^2 , \label{VT2} \end{equation} consisting of a quadratic function of products of the tensor components $T_{\lambda\mu\nu}$, has a minimum when \begin{equation} T_{\lambda\mu\nu} \, g^{\lambda\alpha} g^{\mu\beta} g^{\nu\gamma} \, T_{\alpha\beta\gamma} = \mp \, t^2 . \label{condT} \end{equation} Solutions of Eq.\ \rf{condT} span a degenerate space of possible vacuum solutions. Spontaneous Lorentz breaking occurs when a particular vacuum value $t_{abc}$ in the local frame is chosen, obeying $\mp t^2 = t_{abc} \, \eta^{pa} \eta^{qb} \eta^{rc} \, t_{pqr}$, where the sign depends on the timelike or spacelike nature of the tensor. \section{Nambu-Goldstone Modes} Consider a theory with a tensor vev in a local Lorentz frame, $<T_{abc}> \, = t_{abc}$, which spontaneously breaks Lorentz symmetry. Since the vacuum value for the vierbein is also a constant or fixed function, e.g., $<e_\mu^{\,\,\, a}> \, = \delta_\mu^{\,\,\, a}$, the spacetime tensor therefore has a vev as well, \begin{equation} <T_{\lambda\mu\nu}> \, = t_{\lambda\mu\nu} . \label{Tmunuvev} \end{equation} This means that when Lorentz symmetry is spontaneously broken, diffeomorphisms are spontaneously broken as well. This implies that NG modes should appear (in the absence of a Higgs mechanism) for both of these broken symmetries. In general, the NG modes consist of field excitations that stay within the minimum of the potential $V$. They therefore obey the condition \rf{condT}. A solution of this condition is given in terms of the vierbein and the local vev, \begin{equation} T_{\lambda\mu\nu} = \lvb \lambda a \lvb \mu b \lvb \nu c \, t^{abc} . \end{equation} As a general rule, there can be up to as many NG modes as there are broken symmetries. Since the maximal case corresponds to six broken Lorentz generators and four broken diffeomorphisms, there can therefore be up to ten NG modes. Where do the NG modes reside? In general, the answer depends on the choices of gauge. However, one natural choice is to put all of the NG modes into the vierbein. A counting argument shows this is possible. The vierbein $e_\mu^{\,\,\, a}$ has 16 components. With no spontaneous Lorentz violation, typically the six Lorentz and four diffeomorphism degrees of freedom are used to gauge away ten components, leaving up to six independent degrees of freedom. (Note that a general gravitational theory can have up to six propagating metric modes, but general relativity is special in that there are only two.) In contrast, in a theory with spontaneous Lorentz breaking, up to all ten NG modes can potentially propagate as additional degrees of freedom in the vierbein. \section{Gravitational Higgs Mechanisms} With two sets of broken symmetries, local Lorentz transformations and diffeomorphisms, there are potentially two types of Higgs mechanisms. Furthermore, there is the possibility that additional massive modes can exist as excitations that do not stay in the minimum of the potential $V$. For the case of the broken diffeomorphisms, 
it was shown that the conventional Higgs mechanism involving the metric does not occur.\cite{ks} This is because the mass term that is generated by covariant derivatives involves the connection, which consists of derivatives of the metric and not the metric itself. As a result, no mass term for the metric is generated according to the usual Higgs mechanism. However, it was also shown that because of the form of the potential, e.g., as in Eq.\ \rf{VT2}, quadratic terms involving the metric can arise. This results in an alternative form of the Higgs mechanism\cite{ks} that has no direct analogue in nonabelian gauge theory. (In nonabelian gauge theory, the potential $V$ involves only the scalar Higgs fields and not the gauge fields. In contrast, here both the metric and tensor fields enter in the massive-field excitations.) The additional mass terms that arise in this alternative Higgs mechanism can potentially modify gravity in a way that avoids the van Dam, Veltman, and Zakharov discontinuity.\cite{vdvz} They are therefore potentially interesting in studies of modified gravity theory. In contrast, for the case of the broken Lorentz symmetry, it is found that a conventional Higgs mechanism can occur.\cite{rbak} In this case, the relevant gauge field is the spin connection. This field appears directly in covariant derivatives acting on local tensor components, and for the case where the local tensors acquire a vev, quadratic mass terms for the spin connection can be generated. However, a viable Higgs mechanism involving the spin connection can occur only if the spin connection is a dynamical (i.e., propagating) field. This then requires that there is nonzero torsion and that the geometry is Riemann-Cartan. As a result, a conventional Higgs mechanism for the spin connection is possible, but only in a Riemann-Cartan geometry. However, even if torsion is permitted, constructing a viable model with a massive propagating spin connection that is ghost- and tachyon-free remains a challenging and open problem.\cite{rbak} Therefore, for simplicity in the remainder of this work, a Riemann spacetime (with no torsion) is assumed. In this restricted context, the only possible process giving rise to massive modes is the alternative Higgs mechanism, in which massive modes are due to excitations that do not stay in the minimum of the potential $V$. \section{Bumblebee Models} To investigate further the effects of NG and massive modes in theories with spontaneous Lorentz violation, it is useful to work in the context of a definite model. The simplest example involves a vector field with a nonzero vev. Models of this type are known as bumblebee models.\cite{ks,akgrav} Examples have been studied in various forms by a number of authors.\cite{ks,akgrav,rbak,rbffak,kl01,baak05,kb06,ejm,kt02,mof03,grip04,cl04,bp05,ems05,acfn,clmt06} Bumblebee models are defined as field theories with a vector field $B_\mu$ that acquires a nonzero vev, $<B_\mu> \, = b_\mu$. The vev is induced by a potential $V$ in the Lagrangian that has a minimum when the vacuum solution holds. Bumblebee models can be defined with generalized kinetic terms for the vector and gravitational fields. However, for brevity, an example with a Maxwell kinetic term is considered here. 
The Lagrangian then has the form ${{\mathcal L}} = {{\mathcal L}}_{\rm G} + {{\mathcal L}}_{\rm B} + {{\mathcal L}}_{\rm M}$, where ${{\mathcal L}}_{\rm G}$ describes the pure-gravity sector, ${{\mathcal L}}_{\rm M}$ describes the matter sector (including possible interactions with $B_\mu$), and \begin{equation} {{\mathcal L}}_{\rm B} = - \frac 1 4 B_{\mu\nu} B^{\mu\nu} - V(B_\mu B^\mu \pm b^2) , \label{BBL} \end{equation} describes the bumblebee field. (For simplicity, additional possible interactions between the curvature tensor and $B_\mu$ are neglected here as well.) The bumblebee field strength in Riemann spacetime is $B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu$. A noteworthy feature of all bumblebee models is that they do not have local $U(1)$ gauge symmetry. This symmetry is broken explicitly by the presence of the potential $V$. However, it is common to include couplings to matter that involve the notion of charge in the matter sector. For example, terms involving current couplings with charged matter can be included by defining ${\mathcal L}_{\rm M} = B_\mu J^\mu$ with $D_\mu J^\mu = 0$. In this case, the theory has a global $U(1)$ symmetry that gives rise to charge conservation in the matter sector. This assumption also implies that initial values can be chosen that maintain stability of the Hamiltonian.\cite{rbffak} Different forms of the potential $V$ can be considered. One example is a smooth quadratic potential, \begin{equation} V = {\textstyle{1\over 2}} \kappa (B_\mu B^\mu \pm b^2)^2 , \label{Vkappa} \end{equation} where $\kappa$ is a constant (of mass dimension zero). This type of potential allows both NG excitations (that stay within the potential minimum) as well as massive excitations (that do not). An alternative would be to consider a linear Lagrange-multiplier potential \begin{equation} V = \lambda (B_\mu B^\mu \pm b^2) , \label{Vsigma} \end{equation} where the Lagrange-multiplier field $\lambda$ imposes a constraint that only allows NG excitations in $B_\mu$ and excludes massive-mode excitations. However, for definiteness here, the smooth potential \rf{Vkappa} is chosen, which allows a massive-mode excitation. For such a bumblebee model, three Lorentz symmetries and one diffeomorphism are broken. Therefore, up to four NG modes can appear. However, the diffeomorphism NG mode is found not to propagate.\cite{rbak} It drops out of the kinetic terms and is purely an auxiliary field. In contrast, the Lorentz NG modes do appear in the form of two massless transverse modes and one auxiliary mode. These have properties similar to the photon in electrodynamics, which raises the interesting possibility that photons might be described as NG modes in theories with spontaneous Lorentz violation.\cite{rbak} Previous links between QED gauge fields, fermion composites, and NG modes have been uncovered in flat spacetime (with global Lorentz symmetry).\cite{ngphoton} However, bumblebee models are different. They consist of theories with a noncomposite vector field, have no local $U(1)$ gauge symmetry, and give rise to photons as NG modes in the presence of gravity. Note that bumblebee models also include possible couplings between the vacuum value $b_\mu$ and a matter current $J^\mu$. Such an interaction can provide an unmistakable signature of physical Lorentz violation that would distinguish it from any gauge-fixed form of QED. 
Note as well that any such signal would be contained in the Standard-Model Extension (SME).\cite{sme} Thus, ongoing investigations of Lorentz breaking using the SME have sensitivity to all signals of spontaneous Lorentz breaking involving couplings between matter and the background vevs. To determine more thoroughly whether conventional Einstein-Maxwell solutions can emerge from bumblebee models, the role of the massive mode must be investigated.\cite{rbffak} This mode constitutes an additional degree of freedom beyond those of the NG modes. It also alters the form of the initial-value problem. For simplicity, only the case of a purely timelike vacuum vector $b_\mu = (b,0,0,0)$ is considered here. In this case, in the weak-field limit, it is found that the massive mode does not propagate as a free field. Instead, it remains purely an auxiliary field that has no time dependence. As a result, its value is fixed by the initial conditions at $t=0$. Although it does not propagate, the massive mode can nevertheless alter the form of the static potentials. An example of this can be seen by solving for the modified static potentials in the presence of a point particle with mass $m$ and charge $q$. It is found that both the electromagnetic and gravitational potentials are modified by the presence of the massive mode, where the specific forms of the modified potentials depend on the assumed initial value of the static massive mode. There are therefore numerous cases that could be explored, including examples that might be relevant in considering alternative explanations of dark matter. However, in the large-mass limit (e.g., approaching the Planck scale), excitation of the massive mode is highly suppressed, and the static potentials approach the conventional Coulomb and Newtonian forms. In the limit of a vanishing massive mode, these become exact expressions. As a result, it is found that the usual Einstein-Maxwell solutions (describing both propagating photons and the usual static potentials) can emerge from a bumblebee model (without local $U(1)$ symmetry), in which local Lorentz symmetry is spontaneously broken. \section*{Acknowledgments} This work was supported by NSF grant PHY-0554663.
\section{Introduction} Let $A$ be a brace of cardinality $p^{n}$ where $p>n+1$ is prime, and let $ann (p^{i})$ be the set of elements of additive order at most $p^{i}$ in this brace. A pre-Lie ring related to the brace $A/ann(p^{2})$ was constructed in \cite{shsm}. The aim of this paper is to show that this construction can be reversed to recover the brace $A/ann(p^{2})$, by applying to the obtained pre-Lie ring a formula similar to the formula for the group of flows. This formula is the same for all braces which have the same additive group. The main result of this paper is the following application of this result: \begin{theorem}\label{Main} Let $p$ be a prime number, and $n<p-1$ be a natural number. Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$, and let $ann (p^{4})$ be the set of elements of additive order at most $p^{4}$ in this brace. Then the multiplication in the brace $A/ann(p^{4})$ is the same as the multiplication in the group of flows of some left nilpotent pre-Lie algebra $(A/ann(p^{4}), +, \cdot )$. Moreover, the addition in the pre-Lie algebra $(A/ann(p^{4}), +, \cdot )$ and in the brace $(A/ann(p^{4}), +, \circ )$ is the same (where $(A/ann(p^{4}), +, \circ )$ is the factor brace of the brace $(A, +, \circ )$ by the ideal $ann(p^{4})$). \end{theorem} Note that, by Lemma $17$ of \cite{passage}, $ann(p^{i})$ is an ideal in the above brace $A$. Therefore the brace $A/ann(p^{i})$ is well defined. Recall that the passage from finite pre-Lie algebras to finite braces was first discovered by Wolfgang Rump in \cite{Rump}. He applied the exponential function $e^{a}$ to pre-Lie algebras and showed that the obtained structure is a brace. This construction can also be described using the group of flows developed in \cite{AG}. For more details, see \cite{passage}. It is an open question whether every brace of such cardinality can be obtained from some pre-Lie algebra in this way. This is known to be true for right nilpotent braces for sufficiently large $p$ \cite{passage}, and it is also known to be true for $\mathbb R$-braces \cite{Rump}, where for $\mathbb R$-braces the correspondence is local. In \cite{Rump}, page 141, Wolfgang Rump suggested a potential way of approaching this question using $1$-cocycles. However, there are complications, because the additive group of a Lie algebra and the additive group of the brace may not be identical in the case when the adjoint group of the brace is obtained by using Lazard's correspondence from this Lie algebra. Notice that Theorem \ref{789} implies that if $A$ is a brace of cardinality $p^{n}$ for a prime number $p$ and a natural number $n<p-1$ then the factor brace $A/ann(p^{2})$ is obtained by the formula from Theorem \ref{789} from some pre-Lie ring with the same additive group. It is not clear whether the formula from Theorem \ref{789} is the same as the formula for the group of flows. We will use the same notation as in \cite{shsm}. For the background section we refer the reader to the background section of \cite{shsm}. \section{ Left nilpotency of the pre-Lie rings constructed in \cite{shsm} } Let $(A, +, \circ )$ be a brace. Among the mappings used in connection with braces are the maps $\lambda _{a}:A\rightarrow A$ for $a\in A$. Recall that for $a,b, c\in A$ we have $\lambda _{a}(b) = a \circ b - a$ and $\lambda _{a\circ c}(b) = \lambda _{a}(\lambda _{c}(b))$. Our first result shows that the pre-Lie rings constructed in \cite{shsm} are left nilpotent.
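As a concrete illustration of these operations, one can take a standard example of a brace of cardinality $p^{n}$ obtained from a nilpotent ring: $A=\mathbb{Z}/p^{n}\mathbb{Z}$ with $a*b=pab \bmod p^{n}$, so that $a\circ b=a+b+pab$. The following short Python sketch is only an illustration (the toy brace is this standard example, not a construction from \cite{shsm}); it checks the multiplicative property of the $\lambda $ maps numerically.

\begin{verbatim}
# A minimal sketch: the brace of cardinality p^n attached to the
# nilpotent commutative ring Z/p^n with product p*a*b (an assumed
# toy example for illustration, not taken from the paper).
p, n = 5, 3            # the paper always assumes p > n+1
M = p**n

def star(a, b):        # a * b = a o b - a - b
    return (p * a * b) % M

def circ(a, b):        # adjoint multiplication: a o b = a + b + a*b
    return (a + b + star(a, b)) % M

def lam(a, b):         # lambda_a(b) = a o b - a = a*b + b
    return (circ(a, b) - a) % M

# Check lambda_{a o c}(b) = lambda_a(lambda_c(b)) on a sample of b's.
for a in range(M):
    for c in range(M):
        for b in range(0, M, 7):
            assert lam(circ(a, c), b) == lam(a, lam(c, b))
\end{verbatim}

In this example $A^{i}=p^{i-1}A$, so $A^{n+1}=0$, which matches the left nilpotency discussed below.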
For the reader's convenience the main result and the construction from \cite{shsm} are quoted in Theorem \ref{1} at the end of this paper. \begin{lemma}\label{2} Let the notation be as in Theorem \ref{1}. Then the obtained pre-Lie ring $(A, +, \bullet )$ is left nilpotent. \end{lemma} \begin{proof} By Lemma \ref{citepassage}, we have $pA = A^{\circ p}$. Consequently, if $a\in A$ then \[pa=a_{1}^{\circ p} \circ a_{2}^{\circ p} \circ \cdots \circ a_{m}^{\circ p}\] for some $m$ and some $a_{1}, \ldots , a_{m}\in A$. Recall that $\circ $ is an associative operation, so we do not need to put brackets in this formula. By Lemma \ref{citenote}, we have that \[a^{\circ p} *b = \sum_{k=1}^{p-1}{p\choose k}e_{k},\] where $e_{1} = a*b$, $e_{2} = a*e_{1}$, and for each $i$, $e_{i+1} =a*e_{i}$. Notice that if $b\in A^{i}$ then $e_{j} \in A^{i+j}$, hence $e_{p-1}\in A^{p-1}\subseteq A^{n+1}=0$ (since $p-1\geq n+1$). By using this formula, we see that if $e\in A$, $b\in A^{i}$ and $c\in A^{i+1}$ for some $i$ then \[ \lambda _{e^{\circ p}}(pc+b)=e^{\circ p}*(pc+b)+pc+b=pc' +b,\] for some $c'\in A^{i+1}$ (since ${p\choose k}$ is divisible by $p$ for $0<k<p$). Recall that $A^{i+1}=A*A^{i}$. Let $b\in A^{i}$ for some $i$. Notice that by the multiplicative property of the $\lambda $ maps we have: \[(pa)*b+b=\lambda _{pa}(b)=\lambda _{a_{1}^{\circ p}\circ \cdots \circ a_{m}^{\circ p}}(b) = \lambda _{a_{1}^{\circ p}}(\cdots (\lambda _{a_{m-1}^{\circ p}} (\lambda _{a_{m}^{\circ p}}(b)))\cdots )\in pA^{i+1}+b.\] This implies \[(pa)*b\in pA^{i+1},\] hence $[\wp^{-1}((pa) * b)] \in [A^{i+1}]$, provided that $b\in A^{i}$, where $[A^{i+1}]=\{[a]:a\in A^{i+1}\}$ and, as usual, we use the notation $[a]=[a]_{ann(p^{2})}$. Therefore, by the formula for the operation $\bullet $ from Theorem \ref{1}, we get \[[a]\bullet [b] \in [A^{i+1}],\] for $b\in A^{i}$, $a\in A$. Consequently, \[[b_{1}] \bullet ([b_{2}]\bullet ( \cdots ([b_{n}] \bullet [b_{n+1}]) ))\in [A^{n+1}]= 0,\] for all $b_{1}, \ldots , b_{n+1}\in A$, since $A^{n+1}=0$ in every brace of cardinality $p^{n}$ for a prime $p$, by a result of Rump \cite{rump}. \end{proof} \section{ Relations between $\odot $ and $\bullet $} Let $(A, +, \circ )$ be a brace, and let $a\in A$. In this section, by $[a]$ we mean the coset of $a$ in the factor brace $A/ann(p^{2})$, so $[a]=\{a+i: i\in ann (p^{2})\}$. Recall that $[a]$ is an element of the factor brace $A/ann(p^{2})$. We will denote the multiplication and the addition in the brace $A/ann(p^{2})$ by the same symbols as the addition and the multiplication in the brace $A$. Recall that $[a]+[b]=[a+b]$, $[a]*[b]=[a*b]$ and $[a\circ b]=[a*b+a+b]$. We recall a lemma from \cite{shsm}. \begin{lemma}\label{6} Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$, where $p$ is a prime number and $p>n+1$. Let $\wp^{-1}:pA\rightarrow A$ be defined as in \cite{shsm}. Let $a,b\in A$; then $[a]=[a]_{ann(p^{2})}$, $[b]=[b]_{ann(p^{2})}$ are elements of $A/ann(p^{2})$. Define \[[a]\odot [b]=[\wp^{-1}((pa)*b)];\] then this is a well defined binary operation on $A/ann(p^{2})$. \end{lemma} Notice that $\wp^{-1}$ can be defined in the same way for all braces whose additive group is the same group $(A, +)$. Recall also that if $a=px$ then $\wp^{-1} (a)$ is an element in $A$ such that $p\,\wp^{-1}(a)=a$. {\em Remark $1$.} Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$ for some prime number $p$, and for some natural number $n<p-1$. Note that in the proof of the following Lemma \ref{F} only the following facts are used: \begin{itemize} \item $p^{p-1}a=0$ for all $a\in A$.
\item The product of any $p-1$ elements from $pA$, under the operation $*$, is zero (by Proposition \ref{b}). \item For every $i$, $p^{i}A$ is an ideal in $A$, by \cite{Engel}. \item The formula from Lemma \ref{co}. \item The operation $\wp ^{-1}:pA\rightarrow A$, which depends only on the additive group; it can be assumed that for all braces with the same additive group this operation $\wp ^{-1}$ is the same. \item The operation of taking the coset $[a]=[a]_{ann(p^{2})}$ of an element $a\in A$. This coset depends only on the additive group of this brace and does not depend on the multiplicative group of this brace. \item The inductive assumption, which gives formulas which depend only on the additive group of the brace $A$. \end{itemize} $ $ {\bf Definition 1.} Let $V_{j}$ denote the set of all non-associative words in non-commuting variables $X_{1},\ldots , X_{j-1},X_{j}$ (where $X_{j}$ appears only once, at the end of each word, and all words have length larger than $2$). Let $w$ be a non-associative word in variables $X_{1}, \ldots , X_{j}$, so $w\in V_{j}$. Let $(A, +, \circ )$ be a brace, and let $x_{1}, \ldots , x_{j}\in A$. Let $w\langle [x_{1}], \ldots , [x_{j}]\rangle $ be the specialisation of $w$, for $X_{i}=[x_{i}]$ and under the operation $\odot $, and let $w\{x_{1}, \ldots , x_{j}\}$ be the specialisation of $w$, for $X_{i}=x_{i}$ and under the operation $*$. For example, let $w=(X_{1}X_{2})X_{3}$, and let $a,b\in A$; then $w\langle [a], [a], [b]\rangle =([a]\odot [a])\odot [b]$ and $w\{a,a,b\}=(a*a)*b$. We will use this notation in the following lemma. \begin{lemma} \label{F} Let $p$ be a prime number and $n<p-1$ be a natural number, and let $(A, +)$ be an abelian group of cardinality $p^{n}$. Let $j\geq 2$ and let $W\in V_{j}$ be a non-associative word in variables $X_{1}, \ldots , X_{j}$, where each $X_{i}$ appears only once, and they appear in the order $X_{1}, \ldots, X_{j}$. Let $i_{1}, \ldots , i_{j-1}>0$ and $i_{j}\geq 0$ be natural numbers. There are integers $\beta _{v}$ for $v\in V_{j}$, with only a finite number of these integers nonzero, such that for each brace $(A, +, \circ )$ with the additive group $(A, +)$ and for each $x_{1}, \ldots , x_{j}$ we have \[[\wp^{-1}W\{p^{i_{1}}x_{1}, \ldots , p^{i_{j}}x_{j}\}]= p^{i_{1}+\ldots +i_{j}-1}W\langle [x_{1}], \ldots , [x_{j}] \rangle +p^{i_{1}+\ldots +i_{j}}\sum_{v\in V_{j}}\beta _{v}v\langle [x_{1}], \ldots , [x_{j}] \rangle .\] \end{lemma} \begin{proof} Denote \[w=W\{p^{i_{1}}x_{1}, \ldots , p^{i_{j}}x_{j}\}.\] We proceed by induction, in decreasing order, on \[i_{1}+i_{2}+\cdots +i_{j}.\] If $i_{1}+\cdots +i_{j}\geq p-1\geq n+1$ then $w$ is zero by Proposition \ref{b}. Then \[[\wp^{-1}(w)]=[\wp^{-1}(0)]=[0]=p^{i_{1}+\cdots +i_{j}-1}W\langle [x_{1}], \ldots , [x_{j}] \rangle ,\] since $[p^{p-2}A]=[0]$, so $p^{i_{1}+\ldots +i_{j}-1}W\langle [x_{1}], \ldots , [x_{j}] \rangle =[0]$. Therefore, the result is true in this case. Let $k$ be a natural number, and write $power(w)=i_{1}+i_{2}+\cdots +i_{j}$ for the total power of $p$ appearing in $w$. Suppose now that the result holds in the cases when $power(w)>k$, for all $j$; we will show that the result also holds in the case when $power(w)=k$, for all $j$. For this fixed $power(w)=k$, we will proceed by induction on $j$ (in the usual increasing order); the smallest possible $j$ to consider is $j=2$. Recall that $w$ is a product of the elements $p^{i_{1}}x_{1}, \ldots , p^{i_{j}}x_{j}\in A$ under the operation $*$. {\bf Case $j=2$.} Notice that for $j=2$ we have $w=(p^{k'}x_{1})*(p^{m}x_{2})$, where $k'+m=k$. If $k'=1$ then $w=(px_{1})*(p^{m}x_{2})$.
Consequently \[[\wp^{-1}(w)]=[\wp^{-1}((px_{1})*(p^{m}x_{2}))] = p^{m}[\wp^{-1}((px_{1})*x_{2})]=p^{m}[x_{1}]\odot [x_{2}]=p^{k-1}[x_{1}]\odot [x_{2}].\] It remains to consider the case $i_{1}=k'>1$, $i_{2}=m$. By Lemma \ref{citenote}: \[ (px_{1})^{\circ p^{k'-1}}=p^{k'}x_{1}+\sum_{i=2}^{p-1}{{p^{k'-1}\choose i}}y_{i},\] where $y_{2}=(px_{1})*(px_{1})$ and $y_{i+1}=(px_{1})*y_{i}$ for $i=2,3, \ldots , p-1$. Consequently \[ p^{k'}x_{1}=(px_{1})^{\circ p^{k'-1}}+\sum_{i=2}^{p-1}\zeta _{i}p^{k'-1}y_{i},\] for some integers $\zeta _{i}\geq 0$; this follows because ${p^{k'-1}\choose i}$ is divisible by $p^{k'-1}$ for $0<i<p$, and the minus signs can be avoided since $-x=(p^{p}-1)x=x+\cdots +x$ for $x\in A$. Notice that the sum ends at the $(p-1)$-st place since $y_{i}\in A^{i}$ and $A^{n+1}=0$ ($A$ has cardinality $p^{n}$, so $A^{n+1}=0$ by \cite{rump}, and $n+1\leq p-1$, hence $A^{p-1}=0$). Therefore \[(p^{k'}x_{1} )*(p^{m}x_{2})= ((px_{1})^{\circ p^{k'-1}}+\sum_{i=2}^{p-1}\zeta _{i}p^{k'-1}y_{i})*(p^{m}x_{2}).\] By Lemma \ref{co} and Corollary \ref{777} we have \[(p^{k'}x_{1} )*(p^{m}x_{2})= (px_{1})^{\circ p^{k'-1}}*(p^{m}x_{2})+d,\] where $d$ is a sum of some products of some number of copies of the elements $p^{k'-1}y_{i}$ for $i=2,3, \ldots $ and the element $p^{m}x_{2}$ at the end, and also possibly some number of copies of the element $(px_{1})^{\circ p^{k'-1}}$. Observe that $p^{k'-1}y_{i}$ is a product of $i-1$ copies of the element $px_{1}$ and the element $p^{k'-1}(px_{1})=p^{k'}x_{1}$ at the end, so $p^{k'-1}y_{i+1}=(px_{1})*((px_{1})*(\cdots *( (px_{1})*(p^{k'}x_{1}))\cdots ))$; hence in this presentation $p$ appears at least $k'+1$ times near the elements $x_{i}$ in the product which is equal to $p^{k'-1}y_{i}$ for $i=2, 3, \ldots $. Then, by applying Lemma \ref{co} several times to the element $(px_{1})^{\circ p^{k'-1}}$ (which equals $p^{k'}x_{1}+\sum_{i=2}^{p-1}{p^{k'-1}\choose i}y_{i}$) whenever it appears in these products, we get that $d$ is a sum of some products of the elements $p^{k'-1}y_{i}$ and the element $p^{m}x_{2}$ at the end, and possibly also some other elements (which are products of $y_{i}\in pA$ and $p^{k'}x_{1}$). Notice that the process of applying Lemma \ref{co} will terminate at some stage, because we obtain longer and longer products of elements from $pA$, and by Proposition \ref{b} products of more than $p-1$ such elements are zero. Notice also that $p$ appears at least $k'+1+m=k+1$ times near the elements $x_{i}$ in such products. Recall that $[\wp^{-1}(x+y)]=[\wp^{-1}(x)]+[\wp^{-1}(y)]$ for $x,y\in pA$, by \cite{shsm}. Observe now that \[[\wp^{-1}((px_{1})^{\circ p^{k'-1}}*(p^{m}x_{2}))]= [\wp^{-1}(p^{k'-1} ((px_{1})*(p^{m}x_{2})))]+[\wp^{-1}(c)],\] where \[c=\sum_{i=2}^{p-1}{p^{k'-1}\choose i}z_{i},\] with $z_{2}=(px_{1})*((px_{1})*(p^{m}x_{2}))$ and $z_{i+1}=(px_{1})*z_{i}$ for $i=2, 3, \ldots $ (by Lemma \ref{citenote}; recall that ${p^{k'-1}\choose i}$ is divisible by $p^{k'-1}$). Notice that $p^{k'-1}z_{2}=(px_{1})*((px_{1})*(p^{m+k'-1}x_{2}))$ and $p^{k'-1}z_{i+1}=(px_{1})*(p^{k'-1}z_{i})$. Therefore, in this presentation of the elements $p^{k'-1}z_{i}$, $p$ appears at least $m+k'+1$ times near the elements $x_{i}$. Notice also that \[[\wp^{-1}(p^{k'-1} ((px_{1})*(p^{m}x_{2})))]=p^{k'-1+m}[\wp^{-1}((px_{1})*x_{2})]=p^{k-1}[x_{1}]\odot [x_{2}].\] Combining it all together, we obtain \[[\wp^{-1}((p^{k'}x_{1})*(p^{m}x_{2}))]=p^{k-1}[x_{1}]\odot [x_{2}]+[\wp^{-1}(d)]+ [\wp^{-1}(c)].\] Observe that this equation was proved without using the inductive assumption, so it is true for all $k'>0$, $m\geq 0$. Recall that $k'+m=k$.
Remark $1$ and the inductive assumption on $k$ (applied to numbers larger than $k$) show that $[\wp^{-1}(d)]+ [\wp^{-1}(c)]$ can be written as a sum \[p^{k}\sum_{v\in V_{j}}\beta _{v}v\langle [x_{1}], \ldots , [x_{j}] \rangle \] for some integers $\beta _{v}$. This proves the case $j=2$. {\bf Case $j>2$.} Suppose now that $j>2$. Recall that $w$ is a product of the elements $p^{i_{1}}x_{1}, \ldots , p^{i_{j}}x_{j}\in A$ under the operation $*$. Notice that there is $t$ such that $w$ is a product of the elements $p^{i_{1}}x_{1}, \ldots , p^{i_{t-1}}x_{t-1}$, the element $p^{i_{t}}x_{t}*p^{i_{t+1}}x_{t+1}$, and the elements $p^{i_{t+2}}x_{t+2},\ldots , p^{i_{j}}x_{j}\in A$. Recall that above we proved that \[[\wp^{-1}((p^{k'}x_{1})*(p^{m}x_{2}))]=p^{k-1}[x_{1}]\odot [x_{2}]+[\wp^{-1}(d)]+ [\wp^{-1}(c)],\] and that this equation was proved without the use of the inductive assumption, so it is true for all $k'>0$, $m\geq 0$. We can apply it to $p^{i_{t}}x_{t}$ in place of $p^{k'}x_{1}$ and to $p^{i_{t+1}}x_{t+1}$ in place of $p^{m}x_{2}$, and obtain \[[\wp^{-1}((p^{i_{t}}x_{t})*(p^{i_{t+1}}x_{t+1}))]=p^{i_{t}+i_{t+1}-1}[x_{t}]\odot [x_{t+1}]+[\wp^{-1}(d')]+ [\wp^{-1}(c')],\] for some $d'$, $c'$ (calculated in the same way as $d$ and $c$ above). Observe that \[c'+d'=\sum_{i}r_{i},\] where each $r_{i}$ is a product of some copies of elements from the set $\{p^{l}x_{t}, p^{l}x_{t+1}:l=1,2, \ldots \}$, and $p$ appears more than $i_{t}+i_{t+1}$ times near the elements $x_{t}, x_{t+1}$ in each product which is equal to $r_{i}$. Therefore, by multiplying the above equation by $p$ we get \[(p^{i_{t}}x_{t})*(p^{i_{t+1}}x_{t+1})=p^{i_{t}+i_{t+1}}h+c'+d' +a,\] for some $h\in A$ such that $[h]=[x_{t}]\odot [x_{t+1}]$ and for some $a\in A$. Moreover $a\in ann(p)=\{x\in A:px=0\}$, since $a$ is an element from $ann(p^{2})$ multiplied by $p$. By Lemma \ref{co} and Corollary \ref{777} we have that $w=w'+\sum_{i}w_{i}+a'$, where $w'$ (respectively $w_{i}$, $a'$) is a product of the elements $p^{i_{1}}x_{1}, \ldots , p^{i_{t-1}}x_{t-1}$, the element $p^{i_{t}+i_{t+1}}h$ (respectively the element $r_{i}$, the element $a$), and the elements $p^{i_{t+2}}x_{t+2},\ldots , p^{i_{j}}x_{j}\in A$. Notice that $a'$ belongs to $ann(p)$, since $a\in ann(p)$ and $ann(p)$ is an ideal in $A$ by \cite{passage}. Notice that $w'$ is a product of fewer than $j$ elements (namely the elements $p^{i_{l}}x_{l}$ for $l\neq t, t+1$ and the element $p^{i_{t}+i_{t+1}}h$), and $p$ appears $k$ times in this product (near the elements $x_{l}$ for $l\neq t, t+1$ and near the element $h$, where $[h]=[x_{t}]\odot [x_{t+1}]$), so we can apply the inductive assumption on $j$ to $w'$ to get \[[\wp^{-1}(w')]=p^{k-1}W\langle [x_{1}], \ldots , [x_{j}]\rangle +p^{k}\sum_{v\in V_{j}}\beta _{v}'v\langle [x_{1}], \ldots , [x_{j}] \rangle,\] for some integers $\beta _{v}'$. On the other hand, observe that $p$ appears more than $k$ times when we present each $w_{i}$ as a product of elements of the form $p^{l}x_{i'}$ and $p^{l}h$ (near the elements $x_{i'}$ and near the element $h$), hence we can apply the inductive assumption on $k$ (applied to numbers larger than $k$) to the elements $w_{i}$. We get that \[[\wp^{-1}(w_{i})]=p^{k}\sum_{v\in V_{j}}\beta _{v}''v\langle [x_{1}], \ldots , [x_{j}] \rangle \] for some integers $\beta _{v}''$. Notice also that $\wp^{-1}(a')\in ann(p^{2})$ since $a'\in ann(p)$, hence $[\wp^{-1}(a')]=[0]$. Observe that we implicitly used Remark $1$. Combined together, this concludes the proof.
\end{proof} The aim of this section is to prove the following result: \begin{theorem}\label{11} Let $p$ be a prime number and $n<p-1$ be a natural number. Let $(A, +)$ be an abelian group of cardinality $p^{n}$. Let $V$ denote the set of all non-associative words in non-commuting variables $X,Y$ (where $Y$ appears only once, at the end of each word, and $X$ appears at least twice in each word in $V$). Then there are integers $\alpha _{w}$, for $w\in V$, such that only a finite number of them is non-zero and the following holds: For each brace $(A, +, \circ )$ with the additive group $(A, +)$ and for each $a, c\in A$ we have \[[a]\odot [c]= [a]\bullet [c] +p\sum_{w\in V}\alpha _{w}w([a],[c]),\] where $w([a],[c])$ is the specialisation of the word $w$ for $X=[a]$, $Y=[c]$, and the multiplication in $w([a],[c])$ is the same as the multiplication in the pre-Lie ring $(A/ann (p^{2}), +, \bullet )$ constructed in Theorem \ref{1} from the brace $(A, +, \circ )$. So for example if $w=((XX)X)Y$ then $w([a],[c])=(([a]\bullet [a])\bullet [a])\bullet [c]$. \end{theorem} \begin{proof} Observe that, by applying Corollary \ref{777} and Lemma \ref{co} several times (and using the fact that $p^{p-1}A=0$), we can write each element $(\xi^{i} pa)*c$ as the sum of the element $\xi^{i}((pa)*c)$ and $t(pa,c,i)$, where $t(pa,c, i)$ is a sum of products of copies of the element $pa$ and the element $c$ at the end, with $pa$ appearing at least two times in each product. Consequently \[[\wp^{-1}((\xi^{i} pa)*c)]=\xi^{i} [\wp^{-1}((pa)*c)]+[\wp^{-1}(t(pa,c,i))].\] By Lemma \ref{F} applied to the products which are summands of $t(pa,c,i)$ we have: \[[\wp^{-1}((\xi^{i} pa)*c)]=\xi ^{i}[a]\odot [c]+ p\,f(a,c, i),\] where $f(a,c,i)$ is a sum of products, under the operation $\odot $, of at least two copies of the element $[a]$ with the element $[c]$ appearing only once at the end. Recall the formula for $[a]\bullet [c]$ from Theorem \ref{1}. By applying this formula we get \[[a]\bullet [c]= (p-1) [a]\odot [c]+p\sum_{i=0}^{p-2}\xi ^{p-1-i}f(a,c, i).\] Notice that $\sum_{i=0}^{p-2}\xi ^{p-1-i}f(a,c, i)$ is a sum of some products of $[a]$ and $[c]$ under the operation $\odot $, and by Remark $1$ the types of these products do not depend on the particular elements $[a], [c]$ which were used (they depend only on the additive group of $A$); moreover a product may appear several times in this sum. Therefore \[[\wp^{-1}((\xi^{i} pa)*c)]=\xi ^{i}[a]\odot [c]+ p\sum_{w\in V}\zeta _{w,i} w\langle [a],[c]\rangle \] for some $\zeta _{w,i}$ which do not depend on $a$ and $c$ (and which are the same in all braces with the same additive group $(A, +)$), where $w\langle [a],[c]\rangle $ is the specialisation of the word $w\in V$ for $X=[a]$, $Y=[c]$, and the multiplication in $w\langle [a],[c]\rangle $ is $\odot $. For example, if $w=(XX)Y$ then $w\langle [a], [c]\rangle =([a]\odot [a])\odot [c]$. Consequently \[[a]\bullet [c]=(p-1)[a]\odot [c]+p\sum_{w\in V}m _{w} w\langle [a], [c]\rangle ,\] where $m_{w}=\sum_{i=0}^{p-2}\xi ^{p-1-i}\zeta _{w,i}$. {\bf Part 1.} Let $x_{1}, \ldots , x_{j}\in A$. Let $u$ be a non-associative word in variables $X_{1}, \ldots , X_{j}$ (where each $X_{i}$ appears only once, and they appear in the order $X_{1}, \ldots, X_{j}$). Let $u\langle [x_{1}], \ldots , [x_{j}]\rangle $ be the specialisation of $u$ for $X_{i}=[x_{i}]$ and under the operation $\odot $, and let $u([x_{1}], \ldots , [x_{j}])$ be the specialisation of $u$ for $X_{i}=[x_{i}]$ and under the operation $\bullet $.
We will now prove by induction on $j$ that for all $[x_{1}], \ldots , [x_{j}]$ there are integers $m_{w,u}$ such that \[u([x_{1}], \ldots , [x_{j}])=(p-1)^{j -1}u\langle [x_{1}], \ldots , [x_{j}]\rangle+p\sum_{w\in W}m _{w,u} w\langle [x_{1}], \ldots , [x_{j}]\rangle .\] Here $W$ is the set of non-associative words of length at least $3$ in $j$ variables $X_{1}, \ldots, X_{j}$ for some $j$, and $w\langle [x_{1}], \ldots , [x_{j}]\rangle $ is the specialisation of $w\in W$ for $X_{i}=[x_{i}]$ under the operation $\odot $. For $j=2$ the result follows from the first part of our proof, since we have shown that \[[a]\bullet [c]=(p-1)[a]\odot [c]+p\sum_{w\in W}m _{w} w\langle [a], [c]\rangle .\] Therefore, for $u=X_{1}X_{2}$ we take $m_{v, u}=m_{v}$. We proceed by induction on $j$. Let $j>2$. There is $1\leq t<j$ such that $u=vy$ for some word $v$ in variables $X_{1}, \ldots , X_{t}$ and some word $y$ in variables $X_{t+1}, \ldots , X_{j}$. By the inductive assumption: \[v([x_{1}], \ldots , [x_{t}])=(p-1)^{t-1}v\langle [x_{1}], \ldots , [x_{t}]\rangle +p\sum_{w\in W}m_{w,v}w\langle [x_{1}], \ldots , [x_{t}]\rangle \] and \[y([x_{t+1}], \ldots , [x_{j}])=(p-1)^{j-t-1}y\langle [x_{t+1}], \ldots , [x_{j}]\rangle +p\sum_{w\in W}m_{w,y}w\langle [x_{t+1}], \ldots , [x_{j}]\rangle .\] It follows that \[u([x_{1}], \ldots , [x_{j}])=v([x_{1}], \ldots , [x_{t}])\bullet y([x_{t+1}], \ldots , [x_{j}])=\Big((p-1)^{t-1}v\langle [x_{1}], \ldots , [x_{t}]\rangle +p\sum_{w\in W}m_{w,v}w\langle [x_{1}], \ldots , [x_{t}]\rangle \Big)\bullet z,\] where \[z= (p-1)^{j-t-1}y\langle [x_{t+1}], \ldots , [x_{j}]\rangle +p\sum_{w\in W}m_{w,y}w\langle [x_{t+1}], \ldots , [x_{j}]\rangle .\] Recall that the operation $\bullet $ is distributive with respect to addition, since $\bullet $ is a pre-Lie operation. Therefore \[u([x_{1}], \ldots , [x_{j}])=(p-1)^{j-2}v\langle [x_{1}], \ldots , [x_{t}]\rangle \bullet y\langle [x_{t+1}], \ldots , [x_{j}]\rangle + p\sum_{w, w'\in W}m_{w,w',v,y}w\langle [x_{1}], \ldots , [x_{t}]\rangle \bullet w'\langle [x_{t+1}], \ldots , [x_{j}]\rangle ,\] for some integers $m_{w,w', v, y}$. Moreover, in the summation the word $w$ depends only on the variables $X_{1}, \ldots , X_{t}$, and the word $w'$ depends only on the variables $X_{t+1}, \ldots , X_{j}$. Therefore, \[w\langle [x_{1}], \ldots , [x_{t}]\rangle \odot w'\langle [x_{t+1}], \ldots , [x_{j}]\rangle=w''\langle [x_{1}], \ldots , [x_{j}]\rangle ,\] where $w''=ww'$ (so $w''$ is the word which is obtained by putting the word $w'$ after the word $w$; it could also be written as $w''=(w)(w')$). Observe also that \[v\langle [x_{1}], \ldots, [x_{t}]\rangle \odot y\langle [x_{t+1}], \ldots , [x_{j}]\rangle =u\langle [x_{1}], \ldots , [x_{j}]\rangle ,\] since $u=vy$. The result now follows from the formula \[[a]\bullet [c]=(p-1)[a]\odot [c]+p\sum_{w\in W}m _{w} w\langle [a], [c]\rangle ,\] applied for $[a]=v\langle [x_{1}], \ldots , [x_{t}]\rangle $ and $[c]=y\langle [x_{t+1}], \ldots , [x_{j}]\rangle $. We obtain \[v\langle [x_{1}], \ldots , [x_{t}]\rangle \bullet y\langle [x_{t+1}], \ldots , [x_{j}]\rangle= (p-1) v\langle [x_{1}], \ldots, [x_{t}]\rangle \odot y\langle [x_{t+1}], \ldots , [x_{j}]\rangle+p\sum_{w, w'\in W}m_{w,w',v,y}'w\langle [x_{1}], \ldots , [x_{t}]\rangle \bullet w' \langle [x_{t+1}], \ldots , [x_{j}]\rangle ,\] for some integers $m_{w,w', v, y}'$.
We can apply a similar argument to $[a]=w\langle [x_{1}], \ldots , [x_{t}]\rangle $ and $[c]= w' \langle [x_{t+1}], \ldots , [x_{j}]\rangle $. This can then be substituted into the right-hand side of the above equation (which has $u([x_{1}], \ldots , [x_{j}])$ on the left-hand side), and continuing in this way we obtain: \[u([x_{1}], \ldots , [x_{j}])=(p-1)^{j -1}u\langle [x_{1}], \ldots , [x_{j}]\rangle+p\sum_{w\in W}m _{w,u} w\langle [x_{1}], \ldots , [x_{j}]\rangle .\] {\bf Part 2.} We are now ready to prove our result, namely that \[[a]\odot [c]=(p-1)[a]\bullet [c]+p\sum_{w\in V}\alpha _{w} w([a], [c]),\] where $w([a], [c])$ is the specialisation of $w$ under the operation $\bullet $. Let $E_{[a],[c]}\subseteq A/ann(p^{2})$ denote the set of products, under the operation $\odot $, of some copies of the element $[a]$ ($[a]$ appearing at least once) and the element $[c]$ at the end of each product ($[c]$ appearing only once); thus both $[a]$ and $[c]$ appear in each product in the set $E_{[a],[c]}$. Let $V_{[a],[c]}$ be a vector whose entries are the elements from $E_{[a],[c]}$, arranged in such a way that longer products appear before shorter products. Let $U_{[a],[c]}$ be the corresponding vector obtained from the corresponding products of $[a]$ and $[c]$ under the operation $\bullet $. So for example if $([a]\odot [a])\odot ([a]\odot [c])$ is the $i$-th entry of $V_{[a],[c]}$ then $([a]\bullet [a])\bullet ([a]\bullet [c])$ is the $i$-th entry of $U_{[a],[c]}$. Let $RFM$ denote the set of row-finite matrices (where the rows and columns are indexed by the set of natural numbers) with integer entries. It is known that $RFM$ is a ring, and any finite product of matrices $M, D\in RFM$ is well defined and belongs to $RFM$. By Part $1$ above we obtain \[U_{[a],[c]}= DV_{[a],[c]}+ pMV_{[a],[c]}\] for some matrix $M$ in $RFM$ and some diagonal matrix $D$ in $RFM$ whose diagonal entries are powers $(p-1)^{i-1}$ (the entry corresponding to a product of $i$ elements being $(p-1)^{i-1}$). Notice that $(p-1)(-(1+p+p^{2}+\cdots +p^{p-1}))\equiv 1 \mod p^{p}$. Recall also that $p^{p-1}A=0$. Therefore \[V_{[a],[c]}=D'U_{[a],[c]}-pD'MV_{[a],[c]},\] where $D'$ is the diagonal matrix whose entries are the corresponding powers $(-(1+p+p^{2}+\cdots +p^{p-1}))^{i-1}$. We can now substitute the above formula for $V_{[a],[c]}$ in the right-hand side and obtain: \[V_{[a],[c]}=D'U_{[a],[c]}-pD'MD'U_{[a],[c]}+p^{2}D'MD'MV_{[a],[c]}.\] We can continue to substitute the expression for $V_{[a],[c]}$ on the right-hand side. This process will stop after at most $p$ steps, since $p^{p-1}A=0$. This will give \[V_{[a],[c]}=D'U_{[a],[c]}+pM'U_{[a],[c]},\] for some matrix $M'$ from $RFM$ with integer entries. This concludes the proof. \end{proof} \section{ Relations between $*$ and $\odot $} In this section we will investigate some properties of the binary operation $\odot $ defined in Lemma \ref{6}. Notice that if $p$ is a prime number and $1\leq i<p$ is a natural number then \[{p\choose i}={\frac pi}{{{p-1} \choose {i-1}}},\] hence ${{{p-1} \choose {i-1}}/i}={p\choose i}/p$ is an integer. \begin{lemma}\label{7} Let $p$ be a prime number. Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$ with $p>n+1$. Let $a\in A$, and define \[f(a)=\sum_{i=1}^{p-1}({{{p-1} \choose {i-1}}/i})e_{i},\] where $e_{1}=a$, $e_{2}=a*a$ and $e_{i+1}=a*e_{i}$ for all $i$. Then $pf(a)=a^{\circ p}$.
Moreover, there are integers $\alpha _{1}, \ldots , \alpha _{p-1}$ which depend only on the additive group $(A, +)$ of the brace $A$ (and do not depend on the element $a$), such that \[ [a]=\sum_{i=1}^{p-1} \alpha _{i}[f_{i}(a)] ,\] where $[f_{1}(a)]=[f(a)]$, $[f_{2}(a)]=[f(a)]\odot [f(a)]$ and $[f_{i+1}(a)]=[f(a)]\odot [f_{i}(a)]$, where $\odot $ is defined as in Lemma \ref{6}. Moreover, $\alpha _{1}=1$. As usual, by $[a]$ we mean $[a]_{ann(p^{2})}$. \end{lemma} \begin{proof} We will use a formula from Lemma \ref{citenote}, namely \[a^{\circ p}=\sum_{i=1}^{p-1}{p\choose i}e_{i},\] where $e_{1}=a$ and $e_{i+1}=a*e_{i}$. It works since $A^{p}=0$, as $n+1<p$. Observe also that \[[f(a)]\odot [b]=[\wp^{-1}(a^{\circ p}*b)]\] by the definition of $\odot $, since $pf(a)=a^{\circ p}$. By using this formula we see that $[f(a)]=[a]+\sum_{i>1}\beta _{i}[e_{i}]$ for some integers $\beta _{i}$. Similarly \[[f(a)]\odot [f(a)]=[\wp^{-1}(a^{\circ p}*f(a))]=[a*a]+\sum_{i>2}\beta _{i}'[e_{i}],\] for some integers $\beta _{i}'$. We proceed by induction on $j$. Assume that we have proved that \[[f_{j}(a)]=[e_{j}]+\sum_{i>j}\beta _{i}''[e_{i}]\] for some integers $\beta _{i}''$. It follows that \[f_{j}(a)=e_{j}+\sum_{i>j}\beta _{i}''e_{i}+a',\] for some $a'\in ann(p^{2})$. Recall that for $x,y\in pA$ we have $[\wp^{-1}(x+y)]=[\wp^{-1}(x)]+[\wp^{-1}(y)]$. It follows that \[[f_{j+1}(a)]=[f(a)]\odot [f_{j}(a)]=[\wp^{-1}(a^{\circ p}*(e_{j}+\sum_{i>j}\beta _{i}''e_{i}+a' ))]= [\wp^{-1}(a^{\circ p}*e_{j})] + \sum_{i>j}\beta _{i}''[\wp^{-1}(a^{\circ p}*e_{i})]+[\wp^{-1}(a^{\circ p}*a')]=[e_{j+1}]+\sum_{i>j+1}\beta _{i}'''[e_{i}]\] for some integers $\beta _{i}'''$. Notice that $[\wp^{-1}(a^{\circ p}*a')]=[0]$ since $p^{2}\wp^{-1}(a^{\circ p}*a')=p(a^{\circ p}*a')=0$, because $a'\in ann(p^{2})$. Let $F$ be the vector whose entries are the elements $[f_{i}(a)]$ and let $E$ be the vector whose entries are the elements $[e_{i}]$. Then \[F=ME\] for some upper triangular matrix $M$ (with integer entries) whose diagonal entries are $1$. It follows that $E=M'F$ for some upper triangular matrix $M'$ with integer entries and with $1$'s on the diagonal (because $p^{n}A=0$). By comparing the first entry of $E$ with the first entry of $M'F$ we get the required conclusion. \end{proof} \begin{proposition} Let notation and assumptions be as in Lemma \ref{7}, and let $b\in A$. Then there are integers $\gamma _{1}, \ldots , \gamma _{p-1}$ which depend only on the additive group $(A, +)$ of the brace $A$ and do not depend on the elements $a$ and $b$, such that \[ [a*b]=\sum_{i=1}^{p-1} \gamma _{i}[q_{i}(a,b)] ,\] where $[q_{1}(a,b)]=[f(a)]\odot [b]$, $[q_{2}(a,b)]=[f(a)]\odot [q_{1}(a,b)]$ and $[q_{i+1}(a,b)]=[f(a)]\odot [q_{i}(a, b)]$, where $\odot $ is defined as in Lemma \ref{6}. Moreover, $\gamma _{1}=1$. As usual, by $[a]$ we mean $[a]_{ann(p^{2})}$. \end{proposition} \begin{proof} The proof is similar to the proof of Lemma \ref{7}. By a formula from Lemma \ref{citenote}, \[a^{\circ p}*b=\sum_{i=1}^{p-1}{p\choose i}e_{i}',\] where $e_{1}'=a*b$ and $e_{i+1}'=a*e_{i}'$ for $i=1,2, \ldots , p-2$. This formula works since $n+1<p$. Notice that \[[q_{1}(a,b)]=[f(a)]\odot [b]=[\wp^{-1}((pf(a))*b)]=[\wp^{-1}(a^{\circ p}*b)]= [a*b]+\sum_{i>1}\beta _{i}[e_{i}']\] for some integers $\beta _{i}$. We will proceed by induction on $j$. Assume that \[[q_{j}(a,b)]=[e_{j}']+\sum_{i>j}\beta _{i}'[e_{i}']\] for some integers $\beta _{i}'$.
Reasoning similarly as in the proof of Lemma \ref{7}, we can show that \[[q_{j+1}(a,b)]=[f(a)]\odot [q_{j}(a,b)]=[e_{j+1}']+\sum_{i>j+1}\beta _{i}''[e_{i}']\] for some integers $\beta _{i}''$. Let $F$ be the vector whose entries are the elements $[q_{i}(a,b)]$ and let $E$ be the vector whose entries are the elements $[e_{i}']$. Then \[F=ME\] for some upper triangular matrix $M$ (with integer entries) whose diagonal entries are $1$. It follows that \[E=M'F\] for some upper triangular matrix $M'$ with integer entries and with $1$'s on the diagonal (because $p^{n}A=0$). By comparing the first entry of $E$ with the first entry of $M'F$ we get the required conclusion. This concludes the proof. \end{proof} \section{ Some properties of the function $f$} Let $p$ be a prime number. Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$ with $p>n+1$. Let $a\in A$. Recall that \[f(a)=\sum_{i=1}^{p-1}({{{p-1} \choose {i-1}}/i})e_{i},\] where $e_{1}=a$, $e_{2}=a*a$ and $e_{i+1}=a*e_{i}$ for all $i$, and that $pf(a)=a^{\circ p}$. In this section we investigate properties of this function. \begin{theorem}\label{f(a)} Let $p$ be a prime number. Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$ with $p>n+1$. For $a\in A$ let $f(a)=\sum_{i=1}^{p-1}({{{p-1} \choose {i-1}}/i})e_{i}.$ Then the map \[[a]\rightarrow [f(a)]\] is an injective function on $A/ann(p^{2})$. Consequently, since the set $A/ann(p^{2})$ is finite, this function is a bijection. As usual we denote $[a]=[a]_{ann(p^{2})}$. \end{theorem} \begin{proof} Let $a,b\in A$ and suppose that $[f(a)]=[f(b)]$. Then $f(a)-f(b)\in ann(p^{2})$, hence $p^{2}(f(a)-f(b))=0$, so $pf(a)-pf(b)\in ann(p)$. Recall that $pf(a)=a^{\circ p}$, hence \[a^{\circ p}=b^{\circ p}+e',\] for some $e'\in ann(p)$. We will now show that all products \[[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k})\cdots ))]\] for $x_{1}, \ldots , x_{k}\in\{a,b\}$ are equal. For $k=n+1$ the result is true because all such products of length $n+1$ are zero, since $A^{n+1}=0$. We proceed by induction on $k$, in decreasing order. Let $i$ be a natural number, $i<n+1$. We will show the result is true for $k=i$ provided that it is true for all $k>i$. We will first show that \[[a*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))]=[b*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))]\] for all $x_{1}, \ldots , x_{k-1}\in \{a,b\}$. Observe that $a^{\circ p}-b^{\circ p}\in ann(p)$ yields \[a^{\circ p}*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))= b^{\circ p}*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))+e,\] for some $e\in ann(p)$, since $ann(p)$ is an ideal in the brace $A$ by \cite{passage}. This follows from Lemma \ref{co} applied to $a'=a^{\circ p}$, $b'=b^{\circ p}-a^{\circ p}$ and $c'=x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))$.
By Lemma \ref{citenote} we have: \[a^{\circ p}*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))= p\,a*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))+{\frac {p(p-1)}2}\,a*(a*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))+\cdots .\] Similarly, \[b^{\circ p}*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))= p\,b*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )) +{\frac {p(p-1)}2}\,b*(b*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))) +\cdots .\] The above three equations combined together imply, after applying $\wp^{-1}$ to both sides and passing to cosets, \[[a*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))+{\frac {p-1}2}a*(a*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))+\cdots ]= [b*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))) +{\frac {p-1}2}b*(b*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))+\cdots ] +[\wp^{-1}(e)].\] Notice that $[\wp^{-1}(e)]=[0]$ since $p^{2}\wp^{-1}(e)=pe=0$. Notice that by the inductive assumption \[[a*(a*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))]=[b*(b*(x_{1}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))].\] This also holds for the next products in the above sums (involving more copies of $a$ and $b$), by the inductive assumption. The two above arguments show that \[[a*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))]=[b*(x_{1}*(x_{2}*(\cdots *(x_{k-2}*x_{k-1})\cdots )))]\] for this fixed $k$ and for all $x_{1}, \ldots , x_{k-1}\in \{a,b\}$. Notice now that we can use a similar argument, by putting $a^{\circ p}$ at the $(j+1)$-st place on the left-hand side and $b^{\circ p}$ at the same place on the right-hand side, without changing the elements $x_{1}, x_{2}, \ldots, x_{k-1}$: \[x_{1}*(\cdots *(x_{j}*(a^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )= x_{1}*(\cdots *(x_{j}*(b^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )+e''\] for all $x_{1}, \ldots , x_{k-1}\in \{a,b\}$, for some $e''\in A$. Reasoning similarly as when we showed that $e\in ann(p)$, we obtain that $e''\in ann(p)$. By Lemma \ref{citenote} we have \[a^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))=\sum_{i=1}^{p-1}{p\choose i}y_{i},\] where $y_{1}= a*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))$ and $y_{i+1}=a*y_{i}$ for $i=1,2, \ldots , p-2$. Similarly \[b^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))=\sum_{i=1}^{p-1}{p\choose i}y_{i}',\] where $y_{1}'= b*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))$ and $y_{i+1}'=b*y_{i}'$ for $i=1,2, \ldots , p-2$. By applying $\wp ^{-1}$ to both sides of the equation \[x_{1}*(\cdots *(x_{j}*(a^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )= x_{1}*(\cdots *(x_{j}*(b^{\circ p}*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )+e''\] we obtain, after taking cosets (and using that $[\wp^{-1}(e'')]=[0]$, since $p^{2}\wp^{-1}(e'')=pe''=0$), \[[x_{1}*(\cdots *(x_{j}*\sum_{i=1}^{p-1}\alpha _{i}y_{i})\cdots )]=[x_{1}*(\cdots *(x_{j}*\sum_{i=1}^{p-1}\alpha _{i}y_{i}')\cdots )],\] where ${p\choose i}=p\alpha _{i}$ for each $i$, so $\alpha _{1}=1$. By the inductive assumption, \[[x_{1}*(\cdots *(x_{j}*y_{i})\cdots )]=[x_{1}*(\cdots *(x_{j}*y_{i}')\cdots )]\] for $i>1$.
Therefore \[[x_{1}*(\cdots *(x_{j}*y_{1})\cdots )]=[x_{1}*(\cdots *(x_{j}*y_{1}')\cdots )].\] By the definitions of $y_{1}$ and $y_{1}'$, we obtain \[[x_{1}*(\cdots *(x_{j}*(a*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )]= [x_{1}*(\cdots *(x_{j}*(b*(x_{j+1}*(\cdots *(x_{k-2}*x_{k-1})\cdots ))))\cdots )]\] for all $x_{1}, \ldots , x_{k-1}\in \{a,b\}.$ We have thus proved that \[[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k})\cdots ))]=[x_{1}'*(x_{2}'*(\cdots *(x_{k-1}'*x_{k}')\cdots ))],\] provided that $x_{j}=x_{j}'$ in all places except one, and all $x_{i}, x_{i}'\in \{a,b\}$. Observe now that, changing one place at a time, this implies: \[[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k})\cdots ))]=[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k}')\cdots ))],\] and then \[[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k}')\cdots ))]=[x_{1}*(x_{2}*(\cdots *(x_{k-2}*(x_{k-1}'*x_{k}'))\cdots ))],\] and, continuing to change the $i$-th place at the $i$-th step, we eventually get \[[x_{1}'*(x_{2}'*(\cdots *(x_{k-1}'*x_{k})\cdots ))]=[x_{1}'*(x_{2}'*(\cdots *(x_{k-1}'*x_{k}')\cdots ))].\] These equations imply that \[[x_{1}*(x_{2}*(\cdots *(x_{k-1}*x_{k})\cdots ))]=[x_{1}'*(x_{2}'*(\cdots *(x_{k-1}'*x_{k}')\cdots ))].\] This proves the inductive step. Notice that for $k=1$ we obtain $[a]=[b]$, as required. \end{proof} \begin{theorem}\label{g(a)} Let $p$ be a prime number. Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$ with $p>n+1$. For $a\in A$ define \[g(a)=f^{(p^{p}!-1)}(a),\] where $f^{(1)}(a)=f(a)$ and for every $i$ we denote $f^{(i+1)}(a)=f(f^{(i)}(a))$. Then \[[f(g(a))]=[g(f(a))]=[a].\] Moreover \[[f(a)]=[g^{(p^{p}!-1)}(a)],\] where $g^{(1)}(a)=g(a)$ and for every $i$ we denote $g^{(i+1)}(a)=g(g^{(i)}(a))$. \end{theorem} \begin{proof} Observe first that since $[a]\rightarrow [f(a)]$ is a bijection on $A/ann(p^{2})$, for every $i$ the map $[f^{(i)}(a)]\rightarrow [f^{(i+1)}(a)]$ is a bijective function on $A/ann(p^{2})$, and hence for every $k>0$ \[[a]\rightarrow [f^{(k)}(a)]\] is a bijective function on $A/ann(p^{2})$. Notice that there are $p^{n}+1$ elements $[f^{(1)}(a)], [f^{(2)}(a)], \cdots , [f^{(p^{n}+1)}(a)]$, and since the brace $A$ has cardinality $p^{n}$ it follows that \[[f^{(i)}(a)]=[f^{(j)}(a)],\] for some $1\leq i<j\leq p^{n}+1$, hence $j-i\leq p^{n}\leq p^{p}$. Notice that the function \[[f^{(i)}(a)]\rightarrow [a] \] is a bijective function on $A/ann(p^{2})$. Applying this function to both sides of the equation \[[f^{(i)}(a)]=[f^{(j)}(a)],\] we obtain \[[a]=[f^{(j-i)}(a)].\] Notice that $j-i\leq p^{n}$ implies that $j-i$ divides $p^{n}!$, so it divides $p^{p}!$. Therefore \[[f^{(p^{p}!)}(a)]=[a].\] This shows that \[[f(g(a))]=[g(f(a))]=[a].\] On the other hand, by the first part of this proof it follows that $[a]\rightarrow [g(a)]$ is a bijective function, since \[[g(a)]=[f^{(p^{p}!-1)}(a)].\] Observe now that \[[g^{(p^{p}!-1)}(a)]=[f^{((p^{p}!-1)^{2})}(a)]=[f(a)],\] since $[f^{(p^{p}!)}(a)]=[a]$ and $(p^{p}!-1)^{2}\equiv 1 \bmod p^{p}!$. \end{proof} \section{ Recovering braces from pre-Lie algebras} We are now able to prove the following theorem: \begin{theorem}\label{789} Let $p$ be a prime number and $n<p-1$ be a natural number. Let $(A, +)$ be an abelian group of cardinality $p^{n}$. Let $V$ denote the set of all non-associative words in non-commuting variables $X,Y$ (where $Y$ appears only once, at the end of each word, and $X$ appears at least once in each word in $V$).
Then there are integers $e _{w}$, for $w\in V$, such that only a finite number of them is non-zero and the following holds: For each brace $(A, +, \circ )$ with the additive group $(A, +)$ and for each $a, c\in A$ we have \[[a]* [c]=\sum_{w\in V}e _{w}w([a],[c]),\] where $w([a],[c])$ is the specialisation of the word $w$ for $X=[a]$, $Y=[c]$, and the multiplication in $w([a],[c])$ is the same as the multiplication in the pre-Lie ring $(A/ann (p^{2}), +, \bullet )$ constructed in Theorem \ref{1} from the brace $(A, +, \circ )$. So for example if $w=((XX)X)Y$ then $w([a],[c])=(([a]\bullet [a])\bullet [a])\bullet [c]$. \end{theorem} \begin{proof} By Lemma \ref{7} we have that \[[a]=\sum_{i=1}^{p-1} \alpha _{i}[f_{i}(a)],\] where $[f_{i+1}(a)]=[f(a)]\odot [f_{i}(a)]$. By applying this for $a=g(a)$, where $g(a)$ is as in Theorem \ref{g(a)} (so $g(a)=f^{(p^{p}!-1)}(a)$ and $[f(g(a))]=[a]$), we obtain: \[[g(a)]=\sum_{i=1}^{p-1} \alpha _{i}[h_{i}(a)],\] where $[h_{1}(a)]=[a]$ and $[h_{i+1}(a)]=[a]\odot [h_{i}(a)]$, and $\alpha _{i}$ are some integers. Observe that the integers $\alpha _{i}$ depend only on the additive group $(A, +)$ of the brace $A$ and do not depend on the multiplicative group $(A, \circ )$ nor on the element $a$. Therefore $[g(a)]$ can be obtained by applying the operations $+$ and $\odot $ several times to some copies of the element $[a]$, and the method is the same for all braces with the same additive group $(A, +)$. {\bf Fact 1.} By Theorem \ref{g(a)} we obtain $[f(a)]=[g^{(p^{p}!-1)}(a)]$; therefore $[f(a)]$ can be obtained by applying the operations $\odot $ and $+$ several times to some copies of the element $[a]$, and the method and the order of applying these operations do not depend on the element $a$; the same method works for all braces with the additive group $(A, +)$. {\bf Fact 2.} By Lemma \ref{7} and the Proposition following it, $[a*b]=[a]*[b]$ can be written by using the operations $\odot $ and $+$ applied several times to some copies of the element $[f(a)]$, to the element $[b]$, and to sums of the obtained elements (and the method does not depend on the elements $f(a)$ and $b$); the same method works for all braces with the additive group $(A, +)$. $ $ By combining the above Fact $1$ and Fact $2$ we obtain that $[a]*[b]$ can be obtained by applying the operations $\odot $ and $+$ to copies of the elements $[a]$ and $[b]$, and the algorithm prescribing in which order to apply the operations $\odot, +$ does not depend on the elements $a, b$. Moreover, the same method works for all braces with the additive group $(A, +)$. By using Theorem \ref{11} we can write the product $\odot $ by using the pre-Lie operation $\bullet $ and $+$. Notice that the operation $+$ in this pre-Lie ring is the same as the addition in the factor brace $(A/ann(p^{2}), +, \circ )$, so it depends only on the additive group of the brace $A$. Since $\bullet $ is a pre-Lie algebra multiplication, it is distributive with respect to the addition, so the obtained result can be simplified and written as \[\sum_{w(a,b)\in W}\beta _{w}w(a,b),\] where $W$ is the set of products of some number of copies of the element $[a]$ and the element $[b]$ at the end, under the pre-Lie operation $\bullet $, and where $\beta _{w}$ are some integers which do not depend on the elements $a, b$ and do not depend on the multiplicative group of the brace $A$ (they depend only on the additive group $(A, +)$). This concludes the proof. \end{proof} \section{ Braces $A/ann(p^{4})$ and groups of flows } In this section we will prove Theorem \ref{Main}.
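Before the proof, the following small Python sketch may help to visualise the group-of-flows construction; it is only an illustration, using the assumed toy example of the associative (hence pre-Lie) nilpotent ring on $\mathbb{Z}/p^{n}\mathbb{Z}$ with $x\cdot y=pxy$. For this example the formula for $*$ recalled in the proof of Lemma \ref{similar} below reproduces the adjoint multiplication $a\circ b=a+b+pab$ of the brace used in the earlier sketch; the truncations and divisions by $k$ and $k!$ make sense because the nilpotency index is smaller than $p$.

\begin{verbatim}
# Sketch (illustrative assumption: the associative nilpotent ring
# A = Z/p^n with x.y = p*x*y, which is in particular a pre-Lie ring).
p, n = 5, 3
M = p**n

def dot(x, y):                 # pre-Lie product x . y = p*x*y (mod p^n)
    return (p * x * y) % M

def inv(k):                    # inverse of k modulo p^n (gcd(k, p) = 1)
    return pow(k, -1, M)

def omega(a):                  # Omega(a) = "log(1+a)" in this ring:
    s, power = 0, a            # sum_{k>=1} (-1)^(k-1) a^{.k} / k
    for k in range(1, n + 1):  # higher terms lie in p^n A = 0
        s = (s + (-1) ** (k - 1) * power * inv(k)) % M
        power = dot(power, a)  # next power a^{.(k+1)}
    return s

def flows_star(a, b):          # a*b = sum_{k>=1} L_{Omega(a)}^k(b) / k!
    w, s, term, fact = omega(a), 0, b, 1
    for k in range(1, n + 1):
        fact *= k
        term = dot(w, term)    # L_{Omega(a)}^k(b)
        s = (s + term * inv(fact)) % M
    return s

# The group of flows recovers the adjoint group of the ring:
for a in range(M):
    for b in range(M):
        assert flows_star(a, b) == dot(a, b)  # so a o b = a + b + p*a*b
\end{verbatim}

In a genuinely non-associative pre-Lie ring the iterates $L_{\Omega(a)}^{k}(b)$ no longer collapse to ring powers; that general situation is the one treated in the proof below.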
Notice that, as shown in \cite{passage}, the construction of the group of flows is well defined for every left nilpotent pre-Lie ring $(A, +, \cdot)$. Moreover, the group of flows is a brace with the same addition as in the original pre-Lie ring, and we will call this brace simply the group of flows of the pre-Lie ring $(A, +, \cdot)$. The construction of the group of flows was first introduced in \cite{AG}. The connection between braces and groups of flows was discovered by Wolfgang Rump in 2014 \cite{Rump}. We begin with a lemma similar to the result obtained in the last section of \cite{shsm}. \begin{lemma}\label{similar} Let $(A, +, \cdot)$ be a left nilpotent pre-Lie ring of cardinality $p^{n}$ for some prime number $p$ and some natural number $n<p-1$. Let $(A/ann(p^{2}), +, \cdot)$ be the factor pre-Lie ring by the ideal $ann(p^{2})=\{a\in A:p^{2}a=0\}$. Elements of this pre-Lie ring are cosets $[a]=\{a+i:i\in ann(p^{2})\}$. We denote the multiplication and the addition in this pre-Lie ring by the same symbols as in the original ring: \[[a]\cdot [b]=[a\cdot b], \qquad [a]+[b]=[a+b].\] Let $(A, +, \circ)$ be the brace obtained from the pre-Lie ring $(A, +, \cdot)$ using the construction of the group of flows. Let $(A/ann (p^{2}), +, \bullet )$ be the corresponding pre-Lie ring, constructed as in \cite{shsm} from the brace $(A, +, \circ )$ (notice that this construction is quoted in Theorem \ref{1}). Then the pre-Lie rings $(A/ann (p^{2}), +, \bullet )$ and $(A/ann(p^{2}), +, \cdot)$ are related by \[[x]\bullet [y]=(p-1)[x\cdot y],\] for $[x], [y]\in A/ann(p^{2})$. \end{lemma} \begin{proof} Notice that the addition in both pre-Lie rings is the same as the addition in the additive group of the factor brace $A/ann(p^{2})$. We need to show that \[[x]\bullet [y]=(p-1)[x\cdot y],\] for $[x], [y]\in A/ann(p^{2})$. We will use the same notation for the construction of the group of flows as in \cite{passage}. Recall that the formula for the operation $*$ in the group of flows of the pre-Lie algebra $(A, +, \cdot )$ is \[a*b= {\Omega (a)}\cdot b+{\frac 1{2!}}{\Omega (a)}\cdot ({\Omega (a)}\cdot b)+ {\frac 1{3 !}}{\Omega (a)}\cdot ({\Omega (a)}\cdot ({\Omega (a)}\cdot b))+\ldots ,\] where $a\circ b=a*b+a+b$ in the brace $(A, +, \circ)$. By Lemma \ref{aga} (Lemma $11$ from \cite{passage}) we have that $\Omega (a)=a+\sum_{w }\alpha _{w}w(a)$ for some integers $\alpha _{w}$, where $w$ ranges over finite non-associative words in the variable $x$ (of degree at least $2$), and $w(a)$ is the specialisation of $w$ at $a$ (so for example if $w=x\cdot (x\cdot x)$ then $w(a)=a\cdot (a\cdot a)$). Moreover, there is $m$ such that $\alpha _{w}=0$ whenever the word $w$ has degree larger than $m$. Let $a,b\in A$, and denote $a'=pa$. Observe now that \[[\xi ^{i}a]\odot [b]=[\wp^{-1}((\xi ^{i}a')*b)]=[\wp^{-1}({\Omega (\xi ^{i}a')}\cdot b+{\frac 1{2!}}{\Omega (\xi ^{i}a')}\cdot ({\Omega (\xi ^{i}a')}\cdot b)+\ldots )].\] Observe also that \[\Omega (\xi ^{i}a')=\xi ^{i}a'+\sum_{w }\alpha _{w}w(\xi ^{i}a')=\xi ^{i}a'+\sum_{k=2}^{m}(\xi ^{i})^{k}f_{k}(a'),\] where \[f_{k}(a')=\sum_{w\in W_{k}}\alpha _{w}w(a'),\] and $W_{k}$ consists of the words of degree $k$. Therefore, \[[\xi ^{i}a]\odot [b]=[\wp^{-1}( \xi ^{i}a'\cdot b+\sum_{k=2}^{mp}(\xi ^{i})^{k}t_{k}(a',b))],\] where $t_{k}(a',b)$ is a linear combination (with integer coefficients) of products of $k$ copies of $a'$ and the element $b$ at the end, under the operation $\cdot $.
Notice that, for $k\geq p-1$, each such product is zero, because it belongs to $p^{p-1}A=0$ (recall that $a'=pa$ and that the operation $\cdot $ is distributive). Therefore, \[[\xi ^{i}a]\odot [b]=[\wp^{-1}( \xi ^{i}a'\cdot b+\sum_{k=2}^{p-2}(\xi ^{i})^{k}t_{k}(a',b))].\] Notice that \[\xi ^{p-1-i}[\xi ^{i}a]\odot [b]=[\wp^{-1}( a'\cdot b+\sum_{k=2}^{p-2}(\xi ^{k-1})^{i}t_{k}(a',b))].\] Recall also the formula $[a]\bullet [b]=\sum_{i=0}^{p-2}\xi ^{p-1-i}[\xi ^{i}a]\odot [b]$, where $\xi ^{p-1}\equiv 1 \mod p^{n}$. Combining the above equations together we get \[[a]\bullet [b]=[\wp^{-1}( \sum_{i=0}^{p-2} (a'\cdot b+\sum_{k=2}^{p-2}(\xi ^{k-1})^{i}t_{k}(a',b)))].\] Notice that $(1-\xi ^{i})\sum_{j=0}^{p-2}(\xi ^{i})^{j}=1-(\xi ^{i})^{p-1}\equiv 0 \bmod p^{p}$, and that $1-\xi ^{i}$ is invertible modulo $p^{p}$ provided that $0<i<p-1$ (by Lemma \ref{Engelxi}); hence $\sum_{j=0}^{p-2}(\xi ^{i})^{j}\equiv 0 \bmod p^{p}$ for $0<i<p-1$. Consequently \[[a]\bullet [b]=[\wp^{-1}(\sum_{i=0}^{p-2}a'\cdot b)]=[\wp^{-1}((p-1)(a'\cdot b) )]=(p-1) [a\cdot b],\] since $\cdot $ is a pre-Lie algebra operation, so \[a'\cdot b=(pa)\cdot b=p(a\cdot b).\] This concludes the proof. \end{proof} {\bf Remark 2.} Let $(A, +, \circ _{1})$ and $(A, +, \circ _{2})$ be two braces with the same additive group $(A, +)$. Suppose that the cardinality of $A$ is $p^{n}$ for some natural number $n$ and some prime number $p>n+1$. Let $(A/ann(p^{2}), +, \bullet _{1})$ and $(A/ann(p^{2}), +, \bullet _{2})$ be the corresponding pre-Lie rings constructed as in \cite{shsm} (see Theorem \ref{1} for details). Note that the set $ann(p^{2})=\{a\in A: p^{2}a=0\}$ is the same in both braces. Therefore the set $A/ann(p^{2})$ can be defined by using only the additive group $(A, +)$, without using the operations $\circ _{1}, \circ _{2}$. Let $(A/ann (p^{2}), +, \circ _{1})$ be the factor brace obtained from the brace $(A, +, \circ _{1})$ and let $(A/ann (p^{2}), +, \circ _{2})$ be the factor brace obtained from the brace $(A, +, \circ _{2})$. As usual we use the same notation for the operations $+, \circ $ in a brace and in its factor brace. Observe that by Theorem \ref{789} if $x \bullet_{1} y=x \bullet _{2} y$ for all $x,y\in A/ann(p^{2})$ then $x\circ _{1} y=x\circ _{2}y$ for $x,y\in A/ann(p^{2})$, and hence the braces $(A/ann (p^{2}), +, \circ _{1})$ and $(A/ann (p^{2}), +, \circ _{2})$ are the same. $ $ {\bf Proof of Theorem \ref{Main}.} As usual we will denote the operations of the addition and of the multiplication in a factor brace by the same symbols as in the original brace. Let \[(A, +, \circ )\] be a brace of cardinality $p^{n}$ for some natural number $n$ and some prime number $p>n+1$. Recall that $ann(p^{2})=\{a\in A:p^{2}a=0\}$. Let \[(A/ann (p^{2}), +, \bullet )\] be the pre-Lie ring constructed as in \cite{shsm} from the brace $(A, +, \circ )$ (notice that this construction is quoted in Theorem \ref{1}). We will denote by $I$ the subset of elements of additive order at most $p^{2}$ in the group $(A/ann (p^{2}), +)$. Observe that \[I=\{[a]\in A/ann(p^{2}): [p^{2}a]=[0]\}.\] Observe that we can define the factor pre-Lie ring $((A/ann (p^{2}))/I, +, \bullet _{1} )$ of the pre-Lie ring $(A/ann (p^{2}), +, \bullet )$ by the ideal $I$. In this pre-Lie ring $((A/ann (p^{2}))/I, +, \bullet _{1} )$ we have $[x]_{I}\bullet _{1} [y]_{I}=[x\bullet y]_{I}$ and $[x]_{I}+[y]_{I}=[x+y]_{I}$ for $x,y\in A/ann(p^{2})$ (where $[x]_{I}, [y]_{I}$ are elements of the pre-Lie ring $(A/ann (p^{2}))/I$). We will call the pre-Lie ring \[((A/ann (p^{2}))/I, +, \bullet _{1} )\] the {\em pre-Lie ring $1$}. On the other hand, let \[(A/ann(p^{2}), +, \circ )\] be the factor brace of the brace $(A, +, \circ )$ by the ideal $ann(p^{2})$.
Recall that $I$ denotes the set of elements of additive order at most $p^{2}$ in the group $(A/ann(p^{2}), +)$. Let \[((A/ann(p^{2}))/I, + ,\bullet _{2})\] be the pre-Lie ring constructed as in \cite{shsm} (so constructed as in Theorem \ref{1}) from the brace $(A/ann(p^{2}), +, \circ )$. We will call this pre-Lie ring the {\em pre-Lie ring $2$}. We also consider the factor brace of the brace \[(A/ann(p^{2}), +, \circ )\] by the ideal $I$. We will call this {\em brace $2$}, and denote it as $((A/ann(p^{2}))/I, +, \circ )$. $ $ We will now show that pre-Lie ring $1$ is the same as pre-Lie ring $2$. Observe that the additive groups of these pre-Lie rings are the same, both being $((A/ann (p^{2}))/I, +)$. Observe that for $x,y\in A$ by $[x], [y] $ we denote elements of the additive group $(A/ann (p^{2}), +)$ and by $[[x]]_{I}, [[y]]_{I}$ we denote elements of the additive group $((A/ann (p^{2}))/I, +)$. We need to show that \[[[x]]_{I}\bullet _{1} [[y]]_{I}=[[x]]_{I}\bullet _{2} [[y]]_{I},\] for $x,y\in A$. Observe that \[[[x]]_{I}\bullet _{1} [[y]]_{I}=[[x]\bullet [y] ]_{I}=[[\wp^{-1}((px)\cdot y)]]_{I},\] where for $x,y\in A$ we define $x\cdot y=\sum_{i=0}^{p-2}\xi ^{p-1-i}((\xi ^{i}x)*y).$ Observe also that \[[[x]]_{I}\bullet _{2} [[y]]_{I}= [\wp^{-1}([px] \cdot [y])]_{I},\] where for $x,y\in A$ we define $[x]\cdot [y]=\sum_{i=0}^{p-2}\xi ^{p-1-i}([\xi ^{i}x]*[y])$. By the definition of $I$, we know that if $z, w\in A/ann(p^{2})$ then $[z]_{I}=[w]_{I}$ if and only if $z-w\in I$, which means $p^{2}z=p^{2}w$. Therefore, to show that $[[x]]_{I}\bullet _{1} [[y]]_{I}=[[x]]_{I}\bullet _{2} [[y]]_{I}$ it suffices to show that \[p^{2}[\wp^{-1}((px)\cdot y)]=p^{2}\wp^{-1}([px] \cdot [y]).\] This is equivalent to \[p[(px)\cdot y]=p([px] \cdot [y]),\] and this is true by the definition of the factor brace $(A/ann(p^{2}), +, *)$. Consequently, the operations $\bullet _{1}$ and $\bullet _{2}$ are the same. It follows that the pre-Lie ring $1$ is the same as the pre-Lie ring $2$. We will now introduce {\em pre-Lie ring $3$}. Consider the pre-Lie ring $(A/ann(p^{2}), +, \bullet _{3})$ such that the addition in this pre-Lie ring is the same as the addition in the pre-Lie ring $(A/ann (p^{2}), +, \bullet )$, and the multiplication is defined as \[[x]\bullet _{3} [y]=-(1+p+p^{2}+\cdots +p^{p-1})([x]\bullet [y]),\] for $x,y\in A$. Notice that this gives a well defined pre-Lie ring (see \cite{Lazard} for a proof of this remark). We will call the pre-Lie ring $(A/ann(p^{2}), +, \bullet _{3})$ {\em pre-Lie ring $3$}. We will now introduce {\em brace $1$}. Let $(A/ann (p^{2}), +, \circ _{3})$ be the brace which is constructed as the group of flows from pre-Lie ring $3$. Let $((A/ann (p^{2}))/I, +, \bullet _{4})$ be the pre-Lie ring constructed from the brace $(A/ann (p^{2}), +, \circ _{3})$ by the construction from Theorem \ref{1} (so by the construction from \cite{shsm}). We will call this pre-Lie ring \[((A/ann (p^{2}))/I, +, \bullet _{4})\] the {\em pre-Lie ring $4$}. By Lemma \ref{similar} we obtain that the addition in the pre-Lie ring $1$ is the same as the addition in the pre-Lie ring $4$. Observe also that by Lemma \ref{similar} we have, for $a,b\in A/ann (p^{2})$, \[[a]_{I}\bullet _{4} [b]_{I}=(p-1)([a]_{I}\bullet _{3}[b]_{I})=-(1+p+p^{2}+\cdots +p^{p-1})(p-1)([a]_{I}\bullet _{1}[b]_{I})=[a]_{I}\bullet _{1} [b]_{I},\] since $(p-1)(-(1+p+p^{2}+\cdots +p^{p-1}))\equiv 1 \bmod p^{p}$ and $p^{p}a=0$ for every $a\in A/ann(p^{2})$. Therefore the pre-Lie ring $4$ is the same as the pre-Lie ring $1$.
Let {\em brace $1$} be the factor brace of the brace $(A/ann (p^{2}), +, \circ _{3})$ by the ideal $I$. Observe that brace $1$ is the group of flows of {\em pre-Lie ring $5$}, which is the factor ring of pre-Lie ring $3$ by the ideal $I$ (because the brace $(A/ann (p^{2}), +, \circ _{3})$ is the group of flows of pre-Lie ring $3$). Recall that pre-Lie ring $1$ is the same as pre-Lie ring $2$. Moreover, pre-Lie ring $4$ is the same as pre-Lie ring $1$. Therefore pre-Lie ring $4$ is the same as pre-Lie ring $2$. By using Theorem \ref{789} we obtain that brace $1$ and brace $2$ are the same, since it is possible to recover brace $1$ from pre-Lie ring $4$, and brace $2$ from pre-Lie ring $2$, by the formula from Theorem \ref{789}. Therefore brace $2$ is a group of flows, since brace $1$ is the group of flows of a left nilpotent pre-Lie algebra. It remains to show that brace $2$ is the same as the factor brace of the brace $(A, +, \circ )$ by the ideal $ann(p^{4})$. Notice that we can map $[[a]]_{I}\rightarrow [a]_{ann(p^{4})}$ for $a\in A$. This map is well defined because if $[[a]]_{I}=[[b]]_{I}$ then $[a]-[b]$ is in $I$, so $[p^{2}a]=[p^{2}b]$; this in turn means that $p^{2}(p^{2}a)=p^{2}(p^{2}b)$, which means $[a]_{ann (p^{4})}=[b]_{ann(p^{4})}$. Reading the same argument backwards shows that the map is injective, and it is clearly surjective, hence it is a bijection. Notice that this map is a homomorphism of braces since $[[a]]_{I}*[[b]]_{I}=[[a*b]]_{I}\rightarrow [a*b]_{ann(p^{4})}=[a]_{ann(p^{4})}*[b]_{ann(p^{4})}$. This shows that brace $2$ is the same as the factor brace $(A/ann(p^{4}), +, \circ )$, which concludes the proof of Theorem \ref{Main}. \section{ Some results from other papers which were used in previous sections} For the convenience of the reader we recall some results from other papers which were used in previous sections. All of these results are also listed in \cite{shsm}. By a result of Rump \cite{rump}, for a prime number $p$, every brace of order $p^{n}$ is left nilpotent. Recall that Rump introduced {\em left nilpotent} and {\em right nilpotent} braces and radical chains $A^{i+1}=A*A^{i}$ and $A^{(i+1)}=A^{(i)}*A$ for a left brace $A$, where $A=A^{1}=A^{(1)}$. A left brace $A$ is left nilpotent if there is a number $n$ such that $A^{n}=0$, where inductively $A^{i}$ consists of sums of elements $a*b$ with $a\in A, b\in A^{i-1}$. A left brace $A$ is right nilpotent if there is a number $n$ such that $A^{(n)}=0$, where $A^{(i)}$ consists of sums of elements $a*b$ with $a\in A^{(i-1)}, b\in A$. We recall Lemma 15 from \cite{Engel}: \begin{lemma}\label{citeEngel}\cite{Engel} Let $s$ be a natural number and let $(A, +, \circ)$ be a left brace such that $A^{s}=0$ for some $s$. Let $a, b\in A$, and as usual define $a*b=a\circ b-a-b$. Define inductively elements $d_{i}=d_{i}(a,b), d_{i}'=d_{i}'(a, b)$ as follows: $d_{0}=a$, $d_{0}'=b$, and for $0\leq i$ define $d_{i+1}=d_{i}+d_{i}'$ and $d_{i+1}'=d_{i}*d_{i}'$. Then for every $c\in A$ we have \[(a+b)*c=a*c+b*c+\sum _{i=0}^{2s} (-1)^{i+1}((d_{i}*d_{i}')*c-d_{i}*(d_{i}'*c)).\] \end{lemma} Let $A$ be a brace, let $a\in A$ and let $n$ be a natural number. Let $a^{\circ n}=a\circ \cdots \circ a$ denote the product of $n$ copies of $a$ under the operation $\circ $. We recall Lemma 14 from \cite{note} (Lemma 15 in the arXiv version): \begin{lemma}\label{citenote}\cite{note} Let $A$ be a left brace, let $a, b\in A$ and let $n$ be a positive integer. Then, \[a^{\circ n} *b = \sum_{i=1}^{n}{n\choose i}e_{i},\] where $e_{1} = a*b$, $e_{2} = a*e_{1}$, and for each $i$, $e_{i+1} =a*e_{i}$.
Moreover, \[a^{\circ n} = \sum_{i=1}^{n}{n\choose i}a_{i},\] where $a_{1} = a$, $a_{2} = a*a_{1}$, and for each $i$, $a_{i+1} =a*a_{i}$. \end{lemma} Let $A$ be a brace; by $A^{\circ p^{i}}$ we denote the subgroup of $(A, \circ )$ generated by the elements $a^{\circ p^{i}}$, where $a\in A$. We recall Proposition $15$ from \cite{passage}: \begin{lemma}\label{citepassage}\cite{passage} Let $i,n$ be natural numbers. Let $A$ be a brace of cardinality $p^{n}$ for some prime number $p>n+1$. Then, $p^{i}A=\{p^{i}a:a\in A\}$ is an ideal in $A$ for each $i$. Moreover \[A^{\circ p^{i}}=p^{i}A.\] \end{lemma} $ $ We also recall Lemma $4$ from \cite{Engel}: \begin{lemma}\label{Engelxi}\cite{Engel} Let $p>2$ be a prime number. Let $\xi=\gamma ^{p^{p-1}}$, where $\gamma $ is a primitive root modulo $p^{p}$; then $\xi ^{p-1}\equiv 1 \mod p^{p}$. Moreover, $\xi ^{j}$ is not congruent to $1$ modulo $p$ for natural numbers $0<j<p-1$. \end{lemma} Let $A$ be a brace of cardinality $p^{n}$ where $p$ is a prime number larger than $n+1$. Denote $pA=\{pa:a\in A\}$, where $pa$ is the sum of $p$ copies of $a$. We now recall several results from \cite{shsm}: \begin{proposition}\label{b}\cite{shsm} Let $n$ be a natural number. Let $A$ be a brace of cardinality $p^{n}$ where $p$ is a prime number larger than $n+1$. Then $pA$ is a brace, and the product of any $p-1$ elements of $pA$ is zero. Therefore, $pA$ is a strongly nilpotent brace of nilpotency index not exceeding $p-1$. Moreover, every product of $i$ elements from the brace $pA$ and any number of elements from $A$ belongs to $p^{i}A$. Hence the product of any $p-1$ elements from the brace $pA$ and any number of elements from the brace $A$ is zero. \end{proposition} \begin{lemma}\label{co}\cite{shsm} Fix a prime number $p$. Let $W$ be as above. Then there are integers $\beta _{w}$, only finitely many of which are non-zero, such that the following holds: For each brace $(A, +, \circ )$ of cardinality $p^{n}$ with $n<p-1$ and for each $a, b\in pA$, $c\in A$ we have \[(a+b)*c=a*c+b*c+ a*(b*c)-(a*b)*c+\sum_{w\in W}\beta _{w}w\langle a, b, c\rangle ,\] where $w\langle a, b, c\rangle $ is a specialisation of the word $w$ for $X=a$, $Y=b$, $Z=c$, and the multiplication in $w\langle a,b,c\rangle $ is the same as the operation $*$ in the brace $A$ (recall that $a*b=a \circ b-a-b$). So for example if $w=(((XX)X)Y)Z$ then $w\langle a,b, c\rangle =(((a*a)*a)*b)*c$. \end{lemma} \begin{corollary}\label{777}\cite{shsm} Let $p$ be a prime number and let $m$ be an integer. Let $W'$ be the set of nonassociative words in variables $X, Z$ where $Z$ appears only once, at the end of each word, and $X$ appears at least twice in each word. Then there are integers $\gamma _{w}$, only finitely many of which are non-zero, such that the following holds: For each brace $(A, +, \circ )$ of cardinality $p^{n}$ with $p>n+1$ and for each $a, c\in A$ we have \[(ma)*c=m(a*c)+\sum_{w\in W'}\gamma _{w}w\langle a,c\rangle ,\] where $w\langle a, c\rangle $ is a specialisation of the word $w$ for $X=a$, $Z=c$, and the multiplication in $w\langle a,c\rangle $ is the same as the operation $*$ in the brace $A$ (recall that $a*c=a \circ c-a-c$). \end{corollary} \begin{theorem}\label{1}\cite{shsm} Let $(A, +, \circ )$ be a brace of cardinality $p^{n}$, where $p$ is a prime number such that $p>n+1$. Let $ann (p^{2})$ be defined as before, so $ann(p^{2})=\{a\in A: p^{2}a=0\}$. Let $\odot $ be defined as in \cite{shsm}, so $[x]\odot [y]=[\wp^{-1}((px)*y)]$.
Let $\xi =\gamma ^{p^{p-1}}$, where $\gamma $ is a primitive root modulo $p^{p}$. Define a binary operation $\bullet $ on $A/ann(p^{2})$ as follows: \[[x]\bullet [y]=\sum_{i=0}^{p-2} \xi ^{p-1-i} [\xi ^{i}x]\odot [y],\] for $x,y\in A$. Then $A/ann(p^{2})$ with the binary operations $+$ and $\bullet $ is a pre-Lie ring. \end{theorem} The following result was proved in \cite{passage}: \begin{lemma}\label{aga}\cite{passage} Let $(A, +, \cdot)$ be a left nilpotent pre-Lie ring of cardinality $p^{n}$ for some prime $p>n+1$. Let $\Omega:A\rightarrow A$ be the inverse function of $W$ (so $W(\Omega (a))=a$), where $W(a)=e^{L_{a}}(1)-1$ and $1\in A^{identity}$. Then there are $\alpha _{w}\in \mathbb Z$ not depending on $a$ such that $\Omega (a)=a+\sum_{w }\alpha _{w}w(a)$, where the $w$ are finite nonassociative words in the variable $x$ (of degree at least $2$), and $w(a)$ is the specialisation of $w$ at $a$ (so for example if $w=x\cdot (x\cdot x)$ then $w(a)=a\cdot (a\cdot a)$). \end{lemma} {\bf Acknowledgements.} The author acknowledges support from the EPSRC programme grant EP/R034826/1 and from the EPSRC research grant EP/V008129/1.
\begin{document} \topmargin 0pt \oddsidemargin=-0.4truecm \evensidemargin=-0.4truecm \renewcommand{\thefootnote}{\fnsymbol{footnote}} \newpage \setcounter{page}{0} \begin{titlepage} \begin{flushright} FTUV/94--26\\ 15A-40561-INT94-13\\ \end{flushright} \begin{center} {\large NEUTRINO MAGNETIC MOMENTS AND THE SOLAR NEUTRINO PROBLEM} \footnote{Talk given at the 6th International Symposium ``Neutrino Telescopes'', Venice, February 22--24, 1994} \\ \vspace{0.5cm} {\large E.Kh. Akhmedov} \footnote{On leave from NRC ``Kurchatov Institute'', Moscow 123182, Russia} \vspace{0.5cm}\\ {\em Institute of Nuclear Theory, Henderson Hall, HN-12,\\ University of Washington, Seattle, WA 98195}\\ {\em and}\\ {\em Instituto de Fisica Corpuscular (IFIC--CSIC)\\ Departamento de Fisica Teorica, Universitat de Valencia\\ Dr. Moliner 50, 46100 Burjassot (Valencia), Spain}\\ \vspace{0.5cm} \end{center} \begin{abstract} The present status of the neutrino magnetic moment solutions of the solar neutrino problem is reviewed. In particular, we discuss a possibility of reconciling different degrees of suppression and time variation of the signal (or lack of such a variation) observed in different solar neutrino experiments. It is shown that the resonant spin--flavor precession of neutrinos due to the interaction of their transition magnetic moments with the solar magnetic field can account for all the available solar neutrino data. For not too small neutrino mixing angles ($\sin 2\theta_0 \buildrel > \over {_{\sim}} 0.2$) the combined effect of the resonant spin--flavor precession and neutrino oscillations can result in an observable flux of solar $\bar{\nu}_{e}$'s. \end{abstract} \end{titlepage} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{footnote}{0} \newpage \section{Introduction} The solar neutrino problem, i.e. the deficiency of the observed flux of solar neutrinos as compared to the predictions of the standard solar model, remains one of the major unresolved puzzles of modern physics and astrophysics. Although the astrophysical solution of the problem is not yet completely ruled out, it is very unlikely to be the true reason for the discrepancy, provided all the experimental data (and in particular, the results of the Homestake experiment for the whole period of its operation) are taken seriously. It is for this reason that the particle--physics solutions to the problem are currently considered to be more favorable \cite{BHL,B,Sm1}. There are several possible neutrino--physics solutions of the solar neutrino problem, the most popular one being resonant neutrino oscillations in the matter of the sun (the MSW effect \cite{MSW}). In my talk I will concentrate, however, on another type of solution, related to the possible existence of large magnetic or transition magnetic moments of neutrinos.
In this case neutrino spin precession \cite{C,VVO} or spin--flavor precession \cite{VVO,Akhm1,LM} can occur in the magnetic field of the sun, converting a fraction of solar $\nu_{eL}$ into $\nu_{eR}$ or into $\nu_{\mu R}$, $\nu_{\tau R}$, $\bar{\nu}_{\mu R}$ or $\bar{\nu}_{\tau R}$. Although $\bar{\nu}_{\mu R}$ and $\bar{\nu}_{\tau R}$ are not sterile, they cannot be observed in the Homestake, SAGE and GALLEX experiments and can only be detected with a small cross section in the Kamiokande experiment. Spin--flavor precession of neutrinos can be resonantly enhanced in the matter of the sun \cite{Akhm1,LM}, in direct analogy with the MSW effect. Neutrino spin precession and resonant spin--flavor precession (RSFP) can account for both the deficiency of solar neutrinos and time variations of the solar neutrino flux in anticorrelation with solar activity, for which there are some indications in the Homestake data. This comes about because the toroidal magnetic field of the sun is strongest in the periods of active sun. Some remarks about the time structure of the Homestake data are in order. The data compare better with an assumption of a time--dependent signal than with that of a constant one, hinting at an anticorrelation with solar activity. The existing analyses of the data using different statistical methods gave fairly large values of the correlation coefficient between the data and the sun--spot number \cite{BP,BSSS,K,FV,NM}. These analyses, however, were performed before 1990 and so did not take into account the more recent runs 109--126. These runs do not show a tendency to vary in time, similarly to runs 19--59. A recent analysis of Stanev \cite{St}, which updated that of ref. \cite{BSSS}, included the runs 109--126 and showed that this results in the correlation coefficient being decreased by an order of magnitude as compared to the previously obtained one, but the correlation probability is still large: the confidence level of the correlation with the sunspot number $s$ is 0.96 instead of 0.996, and that of the correlation with $s|z|$, where $z$ is the latitude of the line of sight, is 0.99 instead of 0.9993. The correlation with the 22-yr cycle is even better than the correlation with the 11-yr one. Therefore, the possibility that the solar neutrino flux anticorrelates with solar activity still persists and deserves further study. At the same time, the Kamiokande group did not observe any time variation of the solar neutrino signal in their experiment, which allowed them to put an upper limit on the possible time variation, $\Delta Q/Q < 30\%$ at 90\% c.l. Therefore a question naturally arises as to whether one can reconcile a strong time variation in the Homestake experiment with a small (or no) time variation in Kamiokande. Recently, it has been shown \cite{ALP1} (see also \cite{MN2,Pu,Kr}) that the RSFP scenario is capable of accounting for all the existing solar neutrino data, including their time structure or lack of such a structure. In particular, it can explain the mild suppression of the flux in the gallium experiments and naturally reconcile the strong time variations of the signal observed in the Homestake experiment with the small time variations allowed by the Kamiokande data. Let me now briefly describe how this works.
\section{GALLEX and SAGE} Although the gallium solar neutrino experiments SAGE and GALLEX have been operating for too short a time to confirm or disprove 11-yr variations of the signal, they still provide information which is relevant for the magnetic moment scenarios. The point is that most of the data have been taken during a period of high solar activity. One could therefore expect a strong suppression of the signal in the gallium experiments, which has not been observed. This disfavors the ordinary spin precession scenario, since it is neutrino-energy independent and so predicts the same degree of suppression and time variation of the signal in all the solar neutrino experiments. At the same time, the resonant spin--flavor precession is strongly energy dependent and so naturally results in different suppressions and time variations in different experiments. In particular, the $pp$ neutrinos, which are expected to give the major contribution to the signal in the gallium experiments, have low energies and so should encounter the RSFP resonance at high densities, somewhere in the radiation zone or in the core of the sun (since the resonant density is inversely proportional to neutrino energy). We know that the magnetic field does exist and may be quite strong in the convective zone of the sun ($0.7R_\odot\leq r \leq R_\odot$). However, it is not clear if a strong enough magnetic field can exist deeper in the sun, i.e. in the radiation zone or in the solar core. If the inner magnetic field of the sun is weak, the RSFP will not be efficient there and the $pp$ neutrinos will leave the sun intact, in accordance with the observations of GALLEX and SAGE. One can turn the argument around and ask the following question: if we believe in the RSFP mechanism, what is the maximal allowed inner magnetic field which is not in conflict with the gallium experiments? The answer turns out to be $(B_i)_{max}\approx 3\times 10^6$ G, assuming a neutrino transition magnetic moment $\mu=10^{-11}\mu_B$ \cite{ALP1}. \section{Reconciling Homestake and Kamiokande data} It is more difficult to explain how one can reconcile the strong time dependence of the signal observed in the Homestake experiment with no or very little time variation of the Kamiokande data. The key points here are the following \cite{Akhm2,BMR,OS,ALP1}: (1) The two experiments are sensitive to slightly different parts of the solar neutrino spectrum: Homestake is sensitive to both energetic ${}^8$B neutrinos and medium--energy ${}^7$Be and $pep$ neutrinos, whereas the Kamiokande experiment is only sensitive to the high--energy part of the ${}^8$B neutrinos ($E>7.5$ MeV); (2) For Majorana neutrinos, the RSFP converts left--handed $\nu_{e}$ into right--handed $\bar{\nu}_{\mu}$ (or $\bar{\nu}_{\tau}$) which are sterile for the Homestake experiment (since their energy is less than the muon or tauon mass) but do contribute to the event rate in the Kamiokande experiment through their neutral--current interaction with electrons. Although the $\bar{\nu}_{\mu}e$ cross section is smaller than the $\nu_{e}e$ one, it is non-negligible, which reduces the amplitude of the time variation of the signal in the Kamiokande experiment. It turns out that the above two points are enough to account for the differences in the time dependences of the signals in the Homestake and Kamiokande experiments.
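Schematically, and in our own notation rather than that of the original analyses, point (2) amounts to writing the Kamiokande rate as \[Q_{Kam}\propto \sigma (\nu_{e}e)\,\Phi (\nu_{e})+\sigma (\bar{\nu}_{\mu }e)\,\Phi (\bar{\nu}_{\mu }),\] where $\sigma (\bar{\nu}_{\mu }e)/\sigma (\nu_{e}e)$ is roughly $1/6$--$1/7$ at the relevant energies ($\bar{\nu}_{\mu }$'s scatter on electrons through neutral currents only). Thus even a complete $\nu_{eL}\rightarrow \bar{\nu}_{\mu R}$ conversion suppresses the Kamiokande signal only by this factor, whereas the Homestake rate is proportional to the surviving $\nu_{e}$ flux alone.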
The calculated event rates in the Homestake and Kamiokande experiments decrease with increasing convective zone magnetic field until they reach their minima, and then start to increase. The minimum of the Kamiokande signal is situated at a lower magnetic field than that of the Homestake signal, due to the energy dependence of the RSFP and the above point (1). Also, it is shallower than the minimum of the Homestake signal, due to point (2). For these reasons, for a certain range of variation of the solar magnetic field $B_{\bot}$ the Homestake signal can decrease significantly with increasing $B_{\bot}$ whereas the Kamiokande signal is near its minimum and therefore does not change much \cite{BMR,OS,ALP1}. \section{Fitting the data} In a recent paper \cite{ALP1} all the available solar neutrino data have been analyzed in the framework of the RSFP, disregarding neutrino mass mixing. However, in the general case one should include neutrino mixing effects as well. The motivation for that is as follows: (1) RSFP requires non-vanishing flavor-off-diagonal neutrino magnetic moments, i.e. implies lepton flavor non-conservation. Therefore neutrino oscillations $must$ also take place. In general one should therefore consider the RSFP and neutrino oscillations (including the MSW effect) jointly. The results of ref. \cite{ALP1} are only valid in the small mixing angle limit. (2) It has been shown in \cite{ALP1} that all the existing solar neutrino data can be fitted within the RSFP scenario for certain model magnetic field profiles and certain values of the neutrino parameters $\mu$ and $\Delta m^2$. It would be interesting to see how neutrino mixing modifies these results. (3) In ref. \cite{Akhm3} it has been suggested that the combined action of the RSFP and the MSW effect in the convective zone of the sun can relax the lower limit on the product $\mu B_{\bot}$ of the neutrino magnetic moment and solar magnetic field required to account for the data. The main idea was that the MSW effect can assist the RSFP in causing the time variations of the neutrino flux by improving the adiabaticity of the RSFP (this can occur when the RSFP and MSW resonances overlap). It would be interesting to confront this idea with the new experimental data. The combined action of the RSFP and the MSW effect on solar neutrinos has been considered in a number of papers \cite{LM,Akhm4,MN1,BalHL,Akhm5}. However, the data of the gallium experiments were not available at that time. We therefore re-analyzed all the available solar neutrino data in the framework of the RSFP scenario, taking into account possible neutrino mixing and oscillation effects \cite{ALP2}. It was assumed that the $\nu_{e}$ -- $\nu_{\mu}$ mixing is generated by a Majorana neutrino mass term, and that there exists a $\nu_{eL}$ -- $\bar{\nu}_{\mu R}$ transition magnetic moment $\mu$.
The evolution equation for a system of two Majorana neutrinos and their antiparticles in the flavor basis is \begin{equation} i\frac{d}{dt}\left(\begin{array}{l} \nu_{eL}\\ \bar{\nu}_{eR}\\ \nu_{\mu L}\\ \bar{\nu}_{\mu R} \end{array}\right ) {}~=~\left ( \begin{array}{cccc} N_1-c_{2}\delta & 0 & s_{2}\delta & \mu B_{\bot}\\ 0 & -N_1-c_{2}\delta & -\mu B_{\bot} & s_{2}\delta\\ s_{2}\delta & -\mu B_{\bot} & N_2+c_{2}\delta & 0\\ \mu B_{\bot} & s_{2}\delta & 0 & -N_2+c_{2}\delta \end{array}\right )\left(\begin{array}{l} \nu_{eL}\\ \bar{\nu}_{eR}\\ \nu_{\mu L}\\ \bar{\nu}_{\mu R} \end{array}\right) \end{equation} Here $B_{\bot}(t)$ is the transverse magnetic field, $$N_1\equiv \sqrt{2}G_{F}(n_{e}-n_{n}/2),~~N_2\equiv \sqrt{2}G_{F}(-n_{n}/2), {}~~\delta\equiv \Delta m^{2}/4E,$$ \begin{equation} s_{2}\equiv\sin 2\theta_{0},~~c_{2}\equiv \cos 2\theta_{0}, \end{equation} where $G_{F}$ is the Fermi constant, $n_{e}$ and $n_{n}$ are the electron and neutron number densities, the rest of the notation being obvious. The zeros in the effective Hamiltonian in eq. (1) are related to the fact that diagonal magnetic moments of Majorana neutrinos are precluded by $CPT$ invariance. Note that the RSFP resonance for the $\nu_{eL}\rightarrow \bar{\nu}_{\mu R}$ transition occurs where the corresponding diagonal entries of the effective Hamiltonian coincide, i.e. at $\sqrt{2}G_{F}(n_{e}-n_{n})=(\Delta m^{2}/2E)\cos 2\theta_{0}$. We have calculated the neutrino signals in the chlorine, gallium and Kamiokande experiments using ten different model magnetic field profiles (see \cite{ALP2} for more details). The results of our analysis are briefly summarized below. (1) For small mixing angles, $\sin 2\theta_0\buildrel < \over {_{\sim}}$0.1, the results of our previous study \cite{ALP1} are only slightly modified. (2) For moderate mixing angles, $\sin 2\theta_0\buildrel > \over {_{\sim}}$0.2, some of the magnetic field profiles which proved to give a good fit of the data for vanishing $\theta_0$ no longer work: they result in too strong a suppression of the signal in the gallium experiments, since the adiabaticity of the MSW effect for the low--energy $pp$ neutrinos becomes too good. A reasonable fit can still be achieved for very large mixing angles, $\sin 2\theta_0\approx 1$, but in this case a large flux of electron antineutrinos would be produced, in contradiction with an upper limit derived from the Kamiokande and LSD data \cite{BFMM,LSD} (see below, point (5)). (3) A possible way out of this situation is to use model magnetic field profiles with their maximum shifted towards the outer regions of the convective zone. This would require lower values of $\Delta m^2$ for the RSFP to be efficient, which in turn would decrease the adiabaticity of the MSW effect, and the flux of the $pp$ neutrinos would be essentially unsuppressed. We have tried three such new magnetic field configurations and they produced a good fit to all the data. (4) Typical values of the neutrino parameters required to account for the data are $\Delta m^2 \simeq (10^{-8}$--$10^{-7})$ eV$^2$, $\sin 2\theta_0 \buildrel < \over {_{\sim}}$ 0.2--0.4, depending on the magnetic field configuration; for a neutrino transition magnetic moment $\mu=10^{-11}\mu_B$ the maximum magnetic field in the solar convective zone should vary in time in the range (15--30) kG. (5) As noted above (points (2) and (3)), some magnetic field configurations which used to give a good fit to the data for vanishing $\theta_0$ no longer do so for not too small mixing angles and, conversely, some other profiles which failed to reproduce the data for $\theta_0=0$ do give a good fit for moderate $\theta_0$.
This is, in fact, a rather unpleasant situation: whether or not a given magnetic field profile fits the data depends on the neutrino mixing angle, which is unknown. A possible way out is to look for the $\bar{\nu}_{eR}$ signal from the sun. The point is that if neutrinos experience the RSFP in the sun and also have mass mixing, a flux of electron antineutrinos can be produced which is in principle detectable in the SNO, Super--Kamiokande and Borexino experiments, even in the case of moderate neutrino mixing angles \cite{LM,Akhm4,Akhm5,RBLBPP,BL1}. The main mechanism of $\bar{\nu}_{eR}$ production is $\nu_{eL}\rightarrow \bar{\nu}_{\mu R}\rightarrow \bar{\nu}_{eR}$, where the first transition is due to the RSFP in the sun and the second one is due to the vacuum oscillations of antineutrinos on their way between the sun and the earth. The salient feature of this flux is that it should vary in time in $direct$ correlation with solar activity. The detection of the solar $\bar{\nu}_{eR}$ flux would be a signature of the combined effect of the RSFP and neutrino oscillations. It could allow one to discriminate between the small mixing angle and moderate mixing angle solutions. The $\bar{\nu}_{eR}$ flux can be significantly enhanced if the solar magnetic field changes its direction along the neutrino trajectory \cite{APS1,APS2,BL2}. In this case one can have a detectable $\bar{\nu}_{eR}$ flux even if the neutrino magnetic moment is too small or the solar magnetic field is too weak to account for the solar neutrino problem \cite{BL2,AS}. To summarize, the RSFP is a viable scenario which is capable of accounting for all the presently existing solar neutrino data, including their time structure (or lack of such a structure). It also gives very specific predictions for the forthcoming solar neutrino experiments, such as a strong time dependence of the ${}^7$Be neutrino flux, the absence of suppression and time variation in the neutral--current events at SNO, and an observable flux of solar $\bar{\nu}_{eR}$'s for moderate neutrino mixing angles. \section*{Acknowledgements} The hospitality of the National Institute for Nuclear Theory at the University of Washington, where this work was completed, is gratefully acknowledged. This work has been supported by a sabbatical grant from the Spanish Ministry of Education and Science and by the U.S. Department of Energy under grant DE-FG06-90ER40561.
\subsection{Semantics and Observational Equivalence for \ensuremath{\mathsf{DPIA}}\xspace} \label{sec:equivalence} We use Reddy's relationally parametric automata-theoretic semantics of SCIR \cite{DBLP:journals/entcs/Reddy13}. In this model, the interpretations of phrase types are parameterised by automata describing the permitted state transitions. The model supports relationally parametric reasoning \cite{reynolds83types}, which enables reasoning about locality and information hiding (following \cite{DBLP:journals/jacm/OHearnT95}), and the use of automata permits reasoning about phrases that do not affect the state (i.e. passive phrases). Reddy's model does not include indexed types or compound data types, as we have in \ensuremath{\mathsf{DPIA}}\xspace, but these are straightforward to add to the model. Indexed types are interpreted by set-theoretic functions whose codomain depends on the input (we do not interpret the data type polymorphism using parametricity because we are only interested in parametric reasoning for local state). Compound data types are interpreted as the following sets when they appear in expressions: {\small \begin{mathpar} \sem{\mathsf{num}} = \mathrm{num} \sem{\delta_1 \times \delta_2} = \sem{\delta_1} \times \sem{\delta_2} \sem{\underline{n}.\delta} = \{0,\dots,n-1\} \to \sem{\delta} \end{mathpar}} where the set $\mathrm{num}$ is some set of number-like objects used for scalar values. This interpretation of data types allows us to straightforwardly interpret the functional primitives of \autoref{fig:func-prim} in the model, using the standard interpretation of $\prim{map}$ and $\prim{reduce}$ explicitly given in \cite{SteuwerFeLiDu2015/icfp}. This yields a coincidence property that we state in \autoref{sec:coincidence} below. Acceptors for compound data types are interpreted using the separating product construction, which ensures that disjoint components of compound acceptors are always non-interfering. This allows us to interpret $\prim{parfor}~n$ as $n$ parallel transformations on $n$ pieces of disjoint state. Reddy's semantics assigns an interpretation to every \ensuremath{\mathsf{DPIA}}\xspace phrase. For observational equivalence of \ensuremath{\mathsf{DPIA}}\xspace programs, we are interested in closed programs. A closed program is a command phrase whose free identifiers all have types of the form $\mathsf{var}[\delta]$, for possibly different $\delta$. Our notion of observational equivalence is standard. Two well-typed phrases $\typ{\Delta}{\Pi}{\Gamma}{P_1, P_2}{\theta}$ are equivalent iff for all closing contexts $C[-]$, the programs $C[P_1]$ and $C[P_2]$, when instantiated with the standard interpretation of variables (\cite{DBLP:journals/entcs/Reddy13}, Figure 2), describe the same mapping of initial to final values. We formally write $\typ{\Delta}{\Pi}{\Gamma}{P_1 \simeq P_2}{\theta}$, or informally $P_1 \simeq P_2$. Note that this relation is automatically an equivalence and is congruent with all the constructs of \ensuremath{\mathsf{DPIA}}\xspace. For the purposes of compilation, this notion of equivalence is justifiable: we are only interested in the relationship between the initial and final values of each variable, not the intermediate states. \subsection{Functional Coincidence} \label{sec:coincidence} Our correctness criterion will have no force if we do not first establish that the assignment $\mathit{out} :=_\delta E$ means what we think it means. We state our coincidence property formally as follows.
Let $\typ{\cdot}{x_1 : \mathsf{exp}[\delta_1], ..., x_n : \mathsf{exp}[\delta_n]}{\cdot}{E}{\mathsf{exp}[\delta]}$ be some expression phrase of \ensuremath{\mathsf{DPIA}}\xspace built from the functional primitives in \autoref{fig:func-prim}. Let $\sem{E} : \sem{\delta_1} \times ... \times \sem{\delta_n} \to \sem{\delta}$ be the functional reference semantics of $E$. Use $E$ to construct a closed phrase: \begin{displaymath} \typ{\cdot}{\cdot}{v_1 : \mathsf{var}[\delta_1], ..., v_n : \mathsf{var}[\delta_n], \mathit{out} : \mathsf{var}[\delta]}{\mathit{out}.1 :=_\delta E[v_1.2/x_1, ..., v_n.2/x_n]}{\mathsf{comm}} \end{displaymath} Then for all $a_1 \in \sem{\delta_1}, ..., a_n \in \sem{\delta_n}, a \in \sem{\delta}$, the interpretation of this command maps the store $(\mathit{v_1} \mapsto a_1, ..., v_n \mapsto a_n, \mathit{out} \mapsto a)$ to the store $(\mathit{v_1} \mapsto a_1, ..., v_n \mapsto a_n, \mathit{out} \mapsto \sem{E}(a_1, ..., a_n))$. In other words, this program updates the variable $\mathit{out}$ with the result of the expression, and leaves every other variable unaffected. We use \citet{SteuwerFeLiDu2015/icfp}'s interpretation of the functional primitives in our \ensuremath{\mathsf{DPIA}}\xspace semantics, so this property is immediate. \subsection{Correctness of the Translation from Functional to Imperative} \label{sec:correctness-proof} We structure our proof by first stating a collection of equivalences that can be proved in Reddy's model, and then use them to prove that the translation of \autoref{sec:translation-i} and \autoref{sec:translation-ii} is correct (\autoref{thm:correctness}). The properties of contextual equivalences that we use in our proof, in addition to the fact that $\simeq$ is a congruent equivalence relation, are as follows. \begin{enumerate} \item $\beta\eta$-equality for non-dependent and dependent functions: {\small\begin{mathpar} (\lambda x.~P)Q \simeq P[Q/x] P \simeq (\lambda x.~P x) (\Lambda x.~P)e \simeq P[e/x] P \simeq (\Lambda x.~P x) \end{mathpar}} Full $\beta\eta$-equality for functions is one of the defining features of Idealised Algol and its descendants \cite{Reynolds97}. These are all justified by Reddy's model (and indeed almost all models of Idealised Algol-like languages). \item The $\prim{parfor}$-based implementation of $\prim{mapI}$ (\autoref{sec:translation-ii}) satisfies the following equivalence: \begin{smallequation}\label{propty:map} \prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o.~o:=_{\delta_2}F~x)~E~A \quad \simeq \quad A :=_{n.\delta_2} \prim{map}~n~\delta_1~\delta_2~F~E \end{smallequation} By the definition of array assignment given in \autoref{sec:translation-i}, this property is equivalent to: \begin{displaymath} \prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o.~o:=_{\delta_2}F~x)~E~A \quad \simeq \quad \prim{mapI}~n~\delta_2~\delta_2~(\lambda x~o.~o:=_{\delta_2}x)~(\prim{map}~n~\delta_1~\delta_2~F~E)~A \end{displaymath} Expanding the definition of $\prim{mapI}$, and $\beta$-reducing, we must show: \begin{displaymath} \qquad\prim{parfor}~n~A~(\lambda i~o.~o:=_{\delta_2}F~(\prim{idx}~n~\delta_1~E~i)) \simeq \prim{parfor}~n~A~(\lambda i~o.~o:=_{\delta_2}\prim{idx}~n~\delta_2~(\prim{map}~n~\delta_1~\delta_2~F~E)~i) \end{displaymath} which is immediate from the way that array data types are interpreted as functions from indices to values. \item Reddy's model validates the following equivalence involving the use of temporary storage.
For all expressions $E$ and continuations $C$ that are non-interfering, we have: \begin{smallequation}\label{propty:temp-storage} \prim{new}~\delta~(\lambda \mathit{tmp}.~\mathit{tmp}.1 :=_\delta E; C(\mathit{tmp}.2)) \quad \simeq \quad C(E) \end{smallequation} This equivalence relies crucially on the fact that $C$ and $E$ cannot interfere, so we can take a complete copy of $E$ before invoking $C$. If $C$ were able to write to storage that is read by $E$, then it would not be safe to cache $E$ before invoking $C$. In Reddy's model, we use parametricity to relate the two uses of $C$: one in a store that contains the state that $E$ reads, and one in a store that contains the result of evaluating $E$. Using parametricity and restriction to only the identity state transition on $E$'s portion of the store further ensures that $C$ does not interfere with $E$. \item The $\prim{for}$-loop based implementation of $\prim{reduceI}$ should satisfy the following equivalence. \begin{smallequation}\label{propty:reduce} \prim{reduceI}~n~\delta_1~\delta_2~(\lambda x~y~o.~o :=_{\delta_2}F~x~y)~I~E~C \quad\simeq\quad C(\prim{reduce}~n~\delta_1~\delta_2~F~I~E) \end{smallequation} Substituting in the implementation of $\prim{reduceI}$, and $\beta$-reducing, this is equivalent to showing: \begin{displaymath} \prim{new}~\delta_2~(\lambda v.~v.1 :=_{\delta_2} I; \prim{for}~n~(\lambda i.~v.1 :=_{\delta_2} F~v.2~(\prim{idx}~E~i)); C(v.2)) \simeq C(\prim{reduce}~n~\delta_1~\delta_2~F~I~E) \end{displaymath} Because the acceptor-expression pair $v$ has been freshly allocated, it acts like a so-called ``good variable'' in Idealised Algol terminology. This means that the following equivalence holds, using the fact that neither $F$ nor $E$ interferes with $v$: \begin{displaymath} \prim{for}~n~(\lambda i.~v.1 :=_{\delta_2} F~v.2~(\prim{idx}~E~i)) \simeq v.1 :=_{\delta_2} \prim{reduce}~n~\delta_1~\delta_2~F~v.2~E \end{displaymath} Now \autoref{propty:reduce} follows from \autoref{propty:temp-storage}. \item Finally, we need agreement between the data layout combinators and their acceptor counterparts: \begin{displaymath} \begin{array}{lcl} A :=_{\delta_1 \times \delta_2}~\prim{pair}~\delta_1~\delta_2~E_1~E_2 &\simeq&(\prim{pairAcc_1}~\delta_1~\delta_2~A :=_{\delta_1} E_1; \prim{pairAcc_2}~\delta_1~\delta_2~A :=_{\delta_2} E_2) \\ A :=_{n.(\delta_1 \times \delta_2)}~\prim{zip}~n~\delta_1~\delta_2~E_1~E_2 &\simeq& (\prim{zipAcc_1}~n~\delta_1~\delta_2~A :=_{n.\delta_1} E_1; \prim{zipAcc_2}~n~\delta_1~\delta_2~A :=_{n.\delta_2} E_2) \\ A :=_{n.m.\delta} \prim{split}~n~m~\delta~E &\simeq& \prim{splitAcc}~n~m~\delta~A :=_{nm.\delta} E \\ A :=_{nm.\delta} \prim{join}~n~m~\delta~E &\simeq& \prim{joinAcc}~n~m~\delta~A :=_{n.m.\delta} E \end{array} \end{displaymath} The first equivalence follows directly from the definition of assignment at pair type, and $\beta$-reduction for pairs. The others are all straightforwardly justified in Reddy's model, given the interpretation of acceptors for compound data types using separating products, described above.
\end{enumerate} \begin{theorem} \label{thm:correctness} The translations $\semA{-}_-(-)$ and $\semE{-}_-(-)$ defined in \autoref{fig:transone} and \autoref{fig:transtwo} satisfy the following observational equivalences for all acceptors $A$ and functional expressions $E$ with disjoint sets of active identifiers: {\small\begin{mathpar} \semA{E}_\delta(A) \simeq A :=_\delta E \semE{E}_\delta(C) \simeq C(E) \end{mathpar}} \end{theorem} \begin{proof} By mutual induction on the steps of the translation process. The cases for variables in both translations are immediate. The cases for the first-order combinators on numbers follow from the induction hypotheses and $\beta$-reduction. For example, for $\prim{negate}$: \begin{displaymath} \semA{\prim{negate}~E}_{\mathsf{num}}(A) = \semE{E}_{\mathsf{num}}(\lambda x.~A := \prim{negate}~x) \simeq (\lambda x.~A := \prim{negate}~x)(E) \simeq A := \prim{negate}~E \end{displaymath} The cases for the first-order combinators in the continuation-passing translation are similar. The acceptor-passing translation of $\prim{map}$ uses the induction hypothesis to establish the correctness of the translations of the subterms, $\beta$-equality, and then the correctness property of $\prim{mapI}$ (\autoref{propty:map}): \begin{displaymath} \begin{array}{lcl} \semA{\prim{map}~n~\delta_1~\delta_2~F~E}_{n.\delta_2}(A) &=&\semE{E}_{n.\delta_1}(\lambda x.~\prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o.~\semA{F~x}_{\delta_2}(o))~x~A) \\ &\simeq& \prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o.~\semA{F~x}_{\delta_2}(o))~E~A \\ &\simeq& \prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o.~o:=_{\delta_2} F~x)~E~A \\ &\simeq& A :=_{n.\delta_2} \prim{map}~n~\delta_1~\delta_2~F~E \end{array} \end{displaymath} The continuation-passing translation of $\prim{map}$ relies on the acceptor-passing translation and additionally on \autoref{propty:temp-storage}, which states that the use of temporary storage is unobservable. The acceptor-passing and continuation-passing translations for $\prim{reduce}$ both rely on \autoref{propty:reduce}, establishing the correctness of $\prim{reduceI}$. The acceptor-passing translations of the data layout combinators rely on the corresponding properties for $\prim{zip}$, $\prim{split}$, $\prim{join}$ and $\prim{pair}$. The acceptor-passing cases for $\prim{fst}$ and $\prim{snd}$ follow from the induction hypothesis and $\beta$-equality. The correctness of the continuation-passing translations for the data layout combinators also follows by applying the induction hypothesis and using $\beta$-equality. \end{proof} \subsection{Experimental Setup} With the help of the original authors \citep{SteuwerFeLiDu2015/icfp}, we reproduced their results using the same methodology. We used three different OpenCL platforms: 1) an Nvidia GeForce GTX TITAN X with CUDA 8 and driver 375.26 installed; 2) an AMD Radeon HD 7970 GPU with AMD-APP 3.0 and driver 15.300 installed; 3) an Intel Xeon E5530 CPU with 8 physical cores distributed across two sockets and hyper-threading enabled. We used the same set of benchmarks with two input sizes. For \emph{scal}, \emph{asum}, and \emph{dot}, we used vectors of 16 (small) and 128 (large) million elements. For \emph{gemv}, input matrices of $4096^2$ (small) and $8192^2$ (large) elements were used. We used the OpenCL profiling API to measure OpenCL kernel runtimes; CPU runtimes were measured using the \textit{gettimeofday} function. We did not measure data transfer time, as we were only interested in the quality of the generated OpenCL kernel.
Each experiment was repeated 1000 times and we report median runtimes. We compare against the manually written and optimised code from the vendor-provided libraries: CUBLAS version 8.0 from Nvidia, clBLAS version 2.12 from AMD, and MKL version 11.1 from Intel. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.315\linewidth} \includegraphics[width=\linewidth]{plots/nvidia} \caption{Nvidia GPU} \label{fig:results-nvidia} \end{subfigure} \hspace{0.015\linewidth} \begin{subfigure}[b]{0.315\linewidth} \includegraphics[width=\linewidth]{plots/amd} \caption{AMD GPU} \label{fig:results-amd} \end{subfigure} \hspace{0.015\linewidth} \begin{subfigure}[b]{0.315\linewidth} \includegraphics[width=\linewidth]{plots/intel} \caption{Intel CPU} \label{fig:results-intel} \end{subfigure} \caption{Performance comparison of code compiled via the formal translation from DPIA to OpenCL vs. informal translation of ICFP~2015~\citep[cf.][]{SteuwerFeLiDu2015/icfp} and vs. platform-specific libraries. The formal translation from DPIA to OpenCL introduces no performance overhead compared to ICFP~2015 and matches or outperforms highly tuned libraries on all three platforms. } \label{fig:results} \end{figure*} \subsection{Overhead of Formal Translation} \autoref{fig:results} shows the runtime performance of the OpenCL kernels generated via the formal translation described in \autoref{sec:translation}. The graphs are normalised by the performance of the OpenCL code generated from the technique described by \citet{SteuwerFeLiDu2015/icfp} (labelled ICFP 2015). Bars lower than 1.0 indicate a performance loss and bars higher than 1.0 a performance gain. The performance of the OpenCL code generated by the method of \citeauthor{SteuwerFeLiDu2015/icfp} and the code generated from DPIA is almost identical in all cases, with less than 5\% difference. This demonstrates that our formal translation process does not introduce significant overheads. \subsection{Performance Comparison vs. Platform-Specific Libraries} The performance results comparing DPIA-generated OpenCL kernels against the platform-specific libraries provided by Nvidia, AMD, and Intel show that for most benchmarks and input sizes the generated code matches the library performance. In some cases, such as \emph{gemv} on AMD or \emph{asum} on Intel, we clearly outperform the library implementations by a factor of up to five. These performance results are similar to the results published by \citet{SteuwerFeLiDu2015/icfp} and show that by exploring parallelisation strategies using semantics-preserving rewrite rules it is possible to outperform manually written code. In this paper, we have extended the formal rewriting from the purely functional to the imperative level while achieving the same impressive performance results. \subsection{Expressing Parallelisation Strategies in Functional Code} Here is an expression that describes the dot product of two vectors $\mathit{xs}$ and $\mathit{ys}$: \begin{smallequation} \label{code-ex:simple-dot-product} \prim{reduce}~(+)~0~(\prim{map}~(\lambda x.~\prim{fst}~x*\prim{snd}~x)~(\prim{zip}~\mathit{xs}~\mathit{ys})) \end{smallequation} This expression can be read in two ways. Firstly, read mathematically, it is a declarative specification of the dot product. Secondly, it can be read as a strategy for computing dot products. Reading right-to-left, we have a pipeline arrangement.
Let us make the following assumptions: \emph{i)}~$\prim{zip}$ is not materialised (it only affects how later parts of the pipeline read their input); \emph{ii)}~$\prim{map}$ is executed in parallel across the array; and \emph{iii)}~$\prim{reduce}$ is executed sequentially. Then we can read this expression as embodying a naive ``parallel map, sequential reduce'' strategy. Such a naive strategy is not always best. If we try to execute one parallel job per element of the input arrays, then depending on the underlying architecture we will either fail (e.g., on GPUs with a fixed number of execution units), or generate so many threads that coordination of them will dominate the runtime (e.g., on CPUs). The overall strategy of ``parallel, then sequential'' is likely not the most efficient, either. We can give a more refined strategy given information about the underlying architecture. For instance, GPUs support nesting of parallelism by organising threads into groups, or \emph{work-items} into \emph{work-groups}, using OpenCL terminology. If we know that the input is of size $n \times 128 \times 2048$, we can explicitly control how parallelism can be mapped to the GPU hierarchy. The following expression distributes the work among $n$ groups of $128$ local threads, each processing $2048$ elements in one go, by directly reducing over the multiplied pairs of elements: \begin{smallequation} \label{code-ex:dot-product-complex} \begin{array}[t]{@{}l} \prim{reduce}~(+)~0~ (\prim{join}~(\prim{mapWorkgroup}\\ \qquad\qquad\qquad\quad\begin{array}[t]{@{}l} (\lambda \mathit{zs}_1.~\prim{mapLocal}~(\lambda\mathit{zs}_2.~\prim{reduce}~ (\lambda x~a.~(\prim{fst}~x * \prim{snd}~x) + a)~0~(\prim{split}~2048~\mathit{zs}_2))~\mathit{zs}_1) \\ (\prim{split}~(2048 * 128)~(\prim{zip}~\mathit{xs}~\mathit{ys}))))) \end{array} \end{array} \end{smallequation} Although this expression gives much more information about how to process the computation on the GPU, we have not left the functional paradigm, so we still have access to the straightforward mathematical reading of this expression. We can use equational reasoning to prove that this is semantically equivalent to (\ref{code-ex:simple-dot-product}). Equational reasoning can also be used to generate (\ref{code-ex:dot-product-complex}) from (\ref{code-ex:simple-dot-product}). Indeed \citet{SteuwerFeLiDu2015/icfp} have shown that stochastic search techniques are effective at automatically discovering parallelisation strategies that match hand-coded ones. However, even with a specified parallelisation strategy we cannot execute this code directly. We need to translate the functional code to an imperative language like OpenCL or CUDA in a way that preserves our chosen strategy. This paper presents a formal approach to solving this translation problem. \subsection{Strategy Preserving Translation to Imperative Code} What is the simplest way of converting a functional program to an imperative one? Starting with our zip-map-reduce formulation of dot-product (\ref{code-ex:simple-dot-product}), we can turn it into an imperative program simply by assigning its result to an output variable $\mathit{out}$: \begin{displaymath} \mathit{out} := \prim{reduce}~(+)~0~(\prim{map}~(\lambda x.~\prim{fst}~x*\prim{snd}~x)~(\prim{zip}~\mathit{xs}~\mathit{ys})) \end{displaymath} Unfortunately, this is not suitable for compilation targets like OpenCL or CUDA. 
While assignment statements are the bread-and-butter of such languages, their expression languages certainly do not include such modern amenities as higher-order $\prim{map}$ and $\prim{reduce}$ functions. To translate these away, we introduce a novel \emph{acceptor-passing} translation $\semA{E}_\delta(\mathit{out})$. The key idea is that for any expression $E$ producing data of type $\delta$, the translation $\semA{E}_\delta(\mathit{out})$ is an imperative program that has the same effect as the assignment $\mathit{out} := E$ and is free from higher-order combinators. This translation is mutually defined with a continuation-passing translation $\semE{E}_\delta(C)$ that takes a parameterised command $C$ that will consume the output, instead of taking an output variable. The definition of the translation is given in \autoref{sec:translation-i}. We introduce it here by example. Applied to our dot-product code, our translation first replaces the $\prim{reduce}$ by a corresponding imperative combinator $\prim{reduceI}$. We will see below that $\prim{reduceI}$ is straightforwardly implemented in terms of variable allocation and a for-loop. \begin{displaymath} \begin{array}{cl} & \semA{\prim{reduce}~(+)~0~(\prim{map}~(\lambda x.~\prim{fst}~x*\prim{snd}~x)~(\prim{zip}~\mathit{xs}~\mathit{ys}))}_{\mathsf{num}}(\mathit{out}) \\[.25em] =&\begin{array}[t]{@{}l} \semE{\prim{map}~(\lambda x.~\prim{fst}~x*\prim{snd}~x)~(\prim{zip}~\mathit{xs}~\mathit{ys})}_{n.\mathsf{num}}(\lambda x.~\\ \quad \semE{0}_{\mathsf{num}}(\lambda y.~ \prim{reduceI}~n~(\lambda x~y~o.~\semA{x + y}_{\mathsf{num}}(o))~y~x~(\lambda r.~\semA{r}(\mathit{out})))) \end{array} \end{array} \end{displaymath} The $\prim{map}$ is now translated, by the continuation-passing translation, into allocation of a temporary array and an imperative $\prim{mapI}$ combinator. As with $\prim{reduceI}$, the $\prim{mapI}$ combinator is straightforwardly implementable in terms of a (parallel) for-loop. The operator $\prim{new}~\delta~\mathit{ident}$ declares a new storage cell of type $\delta$ named $\mathit{ident}$, where storage cells are represented as pairs of an acceptor (i.e., ``writer'', ``l-value'') part $\mathit{ident}.1$ and an expression (i.e. ``reader'', ``r-value'') part $\mathit{ident}.2$. Our language, which we introduce in \autoref{sec:typeSystem}, is a variant of Reynolds' Idealised Algol \citep{Reynolds78}. \begin{displaymath} \begin{array}{cl} =&\prim{new}~(n.\mathsf{num})~(\lambda \mathit{tmp}.~ \begin{array}[t]{@{}l} \semE{\prim{zip}~\mathit{xs}~\mathit{ys}}_{n.(\mathsf{num}\times\mathsf{num})}(\lambda x.~ \prim{mapI}~n~(\lambda x~o.~\semA{\prim{fst}~x * \prim{snd}~x}_{\mathsf{num}}(o))~x~\mathit{tmp}.1); \\ \semE{0}_{\mathsf{num}}(\lambda y.~ \prim{reduceI}~n~(\lambda x~y~o.~\semA{x + y}_{\mathsf{num}}(o))~y~\mathit{tmp}.2~(\lambda r.~\semA{r}(\mathit{out})))) \end{array} \end{array} \end{displaymath} Readers familiar with other translations of data parallel functional programs into imperative loops may be surprised at the allocation of a temporary array here. Typically, the compilation process would be expected to automatically fuse the computation of the $\prim{map}$ into the translation of the $\prim{reduce}$. However, this is precisely what we do \emph{not} want from a predictable compilation process for parallelism. If fusion is desired, it is carried out before this translation is applied and directly encoded in the functional program, as seen earlier in example~(\ref{code-ex:dot-product-complex}).
The parallelism strategy described by the functional code here precisely states ``parallel map, followed by sequential reduce''. Predictability of the translation is essential for more complex parallelism strategies that exploit parallelism hierarchies and even different memory address spaces, as we will see later in \autoref{sec:opencl:addrspace}. Continuing the translation process, we substitute $\mathit{out}$, the arithmetic expressions and the $\prim{zip}$, leaving two uses of the ``intermediate-level'' combinators $\prim{mapI}$ and $\prim{reduceI}$: \begin{displaymath} \begin{array}{cl} =&\prim{new}~(n.\mathsf{num})~(\lambda \mathit{tmp}.~ \begin{array}[t]{@{}l} \prim{mapI}~n~(\lambda x~o.~o := \prim{fst}~x * \prim{snd}~x)~(\prim{zip}~\mathit{xs}~\mathit{ys})~\mathit{tmp}.1; \\ \prim{reduceI}~n~(\lambda x~y~o.~o := x + y)~0~\mathit{tmp}.2~(\lambda r.~\mathit{out} := r)) \\ \end{array} \end{array} \end{displaymath} These combinators are now replaced by parallel and sequential for-loops, which we describe in a further translation stage in \autoref{sec:translation-ii}. \begin{smallequation}\label{codeex:dpia-dot-product} \begin{array}{cl} =&\prim{new}~(n.\mathsf{num})~(\lambda \mathit{tmp}.~\begin{array}[t]{@{}l} \prim{parfor}~n~\mathit{tmp}.1~(\lambda i~o.~ o := \prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i) * \prim{snd}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i)); \\ \prim{new}~\mathsf{num}~(\lambda \mathit{accum}.~ \begin{array}[t]{@{}l} \mathit{accum}.1 := 0;\\ \prim{for}~n~(\lambda i.~\mathit{accum}.1 := \mathit{accum}.2 + \prim{idx}~\mathit{tmp}.2~i); \\ \mathit{out} := \mathit{accum.2})) \\ \end{array} \\ \end{array} \\ \end{array} \end{smallequation} The sequential $\prim{for}$ loops of our intermediate language are standard; $\prim{for}~n~(\lambda i.~b)$ executes the body $b$ $n$ times with iteration counter $i$. Parallel $\prim{parfor}$ loops are slightly more complex due to the way they explicitly take a parameter (here named $\mathit{tmp}.1$) that describes where to place the results of each iteration in a data-race-free way. We describe this fully in \autoref{sec:dpia-prims}. We are now left with an imperative program, albeit with a non-standard parallel-for construct and complex data access expressions involving $\prim{fst}$, $\prim{zip}$, $\prim{idx}$ and so on. Our final translation to pseudo-C (\autoref{sec:translation-iii}) resolves these data layout expressions into explicit indexing computations:
\begin{lstlisting}
float tmp[N];
parfor (int i = 0; i < N; i += 1)
  tmp[i] = xs[i] * ys[i];
float accum = 0.0;
for (int i = 0; i < N; i += 1)
  accum = accum + tmp[i];
output = accum;
\end{lstlisting}
This resulting low level imperative code precisely implements the strategy ``parallel map, followed by sequential reduce'' described by our original functional expression (\ref{code-ex:simple-dot-product}). Our original dot-product code does not produce particularly complex code, but our translation method scales to more detailed parallelism strategies.
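It may help to see schematically how this final stage resolves data layout expressions. The two rules below are our own summary, in the notation used above, rather than definitions quoted from \autoref{sec:translation-iii}: an access into a $\prim{zip}$ becomes a pair of direct reads, and an access into a $\prim{split}$ (with inner length $m$, in the sense of the $\prim{split}~n~m~\delta$ signature used in \autoref{sec:correctness-proof}) becomes affine index arithmetic: \begin{displaymath} \prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i) \rightsquigarrow \mathit{xs}[i] \qquad \prim{idx}~(\prim{idx}~(\prim{split}~n~m~\delta~E)~i)~j \rightsquigarrow E[m\cdot i+j] \end{displaymath} Nested occurrences of $\prim{split}$ compose into the multi-term index expressions visible in the listing below.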
The alternative dot-product code in (\ref{code-ex:dot-product-complex}), which rearranges the $\prim{map}$ and $\prim{reduce}$ combinators in order to better exploit parallel hardware, yields the following code:
\begin{lstlisting}
float tmp[N/2048];
parfor (int i = 0; i < N/(2048*128); i += 1) {
  parfor (int j = 0; j < 128; j += 1) {
    float accum = 0.0;
    for (int k = 0; k < 2048; k += 1) {
      accum = (xs[(2048*128 * i) + (128 * j) + k] *
               ys[(2048*128 * i) + (128 * j) + k]) + accum;
    }
    tmp[((128 * i) + j)] = accum;
  }
}
float accum = 0.0;
for (int i = 0; i < N/2048; i += 1) {
  accum = accum + tmp[i];
}
output = accum;
\end{lstlisting}
As we shall see in \autoref{sec:experimentalResults}, given a target-architecture optimised parallelisation strategy defined in functional code, our translation process produces OpenCL code with performance on a par with previous ad hoc code generators, and with hand-written code. Key to our translation methodology is a single intermediate language that can express pure functional expressions and deterministic, race-free parallel imperative programs, and which is amenable to formal reasoning. In the next section, we describe our language for this task, $\ensuremath{\mathsf{DPIA}}\xspace$: \emph{Data Parallel Idealised Algol}. \subsection{A Short Introduction to OpenCL} \label{sec:opencl:opencl} The OpenCL programming model distinguishes between the managing \emph{host program} and the \emph{kernel programs} which are executed in parallel on an OpenCL-enabled \emph{device}. Kernel programs are special functions written in the OpenCL C programming language, which is a dialect of C with parallel-specific restrictions and extensions. Our work focuses purely on the generation of the OpenCL kernel. A kernel function is executed in parallel on an OpenCL device by multiple \emph{work-items} (threads) which can optionally be organised in \emph{work-groups}. Each work-item is uniquely identified by a \emph{global id}, or a combination of a \emph{group id} and a \emph{local id} internal to the group. These ids are used to determine which part of the data is accessed by each thread. OpenCL also defines different \emph{memory spaces} which correspond to memories with distinct performance characteristics. The \emph{global memory} is visible to all the threads and is usually the largest, but also the slowest, memory on an OpenCL device. The \emph{local memory} is shared among the work-items of a work-group and is orders of magnitude faster than global memory (comparable to cache performance). Finally, the \emph{private memory} is the fastest memory, but is very small and cannot be used for data shared among work-items (private memory usually corresponds to registers). On some architectures vector instructions are crucial for achieving high performance. OpenCL supports special \emph{vector types} such as \texttt{float4}, where operations on a value of this type are performed by the vector units in the processor. Vector types are only available for a small number of underlying numerical data types (\emph{e.g.},\xspace \texttt{int}, \texttt{float}) and a fixed number of sizes: 2, 3, 4, 8, and 16. \subsection{OpenCL Specific Data Parallel Programming Primitives} Following the work of \citet{SteuwerFeLiDu2015/icfp} we have designed a set of parallel programming primitives reflecting the OpenCL programming model in an extension of \ensuremath{\mathsf{DPIA}}\xspace.
Their work has shown that the design presented below allows generation of efficient OpenCL code with performance comparable to expert written code. \paragraph{Parallelism Hierarchy} To exploit the different parallelism levels of the OpenCL thread hierarchy with global work-items, local work-items organised in work-groups, and sequential execution inside a single work-item, we introduce four variants of the functional $\prim{map}$ primitive, all with the same type as the original: \begin{displaymath} \quad\left.\begin{array}{@{}l@{~}} \prim{mapGlobal}\\ \prim{mapWorkgroup}\\ \prim{mapLocal}\\ \prim{mapSeq} \end{array}\right\} : (n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2]) \to \mathsf{exp}[n.\delta_1] \to \mathsf{exp}[n.\delta_2]\hskip \textwidth minus \textwidth \strut \end{displaymath} We also add four corresponding intermediate imperative combinators, specialising the $\prim{mapI}$ used above: \begin{displaymath} \quad\left.\begin{array}{@{}l@{~}} \prim{mapIGlobal}\\ \prim{mapIWorkgroup}\\\prim{mapILocal}\\\prim{mapISeq} \end{array}\right\} : \begin{array}{@{}l@{}} (n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{acc}[\delta_2] \to_{\textrm{p}} \mathsf{comm}) \to\\ \qquad \mathsf{exp}[n.\delta_1] \to \mathsf{acc}[n.\delta_2] \to \mathsf{comm} \end{array}\hskip \textwidth minus \textwidth \strut \end{displaymath} Finally, we add three OpenCL-specific variations of the $\prim{parfor}$ imperative primitive: \begin{displaymath} \quad\left.\begin{array}{@{}l@{~}} \prim{parforGlobal}\\\prim{parforWorkgroup}\\\prim{parforLocal} \end{array}\right\} : (n : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{acc}[n.\delta] \to (\mathsf{exp}[\mathrm{idx}(n)] \to \mathsf{acc}[\delta] \to_{\textrm{p}} \mathsf{comm}) \to \mathsf{comm}\hskip \textwidth minus \textwidth \strut \end{displaymath} We reuse the sequential $\prim{for}$ primitive for the translation of $\prim{mapSeq}$. The specification of the translation of the specialised $\prim{map}*$ functional primitives down to the corresponding variations of $\prim{parfor}$ via their intermediate imperative counterparts is defined exactly as for the $\prim{map} \to \prim{mapI} \to \prim{parfor}$ translation in \autoref{sec:translation}. Semantically, all these variants of $\prim{map}$ and $\prim{parfor}$ are equivalent to the originals, so the correctness proof in \autoref{sec:correctness} is unaffected. The additional information present in the names is only used by the OpenCL code generator. However, in future work, we want to formalise the OpenCL model to ensure by construction that we always generate valid OpenCL kernels that respect the parallelism hierarchy. \paragraph{Address Spaces} \label{sec:opencl:addrspace} To account for OpenCL's multiple address spaces, we add three primitives which wrap a function, i.\,e., take a function as its argument and return a function of the same type: \begin{displaymath} \quad \prim{toGlobal},\prim{toLocal},\prim{toPrivate} : (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2]) \to \mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2]\hskip \textwidth minus \textwidth \strut \end{displaymath} Semantically, these functions are all the identity. As above, the additional information is only used by the OpenCL code generator. 
During the translation these functions are replaced by specialised $\prim{new}$ primitives, parameterised with the OpenCL memory space, which perform the memory allocation in the indicated memory space: \begin{displaymath} \quad \prim{newGlobal},\prim{newLocal},\prim{newPrivate} : (\delta : \mathsf{data}) \to (\mathsf{var}[\delta] \to \mathsf{comm}) \to \mathsf{comm} \hskip \textwidth minus \textwidth \strut \end{displaymath} By default, $\prim{map}$ allocates memory in global memory for its output during the continuation-passing translation. When $\prim{map}$ is wrapped in, e.g., $\prim{toLocal}$, the wrapper performs the memory allocation and triggers the acceptor-passing translation of $\prim{map}$, in which $\prim{map}$ does not allocate memory itself, but rather writes to the provided acceptor. As for the parallelism hierarchy, in future work we plan to extend our formal treatment to include the OpenCL memory model and track address space use with an effect system. This will allow us to ensure that the address spaces are used correctly. \paragraph{Vectorisation} To support the OpenCL vector types we extended \ensuremath{\mathsf{DPIA}}\xspace's type system with an additional vector data type. This is defined similarly to the array data type, but is more restricted: the element data type has to be $\mathsf{num}$ and the length must be one of the legal choices defined by OpenCL. Arrays of non-vector type can be turned into an array of vector type using the $\prim{asVector}$ primitive, which behaves similarly to the $\prim{split}$ primitive: \begin{displaymath} \quad\begin{array}{@{}l@{}}\prim{asVector}_{\underline{n}}\end{array} : (m : \mathsf{nat}) \to \mathsf{exp}[m\underline{n}.\mathsf{num}] \to \mathsf{exp}[m.\mathsf{num}\langle \underline{n} \rangle]\quad (\text{where } \mathsf{num}\langle \underline{n} \rangle \text{ is a vector type})\hskip \textwidth minus \textwidth \strut \end{displaymath} Similarly to $\prim{join}$, which flattens a two-dimensional array, $\prim{asScalar}$ turns an array of vector type into an array of non-vector type: \begin{displaymath} \quad\begin{array}{@{}l@{}}\prim{asScalar}_{\underline{n}}\end{array} : (m : \mathsf{nat}) \to \mathsf{exp}[m.\mathsf{num}\langle \underline{n} \rangle] \to \mathsf{exp}[m\underline{n}.\mathsf{num}]\quad (\text{where } \mathsf{num}\langle \underline{n} \rangle \text{ is a vector type})\hskip \textwidth minus \textwidth \strut \end{displaymath} \subsection{Translating Dot-product to OpenCL} We pick up the dot-product example~(\ref{code-ex:dot-product-complex}) given in \autoref{sec:motivation} to show how a mild variation, which makes use of the OpenCL-specific primitives, is translated to real OpenCL. The example shown here uses the $\prim{mapWorkgroup}$ and $\prim{mapLocal}$ primitives together with the vectorisation primitives $\prim{asVector}$ and $\prim{asScalar}$.
\begin{displaymath} \begin{array}[t]{@{}l} \prim{asScalar_4}~(\prim{join}~(\prim{mapWorkgroup}\\ \qquad\qquad\qquad\qquad\begin{array}[t]{@{}l} (\lambda \mathit{zs}_1.~\prim{mapLocal}~(\lambda\mathit{zs}_2.~\prim{reduce}~ (\lambda x~a.~(\prim{fst}~x * \prim{snd}~x) + a)~0~(\prim{split}~8192~\mathit{zs}_2))~\mathit{zs}_1) \\ (\prim{split}~8192~(\prim{zip}~(\prim{asVector_4}~\mathit{xs})~(\prim{asVector_4}~\mathit{ys})))))) \end{array} \end{array} \end{displaymath} This is the code used in the experimental evaluation (\autoref{sec:experimentalResults}) and shows excellent performance on Intel CPUs compared to the reference MKL implementation. Vectorisation is crucial on Intel CPUs for achieving high performance. This purely functional program with OpenCL-specific primitives is translated to the following imperative program. The translation largely follows the steps explained in \autoref{sec:translation}, extended to cover the OpenCL-specific primitives, as explained above. \begin{displaymath} \begin{array}[t]{@{}l} \prim{parforWorkgroup}~(N / 8192)~(\prim{joinAcc}~(N/8192)~64~(\prim{asScalarAcc_4}~(N / 128)~\mathit{out}))~(\lambda~ gid~o .\\ \enspace \prim{parforLocal}~64~o~(\lambda~lid~o .\\ \enspace \enspace \prim{newPrivate}~\mathsf{num\langle 4 \rangle}~\mathit{accum}.\\ \enspace \enspace \enspace \mathit{accum}.1 := 0;\\ \enspace \enspace \enspace \prim{for}~2048~(\lambda~i.\\ \enspace \enspace \enspace \enspace \mathit{accum}.1 := \mathit{accum}.2~+\\ \enspace \enspace \enspace \enspace \enspace ( \prim{fst}~(\prim{idx}~(\prim{idx}~(\prim{split}~2048~(\prim{idx}~(\prim{split}~(8192*4)~(\prim{zip}~(\prim{asVector_4}~xs)~(\prim{asVector_4}~ys)))~gid))~lid)~i) )~*\\ \enspace \enspace \enspace \enspace \enspace ( \prim{snd}~(\prim{idx}~(\prim{idx}~(\prim{split}~2048~(\prim{idx}~(\prim{split}~(8192*4)~(\prim{zip}~(\prim{asVector_4}~xs)~(\prim{asVector_4}~ys)))~gid))~lid)~i) )\ );\\ \enspace \enspace \enspace o := \mathit{accum}.2\ ) )\\ \end{array} \end{displaymath} We generate the following OpenCL kernel, where each line corresponds to a line of the imperative \ensuremath{\mathsf{DPIA}}\xspace program.
\begin{lstlisting}[mathescape]
kernel void KERNEL(global float *out, const global float *restrict xs,
                   const global float *restrict ys, int N) {
  for (int g_id = get_group_id(0); g_id < N / 8192; g_id += get_num_groups(0)) {$\label{line:parforWorkgroup}$
    for (int l_id = get_local_id(0); l_id < 64; l_id += get_local_size(0)) {$\label{line:parforLocal}$
      float4 accum;
      accum = (float4)(0.0, 0.0, 0.0, 0.0);
      for (int i = 0; i < 2048; i += 1) {
        accum = (accum + (vload4(((2048 * l_id) + (8192 * 4 * g_id) + i), xs) *$\label{line:vload1}$
                          vload4(((2048 * l_id) + (8192 * 4 * g_id) + i), ys)));$\label{line:vload2}$
      }
      vstore4(accum, ((64 * g_id) + l_id), out);$\label{line:vstore}$
    }
  }
}
\end{lstlisting}
The $\prim{parforWorkgroup}$ and $\prim{parforLocal}$ primitives have been translated into \texttt{for} loops in lines~\ref{line:parforWorkgroup} and~\ref{line:parforLocal}, which use the OpenCL functions \texttt{get\_group\_id} and \texttt{get\_local\_id} for distributing iterations across work-groups and work-items executing in parallel. Loading elements as vector data types from the \texttt{float} arrays \texttt{xs} and \texttt{ys} requires using the OpenCL-provided function \texttt{vload4} in lines~\ref{line:vload1} and~\ref{line:vload2}. Similarly, storing the computed value with vector data type in the output array uses the \texttt{vstore4} function in line~\ref{line:vstore}.
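Although our compiler generates only the kernel, it may be helpful to see how such a kernel is invoked. The following host-side sketch is \emph{not} produced by our code generator: the variable names are illustrative, error handling is omitted, and we assume the usual OpenCL context, program, and buffer setup has already been performed. It launches the kernel with work-groups of 64 work-items, matching the $\prim{parforWorkgroup}$ and $\prim{parforLocal}$ bounds above:
\begin{lstlisting}
// Assumed to exist already: a cl_kernel k compiled from the generated
// source, a cl_command_queue q, and cl_mem buffers out_buf, xs_buf, ys_buf.
clSetKernelArg(k, 0, sizeof(cl_mem), &out_buf);
clSetKernelArg(k, 1, sizeof(cl_mem), &xs_buf);
clSetKernelArg(k, 2, sizeof(cl_mem), &ys_buf);
clSetKernelArg(k, 3, sizeof(int), &N);
size_t local  = 64;                  // work-items per work-group
size_t global = (N / 8192) * local;  // total number of work-items
clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);
clFinish(q);                         // wait for the kernel to complete
\end{lstlisting}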
\subsection{Memory allocation in Data Parallel Idealised Algol for OpenCL} Our translation from functional to imperative programs leaves us with programs which perform statically bounded memory allocation. The lifetime of every memory allocation is known because it is bounded by the scope of the $\prim{new}$ primitive. Nevertheless, the memory allocation occurs dynamically as part of the execution of the program. In C these allocations can be performed with \texttt{malloc} on the heap or \texttt{alloca} on the stack. However, OpenCL does not support dynamic memory allocation. Furthermore, OpenCL demands that all temporary buffers in global and local memory -- even with statically known size -- be allocated prior to kernel execution and passed as pointers to the kernel function. In order to generate valid OpenCL, we perform an additional translation step to hoist all $\prim{newGlobal}$ and $\prim{newLocal}$ primitives to the very top of the program, where we will eventually turn them into kernel arguments. $\prim{new}$ primitives can be nested inside parallel for loops, so when hoisting memory allocations out of a loop the amount of memory has to be multiplied by the number of loop iterations, so that every loop iteration has its own distinct location to write to. To hoist the allocations we traverse the imperative program and, for each parallel for loop we encounter, we remember the number of iterations and the loop variable. Once we reach a $\prim{newGlobal}$ or $\prim{newLocal}$ primitive, we replace it with its body and substitute for its variable the appropriate acceptor--expression pair, which points to the right place in the globally allocated data structure. The following imperative \ensuremath{\mathsf{DPIA}}\xspace program implements dot-product with two memory allocations nested in the $\prim{parforGlobal}$ loop. The allocation in global memory has to be hoisted out, while the nested allocation in private memory ($\prim{newPrivate}$) is permitted in OpenCL, and will translate to the allocation of a scalar stack variable. \begin{displaymath} \quad\begin{array}[t]{@{}l} \prim{parforGlobal}~n~\mathit{out}~(\lambda i~o .\\ \quad \prim{newGlobal}~1024.\mathsf{num}~\mathit{tmp}.\\ \quad \quad \prim{for}~1024~(\lambda j.~ \prim{idx}~\mathit{tmp}.1~j := (\prim{idx}~(\prim{idx}~(\prim{split}~1024~\mathit{xs})~i)~j) * (\prim{idx}~(\prim{idx}~(\prim{split}~1024~\mathit{ys})~i)~j)\ );\\ \quad \quad \prim{newPrivate}~\mathsf{num}~\mathit{accum}.\\ \quad \quad \quad \mathit{accum}.1 := 0;~\prim{for}~1024~(\lambda j.~\mathit{accum}.1 := \mathit{accum}.2 + (\prim{idx}~\mathit{tmp}.2~j)\ );~o := \mathit{accum}.2\ )\\ \end{array}\hskip \textwidth minus \textwidth \strut \end{displaymath} To hoist out the allocation in global memory we first visit $\prim{parforGlobal}$, remember the number of iterations $n$ and the loop variable $i$. Then, we replace the $\prim{newGlobal}$ with its body in which we have replaced $\mathit{tmp}$ with $(\prim{idx}~\mathit{tmp'}~i)$.
We indicate the places where uses of $\mathit{tmp}$ have been replaced by shaded backgrounds: \begin{displaymath} \quad\begin{array}[t]{@{}l} \prim{newGlobal}~(n\times 1024).\mathsf{num}~\shade{\mathit{tmp'}}.\\ \quad \prim{parforGlobal}~n~\mathit{out}~(\lambda i~o .\\ \quad \quad \prim{for}~1024~(\lambda j.~\prim{idx}~\shade{(\prim{idx}~\mathit{tmp'}.1~i)}~j := (\prim{idx}~(\prim{idx}~(\prim{split}~1024~\mathit{xs})~i)~j) * (\prim{idx}~(\prim{idx}~(\prim{split}~1024~\mathit{ys})~i)~j)\ );\\ \quad \quad \prim{newPrivate}~\mathsf{num}~\mathit{accum}.\\ \quad \quad \quad \mathit{accum}.1 := 0;~\prim{for}~1024~(\lambda j.~\mathit{accum}.1 := \mathit{accum}.2 + (\prim{idx}~\shade{(\prim{idx}~\mathit{tmp'}.2~i)}~j)\ );~o := \mathit{accum}.2\ )\\ \end{array}\hskip \textwidth minus \textwidth \strut \end{displaymath} A $\prim{newGlobal}$ primitive is introduced at the very top of the program with the adjusted type. \subsection{Translation Stage I: Higher-order Functional to Higher-order Imperative} \label{sec:translation-i} \begin{figure*} \begin{minipage}{1.0\linewidth} \begin{displaymath} \begin{array}{lcl} \semA{x}_\delta(A) &=& A :=_\delta x \medskip\\ \semA{\underline{n}}_{\mathsf{num}}(A) &=& A := \underline{n} \\ \semA{\prim{negate}~E}_{\mathsf{num}}(A) &=& \semE{E}_{\mathsf{num}}(\lambda x.~A := \prim{negate}~x) \\ \semA{E_1 + E_2}_{\mathsf{num}}(A) &=& \semE{E_1}_{\mathsf{num}}(\lambda x.~\semE{E_2}_{\mathsf{num}}(\lambda y.~A := x + y))\medskip\\ \semA{\prim{map}~n~\delta_1~\delta_2~F~E}_{n.\delta_2}(A) &=& \semE{E}_{n.\delta_1}(\lambda x.~\prim{mapI}~n~\delta_1~\delta_2~(\lambda x~o. \semA{F~x}_{\delta_2}(o))~x~A) \\ \semA{\prim{reduce}~n~\delta_1~\delta_2~F~I~E}_{\delta_2}(A) &=& \begin{array}[t]{@{}l} \semE{E}_{n.\delta_1}(\lambda x.~\semE{I}_{\delta_2}(\lambda y.\\ \quad\quad \prim{reduceI}~n~\delta_1~\delta_2~(\lambda x~y~o. \semA{F~x~y}_{\delta_2}(o))~y~x~(\lambda r.
\semA{r}_{\delta_2}(A)))) \end{array} \medskip\\ \semA{\prim{zip}~n~\delta_1~\delta_2~E_1~E_2}_{n.\delta_1\times\delta_2}(A) &=& \semA{E_1}_{n.\delta_1}(\prim{zipAcc}_1~n~\delta_1~\delta_2~A); \semA{E_2}_{n.\delta_2}(\prim{zipAcc}_2~n~\delta_1~\delta_2~A) \\ \semA{\prim{split}~n~m~\delta~E}_{m.n.\delta}(A) &=& \semA{E}_{nm.\delta}(\prim{splitAcc}~n~m~\delta~A) \\ \semA{\prim{join}~n~m~\delta~E}_{nm.\delta}(A) &=& \semA{E}_{n.m.\delta}(\prim{joinAcc}~n~m~\delta~A) \\ \semA{\prim{pair}~\delta_1~\delta_2~E_1~E_2}_{\delta_1\times\delta_2}(A) &=& \semA{E_1}_{\delta_1}(\prim{pairAcc_1}~\delta_1~\delta_2~A); \semA{E_2}_{\delta_2}(\prim{pairAcc_2}~\delta_1~\delta_2~A) \\ \semA{\prim{fst}~{\delta_1}~{\delta_2}~E}_{\delta_1}(A) &=& \semE{E}_{\delta_1\times\delta_2}(\lambda x.~A :=_{\delta_1} \prim{fst}~{\delta_1}~{\delta_2}~x) \\ \semA{\prim{snd}~{\delta_1}~{\delta_2}~E}_{\delta_2}(A) &=& \semE{E}_{\delta_1\times\delta_2}(\lambda x.~A :=_{\delta_2} \prim{snd}~{\delta_1}~{\delta_2}~x) \end{array} \end{displaymath} \subcaption{Acceptor-passing Translation}\label{fig:transone} \end{minipage} \begin{minipage}{1.0\linewidth} \begin{displaymath} \begin{array}{lcl} \semE{x}_\delta(C) &=& C(x) \medskip\\ \semE{\underline{n}}_{\mathsf{num}}(C) &=& C(\underline{n}) \\ \semE{\prim{negate}~E}_{\mathsf{num}}(C) &=& \semE{E}_{\mathsf{num}}(\lambda x.~C(\prim{negate}~x)) \\ \semE{E_1 + E_2}_{\mathsf{num}}(C) &=& \semE{E_1}_{\mathsf{num}}(\lambda x.~\semE{E_2}_{\mathsf{num}}(\lambda y.~C(x+y)~)~) \medskip\\ \semE{\prim{map}~n~\delta_1~\delta_2~F~E}_{n.\delta_2}(C) &=& \prim{new}~(n.\delta_2)~(\lambda \mathit{tmp}.~\semA{\prim{map}~n~\delta_1~\delta_2~F~E}_{n.\delta_2}(\mathit{tmp}.1);~C(\mathit{tmp}.2)~) \\ \semE{\prim{reduce}~n~\delta_1~\delta_2~F~I~E}_{\delta_2}(C) &=& \begin{array}[t]{@{}l} \semE{E}_{n.\delta_1}(\lambda x.~ \semE{I}_{\delta_2}(\lambda y.~\prim{reduceI}~n~\delta_1~\delta_2~(\lambda x~y~o.~\semA{F~x~y}_{\delta_2}(o))~y~x~C)) \end{array} \medskip\\ \semE{\prim{zip}~n~\delta_1~\delta_2~E_1~E_2}_{n.\delta_1 \times \delta_2}(C) &=& \semE{E_1}_{n.\delta_1}(\lambda x.~\semE{E_2}_{n.\delta_2}(\lambda y.~C(\prim{zip}~n~\delta_1~\delta_2~x~y)~)~) \\ \semE{\prim{split}~n~m~\delta~E}_{m.n.\delta}(C) &=& \semE{E}_{nm.\delta}(\lambda x.~C(\prim{split}~n~m~\delta~x)~) \\ \semE{\prim{join}~n~m~\delta~E}_{nm.\delta}(C) &=& \semE{E}_{n.m.\delta}(\lambda x.~C(\prim{join}~n~m~\delta~x)~) \\ \semE{\prim{pair}~\delta_1~\delta_2~E_1~E_2}_{\delta_1\times\delta_2}(C) &=& \semE{E_1}_{\delta_1}(\lambda x.~\semE{E_2}_{\delta_2}(\lambda y.~C(\prim{pair}~\delta_1~\delta_2~x~y)~)~) \\ \semE{\prim{fst}~\delta_1~\delta_2~E}_{\delta_1}(C) &=& \semE{E}_{\delta_1\times\delta_2}(\lambda x.~C(\prim{fst}~\delta_1~\delta_2~x)~) \\ \semE{\prim{snd}~\delta_1~\delta_2~E}_{\delta_2}(C) &=& \semE{E}_{\delta_1\times\delta_2}(\lambda x.~C(\prim{snd}~\delta_1~\delta_2~x)~) \\ \end{array} \end{displaymath} \subcaption{Continuation-passing Translation}\label{fig:transtwo} \end{minipage} \caption{Acceptor and Continuation-passing Translations} \end{figure*} The goal of the first stage of the translation is to take a phrase $E : \mathsf{exp}[\delta]$, constructed from the functional primitives in \autoref{fig:func-prim}, and an acceptor $\mathit{out} : \mathsf{acc}[\delta]$ and to produce a $\mathsf{comm}$ phrase that has the same semantics as the command \begin{displaymath} \mathit{out} :=_\delta E \end{displaymath} where $(:=_\delta)$ is an assignment operator for non-base types defined by induction on $\delta$ below.
The resulting program will be an imperative program that acts as if we could compute the functional expression in one go and assign it to the output acceptor. Since our compilation targets know nothing of higher-order functional combinators like $\prim{map}$ and $\prim{reduce}$, they will have to be translated away. We do not use any of the traditional methods for compiling higher-order functions, such as closure conversion \cite{steele78rabbit} or defunctionalisation \cite{DBLP:journals/lisp/Reynolds98a}. Instead, we rely on the whole-program nature of our translation, our lack of recursion, and the special form of our functional primitives. Specifically, we are relying on a version of Gentzen's subformula principle (identified by \citet{Gentzen-1935} and named by \citet{Prawitz-1965}). Our approach is reminiscent of that of \citet{NajdLSW16} who use quotation and normalisation, making essential use of the subformula principle, to embed domain-specific languages. An obvious difference with our work is that rather than stratifying a language into a host functional language and a quoted functional language, we seamlessly combine a functional and an imperative language. We have already mentioned the use of assignment at compound data types. This is defined by: \begin{displaymath} \begin{array}{lcl} A :=_{\mathsf{num}} E &=& A := E \\ A :=_{n.\delta} E &=& \prim{mapI}~n~\delta~\delta~(\lambda x~a.a :=_\delta x)~E~A \\ A :=_{\delta_1 \times \delta_2} E &=& \prim{pairAcc_1}~A :=_{\delta_1} \prim{fst}~E; \prim{pairAcc_2}~A :=_{\delta_2} \prim{snd}~E \end{array} \end{displaymath} The translation of functional expressions to imperative code is accomplished by two type-directed, mutually defined translations: the acceptor-passing translation $\semA{-}_\delta$ (\autoref{fig:transone}) and the continuation-passing translation $\semE{-}_\delta$ (\autoref{fig:transtwo}). The acceptor-passing translation takes a data type $\delta$, an expression of type $\mathsf{exp}[\delta]$ and an acceptor of type $\mathsf{acc}[\delta]$, and produces a $\mathsf{comm}$ phrase. Likewise, the continuation-passing translation takes a data type $\delta$, an expression of type $\mathsf{exp}[\delta]$ and a continuation of type $\mathsf{exp}[\delta] \to \mathsf{comm}$, and produces a $\mathsf{comm}$ phrase. It is straightforward to see by inspection, and using the fact that weakening is admissible in \ensuremath{\mathsf{DPIA}}\xspace, that the two translations are type-preserving: \begin{theorem}~ \begin{enumerate} \item If $\typ{\Delta}{\Pi}{\Gamma_1}{E}{\mathsf{exp}[\delta]}$ and $\typ{\Delta}{\Pi}{\Gamma_2}{A}{\mathsf{acc}[\delta]}$ then $\typ{\Delta}{\Pi}{\Gamma_1,\Gamma_2}{\semA{E}_\delta(A)}{\mathsf{comm}}$. \item If $\typ{\Delta}{\Pi}{\Gamma_1}{E}{\mathsf{exp}[\delta]}$ and $\typ{\Delta}{\Pi}{\Gamma_2}{C}{\mathsf{exp}[\delta] \to \mathsf{comm}}$ then $\typ{\Delta}{\Pi}{\Gamma_1,\Gamma_2}{\semE{E}_\delta(C)}{\mathsf{comm}}$. \end{enumerate} \end{theorem} It is also important that these translations satisfy the following equivalences. {\small\begin{mathpar} \semA{E}_\delta(A) \simeq A :=_\delta E \and \semE{E}_\delta(C) \simeq C(E) \end{mathpar}}% We define observational equivalence ($\simeq$) for \ensuremath{\mathsf{DPIA}}\xspace and establish these particular equivalences in \autoref{sec:correctness}. Ultimately, our goal is to compute the result of $\semA{E}_\delta(\mathit{out})$.
It might appear that we could dispense with the acceptor-passing translation and simply use $\semE{E}_\delta(\lambda x.~\mathit{out} :=_\delta x)$. However, this would create unnecessary temporary storage, violating our desire for an efficient translation. There are clear similarities between our mutually-defined translations and tail-recursive one-pass CPS translations that do not produce unnecessary administrative redexes~\cite{DanvyMN07}. The clauses for both translations split into four groups. The first group consists only of the clause for translating functional expression phrases that are just identifiers $x$. In the acceptor-passing case, we defer to the generalised assignment defined above; in the continuation-passing case, we simply apply the continuation to the variable. The second group handles the first-order operations on numeric data. In all cases, we defer to the continuation-passing translation for the sub-expressions, with appropriate continuations. The third group is the most interesting: the translations of the higher-order $\prim{map}$ and $\prim{reduce}$ primitives. For $\prim{map}$: in the acceptor-passing case, we can immediately translate to $\prim{mapI}$ which already takes an acceptor to place its output into; in the continuation-passing case, we must create a temporary array as storage, invoke $\prim{mapI}$, and then let the continuation read from the temporary array. This indirection is required because we do not know what random-access strategy the continuation $C$ will use to read the array it is given. For $\prim{reduce}$, in both cases we translate to the $\prim{reduceI}$ combinator. The fourth group of clauses handles the translations of the functional data layout combinators. In the continuation-passing translation, they are passed straight through. They will be handled by the final translation to low-level C-like code in \autoref{sec:translation-iii}. In the acceptor-passing translation, the combinators that construct data are translated into the corresponding acceptors. In the $\prim{fst}$ and $\prim{snd}$ cases, which project out data, we defer to the continuation-passing translation. In practice, the case of a projection in tail position rarely arises, since it corresponds to disposal of part of the overall computation. \subsection{Translation Stage II: Higher-order Imperative to For-loops} \label{sec:translation-ii} The next stage in the translation replaces the intermediate-level imperative combinators $\prim{mapI}$ and $\prim{reduceI}$ with lower-level implementations in terms of (parallel) for-loops. This is accomplished by substitution and $\beta$-reduction (\ensuremath{\mathsf{DPIA}}\xspace includes full $\beta\eta$ reasoning principles). The combinator $\prim{mapI}$ is implemented as a parallel loop: \begin{displaymath} \prim{mapI} = \Lambda n~\delta_1~\delta_2.\lambda F~E~A.~ \prim{parfor}~n~\delta_2~A~(\lambda i~a.~F~(\prim{idx}~n~\delta_1~E~i)~a) \end{displaymath} The implementation of $\prim{reduceI}$ is more complex, involving the allocation of a temporary variable to store the accumulated value during the reduction.
In this case, the for loop is sequential, since the semantics of reduction demands that we visit each element in turn: \begin{displaymath} \prim{reduceI} = \Lambda n~\delta_1~\delta_2.\lambda~F~I~E~C.~\prim{new}~\delta_2~(\lambda~acc.~ \begin{array}[t]{@{}l} acc.1 :=_{\delta_2} I;\\ \prim{for}~n~(\lambda~i.~F~(\prim{idx}~n~\delta_1~E~i)~(acc.2)~(acc.1)\ );\\ C~(acc.2)\ ) \end{array} \end{displaymath} These definitions give the intended semantics of the intermediate-level imperative combinators. \subsection{Translation Stage III: For-loops to Parallel Pseudo-C} \label{sec:translation-iii} \newcommand{\codegenComm}[1]{\textsc{CodeGen}_{\mathsf{comm}}(#1)} \newcommand{\codegenAcc}[2]{\textsc{CodeGen}_{\mathsf{acc}[#1]}(#2)} \newcommand{\codegenExp}[2]{\textsc{CodeGen}_{\mathsf{exp}[#1]}(#2)} \newcommand{\codegenData}[1]{\textsc{CodeGen}_{\mathsf{\mathsf{data}}}(#1)} \begin{figure}[t] \begin{minipage}{1.0\linewidth} \begin{displaymath} \begin{array}{l@{\hspace{0.4em}}c@{\hspace{0.4em}}l} \codegenComm{\prim{skip}, \eta} &=& \texttt{/* skip */} \\ \codegenComm{\mathsf{P_1; P_2}, \eta} &=& \begin{array}[t]{@{}l} \codegenComm{P_1, \eta}~\codegenComm{P_2, \eta} \end{array} \\ \codegenComm{A := E, \eta} &=& \codegenAcc{\mathsf{num}}{A, \eta, []}\texttt{ = }\codegenExp{\mathsf{num}}{E, \eta, []}\texttt{;} \\ \codegenComm{\prim{new}~\delta~(\lambda v.~P), \eta} &=& \texttt{\{}~ \begin{array}[t]{@{}l} \codegenData{\delta}~\texttt{v;} \\ \codegenComm{P[(v_a,v_e)/v], \eta[v_a \mapsto \texttt{v}, v_e \mapsto \texttt{v}]}\ \texttt{\}} \end{array} \\ \codegenComm{\prim{for}~n~(\lambda i.~P), \eta} &=& \begin{array}[t]{@{}l} \texttt{for(int i = 0; i < $n$; i += 1) \{} \\ \quad \codegenComm{P, \eta[i \mapsto \texttt{i}]} \ \texttt{\}} \end{array} \\ \codegenComm{\prim{parfor}~n~\delta~A~(\lambda i~o.~P), \eta} &=& \begin{array}[t]{@{}l} \texttt{parfor(int i = 0; i < $n$; i += 1) \{ }\\ \quad \codegenComm{P[\prim{idxAcc}~n~\delta~A~i/o], \eta[i \mapsto \texttt{i}]}\ \texttt{\}} \end{array} \end{array} \end{displaymath} \subcaption{\ensuremath{\mathsf{DPIA}}\xspace commands to C statements}\label{fig:codegen-comm} \end{minipage} \bigskip \begin{minipage}{1.0\linewidth} \begin{displaymath} \begin{array}{@{}l@{~}c@{~}l@{}} \codegenAcc{\delta}{x, \eta, \mathit{ps}} &=& \eta(x)(\mathit{reverse}~\mathit{ps}) \\ \codegenAcc{\delta}{\prim{idxAcc}~n~\delta~A~I, \eta, \mathit{ps}} &=& \begin{array}[t]{@{}l} \textsc{CodeGen}_{\mathsf{acc}[n.\delta]}(A, \eta,\\ \quad\codegenExp{\mathit{idx}(n)}{I, \eta, []} :: \mathit{ps}) \end{array} \\ \codegenAcc{\mathsf{nm.\delta}}{\prim{splitAcc}~n~m~\delta~A, \eta, \texttt{i} :: \mathit{ps}} &=& \codegenAcc{\mathsf{m.n.\delta}}{A, \eta, \texttt{i/}n :: \texttt{i\%}n :: \mathit{ps}} \\ \codegenAcc{\mathsf{n.m.\delta}}{\prim{joinAcc}~n~m~\delta~A, \eta, \texttt{i} :: \texttt{j} :: \mathit{ps}} &=& \codegenAcc{\mathsf{nm.\delta}}{A, \eta, \texttt{i*}m\texttt{+j} :: \mathit{ps}} \\ \codegenAcc{\delta_1}{\prim{pairAcc_1}~\delta_1~\delta_2~A, \eta, \mathit{ps}} &=& \codegenAcc{\delta_1\times\delta_2}{A, \eta, \texttt{.x1} :: \mathit{ps}} \\ \codegenAcc{\delta_2}{\prim{pairAcc_2}~\delta_1~\delta_2~A, \eta, \mathit{ps}} &=& \codegenAcc{\delta_1\times\delta_2}{A, \eta, \texttt{.x2} :: \mathit{ps}} \\ \codegenAcc{n.\delta_1}{\prim{zipAcc_1}~n~\delta_1~\delta_2~A, \eta, \texttt{i} :: \mathit{ps}} &=& \codegenAcc{n.(\delta_1\times\delta_2)}{A, \eta, \texttt{i} :: \texttt{.x1} :: \mathit{ps}} \\ \codegenAcc{n.\delta_2}{\prim{zipAcc_2}~n~\delta_1~\delta_2~A,
\eta, \texttt{i} :: \mathit{ps}} &=& \codegenAcc{n.(\delta_1\times\delta_2)}{A, \eta, \texttt{i} :: \texttt{.x2} :: \mathit{ps}} \end{array} \end{displaymath} \subcaption{\ensuremath{\mathsf{DPIA}}\xspace acceptors to C l-values}\label{fig:codegen-acc} \end{minipage} \bigskip \begin{minipage}{1.0\linewidth} \begin{displaymath} \begin{array}{@{}l@{~}c@{~}l@{}} \codegenExp{\delta}{x, \eta, \mathit{ps}} &=& \eta(x)(\mathit{reverse}~\mathit{ps}) \\ \codegenExp{\mathsf{num}}{\underline{n}, \eta, []} &=& n \\ \codegenExp{\mathsf{num}}{\prim{negate}~E, \eta, []} &=& \texttt{(}\texttt{-}~\codegenExp{\mathsf{num}}{E, \eta, []}\texttt{)} \\ \codegenExp{\mathsf{num}}{E_1 + E_2, \eta, []} &=& \texttt{(} \begin{array}[t]{@{}l} \codegenExp{\mathsf{num}}{E_1, \eta, []}\\ \texttt{+ }\codegenExp{\mathsf{num}}{E_2, \eta, []}\texttt{)} \end{array} \\ \codegenExp{n.(\delta_1 \times \delta_2)}{\prim{zip}~n~\delta_1~\delta_2~E_1~E_2, \eta, \texttt{i} :: \texttt{.x}j :: \mathit{ps}} &=& \codegenExp{n.\delta_j}{E_j, \eta, \texttt{i} :: \mathit{ps}} \\ \codegenExp{m.n.\delta}{\prim{split}~n~m~\delta~E, \eta, \texttt{i} :: \texttt{j} :: \mathit{ps}} &=& \codegenExp{mn.\delta}{E, \eta, \texttt{i*}n\texttt{+j} :: \mathit{ps}} \\ \codegenExp{mn.\delta}{\prim{join}~n~m~\delta~E, \eta, \texttt{i} :: \mathit{ps}} &=& \codegenExp{n.m.\delta}{E, \eta, \texttt{i/}m :: \texttt{i\%}m :: \mathit{ps}} \\ \codegenExp{\delta_1 \times \delta_2}{\prim{pair}~\delta_1~\delta_2~E_1~E_2, \eta, \texttt{.x}j :: \mathit{ps}} &=& \codegenExp{\delta_j}{E_j, \eta, \mathit{ps}} \\ \codegenExp{\delta_1}{\prim{fst}~\delta_1~\delta_2~E, \eta, \mathit{ps}} &=& \codegenExp{\delta_1 \times \delta_2}{E, \eta, \texttt{.x1} :: \mathit{ps}} \\ \codegenExp{\delta_2}{\prim{snd}~\delta_1~\delta_2~E, \eta, \mathit{ps}} &=& \codegenExp{\delta_1 \times \delta_2}{E, \eta, \texttt{.x2} :: \mathit{ps}} \\ \codegenExp{\delta}{\prim{idx}~n~\delta~E~I, \eta, \mathit{ps}} &=& \begin{array}[t]{@{}l} \textsc{CodeGen}_{\mathsf{exp}[n.\delta]}(E, \eta,\\ \quad\codegenExp{\mathit{idx}(n)}{I, \eta, []} :: \mathit{ps}) \end{array} \end{array} \end{displaymath} \subcaption{\ensuremath{\mathsf{DPIA}}\xspace expressions to C r-values}\label{fig:codegen-exp} \end{minipage} \caption{Translation of Purely Imperative \ensuremath{\mathsf{DPIA}}\xspace to Parallel Pseudo C} \label{fig:codegen} \end{figure} After performing the translation steps in the previous two sections, we have generated a command phrase that does not use the higher-order functional combinators $\prim{map}$ and $\prim{reduce}$, but still contains uses of the data layout combinators $\prim{zip}$, $\prim{split}$ etc., and their acceptor counterparts. We now define a translation to a C-like language with parallel for loops that resolves these data layout expressions into explicit indexing expressions. The translation is defined in \autoref{fig:codegen}. Parallel for loops can easily be achieved using OpenMP's \texttt{\#pragma omp parallel for} construct, for example. In \autoref{sec:opencl}, we describe how to adapt this process (and the earlier stages) to work with the real parallel C-like language OpenCL. The translation in \autoref{fig:codegen} is split into three parts: the translation of \ensuremath{\mathsf{DPIA}}\xspace commands into C statements, the translation of acceptors into l-values, and the translation of expressions into r-values.
(Recall the analogy made in \autoref{sec:types} between the phrase types of \ensuremath{\mathsf{DPIA}}\xspace and syntactic categories in an imperative language; we are now reaping the rewards of Reynolds' careful design of Idealised Algol.) We assume that the input to the translation process is in $\beta\eta$-normal form, so all $\prim{new}$-block and loop bodies are fully expanded. The translation of commands in \autoref{fig:codegen-comm} straightforwardly translates each \ensuremath{\mathsf{DPIA}}\xspace command into the corresponding C statement. The translation is parameterised by an environment $\eta$ that maps from \ensuremath{\mathsf{DPIA}}\xspace identifiers to C variable names. There are two small discrepancies that we gloss over: semicolons are statement terminators in C but separators in \ensuremath{\mathsf{DPIA}}\xspace, and doing nothing is unremarkable in a C program but must be written explicitly as $\prim{skip}$ in \ensuremath{\mathsf{DPIA}}\xspace. The translation of assignment relies on the translations for acceptors and expressions, which we define below. Variable allocation is translated using a \texttt{\{ ... \}} block to limit the scope. We omit initialisation of the new variable because we know by inspection of our previous translation steps that newly allocated variables will always be completely initialised before reading. Note how we explicitly substitute a pair of identifiers for the acceptor and expression parts of the variable in \ensuremath{\mathsf{DPIA}}\xspace, but that these both refer to the same C variable in the extended environment. Translation of variable allocation makes use of a generator $\codegenData{\delta}$ that generates the appropriate C-style declaration for a variable of data type $\delta$. Since C does not have anonymous tuple types, this entails on-the-fly generation of the appropriate \texttt{struct} definitions (a sketch of such a generated definition is shown below). \ensuremath{\mathsf{DPIA}}\xspace $\prim{for}$ loops are translated into C \texttt{for} loops, and \ensuremath{\mathsf{DPIA}}\xspace $\prim{parfor}$ loops into pseudo-parallel-for loops. In the body of the \texttt{parfor} loop, we substitute in an $\prim{idxAcc}$ phrase which will be resolved later by the translation of acceptors. The variable names introduced by translating $\prim{new}$, $\prim{for}$ and $\prim{parfor}$ are all assumed to be fresh. The translation of acceptors (\autoref{fig:codegen-acc}) is parameterised by an environment $\eta$, as for commands, as well as by a path $\mathit{ps}$, consisting of a list of C expressions of type \texttt{int}, denoting array indexes, and \texttt{struct} fields, \texttt{.x1} and \texttt{.x2}, denoting projections from pairs. The path must always agree with the type of the acceptor being translated. During command translation, all acceptors being translated have type $\mathsf{acc}[\mathsf{num}]$, so the access path starts empty. The acceptor translation clauses in \autoref{fig:codegen-acc} all proceed by manipulating the access path appropriately until an identifier is reached. At this point, the \ensuremath{\mathsf{DPIA}}\xspace identifier is replaced with its corresponding C variable and the access path is appended. Translation of expressions (\autoref{fig:codegen-exp}) is parameterised similarly to the acceptor translation, and contains similar clauses for all the data layout combinators. Expressions also include literals and arithmetic expressions, which are translated to the corresponding notions in C.
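As a concrete illustration of the on-the-fly \texttt{struct} generation mentioned above: for the data type $\mathsf{num} \times \mathsf{num}$ (taking $\mathsf{num}$ to be \texttt{float}), the generated declaration might look as follows. This is a sketch; the \texttt{struct} name is an implementation detail, but the field names \texttt{x1} and \texttt{x2} are the ones targeted by the access paths.
\begin{lstlisting}
// Generated on demand for the data type num x num:
struct float_x_float { float x1; float x2; };
// Declaration emitted for new (num x num) (lambda v. ...):
struct float_x_float v;
\end{lstlisting}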
\paragraph{Example} We demonstrate how the translation to C works by applying it to the $\prim{parfor}$ loop in the translation (\ref{codeex:dpia-dot-product}) of the simple dot product in \autoref{sec:motivation}. We use the environment $\eta = [\mathit{out} \mapsto \texttt{out}, \mathit{xs} \mapsto \texttt{xs}, \mathit{ys} \mapsto \texttt{ys}]$. The command translation translates, in two steps, the $\prim{parfor}$ loop and the assignment, substituting in the indexing acceptor for the acceptor in the loop body: \begin{displaymath} \begin{array}{cl} &\codegenComm{\prim{parfor}~\underline{n}~\mathit{out}~(\lambda i~o. o := \prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i) * \prim{snd}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i)), \eta} \\ =& \begin{array}[t]{@{}l} \texttt{parfor(int i = 0; i < }n\texttt{; i+=1) \{}\\ \quad \codegenComm{\prim{idxAcc}~\mathit{out}~i := \prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i) * \prim{snd}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i), \eta[i \mapsto \texttt{i}]}\\ \texttt{\}} \end{array} \\ =& \begin{array}[t]{@{}l} \texttt{parfor(int i = 0; i < }n\texttt{; i+=1) \{}\\ \quad \codegenAcc{\mathsf{num}}{\prim{idxAcc}~\mathit{out}~i, \eta[i \mapsto \texttt{i}], []} =\\ \qquad \codegenExp{\mathsf{num}}{\prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i) * \prim{snd}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i), \eta[i \mapsto \texttt{i}], []}\texttt{;}\\ \texttt{\}} \end{array} \end{array} \end{displaymath} The acceptor part of the assignment is translated as follows: \begin{displaymath} \begin{array}{lcl} \codegenAcc{\mathsf{num}}{\prim{idxAcc}~\mathit{out}~i, \eta[i \mapsto \texttt{i}], []} &=&\codegenAcc{\underline{n}.\mathsf{num}}{\mathit{out}, \eta[i \mapsto \texttt{i}], [\texttt{i}]} \\ &=&\texttt{out[i]} \end{array} \end{displaymath} The expression part of the assignment is translated as follows, where we only spell out the left-hand side of the multiplication in detail; the right-hand side is similar. \begin{displaymath} \begin{array}{cl} &\codegenExp{\mathsf{num}}{\prim{fst}~(\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i), \eta[i \mapsto \texttt{i}], []} \\ =&\codegenExp{\mathsf{num}\times\mathsf{num}}{\prim{idx}~(\prim{zip}~\mathit{xs}~\mathit{ys})~i, \eta[i \mapsto \texttt{i}], [\texttt{.x1}]} \\ =&\codegenExp{\underline{n}.(\mathsf{num}\times\mathsf{num})}{\prim{zip}~\mathit{xs}~\mathit{ys}, \eta[i \mapsto \texttt{i}], [\texttt{i}, \texttt{.x1}]} \\ =&\codegenExp{\underline{n}.\mathsf{num}}{\mathit{xs}, \eta[i \mapsto \texttt{i}], [\texttt{i}]} \\ =&\texttt{xs[i]} \end{array} \end{displaymath} Putting everything together, we get the following translation of the original $\prim{parfor}$ loop, which has had all the data layout combinators translated away. \begin{displaymath} \texttt{parfor(int i = 0; i < $n$; i+=1) \{ out[i] = xs[i] * ys[i]; \}} \end{displaymath} A similar translation was recently presented in a more informal style by \citet{SteuwerReDu2017/cgo}. Their experimental results show that, in practice, keeping index expressions concise is important for generating efficient OpenCL code; they also discuss how to simplify index expressions by making use of range information about the indices involved.
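To see the kind of index simplification they describe, consider (our own illustrative example, not taken from their paper) an expression that splits an array and immediately flattens it again: $\prim{join}~m~n~\delta~(\prim{split}~n~m~\delta~\mathit{xs})$. Translating a read of element \texttt{i} using the $\prim{join}$ and $\prim{split}$ clauses of \autoref{fig:codegen-exp} gives \begin{displaymath} \begin{array}{cl} &\codegenExp{mn.\delta}{\prim{join}~m~n~\delta~(\prim{split}~n~m~\delta~\mathit{xs}), \eta, [\texttt{i}]} \\ =&\codegenExp{m.n.\delta}{\prim{split}~n~m~\delta~\mathit{xs}, \eta, [\texttt{i/}n, \texttt{i\%}n]} \\ =&\codegenExp{mn.\delta}{\mathit{xs}, \eta, [\texttt{(i/}n\texttt{)*}n\texttt{+i\%}n]} \\ =&\texttt{xs[(i/}n\texttt{)*}n\texttt{+i\%}n\texttt{]} \end{array} \end{displaymath} and the range information $0 \leq \texttt{i} < nm$ justifies rewriting this index to plain \texttt{i}, recovering the direct access \texttt{xs[i]}.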
\subsection{The Types of \ensuremath{\mathsf{DPIA}}\xspace} \label{sec:types} The type system of \ensuremath{\mathsf{DPIA}}\xspace, following Idealised Algol, separates \emph{data types}, which classify data (integers, floats, arrays, \emph{etc.}), from \emph{phrase types}, which classify the parts of a program according to the interface they offer. Phrase types are a generalisation to first-class status of the syntactic categories in a standard imperative language that distinguish between expressions (r-values), l-values, and statements. Phrase types in \ensuremath{\mathsf{DPIA}}\xspace comprise \emph{expressions}, which produce data, possibly reading from the store; \emph{acceptors}, which describe modifiable areas of the store (analogous to \emph{l-values} in imperative languages~\citep{Strachey00}); \emph{commands}, which modify the store; \emph{functions}, which are parameterised phrases; and \emph{records}, which offer a choice of multiple phrases. The separation into data and phrase types distinguishes Idealised Algol-style type systems from those for functional languages, which commonly use expression phrases for everything (permitting, for example, functional data). To facilitate interference control, we identify a subset of phrase types which are \emph{passive} (\autoref{sec:passive-types}), i.e.~essentially read-only, and so are safe to share across parallel threads. (We elaborate on what ``essentially read-only'' means in \autoref{sec:passive-types} and \autoref{sec:typing-rules}.) \subsubsection{Kinding Rules} \label{sec:kinding-rules} \begin{figure}[t] \small \begin{minipage}{0.3\linewidth} \begin{mathpar} \kappa ::= \mathsf{data} \mid \mathsf{phrase} \mid \mathsf{nat} \end{mathpar} \subcaption{Kinds}\label{fig:kinds} \end{minipage}% \begin{minipage}{0.3\linewidth} \begin{mathpar} \inferrule* {x : \kappa \in \Delta} {\Delta \vdash x : \kappa} \end{mathpar} \subcaption{Kinding Structural Rules}\label{fig:structural-kinding} \end{minipage} \begin{minipage}{0.3\linewidth} \begin{mathpar} \inferrule {\models \forall \sigma : \mathit{dom}(\Delta) \to \mathbb{N}.\sigma(I) = \sigma(J)} {\Delta \vdash I \equiv J : \mathsf{nat}} \end{mathpar} \subcaption{Type Equality}\label{fig:equality-kinding} \end{minipage} \medskip \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* { } {\Delta \vdash \underline{n} : \mathsf{nat}} \inferrule* {\Delta \vdash I : \mathsf{nat} \\ \Delta \vdash J : \mathsf{nat}} {\Delta \vdash I + J : \mathsf{nat}} \inferrule* {\Delta \vdash I : \mathsf{nat} \\ \Delta \vdash J : \mathsf{nat}} {\Delta \vdash I J : \mathsf{nat}} \end{mathpar} \subcaption{Natural Numbers}\label{fig:natural-number-kinding} \end{minipage} \medskip \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* { } {\Delta \vdash \mathsf{num} : \mathsf{data}} \inferrule* {\Delta \vdash I : \mathsf{nat}} {\Delta \vdash \prim{idx}(I) : \mathsf{data}} \inferrule* {\Delta \vdash I : \mathsf{nat} \\\\ \Delta \vdash \delta : \mathsf{data}} {\Delta \vdash I.\delta : \mathsf{data}} \inferrule* {\Delta \vdash \delta_1 : \mathsf{data} \\\\ \Delta \vdash \delta_2 : \mathsf{data}} {\Delta \vdash \delta_1 \times \delta_2 : \mathsf{data}} \end{mathpar} \subcaption{Data Types}\label{fig:data-type-kinding} \end{minipage} \medskip \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* {\Delta \vdash \delta : \mathsf{data}} {\Delta \vdash \mathsf{exp}[\delta] : \mathsf{phrase}} \inferrule* {\Delta \vdash \delta : \mathsf{data}} {\Delta \vdash \mathsf{acc}[\delta] : \mathsf{phrase}} \inferrule* { }
{\Delta \vdash \mathsf{comm} : \mathsf{phrase}} \inferrule* {\Delta \vdash \theta_1 : \mathsf{phrase} \\ \Delta \vdash \theta_2 : \mathsf{phrase}} {\Delta \vdash \theta_1 \times \theta_2 : \mathsf{phrase}} \inferrule* {\Delta \vdash \theta_1 : \mathsf{phrase} \\\\ \Delta \vdash \theta_2 : \mathsf{phrase}} {\Delta \vdash \theta_1 \to \theta_2 : \mathsf{phrase}} \inferrule* {\Delta \vdash \theta_1 : \mathsf{phrase} \\\\ \Delta \vdash \theta_2 : \mathsf{phrase}} {\Delta \vdash \theta_1 \to_{\textrm{p}} \theta_2 : \mathsf{phrase}} \inferrule* {\Delta, x : \kappa \vdash \theta : \mathsf{phrase} \\ \kappa \in \{\mathsf{nat},\mathsf{data}\}} {\Delta \vdash (x\mathord:\kappa) \to \theta : \mathsf{phrase}} \end{mathpar} \subcaption{Phrase Types}\label{fig:phrase-type-kinding} \end{minipage} \caption{Well-formed Types} \label{fig:types} \end{figure} We extend SCIR with both data type and size polymorphism, so we need a kind system. \autoref{fig:types} presents the kinding rules for \ensuremath{\mathsf{DPIA}}\xspace types. The kinds $\kappa$ of \ensuremath{\mathsf{DPIA}}\xspace include the major classifications into data types ($\mathsf{data}$) and phrase types ($\mathsf{phrase}$), along with the kind of type-level natural numbers ($\mathsf{nat}$). Types may contain variables, so we use a kinding judgement $\Delta \vdash \tau : \kappa$, which states that type $\tau$ has kind $\kappa$ in kinding context $\Delta$. \autoref{fig:structural-kinding} gives the variable rule that permits the use of type variables in well-kinded types. \autoref{fig:natural-number-kinding} presents the rules for type-level natural numbers: either constants $\underline{n}$, addition $I + J$, or multiplication $I J$ (where $I$ and $J$ range over terms of kind $\mathsf{nat}$). The rules for data types are presented in \autoref{fig:data-type-kinding}. We use $\delta$ to range over data types. The base types are $\mathsf{num}$ for numbers, and a data type of array indexes $\prim{idx}(n)$, parameterised by the size of the array being indexed. There are two compound data types. For any data type $\delta$ and natural number term $I$, $I.\delta$ is the data type of homogeneous arrays of $\delta$s of size $I$. (We opt for a concise notation for array types as they are pervasive in data parallel programming.) Heterogeneous compound data types (records) are built using the rule for $\delta_1 \times \delta_2$. The phrase types of \ensuremath{\mathsf{DPIA}}\xspace are given in \autoref{fig:phrase-type-kinding}. We use $\theta$ to range over phrase types. For each data type $\delta$, there are phrase types $\mathsf{exp}[\delta]$ for \emph{expression} phrases that produce data of type $\delta$, and $\mathsf{acc}[\delta]$ for \emph{acceptor} phrases that consume data of type $\delta$. The $\mathsf{comm}$ phrase type classifies \emph{command} phrases that may modify the store. Phrases that can be used in two different ways, $\theta_1$ or $\theta_2$, are classified using the phrase product type $\theta_1 \times \theta_2$. This type is distinct from the \emph{data} product type $\delta_1 \times \delta_2$: the data type represents a pair of data values; the phrase type represents an ``interface'' that offers two possible ``methods''. (For readers familiar with Linear Logic \cite{DBLP:journals/tcs/Girard87}, the phrase product is like ``with'' ($\with$) and the data product like ``tensor'' $(\otimes)$.) The final three phrase types are all variants of parameterised phrase types.
The phrase types $\theta_1 \to \theta_2$ and $\theta_1 \to_{\textrm{p}} \theta_2$ classify phrase functions. The $\mathrm{p}$ subscript denotes passive functions. The phrase type $(x\mathord:\kappa) \to \theta$ classifies a phrase that is parameterised either by a data type or a natural number. The types of \ensuremath{\mathsf{DPIA}}\xspace include arithmetic expressions, so we have a non-trivial notion of equality between types, written $\Delta \vdash \tau_1 \equiv \tau_2 : \kappa$. The key type equality rule is given in \autoref{fig:equality-kinding}: two arithmetic expressions are equal if they are equal as natural numbers for all interpretations ($\sigma$) of their free variables. This equality is lifted to all other types by structural congruence. \subsubsection{Passive Types} \label{sec:passive-types} \autoref{fig:passive-types} identifies the subset of phrase types that classify passive phrases. The opposite of passive is \emph{active}. We use $\phi$ to range over passive phrase types. An expression phrase type $\mathsf{exp}[\delta]$ is always passive --- phrases of this type can, by definition, only read the store. A compound phrase type is always passive if its component phrase types are all passive. Furthermore, a passive function type $\theta_1 \to_{\textrm{p}} \theta_2$ is always passive, and a plain function type is passive whenever its return type is passive (irrespective of the argument type). Passive types are essentially read-only. The one exception whereby a phrase of passive type may modify the store is a passive function with active argument and return types. Such a function can only modify the part of the store addressable through the active phrase it is supplied with as an argument. \begin{figure}[t] \small \centering \begin{mathpar} \inferrule* {\Delta \vdash \delta : \mathsf{data}} {\Delta \vdash \mathsf{exp}[\delta] : \mathsf{passive}} \inferrule* {\Delta \vdash \phi_1 : \mathsf{passive} \\ \Delta \vdash \phi_2 : \mathsf{passive}} {\Delta \vdash \phi_1 \times \phi_2 : \mathsf{passive}} \inferrule* {\Delta \vdash \theta : \mathsf{phrase} \\ \Delta \vdash \phi : \mathsf{passive}} {\Delta \vdash \theta \to \phi : \mathsf{passive}} \inferrule* {\Delta \vdash \theta_1 : \mathsf{phrase} \\ \Delta \vdash \theta_2 : \mathsf{phrase}} {\Delta \vdash \theta_1 \to_{\textrm{p}} \theta_2 : \mathsf{passive}} \inferrule* {\Delta, x : \kappa \vdash \theta : \mathsf{passive} \\ \kappa \in \{\mathsf{nat},\mathsf{data}\}} {\Delta \vdash (x\mathord:\kappa) \to \theta : \mathsf{passive}} \end{mathpar} \caption{Passive Types} \label{fig:passive-types} \end{figure} \subsection{Typing Rules for \ensuremath{\mathsf{DPIA}}\xspace} \label{sec:typing-rules} The typing judgement of \ensuremath{\mathsf{DPIA}}\xspace follows the SCIR system of \citet{OHearnPTT99} in distinguishing between passive and active uses of identifiers. Our judgement also has a kinding context for size and data type polymorphism. The judgement form has the following structure: \begin{displaymath} \typ{\Delta}{\Pi}{\Gamma}{P}{\theta} \end{displaymath} where $\Delta$ is the kinding context, $\Pi$ is a context of passively used identifiers, $\Gamma$ is a context of actively used identifiers, $P$ is a program phrase, and $\theta$ is a phrase type. All the types in $\Pi$ and $\Gamma$ are phrase types well-kinded by $\Delta$. The phrase type $\theta$ must also be well-kinded by $\Delta$. The order of entries does not matter in any of the contexts.
The contexts $\Delta$ and $\Pi$ are subject to contraction and weakening; context $\Gamma$ is not. The split context formulation of SCIR recalls that of Barber's DILL system \citep{barber96dual}, which also distinguishes between linear and unrestricted assumptions. The SCIR system differs in how movement between the zones is mediated in terms of passive and active types. Section 2.6 of \citet{OHearnPTT99} discusses the relationship between SCIR and Linear Logic. The core typing rules of \ensuremath{\mathsf{DPIA}}\xspace are given in \autoref{fig:typing-rules}. These rules define how variable phrases are formed, how parameterised and compound phrases are introduced and eliminated, and how passive and active types are managed. Any particular application of \ensuremath{\mathsf{DPIA}}\xspace is specified by giving a collection of primitive phrases \textsc{Primitives}, each of which has a closed phrase type. We describe a collection for data parallel programming in \autoref{sec:dpia-prims}. \autoref{fig:structural-rules} presents the rules for forming variable phrases, for implicit conversion between equal types, and for the use of primitives. At point of use, all variables are considered to be used actively. If the final phrase type is passive, then an active use may be converted to a passive one by the \TirName{Passify} rule. Primitives may be used in any context. \autoref{fig:intro-elim-rules} presents the rules for parameterised and compound phrases. These are all standard typed $\lambda$-calculus style rules, except for the use of separate contexts for a function and its arguments in the \TirName{App} rule. This ensures that every function and its argument use non-interfering active resources, maintaining the invariant that distinct identifiers refer to non-interfering phrases. Note that we do not require separate contexts for the two parts of a compound phrase in the \TirName{Pair} rule. Compound phrases offer two ways of interacting with the \emph{same} underlying resource (as in the with ($\with$) rule from linear logic). \autoref{fig:active-passive-rules} describes how passive and active uses of variables are managed. The \TirName{Activate} rule allows any variable that has been used passively to be treated as if it were used actively. The \TirName{Passify} rule allows active uses to be treated as passive, as long as the final phrase type is passive. The \TirName{Promote} rule turns functions into passive functions, as long as they do not contain any free variables used actively. The \TirName{Derelict} rule indicates that a passive function can always be seen as a normal function, if required.
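To see how these rules interact, consider a variable $x$ of expression type (this small derivation is our own illustration, not part of the formal development). The \TirName{Var} rule introduces $x$ into the active context, but since $\mathsf{exp}[\delta]$ is a passive type, \TirName{Passify} immediately allows us to move it to the passive context: \begin{mathpar} \inferrule*[right=Passify] {\inferrule*[right=Var]{ }{\typ{\Delta}{\Pi}{x : \mathsf{exp}[\delta]}{x}{\mathsf{exp}[\delta]}}} {\typ{\Delta}{\Pi, x : \mathsf{exp}[\delta]}{\cdot}{x}{\mathsf{exp}[\delta]}} \end{mathpar} Thus merely reading a variable never commits it to an active use, which is what makes it safe for expressions to be shared across the parallel iterations of a $\prim{parfor}$.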
\begin{figure*}[t] \small \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* [right=Var] { } {\typ{\Delta}{\Pi}{\Gamma, x : \theta, \Gamma'}{x}{\theta}} \inferrule* [right=Conv] {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_1} \\\\ \Delta \vdash \theta_1 \equiv \theta_2 : \mathsf{phrase}} {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_2}} \inferrule* [right=Prim] {\mathsf{prim} : \theta \in \textsc{Primitives}} {\typ{\Delta}{\Pi}{\Gamma}{\mathsf{prim}}{\theta}} \end{mathpar} \subcaption{Structural Rules}\label{fig:structural-rules} \end{minipage} \medskip \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* [right=Lam] {\typ{\Delta}{\Pi}{\Gamma, x : \theta_1}{P}{\theta_2}} {\typ{\Delta}{\Pi}{\Gamma}{\lambda x.P}{\theta_1 \to \theta_2}} \inferrule* [right=App] {\typ{\Delta}{\Pi}{\Gamma_1}{P}{\theta_1 \to \theta_2} \\ \typ{\Delta}{\Pi}{\Gamma_2}{Q}{\theta_1}} {\typ{\Delta}{\Pi}{\Gamma_1, \Gamma_2}{P~Q}{\theta_2}}\medskip \end{mathpar} \begin{mathpar} \inferrule* [right=TLam] {\typ{\Delta, x : \kappa}{\Pi}{\Gamma}{P}{\theta} \\ x \not\in \mathit{fv}(\Pi, \Gamma)} {\typ{\Delta}{\Pi}{\Gamma}{\Lambda x. P}{(x \mathord: \kappa) \to \theta}} \inferrule* [right=TApp] {\typ{\Delta}{\Pi}{\Gamma}{P}{(x \mathord:\kappa) \to \theta} \\ \Delta \vdash e : \kappa} {\typ{\Delta}{\Pi}{\Gamma}{P~e}{\theta[e/x]}}\medskip \end{mathpar} \begin{mathpar} \inferrule* [right=Pair] {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_1} \\ \typ{\Delta}{\Pi}{\Gamma}{Q}{\theta_2}} {\typ{\Delta}{\Pi}{\Gamma}{\langle P, Q \rangle}{\theta_1 \times \theta_2}} \inferrule* [right=Proj] {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_1 \times \theta_2}} {\typ{\Delta}{\Pi}{\Gamma}{P.i}{\theta_i}} \end{mathpar} \subcaption{Introduction and Elimination Rules}\label{fig:intro-elim-rules} \end{minipage} \medskip \begin{minipage}{1.0\linewidth} \begin{mathpar} \inferrule* [right=Activate] {\typ{\Delta}{\Pi, x : \theta}{\Gamma}{P}{\theta'}} {\typ{\Delta}{\Pi}{\Gamma, x : \theta}{P}{\theta'}} \inferrule* [right=Passify] {\typ{\Delta}{\Pi}{\Gamma, x : \theta}{P}{\phi}} {\typ{\Delta}{\Pi, x : \theta}{\Gamma}{P}{\phi}}\medskip \end{mathpar} \begin{mathpar} \inferrule* [right=Promote] {\typ{\Delta}{\Pi}{\cdot}{P}{\theta_1 \to \theta_2}} {\typ{\Delta}{\Pi}{\cdot}{P}{\theta_1 \to_{\textrm{p}} \theta_2}} \inferrule* [right=Derelict] {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_1 \to_{\textrm{p}} \theta_2}} {\typ{\Delta}{\Pi}{\Gamma}{P}{\theta_1 \to \theta_2}} \end{mathpar} \subcaption{Active and Passive Phrase Rules}\label{fig:active-passive-rules} \end{minipage} \caption{Typing Rules: Indexed Affine Linear $\lambda$-Calculus with Passivity \cite{OHearnPTT99}} \label{fig:typing-rules} \end{figure*} \paragraph{\ensuremath{\mathsf{DPIA}}\xspace's functional sub-language} By inspection of the rules, we can see that if we restrict to phrase types constructed from $\mathsf{exp}[\delta]$, functions, polymorphic functions, and tuples, then the constraints on multiple uses of variables in \ensuremath{\mathsf{DPIA}}\xspace cease to apply. Therefore, \ensuremath{\mathsf{DPIA}}\xspace has a sub-language that has the same type system as a normal (non-substructural) typed $\lambda$-calculus with base types for numbers, arrays and tuples, and a limited form of polymorphism. When we introduce the functional primitives for \ensuremath{\mathsf{DPIA}}\xspace in the next section, we will enrich this $\lambda$-calculus with arithmetic, array manipulators, and higher-order array combinators. 
It is this purely functional sub-language of \ensuremath{\mathsf{DPIA}}\xspace that allows us to embed functional data parallel programs in a semantics-preserving way. \subsection{Data Parallel Programming Primitives} \label{sec:dpia-prims} \begin{figure} \small \begin{minipage}{1.0\linewidth} \begin{tabular*}{\linewidth}{>{$}l<{$}@{\hspace{0.4em}}>{$}c<{$}>{$}l<{$}} \underline{n}&:&\mathsf{exp}[\mathsf{num}] \\ \prim{negate}&:&\mathsf{exp}[\mathsf{num}] \to \mathsf{exp}[\mathsf{num}] \\ (+,*,/,-) &:&\mathsf{exp}[\mathsf{num}] \times \mathsf{exp}[\mathsf{num}] \to \mathsf{exp}[\mathsf{num}] \bigskip\\ \prim{map}&:&(n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2]) \to \mathsf{exp}[n.\delta_1] \to \mathsf{exp}[n.\delta_2] \\ \prim{reduce}&:&(n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2] \to \mathsf{exp}[\delta_2]) \to \mathsf{exp}[\delta_2] \to \mathsf{exp}[n.\delta_1] \to \mathsf{exp}[\delta_2]\bigskip\\ \prim{zip}&:&(n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{exp}[n.\delta_1] \to \mathsf{exp}[n.\delta_2] \to \mathsf{exp}[n.(\delta_1 \times \delta_2)] \\ \prim{split}&:&(n~m : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{exp}[nm.\delta] \to \mathsf{exp}[m.n.\delta] \\ \prim{join}&:&(n~m : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{exp}[n.m.\delta] \to \mathsf{exp}[nm.\delta] \\ \prim{pair}&:&(\delta_1~\delta_2: \mathsf{data}) \to \mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2] \to \mathsf{exp}[\delta_1 \times \delta_2]\\ \prim{fst}&:&(\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{exp}[\delta_1 \times \delta_2] \to \mathsf{exp}[\delta_1] \\ \prim{snd}&:&(\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{exp}[\delta_1 \times \delta_2] \to \mathsf{exp}[\delta_2] \\ \end{tabular*} \subcaption{Functional primitives}\label{fig:func-prim} \end{minipage} \vspace{1em} \begin{minipage}{1.0\linewidth} \begin{tabular*}{\linewidth}{>{$}l<{$}>{$}c<{$}>{$}l<{$}} \prim{skip}&:&\mathsf{comm} \\ (\mathord;)&:&\mathsf{comm} \times \mathsf{comm} \to \mathsf{comm} \\ \prim{new}&:&(\delta : \mathsf{data}) \to (\mathsf{var}[\delta] \to \mathsf{comm}) \to \mathsf{comm} \qquad (\text{where }\mathsf{var}[\delta] = \mathsf{acc}[\delta] \times \mathsf{exp}[\delta] : \mathsf{phrase}) \\ (:=)&:&\mathsf{acc}[\mathsf{num}] \times \mathsf{exp}[\mathsf{num}] \to \mathsf{comm} \\ \prim{for}&:&(n : \mathsf{nat}) \to (\mathsf{exp}[\mathrm{idx}(n)] \to \mathsf{comm}) \to \mathsf{comm} \\ \prim{parfor}&:&(n : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{acc}[n.\delta] \to (\mathsf{exp}[\mathrm{idx}(n)] \to \mathsf{acc}[\delta] \to_{\textrm{p}} \mathsf{comm}) \to \mathsf{comm} \bigskip\\ \prim{splitAcc}&:&(n~m : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{acc}[m.n.\delta] \to \mathsf{acc}[nm.\delta] \\ \prim{joinAcc}&:&(n~m : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{acc}[nm.\delta] \to \mathsf{acc}[n.m.\delta] \\ \prim{pairAcc_1}&:&(\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{acc}[\delta_1 \times \delta_2] \to \mathsf{acc}[\delta_1] \\ \prim{pairAcc_2}&:&(\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{acc}[\delta_1 \times \delta_2] \to \mathsf{acc}[\delta_2] \\ \prim{zipAcc_1} &:& (n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to \mathsf{acc}[n.(\delta_1 \times \delta_2)] \to \mathsf{acc}[n.\delta_1] \\ \prim{zipAcc_2} &:& (n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to
\mathsf{acc}[n.(\delta_1 \times \delta_2)] \to \mathsf{acc}[n.\delta_2] \bigskip\\ \prim{idx} &:&(n : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{exp}[n.\delta] \to \mathsf{exp}[\mathrm{idx}(n)] \to \mathsf{exp}[\delta] \\ \prim{idxAcc} &:&(n : \mathsf{nat}) \to (\delta : \mathsf{data}) \to \mathsf{acc}[n.\delta] \to \mathsf{exp}[\mathrm{idx}(n)] \to \mathsf{acc}[\delta] \end{tabular*} \subcaption{Imperative primitives}\label{fig:imp-prim} \end{minipage} \vspace{2em} \begin{minipage}{1.0\linewidth} \begin{tabular*}{\linewidth}{>{$}l<{$}@{\hspace{1.4em}}>{$}c<{$}>{$}l<{$}} \prim{mapI} &:& (n : \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{acc}[\delta_2] \to_{\textrm{p}} \mathsf{comm}) \to \mathsf{exp}[n.\delta_1] \to \mathsf{acc}[n.\delta_2] \to \mathsf{comm} \\ \prim{reduceI}&:& (n: \mathsf{nat}) \to (\delta_1~\delta_2 : \mathsf{data}) \to (\mathsf{exp}[\delta_1] \to \mathsf{exp}[\delta_2] \to \mathsf{acc}[\delta_2] \to \mathsf{comm}) \to \\ & & \qquad \mathsf{exp}[\delta_2] \to \mathsf{exp}[n.\delta_1] \to (\mathsf{exp}[\delta_2] \to \mathsf{comm}) \to \mathsf{comm} \\ \end{tabular*} \subcaption{Intermediate imperative combinators}\label{fig:imp-intermediate} \end{minipage} \vspace{1em} \caption{Data Parallel Programming Primitives, Functional and Imperative} \label{fig:primitives} \end{figure} \autoref{sec:typing-rules} has described a general framework for a language with interference control. We now instantiate this framework with typed primitive operations for data parallel programming, outlined in \autoref{fig:primitives}. Our primitives fall into two principal categories: high-level functional primitives, and low-level imperative primitives. Programs that are the input to our translation process are composed of the high-level functional primitives. These programs contain uses of $\prim{map}$ and $\prim{reduce}$ that have no counterpart in low-level languages for data-parallel computation. Our translation process converts these into low-level combinators (\autoref{sec:translation-i}). A final lowering translation removes all functional primitives except arithmetic (\autoref{sec:translation-iii}). As primitives are treated specially by the \textsc{Prim} rule, they can (with the aid of a little $\eta$-expansion) always be promoted to be passive. Thus, it is never necessary to annotate the arrows of a first-order primitive with a $\mathrm{p}$ subscript. The only annotations that are necessary are those on the final arrows of function types occurring inside the type of a higher-order primitive whose function argument is required to be passive (in our case, only $\prim{parfor}$ and $\prim{mapI}$). \paragraph{Functional Primitives} \autoref{fig:func-prim} lists the type signatures of the primitives used for constructing purely functional data parallel programs. These fit into three groups. The first group consists of numeric literals ($\underline{n}$) and first-order operations on scalars ($\prim{negate},(+),(-),(*),(/)$). The second group contains the two key higher-order functional combinators for constructing array processing programs: $\prim{map}$ and $\prim{reduce}$. These have (the Idealised Algol renditions of) their standard types, extended with size information.
The third group comprises functions for manipulating data layouts: $\prim{zip}$ joins two arrays of equal length into an array of pairs, $\prim{split}$ breaks a one-dimensional array into a two-dimensional array, $\prim{join}$ flattens a two-dimensional array into a one-dimensional array, $\prim{pair}$ constructs a pair, and $\prim{fst}$ and $\prim{snd}$ deconstruct a pair. All of these primitives are data type indexed, and those that operate on arrays are also size indexed. \paragraph{Example: dot-product} For input vectors $\mathit{xs}$ and $\mathit{ys}$ of length $n$, the dot-product example from \autoref{sec:motivation} is written using the functional primitives as follows; the only difference is that all of the size and data type information is spelled out explicitly (these arguments can usually be inferred, so in practice we often omit them): \begin{displaymath} \prim{reduce}~n~\mathsf{num}~\mathsf{num}~ (\lambda x~y.~x+y)~ \underline{0}~ (\prim{map} \begin{array}[t]{@{\hspace{0.3em}}l} n~(\mathsf{num} \times \mathsf{num})~\mathsf{num} \\ (\lambda x.~\prim{fst}~\mathsf{num}~\mathsf{num}~x * \prim{snd}~\mathsf{num}~\mathsf{num}~x)\\ (\prim{zip}~n~\mathsf{num}~\mathsf{num}~\mathit{xs}~\mathit{ys})) \end{array} \end{displaymath} Likewise, the specialised version of dot-product from \autoref{sec:motivation} with nested $\prim{split}$s and $\prim{join}$s can be expressed with detailed size and type information throughout. \paragraph{Imperative Primitives} \autoref{fig:imp-prim} gives the type signatures for the imperative primitives. These are split into two groups. The first group includes the standard Idealised Algol combinators that turn \ensuremath{\mathsf{DPIA}}\xspace into an imperative programming language: $(\mathord;)$ sequences commands; $\prim{skip}$ is the command that does nothing; $\prim{new}~\delta$ allocates a new mutable variable on the stack, where a variable is a pair of an acceptor and an expression; and $(:=)$ assigns the value of an expression to an acceptor. For-loops are constructed by the combinators $\prim{for}$ and $\prim{parfor}$. Sequential for-loops $\prim{for}~n~b$ take a number of iterations $n$ and a loop body $b$, a command parameterised by the iteration number. Parallel for-loops $\prim{parfor}~n~\delta~a~b$ take an additional acceptor argument $a : \mathsf{acc}[n.\delta]$ that is used for the output of each iteration. The loop body of a parallel for-loop is required to be passive. This ensures that the side-effects of the loop body are restricted to their allotted place in the output array. This is illustrated by the non-typability of a phrase such as: \begin{displaymath} \prim{parfor}~n~\delta~a~(\lambda i~o.~b := \prim{idx}~n~\mathsf{num}~E~i) \end{displaymath} where $b$ is some identifier of type $\mathsf{acc}[\mathsf{num}]$. If this loop were executed in parallel, then it would contain a data race, as each parallel iteration would attempt to write to $b$. Thus, by ensuring that its body is passive and by explicitly passing in an acceptor $o$, $\prim{parfor}$ enables deterministic, data-race-free parallelism in an imperative setting, a key feature of \ensuremath{\mathsf{DPIA}}\xspace. We will see how the acceptor-transforming behaviour of our $\prim{parfor}$ primitive is translated into a normal, potentially racy, parallel for loop in \autoref{sec:translation-iii}. Formally, newly allocated variables are zero-initialised (and pointwise zero-initialised for compound data), but in our implementation we typically optimise away the initialisation.
In particular, it is never necessary to initialise the dynamic memory allocations that are introduced by the translation of the functional primitives into imperative code, since all dynamically allocated memory is written to before it is read. The second group of imperative primitives includes the acceptor variants of the $\prim{split}$, $\prim{join}$, $\prim{pair}$ and $\prim{zip}$ functional primitives, and array indexing. The acceptor primitives transform acceptors of compound data into acceptors of their components. They will be used to funnel data into the correct positions in the imperative translations of functional programs. In the final translation to parallel C code, described in \autoref{sec:translation-iii}, all acceptor phrases will be translated into l-values with explicit index computations. \paragraph{Intermediate Imperative Combinators} \autoref{fig:imp-intermediate} gives the type signatures for the intermediate imperative counterparts of $\prim{map}$ and $\prim{reduce}$. These combinators will be used in our translation from higher-order functional programs to higher-order imperative programs in \autoref{sec:translation-i}. In the second stage of the translation they will be substituted by implementations in terms of variable allocation and for-loops (\autoref{sec:translation-ii}).
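To convey the flavour of these combinators, here is a sketch of how $\prim{mapI}$ can be realised with $\prim{parfor}$; this is only an illustration consistent with the types in \autoref{fig:primitives}, not the definition given in \autoref{sec:translation-ii}: \begin{displaymath} \prim{mapI}~n~\delta_1~\delta_2~f~\mathit{xs}~\mathit{out} \;=\; \prim{parfor}~n~\delta_2~\mathit{out}~(\lambda i~o.~f~(\prim{idx}~n~\delta_1~\mathit{xs}~i)~o) \end{displaymath} Each parallel iteration reads the $i$-th input element and writes through its own acceptor $o$, so the passivity requirement on the body of $\prim{parfor}$ is met exactly because $\prim{mapI}$ demands a passive body $f$.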
\section*{Acknowledgments} The work of J.G. was supported by the Research Centre ``Elementarkraefte und Mathematische Grundlagen'' at the University of Mainz. The authors would like to thank J. Arrington, R. Milner, L. Pentchev, and C. Perdrisat for helpful communications and discussions.
\section{Introduction} \setcounter{equation}{0} \renewcommand{\theequation}{{1}.\arabic{equation}} \subsection{A brief review of the BZ process} As the Penrose process famously shows, the rotational energy of a Kerr black hole (BH) is in principle extractable, by exploiting the presence of negative-energy orbits in the ergosphere \citep{pen69,mis73,las14}, although this {\em mechanical} process is not regarded as viable enough to fuel astrophysical phenomena such as high-energy gamma-ray jets. It was therefore natural that attention turned to an {\em electrodynamic} mechanism analogous to that of pulsar electrodynamics \citep{gj69}, because magnetic fields are known to be very effective at transferring angular momentum and energy from rapidly rotating objects such as a neutron star (NS). A magnetized NS is thought to naturally anchor an active magnetosphere of its own, with the field line angular velocity (FLAV) $\Omega_{\rm F}=\Omega_{\rm NS}$ fixed by the `boundary condition'. This means that a unipolar induction battery \citep{lan84} is at work on the stellar surface, driving electric currents through the pulsar magnetosphere together with the outgoing Poynting flux. Subsequent to the Penrose process came the Blandford-Znajek (BZ) process, i.e., an {\em electromagnetic} process of extracting energy from Kerr BHs, which has so far been taken to be promising and efficient; its efficiency is given by $\epsilon=\Omega_{\rm F}/\OmH$ \citep{bla77,zna78,bla79}, where $\OmH$ is the hole's angular velocity. BH electrodynamics was then formulated in the $3+1$ formalism by Macdonald \& Thorne \citep{mac82}, and Thorne, Price \& Macdonald \citep{tho86} proposed `The Membrane Paradigm', in which the `horizon battery' was explicitly regarded as residing in the hole's `surface of no return'. \cite{phi83a,phi83b} was the first to attempt a comprehensive model for `BH-driven hydromagnetic flows' or jets for AGNs, referring to the pulsar wind theory \citep{oka78,ken83}. It has been thought since then that a `magnetized' Kerr BH would possess not only a battery but also an internal resistance $Z_{\rm H}$ on the horizon, as seen in `a little table on BH circuit theory for engineers' (Fig.\ 3 in \citet{phi83a}). The picture in the 1980s was thus of a magnetosphere consisting of a {\em double} wind structure with a {\em single} circuit fed by a battery on the horizon. The BZ process and the Membrane Paradigm have, however, invited serious critiques of causality violation \citep[see e.g.][]{pun89,pun90,bla02}. It was indeed pointed out that the BZ process is ``{\em incomplete and additional physics is needed}'' \citep{pun90}. It appears that the causality question and the related confusion since the 1990s have been bottlenecking sound progress in BH electrodynamics. We attempt to trace the question exhaustively to its origins. There appear to be two inextricably linked causes: one on the pulsar-electrodynamics side, and the other on the general-relativistic side.
On the former side, the role of $\Omega_{\rm F}$ as the FLAV appears to have been well explored, whereas its role as the potential gradient unfortunately has not; consequently, on the latter side, the frame-dragging angular velocity (FDAV) $\om$ could not be coupled smoothly with $\Omega_{\rm F}$ in the BH's {\em unipolar induction}. Also, a Kerr BH itself is not an electrodynamic object like a magnetized NS, but basically a thermodynamic object, fated to obey the four laws of thermodynamics \citep{tho86,OK90,OK91,KO91}. The electrodynamic process of extraction of energy is therefore governed by the first law, $c^2 dM=\Th dS+\OmH dJ$, and the second law, $dS\geq 0$, because the former defines the efficiency of extraction and the latter poses an important restriction on that efficiency (see section \rfs{first/eps}). It soon became obvious that what was lacking in the BZ process is the frame-dragging (FD) effect \citep[see e.g.][]{bla02}. `{\em The dragging of inertial frames}' was indeed mentioned in \cite{bla77} and \cite{phi83a}, and was in fact fully incorporated into the $3+1$ formulation of BH electrodynamics in \cite{mac82}. The `{\em needed additional physics}' really is thermodynamics. It is the FD effect that bridges the event horizon between (BH) thermodynamics and (pulsar) electrodynamics (see sections \rfs{FFPM} and \rfs{BHTsub}), and the resulting {\em physics} is `gravito-thermo-electrodynamics', unified by coupling the FDAV $\om$ with the FLAV $\Omega_{\rm F}$. The key is to elucidate `why, where, and how' the coupling gives rise to a violation of iso-rotation and then to the breakdown of the freezing-in and force-free conditions (see \citet{mei12} and \citet{bla19} for comprehensive reviews from a different viewpoint). \subsection{A brief overview of this paper} The following is a brief outline of each section. Sections \rfs{FFPM} and \rfs{BHTsub} briefly describe the basic properties of pulsar electrodynamics and BH thermodynamics, in order to smoothly unify them with the help of the FD effect toward BH gravito-thermo-electrodynamics (GTED). It is clarified how the first law indicates the unique way of energy extraction under the strict restrictions of the second law. For example, the first law allows us to define the overall efficiency of extraction by $\bar{\epsilon}_{\rm GTED}=\OmFb/\OmH$, and the second law gives the restriction $0\lo \bar{\epsilon}_{\rm GTED}\lo 1$ \citep{bla77}. The FD effect is visualizable by `coordinatizing' $\om$ and $\Omega_{{\rm F}\omega}$ along each field line (see Figure \rff{Flux-om}). Section \rfs{nature} clarifies the nature of the BZ process. Twelve key statements, both positive from a {\em physical observer's} viewpoint and rather negative from a {\em static observer's} viewpoint, are picked out of \cite{bla77} and \cite{mac82}. The BZ process may be referred to as the single-pulsar model in circuit theory, in the sense that a {\em single} battery in the horizon, with an internal resistance in addition to the external one, feeds the two wind zones, with the pair-production discharge taking place between the two light surfaces, under the condition of {\em negligible violation} of the force-free condition. Vital difficulties and contradictions, as well as some important suggestions, have been pointed out \citep[see e.g.][]{pun89,pun90}.
In section \rfs{T-GTED}, we present the needed modifications of the BZ process under several major premises. One of them is that the large-scale poloidal magnetic field $\mbox{\boldmath $B$}_{\rm p}$ extends from near the horizon S$_{\rm H}$\ to the infinity surface \Sinf\ with the FLAV $\Omega_{\rm F}=$~constant (Ferraro's law of iso-rotation). We presume that magnetic field lines {\em need} not thread the horizon, nor be anchored there. These premises must be justified on firm physical bases. In section \rfs{ED3/1F} we make full use of the $3+1$ formulation with the freezing-in and force-free conditions, where the FD effect by $\om$ plays an indispensable role, and we rederive important quantities and relations. Apart from $\Omega_{\rm F}$, the force-free magnetosphere possesses another integral function of the stream function $\Psi$, the field angular momentum flux (or the current function) $I(\Psi)$. Physical quantities are measured by the zero-angular-momentum-observers (ZAMOs) or `physical observers' \citep{bla77} circulating round the hole with $\om$, and in particular, the ZAMO-FLAV is denoted by $\Omega_{{\rm F}\omega}=\Omega_{\rm F}-\om$. It is shown in section \rfs{NullS} that the FLAV $\Omega_{\rm F}$ obeys the iso-rotation law throughout the magnetosphere with perfect conductivity, while the ZAMO-FLAV $\Omega_{{\rm F}\omega}$ does not, owing to the FD effect; the violation of the iso-rotation law in $\Omega_{{\rm F}\omega}$ then leads to the breakdown of the freezing-in and force-free conditions at the `null surface' $\SN$ \citep{oka92}, giving rise to $\Omega_{{\rm F}\omega}=I=0$, the particle velocity $\mbox{\boldmath $v$}=0$ and the current $\vcj=0$ at $\SN$. The `inductive' membrane $\SN$ must lie between the outer and inner light surfaces, S$_{\rm oL}$\ and \SIL, for the outflow and inflow, respectively, and hence some pair-production mechanism must be at work there. The breakdown of the two conditions imposes strong constraints on building a reasonable gap model. Section \rfs{eMP} extends the Membrane Paradigm \citep{tho86} from {\em one} membrane on the stretched horizon to {\em three} membranes, namely, two resistive membranes on the horizon and infinity surfaces, \SffH\ and S$_{{\rm ff}\infty}$, and one inductive membrane on the null surface $\SN$ (or Gap ${\cal G}_{\rm N}$). The force-free magnetosphere is edged with the resistive membranes of surface resistivity ${\cal R}=4\pi/c$ \citep{zna78}, where the Ohmic dissipation of the surface currents implies an increase of the hole's entropy or particle acceleration, whereas the inductive membrane $\SN$ divides the magnetosphere into the two force-free domains, $\calDout$ and $\calDin$, and is installed with a pair of unipolar induction batteries with electromotive forces (EMFs), $\calEout$ and $\calEin$, driving currents along the circuits $\calCout$ and $\calCin$ in $\calDout$ and $\calDin$, respectively. There is a huge voltage drop $\Dl V$ at $\SN$ between the two EMFs (see equation (\rf{Dl-V})).
One of the most important constraints in constructing the Gap model is $I(\ell,\Psi)=\Omega_{{\rm F}\omega}(\ell,\Psi)=0$ at $\SN$, which specifies the Gap structure (see section \rfs{m-mdGap}). When we presume $I=0$ in $|\Omega_{{\rm F}\omega}|\lo\Dlom$, where $\Dlom$ is the half-width of the Gap, the Gap will be filled with zero-angular-momentum-particles (\zamp s) pair-produced due to the voltage drop $\Dl V$, and the `zero-angular-momentum' state of the Gap (ZAM-Gap) will be maintained, i.e., $I(\ell,\Psi)=\Omega_{{\rm F}\omega}(\ell,\Psi)=0$ in $|\Omega_{{\rm F}\omega}|\lo\Dlom$. It is argued (cf.\ section \rfs{BC-SN}) that the {\em in}going flow of {\em negative} angular momentum is equivalent to the {\em out}going flow of {\em positive} angular momentum in the inner domain $\calDin$, and this fact is helpful for understanding a smooth flow of positive angular momentum beyond the Gap from the horizon S$_{\rm H}$\ to the infinity surface \Sinf. The inevitable existence of the ZAM-Gap ${\cal G}_{\rm N}$ between the two force-free domains will allow us to impose the boundary condition determining the eigenfunction $\Omega_{\rm F}(\Psi)$ of the eigen-magnetosphere. In section \rfs{R-T-D}, as opposed to the single-pulsar model in the conventional BZ process, we attempt to explain the twin-pulsar model as a natural outcome of the modified BZ process, in terms of a \RTDy\ \citep{oka15a}. It seems that the outer domain of the force-free magnetosphere behaves like a {\em normal} pulsar-type magnetosphere around a {\em hypothetical} normal NS with $\Omega_{\rm F}$, and the inner domain $\calDin$ like an {\em abnormal} pulsar magnetosphere around another {\em hypothetical} NS spinning with $-(\OmH-\Omega_{\rm F})$. The difference of the spin rates is $\Omega_{\rm F}-[-(\OmH-\Omega_{\rm F})]=\OmH$, and this suggests that the null surface $\SN$ may be regarded as a kind of rotational-tangential discontinuity, although it is quite different from any of the ordinary MHD discontinuities \citep{lan84}. The last section \rfs{Dis-Con} is devoted to a summary, discussion, and conclusions, with some remaining questions listed. One of the conclusions is that no electrodynamic process will be viable without complying with the first three laws of thermodynamics, with the help of the dragging of inertial frames.
\section{The basics of the force-free pulsar magnetosphere} \lbs{FFPM} \setcounter{equation}{0} \renewcommand{\theequation}{{2}.\arabic{equation}} For a force-free pulsar magnetosphere filled with perfectly conductive plasma, with $\mbox{\boldmath $B$}_{\rm p}=-(\mbox{\boldmath $t$}\times\mbox{\boldmath $\nabla$}\Psi/2\pi\vp)$ for the poloidal component of the magnetic field $\mbox{\boldmath $B$}$, one has two integral functions of $\Psi$, i.e., $\Omega_{\rm F}$ and $I$, from the induction equation and the conservation of the field angular momentum (see section \rfs{ED3/1F}): $\Omega_{\rm F}(\Psi)$ denotes the FLAV in wind theory or the potential gradient in circuit theory, and $I(\Psi)$ denotes the angular momentum flux (multiplied by $-2/c$) or the current function in wind or circuit theory. The toroidal component is given by $\Bt=-(2I/\vp c)$, and the electric field is given by $\mbox{\boldmath $E$}_{\rm p}=-(\Omega_{\rm F}/2\pi c)\mbox{\boldmath $\nabla$}\Psi$ from the induction equation with the freezing-in condition (with $E_{\rm t}\equiv 0$ by axisymmetry). Then, the electromagnetic Poynting and angular momentum fluxes become \begin{equation} \vcSEM=\Omega_{\rm F}\vcSJ, \ \ \vcSJ=(I/2\pi c)\mbox{\boldmath $B$}_{\rm p}, \lb{vcSEM/SJ} \end{equation} where the toroidal component of $\vcSEM$ (and of other fluxes) is omitted here and in what follows. Contrary to a force-free BH magnetosphere, there is no reason nor necessity for the force-free condition to break down within a force-free pulsar magnetosphere, because one may determine $\Omega_{\rm F}$ rather automatically by imposing the `boundary condition' $\Omega_{\rm F}=\Omega_{\rm NS}$ for field lines emanating from the magnetized NS, where $\Omega_{\rm NS}$ is the surface angular velocity of the star; we may usually suppose that the force-free condition has already broken down in the `matter-dominated' interior with $I=0$. On the other hand, in order to determine the eigenfunction $I(\Psi)$, one needs a kind of process that terminates the force-free `field-dominated' domain by restoring the particle inertia so far neglected in the force-free domain, which can be expressed in a few equivalent ways \citep{okam06}.
One of them is the `criticality condition' at the fast magnetosonic surface S$_{\rm F}$ ($\approx$\Sinf) in wind theory, or the infinity resistive membrane S$_{{\rm ff}\infty}$\ with the surface resistivity ${\cal R}=4\pi/c=377$~Ohm in circuit theory, containing a layer from S$_{\rm F}$ at $\ell=\ell_{\rm F}$ to \Sinf\ at $\ell=\ell_{\infty}$ where the transfer of field energy to kinetic energy takes place in the form of MHD acceleration \citep{oka74}; \begin{equation} I_{\rm NS}(\Psi)=\frac{1}{2}\Omega_{\rm F}(B_{\rm p}\vp^2)_{{\rm ff}\infty}, \lb{I/Pul} \end{equation} which is equivalent to the `radiative' condition and to Ohm's law for the surface current on the resistive membrane S$_{{\rm ff}\infty}$. The toroidal field $\Bt$ is then regarded as the swept-back component of $\mbox{\boldmath $B$}_{\rm p}$ due to inertial loadings (such as MHD acceleration) on the terminating surface S$_{{\rm ff}\infty}$\ of the force-free domain. The behavior of $I=I(\ell,\Psi)$ is then described as follows: \begin{eqnarray} I(\ell,\Psi) = \left\{ \begin{array}{ll} 0 & ;\ \ell \lo \ell_{\rm NS}, \\[1mm] I_{\rm NS}(\Psi) & ;\ \ell_{\rm NS} \lo \ell \lo \ell_{\rm F}, \\[1mm] \to 0 & ;\ \ell_{\rm F} \lo \ell \lo \ell_{\infty} \end{array} \right. \lb{I/Pul-M} \end{eqnarray} (see equation (\rf{OL-I}) and Figure \rff{GapI} for a Kerr hole's force-free magnetosphere). In this model we do not intend to take into account the complicated interactions of the force-free pulsar wind with the interstellar medium permeated by the general magnetic field. We simply assume that $I(\ell,\Psi)$ tends to zero for $\ell\to\infty$ and also $\vp\to\infty$; this presumes that all the Poynting energy is eventually transferred to particle kinetic energy. We consider that the force-free model is not applicable to the interior of the NS, and hence $I=0$ there. Instead, there will be a kind of `inductive membrane' on the NS crust, on which a unipolar induction battery is at work to drive currents throughout the pulsar magnetosphere, whose field lines are anchored in the magnetized NS. That is to say, the NS accommodates the sources of the angular momentum flux $\vcSJ$ as well as of the electromagnetic Poynting flux $\vcSEM$, together with the source of charged particles of both signs (at least in principle) at its surface. The force-free pulsar wind consists of charge-separated plasma, e.g., electrons and/or positrons, and the particle velocity is given by $\mbox{\boldmath $v$}=\vcj/\vre$, which means that current-field-streamlines in the force-free domain are equipotentials everywhere, and $\mbox{\boldmath $j$}_{\rm p}=\vre\mbox{\boldmath $v$}_{\rm p}=-(1/2\pi c)(dI_{\rm NS}/d\Psi)\mbox{\boldmath $B$}_{\rm p}$. The current-closure condition holds along each current line in circuit theory. There is no reason or necessity to look for a breakdown of the force-free condition, and this is quite different from the hole's force-free magnetosphere, as argued in the following.
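The equipotential property just mentioned admits a one-line check (a direct consequence of the above definitions, not an additional assumption): since $\mbox{\boldmath $E$}_{\rm p}\parallel\mbox{\boldmath $\nabla$}\Psi$ while $\mbox{\boldmath $j$}_{\rm p}\parallel\mbox{\boldmath $B$}_{\rm p}$ with $\mbox{\boldmath $B$}_{\rm p}\perp\mbox{\boldmath $\nabla$}\Psi$, one has \begin{displaymath} \mbox{\boldmath $E$}_{\rm p}\cdot\mbox{\boldmath $j$}_{\rm p}\propto\mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $B$}_{\rm p}=0, \end{displaymath} i.e., no work is done on the charges along the current-field-streamlines, which is why they remain equipotentials everywhere in the force-free domain.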
The wind theory and the circuit theory must be complementary to each other, with $\Omega_{\rm F}$ and $I$ each showing two sides of the same coin (see \citet{oka15a}); $\Omega_{\rm F}$ gives rise to the magneto-centrifugal particle acceleration in the former and to an EMF due to the unipolar induction battery on the NS surface in the latter, as related to the source of the Poynting flux at \SNS, i.e., \begin{equation} {\cal E}_{\rm NS}=-\frac{1}{2\pi c}\int^{\Psi_2}_{\Psi_1}\Omega_{\rm F}(\Psi)d\Psi, \lb{nsEMF} \end{equation} which drives currents along the current-field-streamline $\Psi_2$ with $\mbox{\boldmath $j$}_{\rm p}>0$ and return currents along $\Psi_1$ with $\mbox{\boldmath $j$}_{\rm p}<0$, where $\Psi_0<\Psi_1<\Psi_{\rm c}<\Psi_2<\bar{\Psi}$ and $\mbox{\boldmath $j$}_{\rm p}\lleg 0$ for $\Psi\lleg\Psi_{\rm c}$ (see equation (\rf{c-c-c}) and Figure \rff{DC-C}), and where $\bar{\Psi}$ is the last limiting field line satisfying $I(\Psi_0)=I(\bar{\Psi})=0$ (see figure 2 in \cite{okam06} for an example of $I(\Psi)$). The surface return currents flow from $I(\Psi_2)$ to $I(\Psi_1)$, crossing field lines between $\Psi_1$ and $\Psi_2$ on the resistive membrane S$_{{\rm ff}\infty}$, and the Ohmic dissipation there formally represents the MHD acceleration in S$_{{\rm ff}\infty}$. It is stressed that in the `force-free' pulsar magnetosphere there is no reason, nor necessity, for the `force-free' condition to break down, at least because a magnetized NS is usually regarded as behaving like a unipolar induction battery with, e.g., particle acceleration as an external resistance far from the star, but with no internal resistance. The FLAV $\Omega_{\rm F}$ is given by $\Omega_{\rm NS}$, i.e., the efficiency of the `extraction of energy' is $\epsilon_{\rm NS}=\Omega_{\rm F}/\Omega_{\rm NS}=1$, in the sense that no rotational energy is dissipated into irreducible mass energy inside the NS. Interpreted thermodynamically, this case may be called an `adiabatic extraction' (see section \rfs{BHTsub}). Contrary to pulsar electrodynamics, for the `force-free' model of the Kerr BH to be viable, the `force-free' condition must break down in the force-free magnetosphere, because the `adiabatic' extraction with the `perfect' efficiency $\epsilon=1$ is unattainable \citep{bla77}. The main purpose of this paper is to elucidate `why, how, and where' the freezing-in as well as the force-free conditions must, and indeed can, be broken down in the force-free magnetosphere.
\section{Black hole thermodynamics} \lbs{BHTsub} \setcounter{equation}{0} \renewcommand{\theequation}{{3}.\arabic{equation}} It is the ill-understood phenomenon called the `dragging of inertial frames' that connects pulsar electrodynamics with black hole thermodynamics beyond the event horizon, thereby leading to `gravito-thermo-electrodynamics' (GTED). It is a `physical observer' \citep{bla77}, a `zero-angular-momentum-observer' (ZAMO; \citet{mac82}) or a `fiducial observer' (FIDO; \cite{tho86}), circulating round the hole with the frame-dragging angular velocity (FDAV) $d\phi/dt=\om$, that plays the major role in describing the GTED process of extraction of energy from the hole. The field-line angular velocity (FLAV) which the ZAMOs measure is then denoted by $\Omega_{{\rm F}\omega}\equiv\Omega_{\rm F}-\om$. This means that the FDAV $\om$ acquires the role of a gravito-electric potential gradient (GEPG) as well \citep{oka15a}. \subsection{Basic thermodynamic properties of the Kerr hole} \lbs{ther-rot} The no-hair theorem tells us that Kerr holes possess only two hairs, e.g., the two `extensive' variables, the entropy $S$ and the angular momentum $J$; all other thermodynamic quantities are then expressed as functions of these two. For example, the BH's mass-energy $M$ is expressed in terms of $S$ and $J$: \begin{equation} M=\sqrt{(\hbar cS/4\pi kG) + (\pi kcJ^2/\hbar GS)}. \lb{massF} \end{equation} As one can in principle utilize a Kerr BH as a Carnot engine \citep{KO91}, it is regarded basically as a `thermodynamic object', but not as an electrodynamic one, because the Kerr hole stores no extractable electromagnetic energy. The rotational-evolutional process of Kerr holes is governed by the four laws of thermodynamics (see e.g.\ Chapter III C3 in \cite{tho86} for a succinct summary; also \cite{oka92}). The mass $M$ of the hole is divided into the `irreducible' and `rotational' masses \citep{tho86}, i.e., \begin{subequations} \begin{eqnarray} M=M_{\rm irr}+M_{\rm rot}, \lb{massa} \\ M_{\rm rot}= M[1- 1/\sqrt{1+h^2}], \\ M_{\rm irr}=\frac{M}{\sqrt{1+h^2}} =\sqrt{c^4 A_{\rm H}/16\pi G^2} =\sqrt{\hbar cS/4\pi kG}, \end{eqnarray} \lb{mass/irr/red} \end{subequations} where $A_{\rm H}$ is the surface area of the event horizon, and $h$ is defined as the ratio of $a\equiv J/Mc$ to the horizon radius $\rH$, i.e., \begin{equation} h=\frac{a}{\rH} =\frac{2\pi kJ}{\hbar S}=\frac{2GM\OmH}{c^3}, \lb{Khole-h} \end{equation} to specify the evolutional state of the BH; then $0\leq h\leq 1$ for the `outer horizon', with $h=0$ for a Schwarzschild BH of the same mass and $h=1$ for an extreme-Kerr BH \citep{OK90,OK91} (see also equations (10.4a,b,...,f) in \cite{oka92}). The zeroth law indicates that the two `intensive' variables, $\Th$ (the surface temperature) and $\OmH$, conjugate to $S$ and $J$, respectively, are constant on S$_{\rm H}$. This means that $\om$ tends to the constant value $\OmH$ for $\al\to 0$. In passing, the third law indicates that ``by a finite number of operations one cannot reduce the surface temperature to the absolute zero with $h=1$.'' In turn, ``the finite processes of mass accretion with angular momentum cannot accomplish the extreme Kerr state with $h=1$, $\Th=0$ and $\OmH=c^3/2GM$'' \citep[as in][]{OK91}.
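As a rough numerical illustration of equation (\rf{Khole-h}) (with fiducial values inserted here for orientation only, not taken from the original references), \begin{displaymath} \OmH=\frac{c^3}{2GM}\,h \approx 1.0\times 10^{-4}\,h\left(\frac{M}{10^9 M_\odot}\right)^{-1}\ {\rm s}^{-1}, \end{displaymath} so that a moderately rapid rotator with $h\sim 0.5$ and $M\sim 10^9 M_\odot$, typical of the AGN application, has $\OmH\sim 5\times 10^{-5}\ {\rm s}^{-1}$.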
Also, `inner-horizon' thermodynamics can formally be constructed analogously to `outer-horizon' thermodynamics \citep{OK93}. It is the first and second laws that govern the GTED process of extracting energy from a Kerr hole; \begin{subequations} \begin{eqnarray} c^2 dM=\Th dS + \OmH dJ , \lb{1st-l} \\ \Th dS \geq 0 , \lb{2nd-l} \end{eqnarray} \lb{1-2laws} \end{subequations} where $\Th$ and $\OmH$ are uniquely expressed in terms of $J$ and $S$ from equation (\rf{massF}), or of $M$ and $h$ \citep{OK90}; \begin{equation} \Th=(\hbar c^3/8\pi kGM)(1-h^2), \ \ \OmH=(c^3/2GM)h. \lb{T-OmH} \end{equation} Then, the `overall' efficiency of extraction, $\bar{\epsilon}_{\rm GTED}$, is defined as the ratio of ``{\em Actual energy extracted/Maximum extractable energy, when unit angular momentum is removed}'' (see \cite{bla77}; the C$_{\rm P}$(i)-statement in section \rfs{phys-v} below), i.e., from the first law \begin{equation} \bar{\epsilon}_{\rm GTED} =\frac{(dM/dJ)}{(\partial M/\partial J)_{S}} =\frac{c^2}{\OmH}\dr{M}{J}, \lb{epsGTE1} \end{equation} and by the second law, $dS\ggo 0$, we see that always $0 \lo \bar{\epsilon}_{\rm GTED}\lo 1$. \subsection{Connection of a force-free magnetosphere to the first and second laws} \lbs{first/eps} Because the gravity of the hole produces a gravitational redshift of the ZAMO clocks, their lapse of proper time $d\tau$ is related to the lapse of global time $dt$ by the lapse function $\al$, i.e., $d\tau/dt=\al$ (see \cite{mac82}). Then, the change in global time of the hole's total mass-energy becomes, from the first law (\rf{1st-l}), \begin{equation} c^2 \dr{M}{t} = \Th\dr{S}{t}+ \OmH \dr{J}{t}. \lb{c.mass} \end{equation} When the hole loses angular momentum and energy, i.e.\ $dJ<0$ and $c^2 dM<0$, through the force-free magnetosphere with the conserved quantities $\Omega_{\rm F}(\Psi)$ and $I(\Psi)$, it is shown in \cite{bla77} that the angular momentum and energy fluxes are given by \begin{equation} \vcSE =\Omega_{\rm F}(\Psi)\vcSJ, \quad \vcSJ=(I(\Psi)/2\pi\al c)\mbox{\boldmath $B$}_{\rm p}, \lb{E-AmFlux} \end{equation} which are apparently the same as equation (\rf{vcSEM/SJ}) for the loss through the pulsar magnetosphere, except for the redshift factor (lapse function) $\al$. When $\Omega_{\rm F}$ and $I$ are determined by the eigenvalue problem posed by the criticality-boundary condition (section \rfs{BC-SN}), the loss rate of angular momentum $\calPJ$ and the resultant output power $\calPE$ are given by \begin{subequations} \begin{eqnarray} \calPJ =-\dr{J}{t}= \oint \al\vcSJ \cdot d\mbox{\boldmath $A$} = \frac{1}{c}\int^{\bar{\Psi}}_{\Psi_0} I d\Psi, \lb{TotFluxJ} \\ \calPE= -c^2 \dr{M}{t} = \oint \al\vcSE\cdot d\mbox{\boldmath $A$} = \frac{1}{c}\int^{\bar{\Psi}}_{\Psi_0}\Omega_{\rm F} I d\Psi, \lb{TotFluxE} \end{eqnarray} \lb{PEJ} \end{subequations} where $\mbox{\boldmath $B$}_{\rm p}\cdot d\mbox{\boldmath $A$}=2\pi d\Psi$ and the integration is made over all open field lines in $\Psi_0\leq\Psi\leq \bar{\Psi}$.
When we define the average potential gradient, i.e., $\Omega_{\rm F}(\Psi)$ weighted by $I(\Psi)$, \begin{equation} \left. \OmFb =\frac{\calPE}{\calPJ}= \int^{\bar{\Psi}}_{\Psi_0} \Omega_{\rm F}(\Psi) I(\Psi) d\Psi \right/ \int^{\bar{\Psi}}_{\Psi_0} I(\Psi) d\Psi, \lb{GTEepsb} \end{equation} we have $c^2 dM=(\calPE/\calPJ)dJ=\OmFb dJ$. Thus the first law (\rf{1st-l}) reduces to \begin{subequations} \begin{eqnarray} c^2 dM=-\calPE dt =\OmFb dJ, \lb{first-la} \\ \Th dS = -(\OmH-\OmFb) dJ=(\OmH\calPJ -\calPE) dt, \lb{first-lb} \end{eqnarray} \lb{cEntr} \end{subequations} where $\OmFb$ must be determined as the eigenvalue of the criticality-boundary condition in electrodynamics (see section \rfs{BC-SN}). The `overall' efficiency $\bar{\epsilon}_{\rm GTED}$ reduces, from equations (\rf{epsGTE1}), (\rf{first-la}), (\rf{c.mass}), (\rf{PEJ}) and (\rf{GTEepsb}), to \begin{equation} \bar{\epsilon}_{\rm GTED}=\frac{c^2 |dM/dt|}{\OmH |dJ/dt|}=\frac{\calPE}{\OmH\calPJ} =\frac{\OmFb}{\OmH} \lb{GTEepsa} \end{equation} (see equations (\rf{second}a,b,c) for the restrictions on $\bar{\epsilon}_{\rm GTED}$). Thus, angular momentum and energy are extractable from the hole only if $\Th dS>0$, i.e., $\OmFb<\OmH$. The differential form of $M$ in equation (\rf{massa}), i.e., $dM=dM_{\rm irr}+dM_{\rm rot}$, may be meaningful in defining the efficiency of extraction by $\epsilon=dM/dM_{\rm rot}$, but note that $dM_{\rm irr}\neq\Th dS$ and $dM_{\rm rot}\neq\OmH dJ$. The relevant `boundary condition' \citep{zna77,bla77} used to determine $\Omega_{\rm F}$ at S$_{\rm H}$\ is that $\Psi$ is finite and, for $I$ in equation (\rf{E-AmFlux}) or (\rf{TotFluxJ}), \begin{equation} \Iin(\Psi)=\frac12(\OmH-\Omega_{\rm F})(B_{\rm p}\vp^2)_{\rm ffH} \lb{ourI} \end{equation} (see Eq.\ (3.15) in \cite{bla77}), which was regarded as indicating that this condition, together with appropriate boundary conditions at infinity, determines the angular velocity of field lines crossing the horizon. For example, equation (\rf{ourI}) for $\Iin(\Psi)$ combines with that in equation (\rf{I/Pul}) or (\rf{ouIa}) for the outgoing wind to yield $\Omega_{\rm F}\approx 0.5\OmH$, i.e., the efficiency $\epsilon=\Omega_{\rm F}/\OmH \approx 0.5$. It appears that this result for $\Omega_{\rm F}$ was interpreted as being imposed at S$_{\rm H}$\ on field lines threading the horizon by the `boundary condition'. This procedure is often thought of as an `impedance matching' (e.g., \citet{mac82}).
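The matching step can be made explicit (a one-line rearrangement of equations (\rf{I/Pul}) and (\rf{ourI}), under the simplifying assumption, adopted here only for illustration, that $(B_{\rm p}\vp^2)_{\rm ffH}\approx(B_{\rm p}\vp^2)_{{\rm ff}\infty}$ on a given field line): \begin{displaymath} \frac12\Omega_{\rm F}(B_{\rm p}\vp^2)_{{\rm ff}\infty} =\frac12(\OmH-\Omega_{\rm F})(B_{\rm p}\vp^2)_{\rm ffH} \quad\Longrightarrow\quad \Omega_{\rm F}\approx\frac{\OmH}{2}, \end{displaymath} i.e., equal effective `resistances' of the two terminating membranes deliver the half-maximum efficiency $\epsilon\approx 0.5$.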
\begin{figure*} \centering \includegraphics[width=11cm, height = 7cm, angle=-0]{FIG1_NEW_V7.eps} \caption{How the Kerr hole's force-free magnetosphere complies with the first and second laws of BH thermodynamics in the gravito-thermo-electrodynamic (GTED) extraction of energy: three energy fluxes, $\vcSE$, $\vcSEM$ and $\vcSsd$, are linked at the horizon with $c^2(dM/dt)$, $\Th(dS/dt)$ and $\OmH(dJ/dt)$ in equation (\rf{c.mass}) for the first law, where $\vcSE$ is the `total' flux expressing the actual extraction of energy, $\vcSEM$ is the electromagnetic Poynting flux and $\vcSsd$ is the frame-dragging spin-down energy flux, respectively. The key is to visualize the FD effect, by `coordinatizing' $\Omega_{{\rm F}\omega}$ as well as $\om$ (modified from figure 3 in \citet{oka09}), when $\Omega_{\rm F}$ and $I$ are given as the eigenfunctions of the criticality-boundary condition (see equations (\rf{FL-eigen}a,b,c)). The null surface $\SN$ (`this surface' in the C$_{\rm P}$(v)-statement in section \rfs{phys-v}) must lie between the two light surfaces S$_{\rm oL}$\ and \SIL\ (see equation (\rf{GapC})). Notice that there is apparently no reversal of the total flux $\vcSE=\Omega_{\rm F}\vcSJ$, which is {\em conserved} \citep{bla77}, while the two component fluxes $\vcSEM$ and $\vcSsd$ are not {\em conserved}; indeed $\vcSEM \ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$, and $\vcSsd$ decreases linearly with $\Omega_{{\rm F}\omega}$ (the C$_{\rm P}$(iii,iv)-statements). We see $\PN{\Omega_{{\rm F}\omega}}=\PN{I}=\PN{\vcSEM}=\PN{\vcSJ}=\PN{\vcSsd}=\PN{\vcSE}=0$ at $\SN$ in equations (\rf{EqSN}), and the breakdown of the freezing-in and force-free conditions at the null surface means that, when field lines are continuous across $\SN$ (i.e., $\PN{\mbox{\boldmath $B$}_{\rm p}}\neq 0$), current- and stream-lines are severed there and are no longer equipotentials within the gap hidden by $\SN$. The efficiency of the GTED extraction is given by the ratio $|\vcSE|/|\vcSsd|$ at the horizon, i.e.\ $\epsGTE=\Omega_{\rm F}/\OmH$ (see equation (\rf{eff})). The widening of $\SN$ into the magnetized ZAM-Gap ${\cal G}_{\rm N}$ must be described by microphysics such as pair-production discharges due to the voltage drop $\Dl V$ between the `two batteries with two EMFs' (see Figures \rff{GapI} and \rff{Flux-omPseudoF-S}; section \rfs{m-mdGap}). } \lbf{Flux-om} \end{figure*} \subsection{Two non-conserved energy fluxes and their roles} \lbs{2fluxes} It is the energy flux $\vcSE$ that corresponds to the term $c^2dM/dt$ in the first law (see equation (\rf{c.mass})); this means that there must be two other fluxes, which correspond to $\Th dS/dt$ and $\OmH dJ/dt$. These were unequivocally given in Eqs.
(4.13) and (5.7) in \cite{mac82}, which are reproduced here as they stand; \begin{subequations} \begin{eqnarray} \vcSE=\frac{\al c}{4\pi}(\mbox{\boldmath $E$}_{\rm p}\times\vcBt)+\om\vcSJ =\frac{I}{2\pi} \left(-\frac{\mbox{\boldmath $E$}_{\rm p}\times\vcm}{\varpi^2} +\frac{\om\mbox{\boldmath $B$}_{\rm p}}{\al c} \right) \lb{vcSEa} \\ =\Omega_{\rm F}\vcSJ=\Omega_{\rm F}\left(\frac{\varpi}{4\pi}\left|\vcBt\right|\right)\mbox{\boldmath $B$}_{\rm p}=\Omega_{\rm F}\left(\frac{I}{2\pi\al c}\right)\mbox{\boldmath $B$}_{\rm p} \lb{vcSEb} \end{eqnarray} \lb{vcSEab} \end{subequations} (see equation (\rf{E-AmFlux})), where \begin{subequations} \begin{eqnarray} \mbox{\boldmath $E$}_{\rm p}=-\frac{\Omega_{{\rm F}\omega}}{2\pi \al c}\mbox{\boldmath $\nabla$}\Psi, \quad \vcBt=-\frac{2I}{\vp\al c}\mbox{\boldmath $t$}, \lb{EpBt} \\ \Omega_{{\rm F}\omega}=\Omega_{\rm F}-\om, \lb{OmFm} \end{eqnarray} \lb{EpBtab} \end{subequations} and $\Omega_{{\rm F}\omega}$ denotes the FLAV measured by the `physical observers' (ZAMOs) circulating round the hole. We define the electromagnetic Poynting flux $\vcSEM$ and the frame-dragging spin-down energy flux $\vcSsd$ from equation (\rf{vcSEa}) as follows; \begin{subequations} \begin{eqnarray} \vcSEM=\frac{\al c}{4\pi}(\mbox{\boldmath $E$}_{\rm p}\times\vcBt)=\frac{\Omega_{{\rm F}\omega} I}{2\pi\al c}\mbox{\boldmath $B$}_{\rm p}=\Omega_{{\rm F}\omega}\vcSJ, \lb{Sem/a} \\ \vcSsd =\frac{\om I}{2\pi\al c}\mbox{\boldmath $B$}_{\rm p} =\om \vcSJ, \lb{Sem/b} \end{eqnarray} \lb{Sem} \end{subequations} and hence the general expression of \cite{mac82} in equations (\rf{vcSEab}a,b) simplifies to \begin{equation} \vcSE=\vcSEM+\vcSsd=\Omega_{\rm F}\vcSJ, \lb{vcSE/EM/sd} \end{equation} behind which is a simple and yet significant identity, i.e., \begin{equation} \Omega_{{\rm F}\omega}+\om=\Omega_{\rm F}. \lb{s.i} \end{equation} This relation among the three angular velocities expresses the way in which unipolar induction couples with the dragging of inertial frames, so as to comply with the first law of thermodynamics (see equations (\rf{c.mass}) and (\rf{1st-l})). The ZAMO-FLAV $\Omega_{{\rm F}\omega}$ changes inward from $\Omega_{\rm F}$ at infinity with $\om=0$, vanishing at the null surface $\SN$, i.e., $\PN{\Omega_{{\rm F}\omega}}=0$, and reaching $-(\OmH-\Omega_{\rm F})$ at the horizon with $\om=\OmH$ (see equation (\rf{SEM<0})).
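As a consistency check of this decomposition (an immediate evaluation of equations (\rf{Sem}) and (\rf{s.i}), not an additional assumption), consider the horizon limit $\om\to\OmH$ with $0<\Omega_{\rm F}<\OmH$ and $\vcSJ>0$: \begin{displaymath} (\vcSEM)_{\rm H}=-(\OmH-\Omega_{\rm F})\vcSJ<0, \quad (\vcSsd)_{\rm H}=\OmH\vcSJ>0, \quad \vcSE=(\vcSEM)_{\rm H}+(\vcSsd)_{\rm H}=\Omega_{\rm F}\vcSJ>0, \end{displaymath} i.e., a physical observer near the horizon sees a Poynting flux entering the hole and yet a positive total energy flux leaving it, exactly as described in the C$_{\rm P}$(iii)-statement in section \rfs{phys-v}.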
When $\ell$ denotes the distance from the horizon at $\ell=\ellH$, measured outward along each field line, it is convenient first to express $\om=\om(\ell,\Psi)$ and $\Omega_{{\rm F}\omega}=\Omega_{{\rm F}\omega}(\ell,\Psi)$, and then to `coordinatize' $\om$ and $\Omega_{{\rm F}\omega}$, instead of $\ell$, along each field line (see Figure \rff{Flux-om}; \cite{oka15a}). Note that `distant static observers' see that Ferraro's law of iso-rotation holds and that $\vcSE$ is conserved along each field line, i.e., $\mbox{\boldmath $\nabla$}\cdot\al\vcSE=\Omega_{\rm F}\mbox{\boldmath $\nabla$}\cdot\al\vcSJ=0$, whereas `physical observers' or ZAMOs circulating round the hole with $\om$ see that {\em iso-rotation is violated}, and they detect that $\vcSE$ consists of $\vcSEM=\Omega_{{\rm F}\omega}\vcSJ$ and $\vcSsd=\om\vcSJ$, which are {\em not} conserved ($\vcSEM$ corresponds to `{\em a Poynting flux}' in \cite{bla77}; see the C$_{\rm P}$(iii,iv)-statements, section \rfs{phys-v}). Thus the physical observers will find the Kerr hole's exquisite device of splitting the `actual' flux $\vcSE=\Omega_{\rm F}\vcSJ$ into the two fluxes with the use of the FD effect $\om$ (see equations (\rf{s.i}) and (\rf{vcSE/EM/sd})), thereby complying with the first and second laws of thermodynamics (see Figure \rff{Flux-om}). It is the `radiation condition', expressed in terms of $\Omega_{{\rm F}\omega}$ in the Poynting flux $\vcSEM$, the particle velocity $\mbox{\boldmath $v$}$, etc.\ (see section \rfs{ED3/1F}), that shows the directions of the energy and particle flows. Then we have, toward infinity or the horizon, \begin{equation} (\Omega_{{\rm F}\omega})_{\infty}=\Omega_{\rm F} >0, \ \ (\Omega_{{\rm F}\omega})_{\rm H}= -(\OmH-\Omega_{\rm F}) <0, \lb{SEM<0} \end{equation} the former of which shows the outward Poynting flux far away. The latter in reality indicates the inflow of a Poynting flux, i.e., $(\vcSEM)_{\rm H} < 0$, when the hole loses angular momentum, i.e., $(\vcSJ)_{\rm H}>0$ or $dJ<0$ (by an {\em out}flow of {\em positive} angular momentum or, equivalently, an {\em in}flow of {\em negative} one). We then have the important limits $0<\Omega_{\rm F}<\OmH$. The efficiency of extraction along each field line was defined by $\epsilon=\Omega_{\rm F}/\OmH$ (\cite{bla77}; see the C$_{\rm P}$(i)-statement), and we have the condition \begin{equation} \epsGTE=\Omega_{\rm F}(\Psi)/\OmH \lo 1 \lb{eff} \end{equation} (see equation (\rf{dS>0b})).
\subsection{The second law and its restriction on the efficiency} \lbs{second/restr} It will be helpful first to reproduce the general expressions (3.99) and (3.100) of \cite{tho86} for thermodynamic processes taking place in the `stretched' horizon: \begin{subequations} \begin{eqnarray} \TTH \frac{dS}{dt}=\oint_{{\cal H}^S_t} \RH \vec{{\cal J}}_{H}^2 dA =\oint_{{\cal H}^S_t} \vec{E}_{\rm H}\cdot \vec{{\cal J}}_{H} dA \nonumber \\ = \frac{1}{4\pi} \oint_{{\cal H}^S_t} ( -\vec{E}_{\rm H}\times\vec{\BH} )\cdot \vec{n} dA, \lb{dSHH/dt} \end{eqnarray} \begin{eqnarray} \dr{J}{t}=\oint_{{\cal H}^S_t} (\sigma_{\rm H} \vec{E}_{\rm H} + \vec{{\cal J}}_{H}\times\vec{B}_{n})\cdot \vec{\xi} dA . \lb{dJ/dt} \end{eqnarray} \lb{TPM-TD} \end{subequations} Ohm's law holds on the stretched horizon ${\cal H}^S_t$ (identical to the `force-free horizon surface' \SffH; see section \rfs{RM}), i.e., $\vec{E}_{\rm H}=\RH \vec{{\cal J}}_{\rm H}$, with the surface resistivity $\RH=4\pi/c=377$~Ohm (equal to $\calRffH$), where $\vec{{\cal J}}_{H}$ is the surface current (see equation (\rf{calIffH})). Ohm's and Ampere's laws are equivalent to the radiative condition, i.e.\ $\vec{B}_H=\vec{E}_H\times \vec{n}$. Thus the inflow of the Poynting flux is equivalent to Joule heating, leading to the BH's entropy increase, i.e., $d\SSH/dt>0$ in equation (\rf{dSHH/dt}). The loss of the hole's angular momentum due to the surface Lorentz torque in (\rf{dJ/dt}) reduces to expression (\rf{TotFluxJ}) for the outflow of angular momentum. Inserting $\vcSEM$ from the last of expressions (\rf{Sem/a}) into equation (\rf{dSHH/dt}), one has, in terms of $\OmFb$ from equation (\rf{GTEepsb}), \begin{eqnarray} \TTH \frac{dS}{dt}=- \oint \al\vcSEM\cdot d\mbox{\boldmath $A$} \nonumber \\ = \frac{1}{c}\int^{\bar{\Psi}}_{\Psi_0}(\OmH-\Omega_{\rm F}) I d\Psi =-(\OmH-\OmFb) \dr{J}{t}, \lb{ThdS/dt} \end{eqnarray} which reduces to equation (\rf{first-lb}) (see also section \rfs{SffH}). The `maximum extractable energy' becomes $c^2 dM= \OmH dJ$ in the adiabatic case with $dS=0$, and the `actual energy extracted' is given by $c^2 dM=\OmFb dJ$; hence the ratio reduces to $\bar{\epsilon}_{\rm GTED}$ in equation (\rf{GTEepsa}) (\cite{bla77}; see the C$_{\rm P}$(i,ii)-statements in section \rfs{phys-v}). It is thus the first law of thermodynamics that allows the definition of $\bar{\epsilon}_{\rm GTED}$, while it is the second law that imposes the following restrictions on $\Omega_{\rm F}$, $\OmFb$, $\epsGTE$, $\bar{\epsilon}_{\rm GTED}$, $\vcSE$ and the power $\calPE$ (see Eq.\ (4.6) in \cite{bla77}); \begin{subequations} \begin{eqnarray} 0\lo \OmFb\approx \Omega_{\rm F} \lo \OmH, \lb{dS>0a}\\ 0 \lo \bar{\epsilon}_{\rm GTED} \approx \epsGTE \lo 1, \lb{dS>0b}\\ \calPE \lo \OmH \calPJ , \quad \vcSE =\Omega_{\rm F}\vcSJ \lo \OmH \vcSJ, \lb{dS>0c} \end{eqnarray} \lb{second} \end{subequations} for any plausible process of energy extraction (see condition (\rf{Shareb})).
Thus, when the second law, and hence the condition (\rf{dS>0a}), is satisfied, there always exists a surface at $\Omega_{{\rm F}\omega}=0$ above the horizon on which $\vcSEM=\mbox{\boldmath $E$}_{\rm p}=0$, thereby separating the force-free magnetosphere into two domains: the outer domain $\calDout$ with $\Omega_{{\rm F}\omega}>0$, $\vcSEM>0$, and the inner domain $\calDin$ with $\Omega_{{\rm F}\omega}<0$, $\vcSEM<0$ (see the C$_{\rm P}$(iii,iv)-statements in section \rfs{phys-v}). `{\em This surface}' has so far been referred to as the `null surface' $\SN$ (e.g., \citet{oka92,oka15a}) (see also `{\em this surface}' in the C$_{\rm P}$(iv)-statement in section \rfs{phys-v}). The first law in equation (\rf{1st-l}) indicates that, when the hole loses angular momentum ($dJ<0$), the total available energy resource $\OmH|dJ|$ seems to be shared at the null surface $\SN$: the outer domain $\calDout$ with $\vcSEM>0$ receives $c^2 |dM|=\OmFb|dJ|\propto \bar{\epsilon}_{\rm GTED}$, and the inner domain $\calDin$ with $\vcSEM<0$ the remaining amount $\Th dS= (\OmH-\OmFb)|dJ|\propto (1-\bar{\epsilon}_{\rm GTED})$, i.e., \begin{equation} \OmH|dJ| = \Th dS +c^2|dM| \lb{Share} \end{equation} (see also equation (\rf{SDenergy})), and hence \begin{equation} \OmH\calPJ=\Th (dS/dt) + \calPE \ggo\, \calPE =\OmFb \calPJ \lb{Shareb} \end{equation} (see equations (\rf{second}a,b,c)). There is a simple identity \begin{subequations} \begin{eqnarray} \OmH= (\OmH-\OmFb) +\OmFb \\ = \OmFb -[-(\OmH-\OmFb)] \end{eqnarray} \lb{distr1} \end{subequations} behind equations (\rf{Share}) and (\rf{Shareb}). The so-called impedance matching, i.e., $c^2 |dM|\approx \Th dS$, yields $\OmFb\approx \OmH/2$ (cf.\ equation (\rf{FL-eigen}) for the eigenfunction $\Omega_{\rm F}$ due to the criticality-boundary condition). The trick of utilizing the hole's resources under the control of the first law in equation (\rf{Share}) or (\rf{cEntr}) is as follows: the entropy term $\Th dS$ requires an inflow of a Poynting flux from some place above the horizon (i.e., `this surface' $\SN$ at $\Omega_{{\rm F}\omega}=0$; see the C$_{\rm P}$(iv)-statement, section \rfs{phys-v}). This must be accompanied by an {\em in}flow of {\em negative} angular momentum, which is equivalent to an {\em out}flow of {\em positive} angular momentum, leading to the decrease of the hole's angular momentum, i.e., $dJ<0$. This couples with the FD effect to induce an {\em in}flow of {\em negative} energy, which is equivalent to an outward spin-down energy $\OmH|dJ|$, with one part covering the cost of extraction, i.e., the entropy increase $\Th dS=(\OmH-\OmFb)|dJ|$, and with the rest becoming the outgoing Poynting flux $c^2 |dM|=\OmFb|dJ|$, as seen in equation (\rf{Share}). The first law thus has nothing to do directly with unipolar induction in the horizon.
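A concrete instance of this energy budget (obtained by simply inserting the impedance-matched value $\OmFb=\OmH/2$ into equation (\rf{Share})): \begin{displaymath} \OmH|dJ| = \underbrace{{\textstyle\frac12}\OmH|dJ|}_{\Th dS} + \underbrace{{\textstyle\frac12}\OmH|dJ|}_{c^2|dM|}, \end{displaymath} i.e., exactly half of the spin-down energy is dissipated into the hole's entropy and half emerges as the outgoing Poynting flux, corresponding to $\bar{\epsilon}_{\rm GTED}=1/2$.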
In passing, the identities obtained by replacing $\OmFb$ with $\Omega_{\rm F}$ in equations (\rf{distr1}), \begin{subequations} \begin{eqnarray} \OmH= (\OmH-\Omega_{\rm F}) +\Omega_{\rm F} \lb{Iden-a} \\ = \Omega_{\rm F} -[- (\OmH-\Omega_{\rm F})] \lb{iden-b} \\ \equiv (\Omega_{{\rm F}\omega})_{\infty} - (\Omega_{{\rm F}\omega})_{\rm H}, \lb{iden-c} \end{eqnarray} \lb{distr2} \end{subequations} turn out to possess important meanings: when the ZAMOs measure the FLAV $\Omega_{{\rm F}\omega}$ along each field line, they find that the FD effect produces an along-field gradient of the `gravito-electric potential' between different places; equations (\rf{distr2}b,c) exhibit the difference between the horizon and infinity, which will give rise to an `along-field potential drop' at the null surface $\SN$ (see equation (\rf{Dl-V})), as well as the difference of the spin rates of the two {\em hypothetical} magnetic rotators existing back to back at the surfaces of the Gap (sections \rfs{batteryG}, \rfs{BCagain} and \rfs{R-T-D}; Figure \rff{F-WS}). \section{The nature of the Blandford-Znajek process} \lbs{nature} \setcounter{equation}{0} \renewcommand{\theequation}{{4}.\arabic{equation}} To unlock the nature of the BZ process, we begin by classifying the important statements characterizing it into two classes. The first class, C$_{\rm P}$, from the physical-observers' viewpoint, contains positive statements referring to the role of the two energy fluxes $\vcSEM$ and $\vcSsd$ (see equations (\rf{vcSEab}) and (\rf{vcSE/EM/sd})), and hence implicitly complying with the first and second laws described in the previous section. On the other hand, the second class, \CS, contains statements that seem to be based mainly on the relation $\vcSE=\Omega_{\rm F}\vcSJ$ in (\rf{E-AmFlux}) from the static observers' point of view; these seemingly bypass $\vcSEM$ and $\vcSsd$ and are not necessarily consistent with the C$_{\rm P}$-statements.\footnote{This section is written on the natural supposition that expert readers are familiar with the original text of \cite{bla77}. } \subsection{Statements from the `physical-observers' viewpoint} \lbs{phys-v} Statements in the first class C$_{\rm P}$\ are consistent with the results in section \rfs{BHTsub}. In the following, `physical observers traveling round the hole' is equivalent to `the ZAMOs circulating round the hole with $d\phi/dt=\om$'. \begin{itemize} \item {\bf C$_{\rm P}$(i)}: In Eq.~(4.8) in \cite{bla77}, ``{\em We can define the efficiency of the energy extraction process to be $\epsilon=$ Actual energy extracted$/$Maximum extractable energy, when unit angular momentum is removed.}'' and then $\epsilon=\Omega_{\rm F}/\OmH$ (as in Eq.\ (4.10) in \cite{bla77}). \item {\bf C$_{\rm P}$(ii)}: ``{\em Inequality (4.7) ({\rm i.e., $\vcSE\lo\OmH\vcSJ$}) could have been derived using the classical limit of the Second law of Black Hole Thermodynamics}''.
\item {\bf C$_{\rm P}$(iii)}: ``{\em A physical observer rotating at constant radius close to the horizon will in general see a Poynting flux of energy entering the hole, but he will also see a sufficiently strong flux of angular momentum leaving the hole to ensure that ${\calE}^{r} \ggo 0$.}'' (i.e., $\vcSE\ggo 0$) \item {\bf C$_{\rm P}$(iv)}: ``{\em Physical observers traveling round the hole at constant $r$ and $\theta$ and angular velocity $d\phi/dt$ will see the electric field reverse direction on the surface $d\phi/dt=\Omega_{\rm F}$ {\rm (i.e.\ $\mbox{\boldmath $E$}_{\rm p}=\Omega_{{\rm F}\omega}=0$)}. Inside this surface they see a Poynting flux of energy going toward the hole. (For a system of observers with time-like world lines $d\phi/dt\to \OmH$ on the event horizon and $d\phi/dt \to 0$ at infinity. Hence when $0<\Omega_{\rm F}<\OmH$, i.e., when the hole is losing energy electromagnetically, this surface always exists.)}'' as in the caption of Fig.\ 2 in \cite{bla77}. \item {\bf C$_{\rm P}$(v)}: In the footnote on p.~443 in \cite{bla77}: ``{\em The outer light surface corresponds to the conventional pulsar light surface and physical particles must travel radially outwards beyond it. Within the inner light surface, whose existence can be attributed to the dragging of inertial frames and gravitational redshift, particles must travel radially inwards.}'' It was thereby concluded that ``{\em the spark gaps discussed in Section 2 must therefore lie between these two surfaces}'', but nevertheless ``{\em there is no reason to believe that its position is stationary}'' was written in Sec.\ 2 of that paper. \item {\bf C$_{\rm P}$(vi)}: ``{\em The fundamental differential equation for the potential $A_\phi\equiv \Psi/2\pi$}'' is given by Eq.\ (3.14) in \cite{bla77}, which is reproduced here for $\Psi$ in the $3+1$ formulation from the `stream equation', Eq.\ (6.4) in \cite{mac82}, i.e., \begin{eqnarray} \nabla\cdot\left\{ \frac{\al}{\vp^2} \left[1-\frac{(\Omega_{\rm F}-\om)^2\vp^2}{\al^2 c^2} \right] \nabla\Psi \right\} \nonumber \\ + \frac{(\Omega_{\rm F}-\om)}{\al c^2}\dr{\Omega_{\rm F}}{\Psi}(\nabla\Psi)^2 +\frac{16\pi^2}{\al\vp^2 c^2} I\dr{I}{\Psi}=0 . \lb{stream/MT} \end{eqnarray} \end{itemize} Our comments on the above C$_{\rm P}$-statements are as follows. {\bf P(i)}: By equations (\rf{first-la}), the `{\em Actual energy extracted}' is $c^2 dM=\OmFb dJ$, and the `{\em Maximum extractable energy}' is obtained in the adiabatic process with $\Th dS=0$, i.e., from equation (\rf{1st-l}), $c^2 dM=\OmH dJ$; hence the ratio gives the mean efficiency $\bar{\epsilon}_{\rm GTED}= \OmFb/\OmH$. It is the radiation condition for the inflow of a Poynting flux into the hole in equation (\rf{SEM<0}), or $(\Omega_{{\rm F}\omega})_{\rm H}<0$, that yields the efficiency $\epsilon=\Omega_{\rm F}(\Psi)/\OmH$ along each field line. {\bf P(ii)}: Not only $\vcSE\lo \OmH \vcSJ$ (see Eq.\ (4.7) and equation (\rf{dS>0c})), but also the other inequalities in (\rf{second}a,b) are obtainable from the second law of black hole thermodynamics, as conjectured in the C$_{\rm P}$(ii)-statement.
{\bf P(iii)}: The C$_{\rm P}$(iii)-statement suggests the necessity of the two additional energy fluxes $\vcSEM$ and $\vcSsd$ composing the `total' flux $\vcSE$, related to the two terms of the first law (see equations (\rf{vcSEab})$\sim$(\rf{vcSE/EM/sd})). The FD effect due to $\om$ couples with `{\em a sufficiently strong flux of angular momentum leaving the hole}' $\vcSJ$, to produce an outflow of the frame-dragging spin-down energy flux $\vcSsd=\om\vcSJ$ and to keep the `total' flux $\vcSE=\Omega_{\rm F}\vcSJ$ positive, as seen in equation (\rf{vcSE/EM/sd}).

{\bf P(iv)}: When the second law ensures inequalities (\rf{second}), `{\em this surface}' $\SN$, where $\Omega_{{\rm F}\omega}$ and $\mbox{\boldmath $E$}_{\rm p}$ reverse, always exists. This means that `{\em this surface}' divides the force-free magnetosphere into two domains: the outer domain $\calDout$ where $\Omega_{{\rm F}\omega}>0$, $\vcSEM>0$, and the inner domain $\calDin$ where $\Omega_{{\rm F}\omega}<0$, $\vcSEM<0$; that is to say, ``{\em outside this surface they see a Poynting flux of energy going outwards to infinity}'' as in \cite{bla77}. It is at `{\em this surface}' that the breakdown of not only the force-free condition but also the freezing-in condition takes place, thereby leading to a severance of both stream- and current-lines, while field lines remain continuous (cf.\ the \CS(ii)-statement later).

{\bf P(v)}: When a physical observer (ZAMO) measures the field-line rotational velocity (FL-RV), it is given by $\vF=\Omega_{{\rm F}\omega}\vp/\al$, and the two light surfaces S$_{\rm oL}$\ and \SIL\ are defined by $\vF=\pm c$ (see section \rfs{SoiL}). Obviously, `{\em this surface}' $\SN$, where $\mbox{\boldmath $E$}_{\rm p}= \vF=\Omega_{{\rm F}\omega}=0$, exists between S$_{\rm oL}$\ and \SIL\ (see the C$_{\rm P}$(iv)-statement above). A ZAMO will see $\vcSEM>0$ in the outer domain with $\vF>0$, and $\vcSEM<0$ in the inner domain with $\vF<0$, and hence ``{\em the spark gaps must therefore lie between these two light surfaces}'', and yet ``{\em there is a {\rm good} reason to believe that its position is stationary at $\vF =\Omega_{{\rm F}\omega}=0$}''. It is however doubtful that the {\em spark gaps} will be fully at work under the condition of {\em a negligible violation} of the force-free condition at `this surface' $\SN$ (see the \CS(v,vi)-statements in section \rfs{static}).

{\bf P(vi)}: The stream equation certainly contains the two light surfaces at $\vF=\Omega_{{\rm F}\omega}\vp/\al=\pm c$ (see section \rfs{SoiL}), and is, though only seemingly, seamless at the null surface in the force-free limit; actually a gap accommodating the particle-current sources must exist at the null surface. It must be remarked that the stream equation (\rf{stream/MT}) describing the field structure in the force-free domains involves the two functions of $\Psi$, $I(\Psi)$ and $\Omega_{\rm F}(\Psi)$, in a nonlinear way, and yet these two cannot be determined within the force-free domains, but only by terminating the force-free domains in the resistive membranes for $I(\Psi)$, breaking down the force-free condition in the inductive membrane for $\Omega_{\rm F}(\Psi)$, and then solving the criticality-boundary value problem in the steady state (see sections \rfs{eMP}$\sim$\rfs{BC-SN}).
This is because there is no source nor sink of energy and angular momentum in the force-free domains. It will be necessary to elucidate the compatibility and consistency of the exact monopolar analytic solution of equation (\rf{stream/MT}) in the slow-rotation limit with $h\ll 1$ \citep{bla77,oka09,oka12a} with the Constraints in equations (\rf{EqSN}).

\subsection{Statements from the `static-observers' viewpoint}
\lbs{static}

A decisive difference between the two classes is that while the C$_{\rm P}$-statements are unique, the \CS-statements are ambiguous, because the C$_{\rm P}$-statements take care of the key roles of the FD effect and the two energy fluxes $\vcSEM$ and $\vcSsd$, whereas the skipping of the two fluxes in the \CS-statements appears to have allowed non-unique statements, such as `a negligible violation of the force-free condition' in the \CS(v,vi)-statements.
\begin{itemize}
\item {\bf \CS(i)}: ``{\em When a rotating black hole is threaded by magnetic field lines supported by external currents flowing in an equatorial disc, an electric potential difference will be induced.}'' (in the Summary of \cite{bla77})
\item {\bf \CS(ii)}: By Eqs.\ (4.3) and (4.4) in \cite{bla77} (see equation (\rf{E-AmFlux})), ``{\em the direction of energy flow cannot reverse on any given field line unless the force-free condition breaks down. Therefore, the natural radiation condition at infinity requires energy to flow outwards on all the field lines}''.
\item {\bf \CS(iii)}: ``{\em energy and angular momentum from a rotating hole can indeed be extracted by a mechanism directly analogous to that of Goldreich \& Julian (1969)}''.
\item {\bf \CS(iv)}: ``{\em the massive black hole behaves like a battery with an EMF of up to $10^{21}$\,V and an internal resistance of about 30\,$\Omega$. When a current flows, the power dissipated within the horizon, manifest as an increase in the reducible mass, is comparable with that dissipated in particle acceleration etc.\ in the far field.}'' (Blandford 1979)
\item {\bf \CS(v)}: ``{\em there must be some source of particles within the near magnetosphere. The currents that pervade the magnetosphere as the sources of the magnetic field are presumably carried by charged particles that are flowing outward at large distances. ( ..., positive outflow at large radii seems unavoidable.) We also know that the particle flux must be directed inwards through the event horizon, and so it cannot be conserved.}'' Then a vacuum gap will presumably appear between the flows of particles of opposite charges in between the two light surfaces. And thus, ``{\em Provided that the potential difference necessary to produce breakdown is much less than the total across the open field lines, an electromagnetic force-free solution should provide a reasonable approximation to the time-averaged structure of such a magnetosphere.}''
\item {\bf \CS(vi)}: The \CS(v)-statement was paraphrased in \cite{mac82} as follows: ``{\em ... magnetic field lines that thread the hole must get their charges and currents in some other manner.
Blandford \& Znajek (1977) argue that they come from the Ruderman-Sutherland (1975) `spark-gap' process, a cascade production of electron-positron pairs in the force-free region --- a production induced indirectly by a component of $\vcE$ along $\mbox{\boldmath $B$}$, which again is so weak as to constitute a negligible violation of force-freeness and degeneracy.}'' In what follows, the \CS(v,vi)-statements are referred to as `a negligible violation of the force-free condition.'
\end{itemize}

Our comments on the above \CS-statements are as follows:

{\bf S(i)}: There will be no unipolar induction battery at work at the horizon due to the potential gradient $\Omega_{\rm F}$ related to the ``{\em magnetic field lines supported by external currents flowing in an equatorial disc}''. Such a battery, if any, would be unable to launch a Poynting flux outward from the horizon, against the ingoing Poynting flux required by the second law of thermodynamics. The inflow of a Poynting flux would have to be due to a unipolar induction battery somewhere above the horizon (the position of the batteries will actually be on `{\em this surface}' $\SN$ where $\Omega_{{\rm F}\omega}=\mbox{\boldmath $E$}_{\rm p}=0$ in the C$_{\rm P}$(iv)-statement).

{\bf S(ii)}: If the \CS(ii)-statement claims that no breakdown of the force-free condition takes place, based on the relation (\rf{E-AmFlux}), this is almost equivalent to saying that the FD effect is negligible, i.e., $\om\approx 0$ and $\Omega_{{\rm F}\omega}\approx \Omega_{\rm F}$ in the hole's force-free magnetosphere, and hence to denying the existence of `{\em this surface}' $\SN$ where $\Omega_{{\rm F}\omega}=\mbox{\boldmath $E$}_{\rm p}=0$ (cf.\ the C$_{\rm P}$(iv)-statement). It is the second law that requires the existence of $\SN$, where the {\em complete} violation of the freezing-in condition as well as the force-free condition takes place, thereby building the particle-current sources in $\SN$ or the Gap $\GN$ (see section \rfs{eMP}). The first law rightly demands two more energy fluxes besides the total flux, i.e., $\vcSEM$ and $\vcSsd$, corresponding to $\Th dS$ and $\OmH dJ$; these fluxes stand for the inflow of Poynting energy flux near the horizon and the outflow of spin-down energy flux, whose sum yields the positive outward energy flux $\vcSE=\Omega_{\rm F}\vcSJ>0$ (see equation (\rf{vcSE/EM/sd})). Thus, the `physical observers' rotating with the FDAV $\om$ see two oppositely directed Poynting energy fluxes originating at `this surface' $\SN$, where $\vcSEM\ggel 0$ (see Figure \rff{Flux-om}). On the other hand, if the `distant static observers' see the energy flux $\vcSE=\Omega_{\rm F}\vcSJ$ only, then by bypassing the Poynting flux $\vcSEM\ggel 0$ at $\SN$, they may overlook the breakdown of the force-free condition (and the freezing-in condition as well) taking place at `this surface' $\SN$.
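The sign structure in S(ii) is easy to see with a toy profile of the FDAV along a single field line. The sketch below (our own toy model; the exponential profile $\om(\ell)$ and all normalizations are illustrative assumptions, not results of this paper) shows that $\vcSEM\propto\Omega_{\rm F}-\om$ reverses sign exactly where $\om=\Omega_{\rm F}$, while the total $\vcSE=\Omega_{\rm F}\vcSJ$ stays constant because $\om$ cancels in the sum:
\begin{verbatim}
# Toy model: the split S_E = S_EM + S_sd along a field line, with
# S_EM = (Omega_F - omega) S_J and S_sd = omega S_J.
import numpy as np

Omega_H = 1.0                    # horizon FDAV (arbitrary units)
Omega_F = 0.5 * Omega_H          # 0 < Omega_F < Omega_H by the second law
S_J = 1.0                        # angular-momentum flux, conserved on the line

l = np.linspace(0.0, 10.0, 1001)         # distance along the field line
omega = Omega_H * np.exp(-l)             # toy FDAV: Omega_H at l=0, -> 0 far out

S_sd = omega * S_J                       # spin-down flux, always > 0
S_EM = (Omega_F - omega) * S_J           # ZAMO-measured Poynting flux
S_E  = S_EM + S_sd                       # total: omega cancels identically

assert np.allclose(S_E, Omega_F * S_J)
l_N = l[np.argmin(np.abs(omega - Omega_F))]   # toy null surface S_N
print(f"S_EM changes sign (inflow -> outflow) near l = {l_N:.2f}")
\end{verbatim}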
{\bf S\,(iii,iv)}: There are several misgivings about the \CS(iii,iv)-statements, as follows: ({\bf a}) When the hole loses angular momentum, $dJ<0$, whether by the outflow of positive angular momentum or the inflow of negative angular momentum, this couples with the FD effect $\om$ to induce the spin-down energy flux $\vcSsd=\om\vcSJ>0$ flowing outward from the source $\OmH dJ$ in the first law at the horizon (see the C$_{\rm P}$(iii)-statement), irrespective of unipolar induction. It is not at the horizon, but at `this surface' $\SN$, where $\vcSEM\ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$, that a pair of unipolar induction batteries are needed. ({\bf b}) The first term $\Th dS$ of the first law requires the inflow of a Poynting flux originating at `this surface' $\SN$; the associated dissipation at the horizon is then due not to an {\em internal resistance} but to an external resistance for the battery at work at `this surface' $\SN$, and hence need not be an attribute of a probably non-existent horizon battery. ({\bf c}) The interpretation of $\vcSE=\Omega_{\rm F}\vcSJ$ as a Poynting flux launched by the horizon battery is incompatible with the first and second laws, because the former contains $\OmH dJ$ as the source of the spin-down flux but no source for an outgoing Poynting flux, while the latter requires an inflow of the Poynting flux at the horizon. ({\bf d}) The `total' energy flux in equation (\rf{E-AmFlux}) certainly seems to be the same as that in (\rf{vcSEM/SJ}) for the flux in the pulsar force-free magnetosphere, but this is in a sense the result of a tricky cancellation of the FD effect $\om$, as seen in equations (\rf{vcSE/EM/sd}) and (\rf{s.i}). The same cancellation occurs in expression (\rf{first-la}) for the first law; namely, there must in reality be $\Th dS+\OmH dJ$ existent between $c^2 dM$ and $\OmFb dJ$. Had the two fluxes $\vcSEM$ and $\vcSsd$, or the corresponding two terms in the first law, not been left out in the later analyses, \cite{bla77} might not have concluded that ``{\em energy and angular momentum from a rotating hole can be extracted by a mechanism directly analogous to Goldreich \& Julian (1969)}.'' It is to the outer domain $\calDout$ with $\vcSEM>0$ outside `this surface' that the \CS(iii)-statement may be applicable.

{\bf S(v,vi)}: It appears that an oversight of `{\em this surface}' $\SN$, where the complete violation occurs, might have led to the idea of `{\em a negligible violation of the force-free condition}'. This idea started from the necessity of `{\em some source of particles within the near magnetosphere}' within the framework of the force-free condition, but the {\em negligible violation} seems to have brought about subsequent ambiguities and discrepancies, such as: ({\bf a}) The prime-motive forces leading to the vacuum in an arena of particle production must drive oppositely directed flows, and the idea suggested appears to rely on the centrifugal force acting outward and the gravitational pull acting inward; yet these forces have to exist back to back in between the two light surfaces for the pair-production discharge mechanism to be at work under a {\em negligible violation} (e.g.\ $|\Epl| \ll |\Evl|$). But it is not certain how large `{\em a negligible violation' of the force-free condition} has to be to allow the amount of particle production needed to maintain the whole magnetospheric currents.
({\bf b}) The second law will require a domain with an ingoing Poynting flux within `this surface' $\SN$ (the C$_{\rm P}$(iii,iv)-statements), but `a mechanism directly analogous to \cite{gj69}' with `a negligible violation' of the force-free condition does not appear to be capable of explaining such a domain with $\vcSEM<0$. Then, the efficiency $\bar{\epsilon}_{\rm GTED}$ based on the first and second laws will be inapplicable to such a situation as described in the \CS(v,vi)-statements. ({\bf c}) If the outflow passing through S$_{\rm oL}$\ is a pulsar-type magneto-centrifugal wind (the \CS(ii)-statement), the inflow passing through \SIL\ must likewise be an anti-pulsar-type magneto-centrifugal wind blowing inward, with the particle source somewhere between S$_{\rm oL}$\ and \SIL. Also, the Poynting flux flows outward in the outer domain with $\Omega_{{\rm F}\omega}>0$, while it must flow inward in the inner domain with $\Omega_{{\rm F}\omega}<0$. These require that a pair of unipolar induction batteries, oppositely directed, as well as the particle source, coexist hidden there. It seems implausible to accomplish such a setting unless the complete violation of the force-free condition is the case.

\section{Toward the construction of gravito-thermo-electrodynamics}
\lbs{T-GTED}
\setcounter{equation}{0} \renewcommand{\theequation}{{5}.\arabic{equation}}

The foundation for black-hole `gravito-thermo-electrodynamics' was laid by \cite{mac82} and \cite{tho86}, with use of an absolute-space/universal-time formulation from the ZAMOs' point of view. It seems that the dragging of inertial frames and gravitational redshift, the most fundamental general-relativistic properties of the Kerr BH, are completely incorporated into the $3+1$ formulation. Had the formulation been correctly applied to the BZ process, it should have enabled people to {\em modify} the BZ process appropriately for the FD effect. Unfortunately, however, the $3+1$ formulation by \cite{mac82} was not used to correct the \CS-statements in the light of the C$_{\rm P}$-statements, and a chance of modifying the BZ process toward the right track of progress was consequently lost. The \CS(ii,iii,iv)-statements seem to have stemmed effectively from skipping the two non-conserved energy fluxes $\vcSEM$ and $\vcSsd$ (see equation (\rf{vcSEa})), equivalently from presuming $\om\simeq 0$. The formulation was rather conversely used to reinforce the \CS(iv,v,vi)-statements (see, e.g., section 7.3 in \cite{mac82}).

\subsection{The Needed Modifications of the BZ process}
\lbs{MOD/BZ}

As seen in the previous section, the C$_{\rm P}$-statements are made mainly from a physical observer's (or ZAMO's) point of view, while the \CS-statements are made from a distant static observer's point of view; yet seemingly the latter alone provide the framework of the BZ process, with no attention paid to the former as a result. This means that the energy flow through the hole's force-free magnetosphere is described by the same relation as that in the pulsar force-free magnetosphere, i.e., $\vcSE=\Omega_{\rm F}\vcSJ$, without essential consideration of the indispensable fluxes $\vcSEM$ and $\vcSsd$, hence resulting in decoupling thermodynamics from electrodynamics (see the \CS(iii)-statement; cf.\ the C$_{\rm P}$(iii,iv)-statements).
It is the two energy fluxes $\vcSEM$ and $\vcSsd$ that play an integral role in unifying pulsar electrodynamics and BH electrodynamics with the help of the FD effect, but in the calculation of the `total' flux $\vcSE$, a simple identity (\rf{s.i}) makes $\om$ cancel out in the sum of the two fluxes, resulting in the same form as that in equation (\rf{E-AmFlux}) for the force-free pulsar magnetosphere, leaving no trace or evidence of the breakdown of the force-free condition, despite not only the two `positive' statements C$_{\rm P}$(iii,iv) in \cite{bla77}, but also the formulae in equations (\rf{vcSEab}) \citep{mac82} and equations (\rf{TPM-TD}) \citep{tho86}, in which $\vcSEM$ and $\vcSsd$ clearly show the indispensable relation with the first and second laws of BH thermodynamics.

As seen in sections \rfs{BHTsub} and \rfs{nature}, \cite{bla77} had derived $\epsilon=\Omega_{\rm F}/\OmH$ from the mass formula (\rf{massF}) with the entropy $S$ replaced by the irreducible mass $M_{\rm irr}$ and the condition $0\lo \Omega_{\rm F}\lo \OmH$ (see Eqs.\ (4.9), (4.10) and (4.7) in \cite{bla77}). On the other hand, the two non-conserved energy fluxes $\vcSEM$ and $\vcSsd$ were skipped in deriving the actual energy flux $\vcSE=\Omega_{\rm F}\vcSJ$ in the BZ process, which means that the indispensable information on `where and how' the force-free and freezing-in conditions break down was lost, eventually leading to the \CS(ii,iii,iv)-statements. The important fact is that the BZ process is then no longer linked to the source term $\OmH(dJ/dt)$ of the spin-down energy in the first law, so that a rather fictitious magnetic energy \citep{bla77,mac82} had to be assumed for an outward Poynting flux from the horizon, related to the horizon battery.

\subsection{The major premises and propositions}
\lbs{major}

\noindent Our standpoint in this paper is as follows: ({\bf a}) Kerr holes are strictly governed by the no-hair theorem and the first three laws of thermodynamics. We presume that every Kerr hole is incapable of being `magnetized' in the steady state, and hence ``the massive black hole will {\em not} behave like a battery with an emf....'' (cf.\ the \CS(iii,iv)-statements). Also, Kerr holes may be an acceptor of a Poynting flux of external origin, but can never be an emitter of one of internal origin (see \cite{pun90}; statements C$_{\rm P}$(iii,iv)). So we at least need presume neither that ``{\em magnetic field lines supported by external currents in an equatorial disc thread the event horizon}'', nor that ``{\em particles inside the hole will interact with particles a long way away from the hole through the agency of the magnetic field}'' (cf.\ \cite{bla77}). ({\bf b}) The Kerr hole is not magnetized, unlike NSs, nor does it allow field lines to be pinned down at the horizon. The hole's magnetosphere is connected with the hole's body itself only by means of the coupling of the FD-AV $\om$ and the FL-AV $\Omega_{\rm F}$. Moreover, we do not presume that it is interstellar general magnetic fields that are involved in the extraction process (cf.\ III D1, \cite{tho86}).
By the no-hair theorem, no conservation of magnetic flux at the birth of a hole will allow the existence of magnetic fluxes of internal origin emanating to the outside beyond the horizon, nor the threading of field lines of external origin with $\Omega_{\rm F}=$ constant, because the non-locality of Ferraro's law of iso-rotation will not extend inside the hole beyond the surface of causal disconnection. We presume that no `spooky action' will be at work at a distance across S$_{\rm H}$, except for quantum entanglement. ({\bf c}) We firstly assume the existence of the poloidal magnetic field $\mbox{\boldmath $B$}_{\rm p}$ constituting the backbone of the hole's magnetosphere, which extends from the vicinity of the horizon surface S$_{\rm H}$\ to near the infinity surface \Sinf, and secondly the existence of perfectly conductive plasma around the Kerr hole, which is permeated by the poloidal magnetic field $\mbox{\boldmath $B$}_{\rm p}$, with the FLAV $\Omega_{\rm F}(\Psi)$. Ferraro's law of iso-rotation holds throughout the stationary, axisymmetric magnetosphere, i.e., $(\mbox{\boldmath $B$}_{\rm p}\cdot\mbox{\boldmath $\nabla$})\Omega_{\rm F}=0$ (see section \rfs{D-OmF}), where $0<\Omega_{\rm F}<\OmH$ is assumed by the second law of thermodynamics (see section \rfs{second/restr}). There will then be a surface fairly above the horizon where the FD-AV equals the FL-AV, i.e.\ $\om=\Omega_{\rm F}(\equiv \omN)$. This surface is referred to as the null surface $\SN$ \citep{oka92}, and is equal to `{\em this surface}' in the C$_{\rm P}$(vi)-statement. It is in reality at $\SN$ that the freezing-in and force-free conditions spontaneously break down (section \rfs{ED3/1F}), although each poloidal field line defined by $\Psi=$constant is assumed to be robust and continuous, with $\Omega_{\rm F}=$constant, all the way across $\SN$ from S$_{\rm H}$\ to \Sinf. Then, one of the key tasks is to elucidate `why and how' the breakdown must occur at `this surface' $\SN$.

\section{The $3+1$ formulation for a modified BZ process}
\lbs{ED3/1F}
\setcounter{equation}{0} \renewcommand{\theequation}{{6}.\arabic{equation}}

The $3+1$ absolute-space/universal-time formulation with Boyer-Lindquist coordinates by \cite{mac82} and the succinct summary of thermodynamic properties of Kerr BHs described in \cite{tho86} are helpful to elucidate how to modify the usual BZ process, in the light of the C$_{\rm P}$-statements in \cite{bla77}. The effect of the dragging of inertial frames is indeed perfectly incorporated into the $3+1$ formulation, in a way consistent with combining pulsar electrodynamics with BH thermodynamics (section \rfs{BHTsub}). The ZAMOs or `physical observers' traveling round the hole with the FDAV $\om$ will see how smoothly the FD effect combines with unipolar induction.
The electromagnetic quantities, such as $\vcE$, $\mbox{\boldmath $B$}$, $\vre$ and $\vcj$, are measured by the ZAMOs, and in particular we have already used the ZAMO-measured FLAV $\Omega_{{\rm F}\omega}=\Omega_{\rm F}-\om$ in the previous sections. The `distant, static observers' will see that Ferraro's law of iso-rotation holds for $\Omega_{\rm F}$, but the `physical observers' will see that Ferraro's law for $\Omega_{{\rm F}\omega}$ no longer holds along each current-field-streamline due to the FD effect; in turn this produces the {\em needed} complete breakdown of the freezing-in and force-free conditions, which plays an integral role in modifying the usual BZ process. The point is to clarify `where, how and why' the force-free condition breaks down (cf.\ \citet{bla77}) and the force-free domains terminate \citep{mac82}, to determine the two eigenfunctions $\Omega_{\rm F}$ and $I$ in the steady axisymmetric state (see sections \rfs{NullS}, \rfs{RM} and \rfs{BC-SN}).

\subsection{Fundamental equations and conditions}
\lbs{basics}

The absolute space around a Kerr BH with mass $M$ and angular momentum per unit mass $a=J/Mc$ is described in Boyer-Lindquist coordinates:
\begin{subequations}
\begin{eqnarray}
ds^2=(\rho^2/\Delta) dr^2+\rho^2 d\theta^2 +\vp^2 d\phi^2; \\[1mm]
\rho^2\equiv r^2+a^2\cos^2 \theta,\ \ \Delta \equiv r^2-2GMr/c^2 +a^2, \\[1mm]
\Sigma^2\equiv (r^2+a^2)^2-a^2\Delta\sin^2\theta, \ \ \vp=(\Sigma/\rho)\sin\theta , \\[1mm]
\al=\rho\Delta^{1/2}/\Sigma , \ \ \om=2aGMr/(c\Sigma^2) \lb{alom}
\end{eqnarray}
\lb{Kmetric}
\end{subequations}
(see \cite{mac82,oka92}), where $\al$ is the lapse function/redshift factor and $\om$ is the FD angular velocity (FDAV). The parameters $\al$ and $\om$ are given as unique functions of $\vp$ and $z$ in the Boyer-Lindquist coordinates, with $0\leq\al\leq 1$ and $\OmH\geq\om\geq 0$. Note that for $\al\to 0$, $\om\to\OmH=$constant on S$_{\rm H}$\ by the zeroth law. When we introduce curvilinear orthogonal coordinates $(\ell,\Psi)$ in the poloidal plane, where $\ell$ stands for the distance measured along each field line $\Psi=$constant, we may express, e.g., $\om=\om(\ell,\Psi)$. Just as $\al$ was `coordinatized' in the stretched horizon \citep{mac82,tho86}, we `coordinatize' $\om$ along field lines in the whole magnetosphere \citep{oka15a}. The ZAMO-measured FL-AV $\Omega_{{\rm F}\omega}$ as well is `coordinatized' (see e.g.\ Figure \rff{Flux-om}).

From a somewhat pedagogical point of view, we revisit the basic expressions for the poloidal and toroidal components of $\mbox{\boldmath $B$}$ and $\vcE$, the charge density $\vre$, the particle velocity $\mbox{\boldmath $v$}$ and the field-line rotational velocity (FL-RV) $\vF$ in the steady axisymmetric state \citep{mac82,tho82,tho86,oka92,oka15a}. For the electric field $\vcE$ in curved spacetime we use Eq.\ (2.24a) or (4.7) in \cite{mac82},
\begin{equation}
\vcE=\frac{1}{\al}\left(\mbox{\boldmath $\nabla$} A_0+\frac{\om}{c}\mbox{\boldmath $\nabla$} A_\phi \right),
\lb{vcEp-A}
\end{equation}
where $A_0$ is a scalar potential and $\mbox{\boldmath $A$}=(0,0, A_\phi)$ is a vector potential, with $A_\phi=\Psi/2\pi$.
This is the kick-off equation to make the FD effect couple with unipolar induction, by utilizing both the freezing-in and force-free conditions in the `force-free' magnetosphere:
\begin{subequations}
\begin{eqnarray}
\vcE+\mbox{\boldmath $v$}/c\times\mbox{\boldmath $B$}=0, \lb{fi-c} \\
\vre\vcE+\frac{\vcj}{c}\times \mbox{\boldmath $B$}=0, \lb{ff-fz} \\
\vcE\cdot\mbox{\boldmath $B$}=\vcj\cdot\vcE=\mbox{\boldmath $v$}\cdot\vcE =0. \lb{deg}
\end{eqnarray}
\lb{ff-a}
\end{subequations}
While the second condition (\rf{ff-fz}) regards inertial forces as negligible compared with the Lorentz force, the first condition (\rf{fi-c}) implies that `force-free' magnetic field lines are frozen into the particles and yet dragged around by the motion $\mbox{\boldmath $v$}$ of `massless' particles. The combination of the two opposite conditions, i.e.\ force-freeness and freezing-inness, then creates a kind of extreme physical state \citep{oka06}, with the fields degenerate \citep{mac82}, where current-field-streamlines are equipotentials (see equation (\rf{vc-vjB}) later). Condition (\rf{deg}) means that no particle acceleration takes place in the force-free domains.

The `force-free magnetosphere' governed by the above conditions (\rf{ff-a}) possesses the basic conserved quantities $\Omega_{\rm F}(\Psi)$ and $I(\Psi)$, but it cannot be viable unless these two conserved quantities are determined by the criticality-boundary condition formulated through the breakdown of the above conditions (see section \rfs{NullS}). The flows of angular momentum and energy, particles and currents are described by {\em flux}, {\em wind} and {\em circuit} theories, respectively, not to mention that these theories must be consistent with each other.

\subsection{The electric current}
\lbs{coupling}

We decompose the magnetic field $\mbox{\boldmath $B$}=\mbox{\boldmath $\nabla$}\times\mbox{\boldmath $A$}$ as
\begin{subequations}
\begin{eqnarray}
\mbox{\boldmath $B$}_{\rm p}=-(\mbox{\boldmath $t$} \times\nabla \Psi)/2\pi\vp, \lb{bhBp}\\
\vcBt=- (2I/\vp\al c)\mbox{\boldmath $t$} , \lb{bhBt}
\end{eqnarray}
\lb{Bpt}
\end{subequations}
where the `current function' is denoted by $I=I(\ell,\Psi)=I(\Omega_{{\rm F}\omega},\Psi)$ in general. From Eq.
(2.17c) in \cite{mac82} for $\vcj$ we have
\begin{equation}
\vcj=\frac{c}{4\pi\al} \left[\mbox{\boldmath $\nabla$}\times\al\mbox{\boldmath $B$}+ \frac{1}{c}(\vcE\cdot\mbox{\boldmath $\nabla$}\om)\vcm \right],
\lb{vcjP}
\end{equation}
where $\vcm=\vp\mbox{\boldmath $t$}$ is a Killing vector, and then for $\mbox{\boldmath $j$}_{\rm p}$
\begin{equation}
\mbox{\boldmath $j$}_{\rm p}= \frac{\mbox{\boldmath $t$} \times\mbox{\boldmath $\nabla$} I}{2\pi\vp\al}.
\lb{vcjpO}
\end{equation}
Introducing the two orthogonal unit vectors $\uvp$ and $\uvn$ in the poloidal plane, i.e., $\uvp=\mbox{\boldmath $B$}_{\rm p}/|\mbox{\boldmath $B$}_{\rm p}|$ and $\uvn=-\mbox{\boldmath $\nabla$}\Psi/|\mbox{\boldmath $\nabla$}\Psi|$, with $\uvn\times\uvp=\mbox{\boldmath $t$}$, we have for the current function in general, i.e., $I=I(\ell,\Psi)$,
\begin{equation}
\mbox{\boldmath $\nabla$} I=\LPPlDr{I}{\ell} \uvp - 2\pi\vp B_{\rm p}\LPPlDr{I}{\Psi}\uvn ,
\lb{vcnbI}
\end{equation}
and hence we express the electric current $\mbox{\boldmath $j$}_{\rm p}$ as follows:
\begin{eqnarray}
\mbox{\boldmath $j$}_{\rm p}=\jpl\uvp+\jvl\uvn, \hspace{3cm} \nonumber \\[1mm]
\jpl= -\frac{B_{\rm p}}{\al}\LPPlDr{I}{\Psi} ,\ \ \jvl=-\frac{1}{2\pi\al} \LPPlDr{I}{\ell}
\lb{jpl/vl}
\end{eqnarray}
(see \citet{oka99}). Also, for $\jt$ we have from equation (\rf{vcjP})
\begin{equation}
\jt= \frac{\vp c}{8\pi^2 \al} \left[- \mbox{\boldmath $\nabla$}\cdot \LPfrac{\al\mbox{\boldmath $\nabla$}\Psi} {\vp^2} + \frac{2\pi}{c} \vcE\cdot\mbox{\boldmath $\nabla$}\om \right].
\lb{vcjtO}
\end{equation}
When $(\mbox{\boldmath $B$}_{\rm p}\cdot\mbox{\boldmath $\nabla$})X=B_{\rm p}(\partial X/\partial\ell)=0$ for an arbitrary function $X$, we have $X=X(\Psi)$, and $X$ is then said to be {\em conserved} along each field line; for example, $I=I(\Psi)$ in the `force-free' domains (see equation (\rf{SJa})). We presume that each current line given by $I(\ell,\Psi)=$constant must close in {\em circuit} theory, starting from one terminal of a unipolar induction battery and returning to the other terminal in the steady state, after supplying power to the acceleration zone with $\jvl>0$ (the current-closure condition).

\newcommand{\Nbe}{n^{(-)}} \newcommand{\Nbp}{n^{(+)}}

\subsection{The velocity $\mbox{\boldmath $v$}$ of `massless' particles}
\lbs{massless}

Combining the two conditions (\rf{ff-a}a,b), one has
\begin{equation}
\mbox{\boldmath $v$}=\vcj/\vre
\lb{charge1}
\end{equation}
\citep{bla77}. When we denote the number densities of electrons and positrons by $\Nbe$ and $\Nbp$, the charge density is given by $\vre=e(\Nbp - \Nbe)$.
Then equation (\rf{charge1}) implies that the `force-free' plasma must be charge-separated, i.e., $\vre=-e\Nbe$ or $+e\Nbp$, and that the role of the `massless' or `inertia-free' particles is just to carry charges, exerting no dynamical effect. The `force-free' domains, where no particle acceleration takes place, must be terminated by the restoration of particle inertia, for particles to accelerate, thereby determining the eigenfunction $I(\Psi)$. This requires a change of the {\em volume} currents, parallel to the poloidal field $\mbox{\boldmath $B$}_{\rm p}$ ($\jvl =0$) in the force-free domains, into the {\em surface} currents, perpendicular to $\mbox{\boldmath $B$}_{\rm p}$ ($\jpl \approx 0$), on the terminating surfaces of the outer and inner force-free domains S$_{{\rm ff}\infty}$\ and \SffH\ (see sections \rfs{Sffinf}, \rfs{SffH}). Moreover, the breakdown of the freezing-in and force-free conditions at `this surface' $\SN$ imposes $\mbox{\boldmath $v$}=\vcj=0$, because $\mbox{\boldmath $v$}>0$ far outside and $\mbox{\boldmath $v$}<0$ near the horizon, while $\vcj$ does not change direction but must vanish, and $\vre$ must certainly change sign at the place of the breakdown (see section \rfs{NullS}). This implies that the sources of particles and currents must be located where the breakdown occurs, i.e., at `this surface' $\SN$ (cf.\ the \CS-statements in section \rfs{static}).

\subsection{The field angular momentum flux $I(\Psi)$}
\lbs{FAgMF}

An inner product of equation (\rf{ff-fz}) with $\vcm=\vp\mbox{\boldmath $t$}$ yields, with use of equation (\rf{vcjpO}),
\begin{eqnarray}
0=\vcm\cdot\left[ \vre\mbox{\boldmath $E$}_{\rm p}+\frac{\vcj}{c}\times \mbox{\boldmath $B$}\right]=\frac{\vp}{c}(\mbox{\boldmath $j$}_{\rm p}\times\mbox{\boldmath $B$}_{\rm p})_{\rm t} =\frac{\vp}{c}\jvl B_{\rm p} \nonumber \\
=-\frac{(\mbox{\boldmath $B$}_{\rm p}\cdot\mbox{\boldmath $\nabla$}) I }{2\pi\al c} =- \frac{1}{\al}\mbox{\boldmath $\nabla$}\cdot \left(\frac{I\mbox{\boldmath $B$}_{\rm p}}{2\pi c}\right)=- \frac{1}{\al}\mbox{\boldmath $\nabla$}\cdot \al\vcSJ, \lb{SJa} \hspace{0.4cm}
\end{eqnarray}
where $\vcSJ$ is given by the second of equations (\rf{E-AmFlux}). It turns out that the {\em field} angular momentum $-\al\vp\Bt=(2/c)I(\Psi)$ is conserved along each field line. From equations (\rf{bhBp}) and (\rf{jpl/vl}) one has $\jvl=0$ and then
\begin{equation}
\mbox{\boldmath $j$}_{\rm p} =-\frac{1}{\al}\dr{I}{\Psi}\mbox{\boldmath $B$}_{\rm p}.
\lb{vcjp2}
\end{equation}
As $\vcm\cdot(\mbox{\boldmath $j$}_{\rm p}\times\mbox{\boldmath $B$}_{\rm p})=0$ shows, it is exactly the `torque-free condition' included in the force-free condition that leads to $(\mbox{\boldmath $B$}_{\rm p}\cdot\mbox{\boldmath $\nabla$})I=0$, i.e., $I=I(\Psi)$. It turns out that $I=I(\Psi)$ expresses not only the `current function' but also the angular momentum flux per unit magnetic flux tube in the force-free domains. Then $I(\Psi)$, as well as $\Omega_{\rm F}(\Psi)$, is {\em two-sided}, and current lines are coincident with the corresponding field-streamlines, as shown in equation (\rf{deg}). Note that the {\em two-sidedness} of $I(\Psi)$ holds only in each force-free domain, and the $\ell$- or $\Omega_{{\rm F}\omega}$-dependence of $I$ must be restored, i.e., $I=I(\ell,\Psi)=I(\Omega_{{\rm F}\omega},\Psi)$, in the resistive membranes (see sections \rfs{eMP} and \rfs{GapStruc}).

\subsection{The potential gradients $\Omega_{\rm F}(\Psi)$ and $\Omega_{{\rm F}\omega}(\ell,\Psi)$}
\lbs{D-OmF}

The coupling of frame dragging with unipolar induction in BH electrodynamics begins with equation (\rf{vcEp-A}). Inserting the relations $\mbox{\boldmath $B$}=\mbox{\boldmath $B$}_{\rm p}+\Bt\mbox{\boldmath $t$}$ and $\mbox{\boldmath $v$}=\mbox{\boldmath $v$}_{\rm p}+\vtt\mbox{\boldmath $t$}$ into equation (\rf{fi-c}) yields
\[
\vcE=-\mbox{\boldmath $v$}/c\times\mbox{\boldmath $B$}= -\mbox{\boldmath $v$}_{\rm p}/c\times\mbox{\boldmath $B$}_{\rm p}+\frac{\mbox{\boldmath $t$} }{c}\times(\mbox{\boldmath $v$}_{\rm p}\Bt-\vtt\mbox{\boldmath $B$}_{\rm p}),
\]
and by axial symmetry $\vcE_{\rm t}=-\mbox{\boldmath $v$}_{\rm p}/c\times\mbox{\boldmath $B$}_{\rm p}=0$, and hence
\begin{equation}
\mbox{\boldmath $v$}_{\rm p}=\kp\mbox{\boldmath $B$}_{\rm p},
\lb{vp-Bp}
\end{equation}
where $\kp$ is a scalar function (see equation (\rf{kappa})). Then we have
\begin{equation}
\mbox{\boldmath $E$}_{\rm p}= -\frac{(\vtt-\kp\Bt)}{2\pi\vp c}\mbox{\boldmath $\nabla$}\Psi .
\lb{vcEp-b}
\end{equation}
Equating the two equations (\rf{vcEp-A}) and (\rf{vcEp-b}) for $\mbox{\boldmath $E$}_{\rm p}$ yields
\begin{equation}
\mbox{\boldmath $\nabla$} A_0 = -K\mbox{\boldmath $\nabla$}\Psi, \quad K \equiv -\frac{\al(\vtt-\kp\Bt) -\om\vp}{2\pi\vp c},
\lb{vcEp-A0}
\end{equation}
and taking the curl of $\mbox{\boldmath $\nabla$} A_0$, we get
\[
0= \mbox{\boldmath $\nabla$}\times \mbox{\boldmath $\nabla$} A_0= - \mbox{\boldmath $\nabla$}\times (K\mbox{\boldmath $\nabla$}\Psi) =-\mbox{\boldmath $\nabla$} K\times \mbox{\boldmath $\nabla$}\Psi=2\pi\vp\mbox{\boldmath $t$} (\mbox{\boldmath $B$}_{\rm p}\cdot\mbox{\boldmath $\nabla$} K),
\]
which indicates that $K$ is a function of $\Psi$ only, and hence
\begin{equation}
K=-\dr{A_0}{\Psi}\equiv \frac{\Omega_{\rm F}(\Psi)}{2\pi c}.
\lb{K}
\end{equation}
Equating this $K$ to the one in equations (\rf{vcEp-A0}) yields the FL-RV $\vF$ in equation (\rf{vp/vt}) later. From equations (\rf{vcEp-b})$\sim$(\rf{K}), we get
\begin{subequations}
\begin{eqnarray}
\mbox{\boldmath $E$}_{\rm p} =- \frac{\Omega_{{\rm F}\omega}}{2\pi\al c}\nabla\Psi = \frac{\vF}{ c} B_{\rm p} \uvn , \lb{bhEp} \\
\vre= -\frac{1}{8\pi^2 c} \nabla\cdot \left( \frac{\Omega_{{\rm F}\omega}}{\al}\mbox{\boldmath $\nabla$}\Psi \right), \lb{bh/rhoa}
\end{eqnarray}
\lb{bh/rhob}
\end{subequations}
where $\mbox{\boldmath $E$}_{\rm p}$ is already given in equation (\rf{EpBt}). Note that it is the freezing-in condition that ensures Ferraro's law of iso-rotation for field lines in the steady axisymmetric state, i.e., $\Omega_{\rm F}(\Psi)=$constant, but the ZAMOs see that the iso-rotation law for $\Omega_{{\rm F}\omega}$ is violated by the FD effect, as shown by the $\ell$-dependence of $\Omega_{{\rm F}\omega}=\Omega_{{\rm F}\omega}(\ell,\Psi)$. The importance of `this surface' $\SN$ resulting from this violation, where $\Omega_{{\rm F}\omega}=\mbox{\boldmath $E$}_{\rm p}=0$, was already pointed out by \cite{bla77} (see the C$_{\rm P}$(iv)-statement in section \rfs{phys-v}). The ZAMO-measured particle velocity $\mbox{\boldmath $v$}$ and the FL-RV $\vF$ are summarized as follows:
\begin{subequations}
\begin{eqnarray}
\mbox{\boldmath $v$}=\kappa\mbox{\boldmath $B$}+\vF \mbox{\boldmath $t$} , \lb{vcv} \\
\mbox{\boldmath $v$}_{\rm p}=\kappa B_{\rm p}, \quad \vvt=\kappa\Bt +\vF, \lb{vcvp/t} \\
\vF=\Omega_{{\rm F}\omega}\vp /\al, \lb{vp/vt} \\
\kp=-(1/\vre\al) (dI/d\Psi).
\lb{kappa}
\end{eqnarray}
\end{subequations}
Because $\vF$ stands for the physical velocity of field lines relative to the ZAMOs, the $\mbox{\boldmath $E$}_{\rm p}$ seen by the ZAMOs is entirely induced by the motion of the magnetic field lines, i.e., $\mbox{\boldmath $E$}_{\rm p} =-(\vF\mbox{\boldmath $t$} /c)\times \mbox{\boldmath $B$}_{\rm p}$ \citep{mac82}. It is by the `$\al\om$ mechanism' \citep{oka92} that one can define the inner light surface by $\vF=-c$ and `this surface' $\SN$ by $\vF=0$, in addition to the outer light surface with $\vF=c$ (see the C$_{\rm P}$(v)-statement and section \rfs{SoiL}).

We decompose the Lorentz force $(\vre\vcE+\vcj/c\times\mbox{\boldmath $B$})$ as follows:
\begin{eqnarray}
\vre\vcE+\frac{1}{c} \vcj\times\mbox{\boldmath $B$} =\frac{1}{c}\left[ -\jvl \Bt\uvp+\jvl B_{\rm p}\mbox{\boldmath $t$} \right. \nonumber \\
\left. + \left(\jpl\Bt-\jt B_{\rm p} +\frac{\Omega_{{\rm F}\omega}\vp}{\al}B_{\rm p}\vre\right)\uvn \right]
\lb{JcrossB}
\end{eqnarray}
(see \citet{oka99}). The force-free and torque-free conditions are given simply by $\jvl\propto -(\partial I /\partial\ell)=0$, i.e., $I=I(\Psi)=$constant along each field line (see equations (\rf{SJa}) and (\rf{energya})). The $\uvn$-component yields
\begin{subequations}
\begin{eqnarray}
\jt=\vre\vvt=(\Omega_{{\rm F}\omega}\vp/\al) \vre + (1/\al^2 \vp c)(dI^2/d\Psi) ~, \ \ ~~ \lb{jta} \\
= -\frac{\Omega_{{\rm F}\omega}\vp}{8\pi^2 \al c}\mbox{\boldmath $\nabla$}\cdot \left(\frac{\Omega_{{\rm F}\omega}}{\al}\mbox{\boldmath $\nabla$}\Psi\right) +\frac{1}{\vp\al^2 c}\dr{I^2}{\Psi} ~, \ \ \ ~~ \lb{jtb}
\end{eqnarray}
\lb{jtc}
\end{subequations}
which accords with the result from $\jt=\vre\vvt$ in equation (\rf{charge1}), utilizing $\vre$ in (\rf{bh/rhoa}), $\vvt$ in (\rf{vcvp/t}) and $\vF$ in (\rf{vp/vt}). By equations (\rf{vcjtO}) and (\rf{bhEp}) we also have
\begin{equation}
\jt = - \frac{\vp c}{8\pi^2\al} \left[\mbox{\boldmath $\nabla$}\cdot\left(\frac{\al}{\vp^2}\mbox{\boldmath $\nabla$}\Psi\right)+\frac{\Omega_{{\rm F}\omega}}{\al c^2}(\mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$})\om \right]
\lb{jt-Farad}
\end{equation}
(see Eqs.\ (2.17c), (5.6b) in \cite{mac82}). Equating the two expressions (\rf{jtb}) and (\rf{jt-Farad}) for $\jt$ leads to the stream equation (\rf{stream/MT}).
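As a remark of orientation (our own, obtained by direct substitution rather than taken from the references), setting $\al\to 1$ and $\om\to 0$ in equation (\rf{stream/MT}) removes every trace of the FD effect and reduces it to the familiar pulsar equation for a flat-space force-free magnetosphere,
\[
\nabla\cdot\left[ \frac{1}{\vp^2} \left(1-\frac{\Omega_{\rm F}^2\vp^2}{c^2} \right) \nabla\Psi \right] + \frac{\Omega_{\rm F}}{c^2}\dr{\Omega_{\rm F}}{\Psi}(\nabla\Psi)^2 +\frac{16\pi^2}{\vp^2 c^2} I\dr{I}{\Psi}=0 ,
\]
with the single light surface at $\vp=c/\Omega_{\rm F}$; the inner light surface and the null surface $\SN$ disappear together with $\om$.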
Putting the relations among $\mbox{\boldmath $v$}$, $\vcj$ and $\mbox{\boldmath $B$}$ together from equations (\rf{charge1}), (\rf{vcjp2}) and (\rf{jta}), one has
\begin{equation}
\mbox{\boldmath $v$}=\frac{\vcj}{\vre}= -\frac{1}{\vre\al}\dr{I}{\Psi}\mbox{\boldmath $B$}+\frac{\Omega_{{\rm F}\omega}\vp }{\al} \mbox{\boldmath $t$} ,
\lb{vc-vjB}
\end{equation}
which indicates that current-field-streamlines are equipotentials in the force-free domains (see equation (\rf{deg})).

In passing, we clarify an important constraint imposed by the `current-closure condition' in the steady axisymmetric state, that is, no net gain nor loss of charges over any closed surface threaded by current lines in the force-free domains. For a closed surface extending from the first open field line $\Psi=\Psi_0$ to the last open field line $\Psi=\Psib$ in the poloidal plane, one has
\begin{equation}
\oint \al\vcj\cdot d\mbox{\boldmath $A$}\propto I(\Psib)-I(\Psi_0)=0,\ \ I(\Psib)=I(\Psi_0)=0,
\lb{c-c-c}
\end{equation}
when there is no line current at $\Psi=\Psi_0$ or $\Psi=\Psib$. This requires that the function $I(\Psi)$ have at least one extremum at $\Psi=\Psi_{\rm c}$ where $(dI/d\Psi)_{\rm c}=0$ (see figure 2 in \citet{okam06} for one example of $I(\Psi)$), and hence $\mbox{\boldmath $j$}_{\rm p}=\vre\mbox{\boldmath $v$}_{\rm p} \lleg 0$ for $\Psi\lleg\Psi_{\rm c}$ (see Figure \rff{DC-C}), where $\Psi_0<\Psio<\Psi_{\rm c}<\Psit<\Psib$.
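The closure constraint (\rf{c-c-c}) is simple enough to check numerically. The following toy sketch (ours; the profile $I(\Psi)=\sin\pi\Psi$ is an illustrative choice, not the eigenfunction of this paper) confirms that $I(\Psi_0)=I(\Psib)=0$ forces an extremum at some $\Psi_{\rm c}$, across which the field-aligned current $\jpl=-(B_{\rm p}/\al)\,dI/d\Psi$ reverses sign, so that the poloidal current can close in a circuit:
\begin{verbatim}
# Toy check of the current-closure condition I(Psi_0) = I(Psi_bar) = 0.
import numpy as np

Psi = np.linspace(0.0, 1.0, 201)     # Psi_0 = 0, Psi_bar = 1 (arbitrary units)
I = np.sin(np.pi * Psi)              # illustrative current function

dI_dPsi = np.gradient(I, Psi)
Psi_c = Psi[np.argmin(np.abs(dI_dPsi))]
print(f"extremum of I(Psi) near Psi_c = {Psi_c:.2f}")

j_par = -dI_dPsi                     # up to a factor B_p/alpha > 0, omitted
print("j_par < 0 for Psi < Psi_c:", bool(np.all(j_par[Psi < Psi_c - 0.01] < 0)))
print("j_par > 0 for Psi > Psi_c:", bool(np.all(j_par[Psi > Psi_c + 0.01] > 0)))
\end{verbatim}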
\subsection{The `conserved' and `non-conserved' energy fluxes}
\lbs{TEF}

Multiplying equation (\rf{SJa}) by $\Omega_{\rm F}$, one has
\begin{equation}
0=- \frac{\Omega_{\rm F}}{\al}\mbox{\boldmath $\nabla$}\cdot \al\vcSJ =- \frac{1}{\al}\mbox{\boldmath $\nabla$}\cdot \al\Omega_{\rm F}\vcSJ,
\lb{energya}
\end{equation}
which surely reproduces equation (\rf{E-AmFlux}) for the relation $\vcSE=\Omega_{\rm F}\vcSJ$. This procedure of derivation, however, does not yield the non-conserved fluxes $\vcSEM$ and $\vcSsd$ between $\vcSE$ and $\Omega_{\rm F}\vcSJ$, although one can easily obtain `{\em a Poynting flux}' (see \cite{bla77}; the C$_{\rm P}$(iii,iv)-statements), which accords with $\vcSEM$, from equations (\rf{bhEp}) and (\rf{bhBt}). Then, replacing $\Omega_{\rm F}$ with $\Omega_{{\rm F}\omega} + \om$ with use of the identity in (\rf{s.i}), one obtains equation (\rf{vcSE/EM/sd}) or (\rf{vcSEab}a,b), which shows that the `total' flux $\vcSE$ consists of the two non-conserved fluxes $\vcSEM$ and $\vcSsd$ (see section \rfs{first/eps}).

The `total' energy flux $\vcSE$ and the angular momentum flux $\vcSJ$ correspond to the terms $c^2(dM/dt)$ and $dJ/dt$, respectively, in the first law (see equations (\rf{c.mass}) and (\rf{PEJ}a,b)). The `non-conserved' energy flux $\vcSsd$ fits at the horizon to the second term $\OmH dJ/dt$ of the first law, because $\om$ tends to $\OmH=$constant on the horizon surface S$_{\rm H}$\ by the zeroth law of thermodynamics. The other non-conserved flux $\vcSEM$ corresponds to $\Th (dS/dt)$ on the horizon \citep{oka09,oka12a,oka12b,oka15a,oka15b}. Therefore, we have three energy fluxes in total, each important in its own right (see Figure~\rff{Flux-om}; cf.\ figure 3 in \citet{oka09}).

\subsection{Two light surfaces S$_{\rm oL}$\ and \SIL}
\lbs{SoiL}

It is easy to verify the existence of not only the two, outer and inner, light surfaces S$_{\rm oL}$\ and \SIL\ (the C$_{\rm P}$(v)-statement), but also `this surface' $\SN$ in between (see the C$_{\rm P}$(iv)-statement). The `physical velocity' of field lines, $\vF$ in equation (\rf{vp/vt}), approaches $\pm\infty$ for $\vp\to\infty$ towards \Sinf\ and for $\al\to 0$ towards S$_{\rm H}$. The outer and inner light surfaces are given by $\vF=\pm c$ (see Figure \rff{Flux-om}):
\begin{equation}
\OmFmOL=+c(\al/\vp)_{\rm oL}, \ \ \OmFmIL=-c(\al/\vp)_{\rm iL} ,
\lb{Sil/oL}
\end{equation}
respectively, and their positions are obtained by solving
\begin{equation}
\om_{\rm oL} =\omN - c(\al/\vp)_{\rm oL} , \ \ \om_{\rm iL} =\omN + c(\al/\vp)_{\rm iL},
\lb{om/iL}
\end{equation}
provided that $\Omega_{\rm F}=\omN$ is given, and hence
\begin{equation}
\om_{\rm iL}>\omN =\Omega_{\rm F} > \om_{\rm oL}
\lb{GapC}
\end{equation}
(see equations (\rf{xoiL}a,b) for the behavior of S$_{\rm oL}$\ and \SIL\ in the limit $h\to 0$). From equations (\rf{Sem}a,b), (\rf{Sil/oL}) and (\rf{om/iL}), the two non-conserved energy fluxes become
\begin{equation}
\vcSEM= \Omega_{{\rm F}\omega}\vcSJ=\left\{ \begin{array} {ll} \Iout(\Psi) (\mbox{\boldmath $B$}_{\rm p}/2\pi\vp)_{\rm oL}>0 &;\ \mbox{\rm S$_{\rm oL}$}, \\[1mm] \IinU(\Psi) (\mbox{\boldmath $B$}_{\rm p}/2\pi\vp)_{\rm iL}<0 &;\ \mbox{\rm \SIL}, \end{array} \right.
\lb{LSoi}
\end{equation}
and
\begin{equation}
\vcSsd= \om\vcSJ= \left\{ \begin{array} {ll} \Iout(\Psi) (\om\mbox{\boldmath $B$}_{\rm p}/2\pi \al c)_{\rm oL}>0 &;\ \mbox{\rm S$_{\rm oL}$}, \\[1mm] \Iin(\Psi) (\om\mbox{\boldmath $B$}_{\rm p}/ 2\pi \al c)_{\rm iL} >0 &;\ \mbox{\rm \SIL}, \end{array} \right.
\lb{LSoi2}
\end{equation}
where $\Iin=-\IinU>0$, while $\vcSEM+\vcSsd=\vcSE$ is naturally kept constant at S$_{\rm oL}$\ and \SIL. The light surfaces are in a sense a vestige, in the force-free limit, of the intermediate (Alfv\'enic) magnetosonic surfaces. We can derive the expressions of $\vre$ and $\jt$ similarly to the pulsar case \citep{oka74,oka78,ken83}.
We rewrite $\vre$ from equation (\rf{bh/rhoa}) as follows:
\begin{equation}
\vre=-\frac{1}{8\pi^2 c} \left[ \frac{\vp\vF}{\al} \mbox{\boldmath $\nabla$}\cdot \left(\frac{\al}{\vp^2} \mbox{\boldmath $\nabla$}\Psi\right)+\frac{\al}{\vp^2}\mbox{\boldmath $\nabla$}\Psi\cdot \mbox{\boldmath $\nabla$} \LPfrac{\vp\vF}{\al}\right].
\lb{vreLS}
\end{equation}
Eliminating the factor $\mbox{\boldmath $\nabla$}\cdot(\al\mbox{\boldmath $\nabla$}\Psi/\vp^2)$ between equations (\rf{vreLS}) and (\rf{jt-Farad}), we have
\begin{equation}
\vre- \frac{\vF}{c^2} \jt=\frac{1}{8\pi^2 \al c} \left[ \frac{\vF^2}{c^2} \mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\om-\frac{\al^2}{\vp^2} \mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\LPfrac{\vp\vF}{\al} \right].
\lb{vre-jt1}
\end{equation}
Then solving for $\vre$ and $\jt$ from the two equations (\rf{vre-jt1}) and (\rf{jta}) yields
\begin{eqnarray}
c\vre = \frac{\displaystyle 1}{\left(\displaystyle 1-\frac{\vF^2}{c^2}\right)} \left[ \frac{\vF}{\al^2 \vp c^2} \dr{I^2}{\Psi} \right. \hspace{3.5cm} \nonumber \\
\left. +\frac{1}{8\pi^2 \al} \left( \frac{\vF^2}{c^2} (\mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\om) -\frac{\al^2}{\vp^2} \mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\LPfrac{\vp\vF}{\al} \right) \right] \hspace{0.5cm} \lb{BHvre}
\end{eqnarray}
and
\begin{eqnarray}
\jt = \frac{\displaystyle 1}{\left(\displaystyle 1-\frac{\vF^2}{c^2}\right)} \left[ \frac{1}{\al^2 \vp c} \dr{I^2}{\Psi} \right. \hspace{3.5cm} \nonumber \\
\left. + \frac{\vF}{8\pi^2 \al c} \left( \frac{\vF^2}{c^2} (\mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\om) -\frac{\al^2}{\vp^2} \mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\LPfrac{\vp\vF}{\al} \right) \right] \hspace{0.4cm} \lb{BHjt}
\end{eqnarray}
(see Eqs.\ (29) and (30) in \cite{oka09}), which reduce to Eqs.\ (45) and (46) in \citet{oka74} for a pulsar force-free magnetosphere with $\al=1$ and $\om=0$. For both $\vre$ and $\jt$ not to diverge at S$_{\rm oL}$/\SIL\ with $\vF=\pm c$, the numerators should vanish, i.e.,
\begin{eqnarray}
\frac{\vF}{\al^2 \vp c^2} \dr{I^2}{\Psi} +\frac{1}{8\pi^2 \al} \left( \frac{\vF^2}{c^2} (\mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\om) -\frac{\al^2}{\vp^2} \mbox{\boldmath $\nabla$}\Psi\cdot\mbox{\boldmath $\nabla$}\LPfrac{\vp\vF}{\al} \right) \nonumber \\
= \frac{\vF}{\al^2 \vp c^2} \dr{I^2}{\Psi} + \frac{\al B_{\rm p}^2}{2} \left( \frac{\vp^2}{\al^2} \PlDr{\om}{\Psi} -\frac{\partial}{\partial\Psi}\LPfrac{\vp\vF}{\al} \right) =0 , \hspace{0.8cm} \lb{So/iLS}
\end{eqnarray}
which will automatically be satisfied when the eigenvalues $\Iout$, $\Iin$ and $\Omega_{\rm F}$ are determined by the criticality-boundary condition (see section \rfs{BC-SN}). This is because, in order to determine the eigenfunctions, breakdown of the force-free condition and termination of the force-free domains are necessary, while the `criticality condition' (\rf{So/iLS}) has nothing to do with the determination of the eigenfunctions. When $\Omega_{\rm F}(\Psi)$ is obtained, we know not only $\omN=\Omega_{\rm F}$, but also $\omoL$ and $\omiL$ in equations (\rf{om/iL}).
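The geometry just described is easy to reproduce numerically. The following minimal sketch (ours; the spin, the choice $\Omega_{\rm F}=\OmH/2$ and the function names are illustrative assumptions) evaluates $\al$, $\om$ and $\vp$ from equations (\rf{Kmetric}) in the equatorial plane, in geometrized units $G=c=M=1$, and locates the null surface from $\vF=0$ and the two light surfaces from $\vF=\pm c$:
\begin{verbatim}
# Locate S_iL, S_N and S_oL on the equator of a Kerr hole (G = c = M = 1).
import numpy as np
from scipy.optimize import brentq

a = 0.9                              # spin parameter a = J/Mc
r_H = 1.0 + np.sqrt(1.0 - a * a)     # horizon radius

def alpha_omega_varpi(r, th=np.pi / 2):
    rho2 = r * r + a * a * np.cos(th) ** 2
    Delta = r * r - 2.0 * r + a * a
    Sigma2 = (r * r + a * a) ** 2 - a * a * Delta * np.sin(th) ** 2
    return (np.sqrt(rho2 * Delta / Sigma2),      # alpha (lapse)
            2.0 * a * r / Sigma2,                # omega (FDAV)
            np.sqrt(Sigma2 / rho2) * np.sin(th)) # varpi

Omega_H = a / (2.0 * r_H)            # omega at the horizon
Omega_F = 0.5 * Omega_H              # an assumed field-line angular velocity

def v_F(r):                          # FL-RV seen by the ZAMOs, in units of c
    al, om, vp = alpha_omega_varpi(r)
    return (Omega_F - om) * vp / al

r_N  = brentq(v_F, r_H * (1 + 1e-9), 1e3)                     # S_N : v_F = 0
r_oL = brentq(lambda r: v_F(r) - 1.0, r_N, 1e3)               # S_oL: v_F = +c
r_iL = brentq(lambda r: v_F(r) + 1.0, r_H * (1 + 1e-9), r_N)  # S_iL: v_F = -c
print(f"r_iL = {r_iL:.3f} < r_N = {r_N:.3f} < r_oL = {r_oL:.3f}")
\end{verbatim}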
It will be obvious that `this surface' $\SN$, with $\vcSEM=\vF=\Omega_{{\rm F}\omega}=0$, is located between the two light surfaces, i.e.\ \SIL$<\SN<$S$_{\rm oL}$, indicating that the particle sources of the two oppositely directed magneto-{\em centrifugal} winds, passing outward and inward through S$_{\rm oL}$\ and \SIL, respectively, must coexist at the null surface $\SN$ (or within the Gap $\GN$). Contrary to the statement that ``{\em there is no reason to believe that its position is stationary}'' \citep{bla77}, its position must unequivocally be at `this surface' $\SN$ where $\mbox{\boldmath $E$}_{\rm p}=\Omega_{{\rm F}\omega}=0$ (see the C$_{\rm P}$(iii,iv)-statements; section \rfs{NullS}).\footnote{In the full magnetohydrodynamic theory of BH winds, not only the outflow but also the inflow must pass smoothly through three critical points: the slow, intermediate and fast magnetosonic surfaces \citep{web67,mic69,oka78,ken83,pun90,oka99,oka02,oka03}. In the force-free theory, the last two surfaces reduce to S$_{\rm oL}$, \SIL\ and \SoF, S$_{\rm iF}$, respectively, although the slow surface is usually neglected.}

\section{The iso-rotation law, and the freezing-in and force-free conditions}
\lbs{NullS}
\setcounter{equation}{0} \renewcommand{\theequation}{{7}.\arabic{equation}}

When the hole loses angular momentum and energy, `this surface' $\SN$ always exists (the C$_{\rm P}$(iii,iv)-statements), by the second law of thermodynamics. When $\PN{\mbox{\boldmath $B$}_{\rm p}}\neq 0$ and $\PN{\Omega_{\rm F}}\neq 0$ (see section \rfs{major}), from equations (\rf{ff-a}a,b), (\rf{bh/rhob}a,b), (\rf{Sem}a,b), (\rf{vcSE/EM/sd}), (\rf{vcjpO}), (\rf{vc-vjB}) and (\rf{jtc}a,b), we see that the following quantities necessarily vanish at $\SN$:\footnote{The surface of $\vre=0$ may not exactly accord with the null surface $\SN$ at $\Omega_{{\rm F}\omega}=\vcj=0$ in the force-free limit (see \cite{oka09,oka12a}).}
\begin{subequations}
\begin{eqnarray}
\PN{\Omega_{{\rm F}\omega}}=\PN{\vF}= \PN{\mbox{\boldmath $E$}_{\rm p}}=\PN{\vre} =\PN{\vcSEM} \quad\quad \lb{SNa} \\
=\PN{\vcj}=\PN{I} =\PN{\Bt} =\PN{\vcSJ} =\PN{\vcSsd}=\PN{\vcSE} \quad \lb{SNb} \\
=\PN{\mbox{\boldmath $v$}}=\PN{\vcj/\vre} =0, \quad\quad \lb{SNc}
\end{eqnarray}
\lb{EqSN}
\end{subequations}
where we denote the value of a function $X(\Omega_{{\rm F}\omega},\Psi)$ at $\SN$, i.e.\ at $\Omega_{{\rm F}\omega}=0$, by
\begin{equation}
\PN{X}=X(0,\Psi)
\lb{PNX}
\end{equation}
(see equations (\rf{EqSG}a,b) for the {\em widened} Constraints in the widened Gap ${\cal G}_{\rm N}$). It is the kick-off equation (\rf{vcEp-A}) that combines with the freezing-in condition in (\rf{fi-c}) to produce the coupling of frame dragging with unipolar induction, i.e., of $\om$ with $\Omega_{\rm F}$, which in turn gives rise to the `violation of Ferraro's law of iso-rotation' through the $\ell$-dependence of $\om$, thereby yielding `this surface' $\SN$ where $\PN{\Omega_{{\rm F}\omega}}=0$, and then the other Constraints in equation (\rf{SNa}).
It will then be argued that Constraint $\PN{\mbox{\boldmath $E$}_{\rm p}}=0$ reacts back on the force-free and freezing-in conditions in (\rf{ff-a}b,a) at `this surface' $\SN$, thereby breaking them down, to yield the Constraints in (\rf{SNb}) and (\rf{SNc}). We classify the Constraints at $\SN$ into three distinctive groups: the first group, appearing in equation (\rf{SNa}), originates from the violation of the iso-rotation law, i.e., $\PN{\Omega_{{\rm F}\omega}}=0$, and contains the quantities that reverse direction or change sign across `this surface' $\SN$. The second group, appearing in equation (\rf{SNb}), contains quantities that vanish but neither reverse direction nor change sign, and finally the third contains $\PN{\mbox{\boldmath $v$}}=0$, resulting from the breakdown of the freezing-in condition, and $\PN{\vcj/\vre}$, which, similarly to those in the first group, reverses and changes sign owing to the existence of $\vre$. The above Constraints uniquely specify the fundamental physical nature of the non-force-free $\SN$ (or the widened Gap $\GN$) between the outer and inner force-free domains, as follows: \benu \item The ZAMOs see that $\Omega_{{\rm F}\omega}$ and $\mbox{\boldmath $E$}_{\rm p}$ change their signs at $\SN$ along each field line. Then the behavior of the Poynting flux, $\vcSEM \ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$, necessitates the existence of a pair of batteries, with one EMF on each of the upper and lower sides of $\SN$, in such a way that each current flows along each circuit in each domain, $\calDout$ or $\calDin$, so as to satisfy the current-closure condition (see section \rfs{indMem}; Figures \rff{Flux-om}, \rff{DC-C}). Constructing the DC circuits, $\calCout$ and $\calCin$, in the outer and inner domains, respectively, the Faraday path integral of $\mbox{\boldmath $E$}_{\rm p}$ along the circuits will yield the EMFs, $\calEout$ and $\calEin$, driving currents along the circuits, to keep the force-free magnetosphere active (see sections \rfs{indMem}, \rfs{m-mdGap} and Figure \rff{DC-C}; \citet{oka15a}). \item The surface $\SN$ (or the gap $\GN$) with $\mbox{\boldmath $v$}\ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$ will behave like a watershed at a mountain pass for the outflows and inflows of `massless' particles pair-created by the voltage drop (see equations (\rf{EMF-ab}) and (\rf{Dl-V})); both flows are due to the magneto-centrifugal forces at work toward the opposite directions, outward and inward, respectively. `This surface' $\SN$ will thus be equipped with a kind of virtual `magnetic rotators,' oppositely spinning and possessing a pair of EMFs with the in-between particle-current sources. As the outer pulsar-type magneto-centrifugal wind flows in $\calDout$ with $v_{\rm F}> 0$, the inner anti-pulsar-type wind will exist in $\calDin$ with $v_{\rm F}< 0$, supplied by the in-between particle sources (see Figure \rff{DC-C}). \item Constraints $\PN{\vcj}=\PN{\mbox{\boldmath $v$}}=0$ mean that neither current lines nor streamlines are allowed to cross $\SN$.
The relation $\mbox{\boldmath $v$}=\vcj/\vre$ in equations (\rf{vc-vjB}) no longer holds at $\SN$. Constraint $\PN{\vcj}=0$ shows that there will be no EMF capable of driving a current across $\SN$ by a `single' unipolar induction battery at any (possible) position (cf.\ \citet{tho86}). Each electric circuit must close in its respective force-free domain, $\calDout$ (or $\calDin$), with each EMF in the inductive membrane at $\SN$ and with the eigenvalue $I(\Psi)$, i.e., $\Iout$ (or $\Iin$), determined as the eigenvalue in the resistive membrane S$_{{\rm ff}\infty}$\ (or $\SffH$) (sections \rfs{Sffinf} and \rfs{SffH}). \item Constraint $\PN{\vcj}=0$ ensures that no angular momentum is conveyed by the force-free magnetic field from $\calDout$ to $\calDin$. Then $\PN{I}=\PN{\Bt}=\PN{\vcSJ}=\PN{\vcSE}=0$, resulting from $\PN{\vcj}=0$, means that there will be no inertial loading upon $\mbox{\boldmath $B$}_{\rm p}$ in the inductive membrane with no resistance (see Figure \rff{GapI}). It is worth recalling that the toroidal field $\Bt$ is a swept-back component of the poloidal field $\mbox{\boldmath $B$}_{\rm p}$ due to inertial loadings in the resistive membranes (see Figure \rff{GapI}). At $\SN$ (or $\GN$), $\PN{I}=0$ (or $\PG{I}=0$) means that there must be a jump of $I(\Psi)$ from $\Iin$ to $\Iout$, just as at the NS surface (see equations (\rf{I/Pul}) and (\rf{OL-I}) and Figure \rff{GapI}). \item Constraint $\PN{\mbox{\boldmath $v$}}=0$ means that the particles at $\SN$ (or $\GN$) undergo no macroscopic motion; conversely, particles with $\mbox{\boldmath $v$}=0$ are `\zam' particles (\zamp s) circulating with $\omN=\Omega_{\rm F}$. Moreover, Constraints $\PN{I}=\PN{\vcSJ}=\PN{\vcSE}=0$ mean that no angular momentum is transported along the field lines of $\mbox{\boldmath $B$}_{\rm p}$ with $\PN{\Bt}=0$, nor does energy pass across $\SN$. Then, despite $\PN{\vcSJ}=\PN{\vcSEM}= \PN{\vcSsd}=\PN{\vcSE}=0$, it is due to the existence of \zamp s at $\SN$ (or the widened Gap ${\cal G}_{\rm N}$) that it looks as if energy and angular momentum flowed continuously beyond $\SN$ (or ${\cal G}_{\rm N}$) from the inner to the outer domains (see section \rfs{m-mdGap}; Figures \rff{Flux-om} and \rff{GapI}).
\enen \section{An extended Membrane Paradigm} \lbs{eMP} The two functions of $\Psi$, i.e., $I(\Psi)$ and $\Omega_{\rm F}(\Psi)$, {\em conserved} in the `force-free domains,' are not freely specifiable parameters, but must be determined as the eigenfunctions of $\Psi$: it is the `criticality condition' at the {\em resistive} membranes S$_{{\rm ff}\infty}$\ and $\SffH$, terminating the two force-free domains, that determines $\Iout$ and $\Iin$ in the outer and inner force-free domains (see equations (\rf{ouIa}) and (\rf{Iinab}a,b)), respectively, and it is then the `boundary condition' in the {\em inductive} membrane $\SN$ in between that determines the final eigenvalue $\Omega_{\rm F}$, so as to ensure continuity of the angular momentum flux across `this surface' $\SN$ (see equation (\rf{DN/SN})). At the same time, $\SN$ (or $\GN$) thus determined must be the optimum place to build the power station, consisting of a pair of unipolar induction batteries with EMFs for the DC circuits in both domains, as well as to produce pair particles copiously enough for the Gap to become strongly magnetized by anchoring the threading magnetic field lines. This procedure would be impossible to conduct unless the freezing-in and force-free conditions were violated not negligibly but completely (cf.\ \citet{bla77, mac82}; the \CS(v,vi)-statement). \subsection{The resistive membranes} \lbs{RM} \setcounter{equation}{0} \renewcommand{\theequation}{{8}.\arabic{equation}} \subsubsection{The force-free infinity surface S$_{{\rm ff}\infty}$} \lbs{Sffinf} It is astrophysical loads such as MHD particle acceleration that produce the toroidal component $\Bt=-2\Iout/\vp\al c$ as a swept-back component of the poloidal field $\mbox{\boldmath $B$}_{\rm p}$. In MHD wind theory, the outgoing plasma flow is generally required to pass smoothly through the outer fast-magnetosonic surface \SoF, and the `criticality condition' for the present case yields
\begin{equation}
\Iout(\Psi)= (1/2)\Omega_{\rm F}(B_{\rm p}\vp^2)_{{\rm ff}\infty} \quad \mbox{\rm at \SoF\ in}\ S$_{{\rm ff}\infty}$, \lb{ouIa}
\eneq
which naturally accords with the force-free limit of that in MHD pulsar wind theory (see Eq.\ (10.1) in \cite{oka78}). The criticality condition in wind theory also accords with `Ohm's law' in circuit theory for the surface currents flowing in S$_{{\rm ff}\infty}$\ and with the `radiation condition' in flux theory. The outgoing magneto-centrifugal wind is accompanied by the Poynting flux and the angular momentum flux in $\calDout$ with $\Omega_{{\rm F}\omega}>0$, which are given in terms of $\Iout$ from equation (\rf{Sem/a});
\begin{equation}
\mbox{\boldmath $S$}_{\rm EM,(out)} =\Omega_{{\rm F}\omega} \vcSJout, \ \ \vcSJout =\Iout/(2\pi\al c)\, \mbox{\boldmath $B$}_{\rm p} \lb{SJout}
\eneq
(see equation (\rf{vcSEM/SJ}) for a pulsar wind, and equation (\rf{SJinUb}) for $\mbox{\boldmath $S$}_{\rm EM,(in)}$).
The outer force-free domain $\calDout$ is then terminated by a membrane S$_{{\rm ff}\infty}$, which may be regarded in wind theory as containing a sufficiently thin layer onto which the MHD acceleration layer from the outer fast-magnetosonic surface \SoF\ to \Sinf\ is compressed, and on which the membrane currents, transformed from the volume currents in $\calDout$, flow across the poloidal field lines threading there and dissipate. The membrane current flowing from $\Psit$ to $\Psio$ is
\begin{equation}
{\cal I}_{\rm ff{\infty}}=(\Ito/2\pi \vp)_{\rm ff{\infty}}, \lb{calIffinf}
\eneq
where $\Ito=\Iout(\Psio)=\Iout(\Psit)$ and $\Psi_0<\Psio<\Psi_{\rm c}<\Psit<\Psib$ (see Figure \rff{DC-C}). The resistive membrane S$_{{\rm ff}\infty}$\ may also be interpreted as possessing the same surface resistivity ${\cal R}=4\pi/c=377$~Ohm as on the other membrane $\SffH$ above S$_{\rm H}$, and Ohm's law holds on S$_{{\rm ff}\infty}$, i.e., ${\cal R}{\cal I}_{\rm ff{\infty}}=(E_{\rm p})_{\rm ff{\infty}}$. This Ohmic dissipation (in circuit theory) implies that the MHD acceleration (in wind theory) takes place. From Eq.\ (4.14) in \cite{mac82}, the rate per unit time $\tau$ at which the electromagnetic field transfers redshifted energy to particles is
\begin{eqnarray}
-\frac{1}{\al} \mbox{\boldmath $\nabla$}\cdot\al\vcSE=\al\vcj\cdot\vcE+(\om/c)(\vcj\times\mbox{\boldmath $B$})\cdot \vcm \hspace{0.5cm} \nonumber \\ =(\Omega_{\rm F}/c)(\vcj\times\mbox{\boldmath $B$})\cdot\vcm =\frac{\Omega_{\rm F}\vp}{c}\jvl B_{\rm p} \nonumber \\ =- \frac{\Omega_{\rm F}\vp}{c}\frac{B_{\rm p}}{2\pi\vp\al}\LPPlDr{I}{\ell}>0, \lb{DivvcSE}
\end{eqnarray}
where equations (\rf{jpl/vl}) and (\rf{JcrossB}) are used. It thus turns out that, when the current function $I(\ell,\Psi)$ is {\em continuously} decreasing with $\ell$ from near \SoF\ toward \Sinf, the MHD acceleration occurs (see Figure \rff{GapI}); the force-free magnetosphere, however, formally regards the ``force-free" domain with $\jvl=0$ as extending so as to compress the acceleration layer with $\jvl>0$ into the force-free infinity surface S$_{{\rm ff}\infty}$\ with $|\jpl|\ll \jvl\approx 0$. By doing so, the circuit $\calCout$ closes, so as not to violate the current-closure condition. \subsubsection{The force-free horizon surface $\SffH$} \lbs{SffH} The inner domain $\calDin$ with $\Omega_{{\rm F}\omega}<0$ is terminated by the other membrane $\SffH$ with the surface resistivity ${\cal R}$; in other words, by a sufficiently thin resistive layer in which the Ohmic dissipation of the surface current flowing across the poloidal field lines occurs, thereby generating Joule heat that is readily absorbed as entropy by the hole.
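As a quick unit check (ours), the surface resistivity ${\cal R}=4\pi/c$ in Gaussian units is just the impedance of free space when expressed in SI units:
\begin{verbatim}
# Sanity check: 4*pi/c in Gaussian units corresponds to the impedance
# of free space mu_0*c in SI, i.e. the quoted "377 Ohm".
from scipy.constants import mu_0, c
print(mu_0*c)   # 376.7303... Ohm
\end{verbatim}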
The criticality condition in wind theory requires the ingoing magneto-centrifugal wind to pass smoothly through \SIF, to yield the eigenfunction $\IinU$,
\begin{subequations} \begin{eqnarray} \IinU(\Psi) =-\Iin(\Psi) \lb{IinUU} \quad\quad\quad \quad\quad\quad \quad\quad \\ = - (1/2)(\OmH-\Omega_{\rm F})(B_{\rm p}\vp^2)_{{\rm ffH}}, \lb{oub} \end{eqnarray} \lb{Iinab} \end{subequations}
at S$_{\rm iF}$\ as the outermost surface of $\SffH$, where $\IinU$ is taken negative because the inflow is driven by the inward-directed magneto-centrifugal force ($\Omega_{{\rm F}\omega}<0$) of the virtual magnetic spin axis rotating with $-(\OmH-\Omega_{\rm F})$ at $\SN$, as opposed to the outflow with $\Omega_{{\rm F}\omega}>0$ driven by the virtual magnetic spin axis rotating with $\Omega_{\rm F}$ (see Figures \rff{GapI}, \rff{Flux-omPseudoF-S} and \rff{F-WS}). Then we have the {\em ingoing} fluxes of {\em negative} angular momentum, electromagnetic Poynting energy and frame-dragging spin-down energy in terms of $\IinU=-\Iin$ and $\mbox{\boldmath $S$}_{\rm J}^{\rm (in)}=-\vcSJin$, i.e.,
\begin{subequations} \begin{eqnarray} \vcSJin = \frac{\Iin}{2\pi\al c}\mbox{\boldmath $B$}_{\rm p} =-\frac{\IinU}{2\pi\al c}\mbox{\boldmath $B$}_{\rm p} = -\mbox{\boldmath $S$}_{\rm J}^{\rm (in)} >0, \quad \lb{SJinUa} \\[1mm] \vcSEMin =\Omega_{{\rm F}\omega} \vcSJin =(\om-\Omega_{\rm F}) \mbox{\boldmath $S$}_{\rm J}^{\rm (in)} <0, \quad \lb{SJinUb} \\[1mm] \vcS_{\rm SD,(in)} =\om \vcSJin =- \om \mbox{\boldmath $S$}_{\rm J}^{\rm (in)}= -\vcS_{\rm SD}^{(\rm in)}>0. \quad \lb{SJinUsdb} \end{eqnarray} \lb{SJinUa,b} \end{subequations}
Similarly to the membrane current on S$_{{\rm ff}\infty}$\ in equation (\rf{calIffinf}), the membrane current on $\SffH$ from $\Psio$ to $\Psit$ is
\begin{equation}
{\cal I}_{\rm ffH}=(\Iot/ 2\pi \vp)_{\rm ffH}. \lb{calIffH}
\eneq
Then Ohm's law holds, i.e., ${\cal R}{\cal I}_{\rm ffH}=(E_{\rm p})_{\rm ffH}$, in the stretched horizon \citep{mac82,oka12a,oka15a}. The `stretched horizon' with a depth of $\al$ covering the true horizon S$_{\rm H}$\ \citep{tho86} is almost identical to the `force-free horizon surface' $\SffH$ used here to cover the thin layer, from S$_{\rm H}$\ to the inner fast-magnetosonic surface S$_{\rm iF}$, in which the Ohmic dissipation takes place in circuit theory. Here we paraphrase the thermodynamic aspect of the stretched horizon $\SffH$ (see section \rfs{BHTsub}).
When the surface current $\al\vccalI_{\rm ffH}$ ($\equiv \vec{{\cal J}}_{H}$ in equation (\rf{TPM-TD}a,b)) flows in the stretched horizon $\SffH$, crossing the field lines threading there, a surface torque acts through $\SffH$ on the hole, thereby extracting the angular momentum of the hole, i.e.\ $dJ/dt<0$. Then, by equations (\rf{dJ/dt}), (\rf{Iinab}) and (\rf{SJinUa}), one has
\begin{eqnarray}
\dr{J}{t} = -\int_{{\cal S}_{\rm ffH}} (\al \vccalI_{\rm ffH}/c \times \mbox{\boldmath $B$}_{\rm p})\cdot d\mbox{\boldmath $A$} \quad \quad \quad \quad \nonumber \\ = - \int_{{\cal S}_{\rm ffH}} \al \vcSJin \cdot d\mbox{\boldmath $A$}= - \frac{1}{c} \int ^{\bar{\Psi}}_{\Psi_0} \Iin (\Psi) d\Psi \nonumber \\ = \int_{{\cal S}_{\rm ffH}} \al \mbox{\boldmath $S$}_{\rm J}^{\rm (in)} \cdot d\mbox{\boldmath $A$}= \frac{1}{ c} \int ^{\bar{\Psi}}_{\Psi_0} \IinU (\Psi) d\Psi <0, \lb{torque/SffH}
\end{eqnarray}
where $\mbox{\boldmath $B$}_{\rm p}\cdot d\mbox{\boldmath $A$}=d\Psi$. The first line of equation (\rf{torque/SffH}) shows that the outflow of positive angular momentum takes place through the positive surface torque on $\SffH$, which is equivalent to the inflow of {\em negative} angular momentum carried by the {\em in}going magneto-centrifugal wind from the ZAM-Gap $\GN$, thereby braking the hole through $\SffH$, as seen in the second line. Associated with the outgoing flux $\vcSJin$ is the spin-down energy flux $\vcSsd$ due to the FD effect in equation (\rf{Sem/b}), and the surface integral of $\vcSsd$ over $\SffH$ yields $-\OmH(dJ/dt)$, i.e.,
\begin{equation}
\int_{{\cal S}_{\rm ffH}} \al \vcSsd \cdot d\mbox{\boldmath $A$} = \OmH \int_{{\cal S}_{\rm ffH}} \al \vcSJin \cdot d\mbox{\boldmath $A$} =-\OmH\dr{J}{t}, \lb{Intg/vcSsd}
\eneq
because $\om$ approaches $\OmH=$ constant, independent of $\Psi$, by the zeroth law (see section \rfs{BHTsub}). It seems that it is only when {\em negative} angular momentum is poured in from the ZAM-Gap $\GN$ that the outflow of spin-down energy from the hole takes place, while {\em positive} angular momentum is extracted outwardly from the \zam-Gap so as to keep the Gap in the \zam-state (see section \rfs{m-mdGap}). The process of entropy generation in the stretched horizon $\SffH$ is formally described by the `Ohmic dissipation' of the surface currents due to the surface resistivity ${\cal R}=377~{\rm Ohm}$.
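A short numerical sketch (with an assumed toy profile $\Iin(\Psi)$, ours; the scales are arbitrary) shows how the flux integral in equation (\rf{torque/SffH}) and the spin-down power in (\rf{Intg/vcSsd}) are evaluated in practice:
\begin{verbatim}
# Numerical sketch with an assumed toy profile (ours, not the paper's):
# dJ/dt = -(1/c) * int I_in(Psi) dPsi,  and  P_sd = -Omega_H * dJ/dt.
import numpy as np

c = 1.0                                   # geometrized units
Omega_H, Psi_bar = 0.3, 1.0               # illustrative values
Psi = np.linspace(0.0, Psi_bar, 1001)
I_in = 0.25*Omega_H*np.sin(np.pi*Psi/Psi_bar)**2   # assumed I_in > 0

# Simple trapezoidal quadrature (kept explicit for portability):
trapz = lambda f, x: float(np.sum(0.5*(f[:-1] + f[1:])*np.diff(x)))
dJdt = -trapz(I_in, Psi)/c                # < 0: the hole is braked
P_sd = -Omega_H*dJdt                      # > 0: extracted spin-down power
print(f"dJ/dt = {dJdt:.4e},  -Omega_H dJ/dt = {P_sd:.4e}")
\end{verbatim}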
Besides exerting the surface torque that extracts angular momentum, the surface current on the resistive membrane gives rise to the Ohmic dissipation, which converts the Joule heat of the surface current on $\SffH$ into the digestible form of irreducible mass, i.e.\ entropy; from equations (\rf{dSHH/dt}) and (\rf{ourI}) or (\rf{oub}),
\begin{equation}
\Th\dr{S}{t} = \int_{{\cal S}_{\rm ffH}} {\cal R} (\al {\cal I}_{\rm ffH})^2 dA= -(\OmH-\OmFb) \dr{J}{t} , \lb{TdS/dt}
\eneq
which is naturally equivalent to the Poynting flux flowing into $\SffH$ in expression (\rf{ThdS/dt}) for the entropy production in the stretched horizon. This is an inevitable consequence of the second law in the causal extraction of the rotational energy; it cannot be due to the internal resistance of a battery in the horizon (cf.\ the \CS(iv)-statement).
\begin{figure*} \begin{center} ~~~~~~~~~~~~~~~\includegraphics[width=12cm, height = 7cm, angle=-0]{FIG2_NEW_V7.eps} \end{center} \caption{ A schematic picture illustrating a pair of circuits $\calCout$ and $\calCin$ closed in the force-free domains $\calDout$ and $\calDin$, which are separated by the non-force-free Gap ${\cal G}_{\rm N}$ in $|\Omega_{{\rm F}\omega}|\lo\Dlom$, where $\PG{\Omega_{{\rm F}\omega}}=\PG{\mbox{\boldmath $E$}_{\rm p}}=\PG{\vcSEM}=0$ and $\PG{\mbox{\boldmath $v$}}=\PG{\vcj}=\PG{\vp\Bt}=\PG{I}=\PG{\vcSJ}=0$ (see equations (\rf{EqSG}a,b)). There will be dual unipolar inductors with EMFs $\calEout$ and $\calEin$ at work, with the magnetic spin axes of the virtual magnetic rotators oppositely directed. The angular velocities of the {\em axes} are $\Omega_{\rm F}$ and $-(\OmH-\Omega_{\rm F})$, respectively, and their difference is $\OmH$ (see equation (\rf{iden-b}); Figures \rff{GapI}, \rff{Flux-omPseudoF-S} and \rff{F-WS}). Note that $\mbox{\boldmath $v$}_{\rm p}=\mbox{\boldmath $j$}_{\rm p}/\vre>0$ in $\calDout$ and $ <0$ in $\calDin$. There will be a huge voltage drop $\Dl V\propto \OmH$ (see equation (\rf{Dl-V})), leading to strong particle-production processes at work, producing plasma particles ample enough for the development of a thick Gap with the half-width $\Dlom$. The particles with $\PG{\mbox{\boldmath $v$}}=0$, circulating around the hole's axis with $\omN$, are \zamp s, dense enough to pin down the magnetic field lines, to fix $\Omega_{\rm F}=\omN$ and to magnetize the Gap, thereby enabling the dual batteries to drive currents in each circuit (see Figure \rff{F-WS} and figure 4 in \citet{oka15a}). } \lbf{DC-C} \end{figure*}
\subsection{The inductive membrane $\SN$ } \lbs{indMem} The force-free magnetosphere possesses two {\em resistive} membranes at both ends, neither of which is an emitter of a Poynting energy flux.
The first and second laws ensure the existence of `this surface' $\SN$ at $\Omega_{{\rm F}\omega}=0$, which inexorably divides the force-free magnetosphere into the two domains with $\Omega_{{\rm F}\omega}>0$ and $<0$, to accommodate the {\em inductive} membrane in between and to install a pair of batteries, one for the outgoing Poynting flux toward the astrophysical loads in S$_{{\rm ff}\infty}$\ and the other for the ingoing Poynting flux toward the Ohmic dissipation in $\SffH$. The first law in equation (\rf{Share}) indeed shows that the energy extracted through the spin-down energy flux will be shared at the inductive membrane $\SN$ between the Poynting fluxes toward the two resistive membranes $\SffH$ and S$_{{\rm ff}\infty}$;
\begin{equation}
\int_{{\cal S}_{\rm ffH}} \al \vcS_{\rm SD,(in)} \cdot d\mbox{\boldmath $A$} = - \int_{{\cal S}_{\rm ffH}} \alpha \vcSEM^{\rm (in)} \cdot d\mbox{\boldmath $A$} +\int_{{\cal S}_{\rm ff}\infty} \alpha \mbox{\boldmath $S$}_{\rm EM,(out)} \cdot d\mbox{\boldmath $A$} , \lb{SDenergy}
\eneq
i.e., ``{\em the power dissipated in the horizon and that dissipated in particle acceleration in the far field}" (see the \CS (iii)-statement). The left-hand side of equation (\rf{SDenergy}) represents the `maximum extractable energy'; the first term on the right-hand side is the ingoing Poynting flux inevitably dissipated by Joule heating in the resistive membrane $\SffH$, leading to the hole's entropy increase; and the second term, the `actual extracted energy', expresses the outgoing flux dissipated in the astrophysical loads in the outer resistive membrane S$_{{\rm ff}\infty}$\ (see section \rfs{second/restr}). The source cannot be the battery in the horizon, because the left-hand side originates from $\OmH |dJ|$ in equation (\rf{Share}) and has nothing to do with the unipolar induction. Thus the overall expression $\vcSE=\Omega_{\rm F}\vcSJ$ in (\rf{E-AmFlux}) does not express any Poynting flux corresponding directly to an actual battery (cf.\ the \CS (iii,iv)-statements). The Poynting fluxes must instead be related to a pair of batteries in the inductive membrane $\SN$ (see equations (\rf{EMF-ab})). We think of the two circuits $\calCout$ and $\calCin$ closed in $\calDout$ and $\calDin$, respectively \citep{oka15a}. These circuits are disconnected at the null surface $\SN$ by Constraints $\PN{\vcj}=\PN{I}=\PN{\mbox{\boldmath $v$}}=0$ in (\rf{SNb},c) (see Figure \rff{DC-C}). For electric currents to flow in the closed circuits $\calCout$ and $\calCin$, there must naturally be an EMF, due to a unipolar induction battery, for each circuit, resulting from the gravito-electric potential gradient $\Omega_{{\rm F}\omega}=\Omega_{\rm F}-\om$.
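Before constructing the circuits explicitly, the energy share in equation (\rf{SDenergy}) can be checked per field line by the identity $\OmH=\Omega_{\rm F}+(\OmH-\Omega_{\rm F})$, as the following bookkeeping sketch (illustrative numbers, ours) makes explicit:
\begin{verbatim}
# Per-field-line bookkeeping check (illustrative values, our choice):
# the spin-down input Omega_H*I/(2 pi c) splits into an outgoing part
# Omega_F*I/(2 pi c) and an ingoing part (Omega_H - Omega_F)*I/(2 pi c).
Omega_H, zeta = 0.3, 1.0
Omega_F = Omega_H/(1.0 + zeta)     # eigenvalue, equation (EigenOmFI)
I_unit = 1.0                       # I(Psi)/(2 pi c), arbitrary scale

P_sd  = Omega_H*I_unit             # spin-down flux through S_ffH
P_out = Omega_F*I_unit             # Poynting flux toward S_ff_infinity
P_in  = (Omega_H - Omega_F)*I_unit # Poynting flux thermalized in S_ffH
assert abs(P_sd - (P_out + P_in)) < 1e-15
print(P_out/P_sd)                  # 0.5 for zeta = 1: half extracted
\end{verbatim}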
Let us then pick, in each force-free domain, such two current-field-streamlines $\Psio$ and $\Psit$ as the two roots of the algebraic equation $I(\Psi)=\Iot$, i.e., $I(\Psi_1)=I(\Psi_2)\equiv \Iot$ in the range $0<\Psio<\Psi_{\rm c}<\Psit<\bar{\Psi}$ (see equation (\rf{c-c-c}); Figs.\ 2, 3 in \cite{oka15a}), where $(dI/d\Psi)_{\rm c}=0$ and $\mbox{\boldmath $j$}_{\rm p}\lleg 0$ for $\Psi\lleg\Psi_{\rm c}$. The Faraday path integrals of $\mbox{\boldmath $E$}_{\rm p}$ from equation (\rf{bhEp}) along the two circuits $\calC_{\rm out}$ and $\calC_{\rm in}$ yield
\begin{subequations} \begin{eqnarray} \calEout= \oint_{{\cal C}_{\rm out}} \al\mbox{\boldmath $E$}_{\rm p}\cdot d\vcell =-\frac{1}{2\pi c}\int_{\Psi_1}^{\Psi_2} \Omega_{\rm F}(\Psi)d \Psi, \ \ \lb{EMF-out} \hspace{0.2cm} \\ \calEin=\oint_{{\cal C}_{\rm in}} \al\mbox{\boldmath $E$}_{\rm p}\cdot d\vcell =+\frac{1}{2\pi c}\int_{\Psi_1}^{\Psi_2} (\OmH-\Omega_{\rm F})d \Psi \ \ \ \lb{EMF-in} \hspace{0.2cm} \end{eqnarray} \lb{EMF-ab} \end{subequations}
(see \citet{oka15a}; cf.\ \citet{tho86}). There is no contribution to the EMFs from the integrals along $\Psio$ and $\Psit$ nor on the null surface, because $\mbox{\boldmath $E$}_{\rm p}\cdot d\vcell=\PN{\mbox{\boldmath $E$}_{\rm p}}=0$ there. The difference between the two EMFs across $\SN$ is
\begin{equation}
\DN{\calE} =\calEout- \calEin=-\OmH\Dl\Psi/2\pi c = -\Dl V \lb{Dl-V}
\eneq
(cf.\ \citet{tho86}), where $\Dl\Psi=\Psit -\Psio$ and the difference of a quantity $X$ across the (infinitely thin) interface $\SN$ is denoted by
\begin{equation}
\DN{X} =(X)_{\rm N}^{{(\rm out)}} - (X)_{\rm N}^{{(\rm in)}} . \lb{DNX}
\eneq
Expression (\rf{Dl-V}) is derived simply by integrating the identity (\rf{iden-b}) from $\Psio$ to $\Psit$, just as a potential difference between two equipotential lines. Note that the two EMFs, $\calEout$ and $\calEin$, are partitioned by the infinitely thin surface $\SN$ in the force-free limit. These EMFs drive the {\em volume} current $\vcj$ shown in equation (\rf{vc-vjB}) in the force-free domain $\calDout$ (or $\calDin$) and the {\em surface} current ${\cal I}_{{\rm ff}\infty}$ (or ${\cal I}_{\rm ffH}$) in the resistive membrane S$_{{\rm ff}\infty}$\ (or $\SffH$) in equations (\rf{calIffinf}) and (\rf{calIffH}), so that the current flows along each closed circuit $\calCout$ (or $\calCin$). They are also responsible for launching the Poynting energy fluxes in both the outward and inward directions, i.e., $\vcSEM\ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$. Incidentally, $\calEout$ is seemingly the same as ${\cal E}_{\rm NS}$ (see equation (\rf{nsEMF})), i.e., the outer force-free domain $\calDout$ behaves like a pulsar force-free magnetosphere (see section \rfs{FFPM}), whereas the inner domain $\calDin$ behaves like an anti-pulsar-type magnetosphere \citep{oka92}.
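The path integrals (\rf{EMF-ab}a,b) and the voltage-drop relation (\rf{Dl-V}) can be verified by a simple quadrature (a model $\Omega_{\rm F}(\Psi)$ of our choice; any profile gives the same difference):
\begin{verbatim}
# Quadrature sketch (model Omega_F(Psi), ours) of equations (EMF-out),
# (EMF-in), checking (Dl-V): EMF_out - EMF_in = -Omega_H*DPsi/(2 pi c).
import numpy as np

c, Omega_H = 1.0, 0.3
Psi1, Psi2 = 0.2, 0.8
Psi = np.linspace(Psi1, Psi2, 2001)
Omega_F = np.full_like(Psi, 0.5*Omega_H)     # eigen-solution, zeta ~ 1

trapz = lambda f: float(np.sum(0.5*(f[:-1] + f[1:])*np.diff(Psi)))
emf_out = -trapz(Omega_F)/(2.0*np.pi*c)
emf_in  = +trapz(Omega_H - Omega_F)/(2.0*np.pi*c)
dV      = Omega_H*(Psi2 - Psi1)/(2.0*np.pi*c)
print(np.isclose(emf_out - emf_in, -dV))     # True: equation (Dl-V)
\end{verbatim}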
Moreover, these two EMFs appear to be associated with the two {\em virtual} magnetic rotators with the FL-AVs $\Omega_{\rm F}$ and $-(\OmH-\Omega_{\rm F})$, back to back and oppositely directed, at the interface $\SN$ with $\PN{\vcj}=\PN{\mbox{\boldmath $v$}}=\PN{\Omega_{{\rm F}\omega}}=0$ (see Figure \rff{DC-C}; \cite{oka15a}). Expression (\rf{SDenergy}) may remind us of the mechanical Penrose process, with the static-limit surface replaced by the null surface $\SN$ and the ergoregion by the inner domain $\calDin$. When the pair-production discharge due to the voltage drop $\Dl V$ is at work, the twin-pulsar model may then be regarded as its gravito-thermo-electrodynamic version (see Figures \rff{Flux-om} and \rff{Flux-omPseudoF-S}). Thus the voltage drop across the infinitely thin interface $\SN$, $\Dl V=\DN{\calE}$, suggests that the null surface $\SN$ will be a kind of rotational-tangential discontinuity due to the two {\em virtual} magnetic rotators, although $\Omega_{{\rm F}\omega}$ and $\mbox{\boldmath $E$}_{\rm p}$ seem to change sign smoothly through zero (see Figure \rff{Flux-omPseudoF-S}; \cite{lan84}). It is worth emphasizing that the null surface with Constraints (\rf{EqSN}) is genetically endowed with the discontinuity $\DN{\calE}=\Dl V$, which widens the surface $\SN$ into a gap ${\cal G}_{\rm N}$, thereby constructing the magnetized `\zam' Gap (see section \rfs{m-mdGap}).
\begin{figure*} \begin{center} \includegraphics[width=12cm, height = 7cm, angle=-0]{FIG3_NEW_V7.eps} \end{center} \caption{ A plausible behavior of the angular momentum flux along each field line, $I(\Omega_{{\rm F}\omega},\Psi)$ (see equation (\rf{OL-I})). The abscissa is the `coordinatized' $\Omega_{{\rm F}\omega}$ along a field line $\Psi=$constant. Violation of the iso-rotation law due to the FD effect, i.e., $\PN{\Omega_{{\rm F}\omega}}=0$, leads to the subsequent breakdown of the force-free and freezing-in conditions at the null surface $\SN$. The voltage drop $\Dl V$ between the two EMFs will induce steady particle production, thereby developing a Gap with $\PG{I}=0$ in a finite zone $|\Omega_{{\rm F}\omega}|\lo \Dlom$ between the two force-free domains with $I=\Iout(\Psi)$ and $I=\IinU(\Psi)$, respectively (see sections \rfs{GapStruc} and \rfs{BC-SN}). When $\PG{\mbox{\boldmath $B$}_{\rm p}}\neq 0$, Constraints $\PG{\mbox{\boldmath $v$}}=\PG{\vcj}=0$ mean that current- and stream-lines no longer thread ${\cal G}_{\rm N}$ (see Figures \rff{Flux-om}, \rff{DC-C}, \rff{Flux-omPseudoF-S}). There may be a kind of boundary layer in the vicinity of $\Omega_{{\rm F}\omega}\simeq\pm\Dlom$, where the non-force-free \zamp s pair-created with $\vre\simeq 0$ change into force-free, charge-separated plasma with $\vre\approx\mp e n^{\pm}$, and $I$ increases (or decreases) to $\Iout$ (or $\IinU$) rather steeply.
The non-force-free, matter-dominated Gap, filled with \zamp s, will ensure the pinning-down of the poloidal field lines $\mbox{\boldmath $B$}_{\rm p}$ with $\Omega_{\rm F}=\omN$, and the pinning-down conversely ensures the magnetization of the \zamp s with $\PG{I}=0$ in the Gap within $|\Omega_{{\rm F}\omega}|\lo \Dlom$. When the rate of positive angular momentum conveyed outward by the outgoing magneto-centrifugal wind is equal to that of negative angular momentum conveyed inward by the ingoing magneto-centrifugal wind, the `zero-angular-momentum state' of the Gap is maintained, i.e., $\DG{I}=\Iout-\Iin=\Iout+\IinU=0$, and the `boundary condition' $\DG{I}=0$ yields the eigenfunction $\Omega_{\rm F}(\Psi)$ (see equation (\rf{DN/SN})). The two {\em virtual} spinning magnetic axes are denoted by the two arrows with $\Omega_{\rm F}$ and $-(\OmH-\Omega_{\rm F})$ on the Gap surfaces with $\DG{\overline{\OmFm}}=\OmH$ (see sections \rfs{BCagain} and \rfs{R-T-D}). The \zamp s circulating with $\omN=\Omega_{\rm F}$ will be firmly embedded in the poloidal field $\mbox{\boldmath $B$}_{\rm p}$, and hence the Gap and the magnetosphere as a whole will be `frame-dragged' by the hole's rotation with angular velocity $\omN=\Omega_{\rm F}$. It is conjectured in the twin-pulsar model (see section \rfs{TW-P-M}) that the outer half of the Gap in $0\lo\Omega_{{\rm F}\omega}\lo\Dlom$ plays the role of a `normal' magnetized NS spinning with $\Omega_{\rm F}$, while the inner half in $0\ggo\Omega_{{\rm F}\omega}\ggo -\Dlom$ behaves like an `abnormal' magnetized NS spinning reversely with $-(\OmH-\Omega_{\rm F})$. The rotational-tangential discontinuity in between will promote the widening of the null surface $\SN$ into the Gap ${\cal G}_{\rm N}$. Just as a watershed on a mountain range produces two down-streams to both sides by the gravitational force, the {\em plasma}-shed in the midst of the Gap at $\Omega_{{\rm F}\omega}\approx 0$ will divide the pair-produced particles into outflows and inflows by the magneto-centrifugal force due to $\Omega_{{\rm F}\omega}>0$ and $<0$, respectively. The Gap filled with the ZAMPs will lie well inside the two light surfaces S$_{\rm oL}$\ and \SIL\ (see equation (\rf{SG<SoL})). } \lbf{GapI} \end{figure*}
\section{The zero-angular-momentum gap $\GN$ } \lbs{m-mdGap} \setcounter{equation}{0} \renewcommand{\theequation}{{9}.\arabic{equation}} The Gap inevitably emerging in the force-free magnetosphere will be in the {\em zero-angular-momentum} state, because from the viewpoint of the ZAMOs traveling with $\omN=\Omega_{\rm F}$, the particles and the field carry no angular momentum within the Gap (see equations (\rf{EqSN}) and (\rf{EqSG}) later). Also, the Gap will be {\em magnetized} in almost the same way that spinning NSs are magnetized, strongly enough to ensure that the angular velocity of the emanating field lines equals the surface angular velocity of the star, as given by the `boundary condition' $\Omega_{\rm F}=\Omega_{\rm NS}$ (see section \rfs{FFPM}).
The poloidal magnetic field lines {\em threading} the Gap are naturally {\em pinned down} onto the ZAMPs circulating around the hole with $\omN=\Omega_{\rm F}$, and hence the magnetized ZAMPs must ensure the `boundary condition' $\Omega_{\rm F}=\omN$ (see section \rfs{BC-SN}). There will thus be the non-force-free magnetized Gap ${\cal G}_{\rm N}$ under the inductive membrane, developed in between $\calDout$ and $\calDin$, with its {\em surfaces} \SgapO\ and S$_{\rm G(in)}$, respectively, at $\Omega_{{\rm F}\omega}\approx \pm\Dlom$ (Figure \rff{GapI}), where $\Dlom\approx|\PN{\partial\om/\partial\ell}| \Dlell$ stands for the Gap half-width (see equation (\rf{OL-I})); for $\Dlom\to 0$, ${\cal G}_{\rm N}\to\SN$ (see Figure 4 in \cite{oka15a} for the interplay of microphysics with macrophysics in the magnetized, matter-dominated Gap). \subsection{A plausible Gap structure with $I(\Omega_{{\rm F}\omega},\Psi)$} \lbs{GapStruc} In the force-free domains, $I$ has the constant value $\Iout(\Psi)$ or $\IinU(\Psi)=-\Iin(\Psi)$ for $\Omega_{{\rm F}\omega}>0$ or $<0$ along each field line, whereas Constraints $\PG{\vcj}=\PG{I}=0$ require $I(\Omega_{{\rm F}\omega},\Psi)$ to vanish, indicating the breakdown of the freezing-in and force-free conditions within $\GN$. In reality, the voltage drop, or a kind of discontinuity, $\Dl V=\DN{\calE}$ at $\SN$ (see equation (\rf{Dl-V})) will produce pair particles copiously enough, and the plasma pressure in the steady state will expand $\SN$ into ${\cal G}_{\rm N}$ with a finite width $\Dlom$ in $|\Omega_{{\rm F}\omega}|\lo \Dlom$ (see section \rfs{indMem}). We hereafter denote the {\em widened null} Gap by ${\cal G}_{\rm N}$, and replace Constraints (\rf{EqSN}) at $\SN$ with the following ``widened" Constraints in ${\cal G}_{\rm N}$ in $|\Omega_{{\rm F}\omega}|\lo\Dlom$;
\begin{subequations} \begin{eqnarray} \PG{\Omega_{{\rm F}\omega}}= \PG{\mbox{\boldmath $E$}_{\rm p}}=\PG{\vre} = \PG{\vcj}=\PG{\mbox{\boldmath $v$}}= \PG{I} \lb{SGa} \\ =\PG{\vcSJ}=\PG{\vcSEM}=\PG{\vcSsd}=\PG{\vcSE} =0. \lb{SGb} \end{eqnarray} \lb{EqSG} \end{subequations}
In passing, $\DG{X}$ denotes the difference of $X$ across the Gap $|\Omega_{{\rm F}\omega}|\lo\Dlom$ (cf.\ $\DN{X}$ in equation (\rf{DNX}));
\begin{equation}
\DG{X} =X(\Dlom,\Psi)-X(-\Dlom,\Psi) \equiv (X)_{\rm G}^{{(\rm out)}} - (X)_{\rm G}^{{(\rm in)}} . \lb{DGX}
\eneq
In what follows, we show that ${\cal G}_{\rm N}\to\SN$ in the force-free {\em limit} of $\Dlom\to 0$.
We then presume the following simple form of $I=I(\Omega_{{\rm F}\omega},\Psi)$ along a typical field line in $0\leq\Psi\leq\Psib$ in the force-free magnetosphere:
\begin{eqnarray}
I(\Omega_{{\rm F}\omega},\Psi) = \left\{ \begin{array} {ll} \to 0 & ;\ S$_{{\rm ff}\infty}$\ ( \Omega_{{\rm F}\omega}\to\Omega_{\rm F}), \\ [1mm] \Iout & ;\ \calDout\ ( \Dlom \lo \Omega_{{\rm F}\omega} \lo \Omega_{\rm F}), \\[1mm] 0 & ;\ {\cal G}_{\rm N}\ ( |\Omega_{{\rm F}\omega}|\lo \Dlom) , \\[1mm] \Iin &;\ \calDin\ (-\Dlom \ggo \Omega_{{\rm F}\omega}\ggo-(\OmH-\Omega_{\rm F})), \\[1mm] \to 0 & ;\ \SffH\ (\Omega_{{\rm F}\omega}\to -(\OmH-\Omega_{\rm F})) \end{array} \right. \lb{OL-I}
\end{eqnarray}
(see Figure \rff{GapI}). The behavior of $I(\Omega_{{\rm F}\omega},\Psi)$ in the outer domain $\calDout$ is more or less similar to that in the force-free pulsar magnetosphere (see equation (\rf{I/Pul})). Note that there is a jump from $\Iout$ to $\Iin$ across $\PG{I}=0$. Because ``{\em the particle-production mechanism described in \cite{bla77} must operate between the two light surfaces}" (see \citet{zna77}; the C$_{\rm P}$ (v)-statements), the Gap ${\cal G}_{\rm N}$ must lie in between \SIL\ and S$_{\rm oL}$\ (see equation (\rf{GapC})). The particle source will have to be situated in the Gap $\GN$ well within the two light surfaces, and from equation (\rf{Sil/oL}) we have
\begin{equation}
|\Omega_{{\rm F}\omega}| \lo \Dlom < c(\al/\vp)_{\rm oL} \approx c(\al/\vp)_{\rm iL}. \lb{SG<SoL}
\eneq
It is not clear how helpful, or indeed indispensable, the above condition is in constructing a reasonable gap model. The particle production will in any case take place through the voltage drop across the Gap $\GN$, $\Dl V=(\OmH/2\pi c)\Dl\Psi$, almost independently of the presence of the light surfaces in wind theory. The outer half of the Gap is thought to play the role of a {\em normal} NS, while the inner half plays that of an {\em abnormal} NS; the two halves of the {\em virtual} magnetized spinning NSs are stuck together, back to back, spinning reversely to each other. In the limit of $\Dlom\to 0$, ${\cal G}_{\rm N}\to\SN$, and the dual EMFs, $\calEout$ and $\calEin$, form a rotational-tangential discontinuity with the voltage drop $\Dl V$ (see equation (\rf{Dl-V}); section \rfs{R-T-D}). This voltage drop will lead to a new type of pair-production mechanism, quite different from the one described in \cite{bla77}, and will make $\SN\to{\cal G}_{\rm N}$ with a finite $\Dlell$ or $\Dlom$.
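The profile (\rf{OL-I}) is transcribed schematically below (ours; the steep boundary-layer behavior near $\Omega_{{\rm F}\omega}\simeq\pm\Dlom$ noted in the caption of Figure \rff{GapI} is idealized here as sharp steps):
\begin{verbatim}
# Schematic transcription of the piecewise profile in equation (OL-I)
# along one field line, with x = Omega_Fomega = Omega_F - omega.
def I_profile(x, I_out, I_in, dlom, Omega_F, Omega_H):
    if abs(x) <= dlom:                       # the ZAM-Gap G_N: I = 0
        return 0.0
    if dlom < x <= Omega_F:                  # outer domain D_out
        return I_out
    if -(Omega_H - Omega_F) <= x < -dlom:    # inner domain D_in; I_in =
        return I_in                          # I_out by condition DG I = 0
    return 0.0                               # -> 0 on S_ff_inf and S_ffH

Omega_H, dlom = 0.3, 0.01
Omega_F = 0.5*Omega_H                        # zeta ~ 1
print([I_profile(x, 1.0, 1.0, dlom, Omega_F, Omega_H)
       for x in (-0.14, -0.005, 0.0, 0.005, 0.14)])
# -> [1.0, 0.0, 0.0, 0.0, 1.0]: constant in each domain, zero in the Gap
\end{verbatim}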
Then the outer half of the Gap will launch the outgoing magneto-centrifugal wind with $\mbox{\boldmath $v$}=\vcj/\vre>0$, the angular momentum flux $\vcSJout>0$ and the Poynting flux $\mbox{\boldmath $S$}_{\rm EM,(out)}=\Omega_{{\rm F}\omega} \vcSJout>0$, while the inner half will launch the ingoing magneto-centrifugal wind with $\mbox{\boldmath $v$}=\vcj/\vre<0$, the angular momentum flux $\mbox{\boldmath $S$}_{\rm J}^{\rm (in)}<0$ and the Poynting flux $\vcSEM^{\rm (in)}=\Omega_{{\rm F}\omega} \vcSJin=(\om-\Omega_{\rm F})\mbox{\boldmath $S$}_{\rm J}^{\rm (in)}<0$. The {\em outward} `positive' angular momentum flux $\vcSJin>0$ is equivalent to the {\em inward} `negative' angular momentum flux $\mbox{\boldmath $S$}_{\rm J}^{\rm (in)}<0$ (see Figure \rff{GapI}). It must be remarked that, when the boundary condition $\Omega_{\rm F}=\Omega_{\rm NS}$ is used at S$_{\rm NS}$\ for the pulsar force-free magnetosphere, the behavior of $I=I(\ell,\Psi)$ in the vicinity of $\ell\approx\ell_{\rm NS}$ in equation (\rf{I/Pul-M}) seems so far to be ill-understood, and the same is true here, so that the present treatment of $I=I(\Omega_{{\rm F}\omega},\Psi)$ in the vicinity of $\Omega_{{\rm F}\omega}\approx\pm \Dlom$ will be allowable (see the caption of Figure \rff{GapI}). The two force-free domains separated by ${\cal G}_{\rm N}$ consist of non-dissipative current-field-streamlines (i.e., equipotentials) with $\jvl=0$, and must be terminated by the membranes with resistivity ${\cal R}=4\pi/c=377$~Ohm, restoring particle inertia with $|\mbox{\boldmath $v$}|\to c$. It is the two EMFs $\calE_{\rm out}$ and $\calE_{\rm in}$ that drive the volume currents ($\vcj=\vre\mbox{\boldmath $v$}$) flowing through the force-free domains $\calDout$ and $\calDin$, and then the surface currents on the resistive membranes S$_{{\rm ff}\infty}$\ and $\SffH$ (see Figure \rff{DC-C}). This means that the {\em volume} currents at finite distances from ${\cal G}_{\rm N}$ are regarded as transformed into the {\em compressed} surface cross-field currents ${\cal I}_{\rm out}$ or ${\cal I}_{\rm in}=I(\Psi)/2\pi\vp$ (equation (\rf{calIffinf}) or (\rf{calIffH})), crossing each circle of circumference $2\pi\vp$ on these membranes at $\vp\to\infty$ or $\al\to 0$ and Joule-dissipating across the poloidal field lines; this implies that there will be MHD acceleration on S$_{{\rm ff}\infty}$\ and entropy production on $\SffH$ (see sections \rfs{Sffinf} and \rfs{SffH}).
\subsection{A pair of batteries installed at the Gap surfaces } \lbs{batteryG} We understand that the two force-free domains $\calDout$ and $\calDin$ adjoin the magnetized Gap, filled with \zamp s, at the surfaces (say) \SgapO\ and S$_{\rm G(in)}$\ with $\Omega_{{\rm F}\omega}\simeq \pm\Dlom$, where the unipolar induction batteries are installed with EMFs $\calEout$ and $\calEin$, respectively (see equations (\rf{EMF-ab}a,b)). The situation is somewhat similar to the force-free pulsar magnetosphere attached to a magnetized NS. It is generally believed (see section \rfs{FFPM}) that a magnetized spinning NS possesses its own unipolar induction battery with an EMF ${\cal E}_{\rm NS}$ (see equation (\rf{nsEMF})), which drives currents flowing through the force-free magnetosphere with astrophysical loads at S$_{{\rm ff}\infty}$\ (see equation (\rf{I/Pul-M})). Then we may likewise suppose that the EMF $\calEout$ at \SgapO\ supplies electricity to the outer circuit $\calCout$, like a `normal' NS spinning with $\Omega_{\rm F}=\omN$, whereas the EMF $\calEin$ at S$_{\rm G(in)}$\ supplies electricity to the inner circuit $\calCin$, like an `abnormal' NS reversely spinning with $-(\OmH-\Omega_{\rm F})$. At the same time, the Poynting flux $\vcSEM$ flows in both directions, related to $\calEout$ and $\calEin$, respectively, i.e., $\mbox{\boldmath $S$}_{\rm EM,(out)}= (\Omega_{\rm F}-\om)\vcSJout>0$ toward S$_{{\rm ff}\infty}$\ and $\vcSEM^{\rm (in)}=(\om-\Omega_{\rm F})\mbox{\boldmath $S$}_{\rm J}^{\rm (in)}<0$ toward $\SffH$ (see Figure \rff{F-WS}). That is to say, \SgapO\ and S$_{\rm G(in)}$\ will behave as if they were the surfaces of two {\em virtual} NSs: the outer one is like a normal NS spinning with $\Omega_{\rm NS}=\Omega_{\rm F}$, and the inner one is like an `abnormal' NS reversely spinning with $\Omega_{\rm NS}=-(\OmH-\Omega_{\rm F})$. This does not mean that the state of the matter pair-produced by the voltage drop $\Dl V=\DG{\calE}$ in the Gap is the same as that of NSs; it will rather be distinctly different. What we suggest here is simply that the roles of the two surfaces of the Gap $\GN$, \SgapO\ and S$_{\rm G(in)}$, will be similar to those of the two surfaces S$_{\rm NS}$\ of NSs spinning oppositely to each other (see the twin-pulsar model in section \rfs{TW-P-M}). Also, the BH circuit theory consists of the superposition of an infinite number of pairs of unipolar induction batteries and the corresponding pairs of external resistances, connected by current-field-streamlines (see section \rfs{indMem}).
\subsection{Pinning-down of threading field lines on \zamp s and magnetization of the Gap } \lbs{PPP} When we regard the Gap surfaces \SgapO\ and S$_{\rm G(in)}$\ as being equipped with the EMFs $\calEout$ and $\calEin$, respectively, these EMFs not only drive the currents along the respective circuits $\calCout$ and $\calCin$, but also produce the voltage drop $\Dl V=\DG{\calE}$ across the Gap, which will produce plenty of \zamp s to pin the field lines down. Conversely, the \zam\ Gap will be filled with \zamp s circulating around the hole with $\omN$, and the poloidal field lines threading the Gap ${\cal G}_{\rm N}$ will be pinned down onto the \zamp s circulating with $\Omega_{\rm F}=\omN$; hence the \zam\ Gap will be in a perfectly magnetized state. To keep the force-free magnetosphere active, the two EMFs $\calEout$ and $\calEin$ must supply enough current to the two circuits $\calCout$ and $\calCin$, connected to each other by the field lines $\Psio$ and $\Psit$, respectively, but not by current lines nor streamlines ($\PG{\vcj}=\PG{\mbox{\boldmath $v$}}=0$), even though the iso-rotation law $\Omega_{\rm F}(\Psi)=$ constant still holds along each field line threading the particle-production Gap ${\cal G}_{\rm N}$. It is actually across the Gap that the `boundary condition' of no jump in the angular momentum transport rate, i.e., $\DG{I}=0$, determines the eigenfunction $\Omega_{\rm F}$ (see section \rfs {BC-SN}). Just as ZAMOs literally are `Zero-Angular-Momentum' observers, the plasma pair-created by the potential drop due to the discontinuity $\Dl V=\DG{\calE}$ within the Gap ${\cal G}_{\rm N}$ (see equation (\rf{Dl-V})) must likewise consist of `\zam' particles (\zamp s), circulating with the ZAMOs at $\om=\omN=\Omega_{\rm F}$, which no longer behave as force-free particles of negligible mass in the Gap. These particles will easily become charge-separated owing to their zero angular momentum (and also $\PG{\mbox{\boldmath $v$}}=0$), and will flow from the Gap out to the force-free domains as `force-free', charge-separated particles, with $\mbox{\boldmath $v$}_{\rm p}>0$ in $\calDout$ and $\mbox{\boldmath $v$}_{\rm p}<0$ in $\calDin$ (see Figures \rff{DC-C} and \rff{F-WS}). The pair-produced ZAMPs will easily be blown out of the Gap in both directions by the magneto-centrifugal force. It was stated that ``{\em The present gap model with a pair of batteries and a strong voltage drop is fundamentally different from any existing pulsar gap models}" \citep{oka15a}. The models used so far for a pair-production discharge mechanism are based on a negligible violation of the force-free condition (see the \CS (v,vi)-statements; \citet{bes92, hir98, son17, hir18b}; see, e.g., \citet{hir18a} for a recent review). While the complete violation of the force-free condition due to the FD effect leads to a unique Gap model in the modified BZ process, the negligible violation produces a lot of ambiguities and inconsistencies, as well as the causality violation.
It is implausible that the gap models constructed on the negligible violation yield nearly the same amount of particle production as the Gap $\GN$ with the voltage drop (see equations (\rf{EMF-ab}) and (\rf{Dl-V})) based on the complete breakdown of the force-free condition. It seems plausible that the {\em positive} angular momentum and energy extracted through $\SffH$ from the hole will be transported beyond the Gap with $\PG{I}=\PG{\mbox{\boldmath $v$}}=0$, toward the astrophysical loads in S$_{{\rm ff}\infty}$. The angular momentum flux need not, however, pass through the ZAM-state in the Gap with $\PG{\mbox{\boldmath $v$}}=\PG{\vcj}=\PG{I}=0$. The point is that the \zamp s are spinning with $\om=\omN$, dragged by the hole's rotation, literally with no angular momentum, so that they can easily flow out of the Gap, flung both outward and inward from the surfaces \SgapO\ and S$_{\rm G(in)}$, with positive and negative angular momenta, respectively, by the respective magneto-centrifugal forces, so as to keep the ZAM-state of the Gap. This picture will be equivalent to the one in which the positive angular momentum, extracted by the surface magnetic torque through the stretched horizon from the hole, flows through the inner force-free domain $\calDin$, beyond the ZAM-Gap, to the outer force-free domain $\calDout$ (see Figure \rff{GapI}). \section{The Eigen-magnetosphere} \lbs{BC-SN} \setcounter{equation}{0} \renewcommand{\theequation}{{10}.\arabic{equation}} The vital role of the Gap is to pin the poloidal field $\mbox{\boldmath $B$}_{\rm p}$ down onto the \zamp s pair-created therein, and to accomplish the magnetization of the \zamp s, thereby ensuring $\Omega_{\rm F}=\omN$, so as to maintain the ZAM-state of the Gap circulating with $\omN$, dragged by the hole. But the actual position of $\SN$ and the value $\Omega_{\rm F}=\omN$ {\em per se} remain still undetermined. In order to determine the final eigenfunction $\Omega_{\rm F}(\Psi)=\omN$ in terms of $\OmH$, we formulate the `boundary condition' properly, with Constraints (\rf{EqSN}) or (\rf{EqSG}) appropriately taken into account, in particular $\PG{I}=\DG{I}=0$ at the place of the ZAM-Gap $\GN$. \newcommand{\dJout}{dJ_{\rm (out)}} \newcommand{\dJinU}{dJ^{\rm (in)}} \newcommand{\dJin}{dJ_{\rm (in)}} \subsection{The `boundary condition' for the eigenfunction $\Omega_{\rm F}$} \lbs{BCagain} Angular momentum and energy must be conserved in the ZAM-Gap $|\Omega_{{\rm F}\omega}|\lo\Dlom$. Expressions (\rf{Share}) and (\rf{SDenergy}) show that the spin-down energy extracted through the stretched horizon $\SffH$ is shared between the out- and in-going Poynting fluxes reaching the resistive membranes S$_{{\rm ff}\infty}$\ and $\SffH$, respectively, to be dissipated in particle acceleration and entropy generation. Yet both Poynting fluxes emerging from the ZAM-Gap will carry {\em positive} and {\em negative} angular momentum, i.e.\ $\dJout>0$ and $\dJinU<0$, respectively, keeping the zero-angular-momentum state of the Gap, namely $\dJout+\dJinU=0$.
Relatedly, the energies carried out of the Gap are $\OmFb\dJout$ outward by $\mbox{\boldmath $S$}_{\rm EM,(out)}$ and $-(\OmH-\OmFb)\dJinU$ inward by $\vcSEM^{\rm (in)}$, respectively. Then the total energy that leaves the Gap is
\begin{equation}
\OmFb\dJout-(\OmH-\OmFb)\dJinU=-\OmH \dJinU \lb{energyC}
\eneq
(see equation (\rf{energyH}) later). Because the {\em in}flow of {\em negative} angular momentum into the hole is equivalent to the {\em out}flow of {\em positive} angular momentum, i.e., $\dJin=-\dJinU>0$, and since $\dJinU=dJ<0$ in the stretched horizon $\SffH$, the angular momentum and energy will be conserved in the ZAM-Gap $\GN$, i.e., $-\OmH \dJinU=\OmH |dJ|$, consistently with the first law in (\rf{Share}). Thus the condition of continuity of angular momentum across the ZAM-Gap will be given in the following form:
\begin{subequations} \begin{eqnarray} \DG{dJ}=\dJout -\dJin \quad \quad \ \lb{DN/dJa} \\ =\dJout +\dJinU=0 , \lb{DN/dJb} \end{eqnarray} \lb{DN/dJ} \end{subequations}
which is transformed into the flux basis along each field line for determining $\Omega_{\rm F}(\Psi)$;
\begin{subequations} \begin{eqnarray} \DG{I}=\Iout(\Psi) -\Iin(\Psi) \quad \quad \ \lb{DN/SNa} \\ =\Iout(\Psi) +\IinU(\Psi)=0 . \lb{DN/SNb} \end{eqnarray} \lb{DN/SN} \end{subequations}
The `boundary condition' $\PG{I}=\DG{I}=0$ ensures that the outward flux $\vcSJin$ in the inner domain is equal to the flux $\vcSJout$ in the outer domain across the Gap, i.e., $\PG{\vcSJ}=\DG{\vcSJ}=0$, thanks to the ZAM-state of the Gap. It thus appears that the same form $\vcSE=\Omega_{\rm F}\vcSJ$ of the energy-angular-momentum fluxes as in the pulsar force-free magnetosphere still holds, except in the Gap, where $\PN{\vcSE}=\PN{\vcSJ}=0$ (see Figure \rff{Flux-omPseudoF-S}). \subsection{The final eigenfunctions $I(\Psi)$ and $\Omega_{\rm F}(\Psi)$ in the force-free magnetosphere} \lbs{Feigenv} From equations (\rf{ouIa}), (\rf{oub}) and (\rf{DN/SN}) we have
\begin{subequations} \begin{eqnarray} \Omega_{\rm F}(\Psi)=\omN= \frac{\OmH}{1+\zeta}, \lb{EigenOmFI} \\ I=\Iout=\Iin=- \IinU=\frac{\OmH}{2(1+\zeta)} (B_{\rm p}\vp^2)_{\rm ffH}, \lb{EigenOmF} \\ \zeta(\Psi)=(B_{\rm p}\vp^2)_{{\rm ff}\infty}/(B_{\rm p}\vp^2)_{\rm ffH} \lb{Zeta} \end{eqnarray} \lb{FL-eigen} \end{subequations}
\citep{oka09,oka12a,oka15a}. Note that $\PG{\mbox{\boldmath $B$}_{\rm p}}\neq 0$, $\PG{\Omega_{\rm F}}\neq 0$ and $\DG{\mbox{\boldmath $B$}_{\rm p}}=\DG{\Omega_{\rm F}}=0$ in our Major Premises (section \rfs{major}), and also $\PG{I}=\DG{I}=0$ across the supposed Gap $\GN$ in $|\Omega_{{\rm F}\omega}|\lo \Dlom$ with $\DG{\Omega_{{\rm F}\omega}}=2\Dlom$ (see Figure \rff{GapI} and equation (\rf{DN/SN})). It is in reality the ZAM-state of the Gap that makes it possible to use (\rf{DN/SN}) as the `boundary condition' for $\Omega_{\rm F}$ even for a finite value of $\Dlom$ (see section \rfs{m-mdGap} and Figure \rff{GapI}).
Conversely, the `boundary condition' (\rf{DN/SN}) is necessary to ensure the ZAM-state of the Gap. The `zero-angular-momentum' state of the Gap with the `boundary condition' $\PG{I}=\DG{I}=0$ nevertheless allows not only the smooth flow of angular momentum and energy from the hole beyond the Gap, as shown by the simple relation $\vcSE=\Omega_{\rm F}\vcSJ$ (though $\PN{\vcSE}=\PN{\vcSJ}=0$), but also the determination of the final eigenfunction $\Omega_{\rm F}$ in terms of the hole's gravito-electric potential gradient $\OmH$. Constraints $\PG{\vcj}=\PG{\Bt}=\PG{I}=\PG{\mbox{\boldmath $v$}}=0$ in equation (\rf{EqSG}) imply that no transport of angular momentum by the field and particles is possible, i.e.\ $\PG{\vcSJ}=0$; this indicates a disconnection of current- and stream-lines between the two force-free domains, and hence the necessity of current-particle sources and EMFs in the Gap. Equation (\rf{DN/SN}) ensures that the copious charged \zamp s pair-produced in $|\Omega_{{\rm F}\omega}|\lo\Dlom$ serve to connect and equate $\Iout$ and $\IinU$ across the Gap ${\cal G}_{\rm N}$, despite $\PG{\mbox{\boldmath $v$}}=\PG{I}=\PG{\vcj}=0$. Also the flow of energy-angular momentum is continuous beyond the ZAM-Gap $\GN$, i.e.\ $\DG{\calPE}=\DG{\calPJ}=0$ and also $\DG{\OmFb}=0$ from equations (\rf{PEJ}a,b). The eigen-efficiency of gravito-thermo-electrodynamic extraction is given from equations (\rf{eff}) and (\rf{EigenOmFI}) by
\begin{equation} \epsGTE=\frac{\Omega_{\rm F}}{\OmH}=\frac{1}{1+\zeta}. \lb{eps} \end{equation}
When the plausible field configuration permits us to put $\zeta\approx 1$ and hence $\epsGTE\approx 0.5$, one has from equations (\rf{EigenOmFI}), (\rf{SDenergy}) and (\rf{Share}),
\begin{subequations} \begin{eqnarray} \Omega_{\rm F}\approx (\OmH-\Omega_{\rm F})\approx \frac12\OmH, \lb{matching} \\ \left. \oint \alpha \vcSEM \cdot d\mbox{\boldmath $A$} \right|_{{\rm ff}\infty}\approx - \left. \oint \alpha \vcSEM \cdot d\mbox{\boldmath $A$} \right|_{\rm ffH} \nonumber \\ \approx \left. \frac12 \oint \al \vcSsd \cdot d\mbox{\boldmath $A$} \right|_{\rm ffH}, \lb{TD-EDc} \\ c^2 \left|\dr{M}{t} \right| \approx \Th\dr{S}{t} \approx \frac12 \OmH \left|\dr{J}{t}\right| . \lb{therfirst-eig} \end{eqnarray} \lb{eig/state} \end{subequations}
The average value $\OmFb= \bar{\om}_{\rm N}$, together with $\bar{\zeta}$, in the eigen-state will be given from equations (\rf{FL-eigen}a,b,c), and also the average null surface $\bar{\SN}$ and the overall efficiency $\bar{\epsilon}_{\rm GTED}$ will be given from equation (\rf{eps}).
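As a quick numerical illustration of the eigenvalue relations (\rf{FL-eigen}) and the eigen-efficiency (\rf{eps}), the following minimal Python sketch evaluates $\Omega_{\rm F}$, $I$ and $\epsGTE$ and verifies the half-and-half energy share of equation (\rf{eig/state}a); all input numbers are illustrative placeholders, not values derived in this paper.
\begin{verbatim}
# Minimal sketch of the eigenvalue relations; all inputs are
# illustrative placeholders in arbitrary units.
Omega_H = 1.0         # hole's gravito-electric potential gradient
zeta = 1.0            # zeta(Psi) = (Bp varpi^2)_ff,inf / (Bp varpi^2)_ffH
Bp_varpi2_ffH = 1.0   # (Bp varpi^2) on the stretched horizon S_ffH

Omega_F = Omega_H / (1.0 + zeta)                    # Omega_F(Psi) = omega_N
I = Omega_H / (2.0 * (1.0 + zeta)) * Bp_varpi2_ffH  # eigen-current I(Psi)
eps_GTE = Omega_F / Omega_H                         # = 1/(1 + zeta)

# For zeta ~ 1 the spin-down energy is shared half-and-half:
assert abs(Omega_F - (Omega_H - Omega_F)) < 1e-12   # Omega_F = Omega_H / 2
print(Omega_F, I, eps_GTE)                          # -> 0.5 0.25 0.5
\end{verbatim}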
\begin{figure*} ~~~~~~~~~~~~~~~~~~~ \includegraphics[width=12cm, height = 7cm, angle=-0]{FIG4_NEW_V7.eps}
\caption{A diagram showing the energy fluxes with discontinuities at the Gap $\GN$ widened from $\SN$ (cf.\ Figure \rff{Flux-om}); $\overline{\vcSEM}=\overline{\OmFm} \vcSJ$ and $\overline{\vcSsd}=\bar{\om} \vcSJ$, where $\overline{\OmFm}=(\Omega_{\rm F}-\bar{\om})$ (see equations (\rf{OLom}), (\rf{OLOmFm}), (\rf{OLvcSEMsd}) and (\rf{vcSEMsd}a,b)). Relations (\rf{SDenergy}) and (\rf{energyH}) show that it is the frame-dragging spin-down energy flux $\OLvcSsdin$ that supplies the rotational energy, extracted by exerting the Lorentz surface torque through the stretched horizon $\SffH$, to the magnetized ZAM-Gap at ${\cal G}_{\rm N}$, where the spin-down energy is shared between the outgoing and ingoing Poynting fluxes $\OLvcSEMout>0$ and $\overline{\vcSEM}^{\rm (in)}<0$ (see equation (\rf{energyC})). This energetics is explained by the simple identity (\rf{iden-b}). The outer and inner light surfaces S$_{\rm oL}$ and S$_{\rm iL}$ are shown respectively by $\OmFmOL$ and $\OmFmIL$, with the magnetized ZAM-Gap in between. } \lbf{Flux-omPseudoF-S} \end{figure*}
\section{Rotational-tangential discontinuity at the null surface $\SN$} \lbs{R-T-D} \setcounter{equation}{0} \renewcommand{\theequation}{{11}.\arabic{equation}}
We discuss one basic feature of `this surface' $\SN$ in the force-free {\em limit}, which is sandwiched between the general-relativistic and semi-classical force-free domains. As argued in section \rfs{indMem}, there is a pair of unipolar induction batteries with two EMFs at both {\em surfaces} of $\SN$, which give rise to the voltage drop in between, i.e.\ $\Dl V =-\DN{\calE} =\OmH\Dl\Psi/2\pi c$, for particle production to arise. Behind this relation there is also the discontinuity of the difference $(\Omega_{{\rm F}\omega})_{\infty} - (\Omega_{{\rm F}\omega})_{\rm H}=\OmH$ (see equation (\rf{iden-b})). Likewise, the first law in (\rf{Share}) and (\rf{SDenergy}) reduces to $\OmFb -[-(\OmH-\OmFb)]= \OmH$ at the null surface $\SN$. This situation allows us to conjecture that the magnetized ZAM-Gap consists of two halves of {\em virtual} magnetized NSs spinning with $(\Omega_{{\rm F}\omega})_{\infty}=\Omega_{\rm F}$ and $(\Omega_{{\rm F}\omega})_{\rm H}= -(\OmH-\Omega_{\rm F})$, reversely packed together, and threaded by the poloidal field $\PG{\mbox{\boldmath $B$}_{\rm p}} \neq 0$ with no toroidal component ($\PG{\Bt}=\PN{I}=0$), and yet with $\PG{\mbox{\boldmath $B$}_{\rm p}}$ pinned down in the \zamp s in the Gap. From the two halves of the {\em virtual} stars, a normal pulsar wind flows outwardly to S$_{\infty}$ and an abnormal one flows inwardly to the horizon S$_{\rm H}$, passing through the two light surfaces S$_{\rm oL}$ and S$_{\rm iL}$ on the way (the twin-pulsar model; see \cite{oka15a}).
Nonetheless, the calculation of the Faraday path integrals of $\mbox{\boldmath $E$}_{\rm p}$ in equations (\rf{EMF-ab}a,b) along the two circuits $\calCout$ and $\calCin$ reveals the sharp potential drop $\Dl V$ between the EMFs for the two circuits, as if $\om$ were a step function $\bar{\om}$ of the form
\begin{equation} \bar{\om}= \left\{ \begin{array} {ll} 0 & ;\ \calDout\ (\Omega_{{\rm F}\omega}>0), \\[1mm] \Omega_{\rm F}\equiv \omN & ;\ \SN\ (\Omega_{{\rm F}\omega}=0), \\[1mm] \OmH &;\ \calDin\ (\Omega_{{\rm F}\omega}<0 ), \end{array} \right. \lb{OLom} \end{equation}
with $\bar{\om}_{\rm N}=\OmFb$. Likewise $\Omega_{{\rm F}\omega}$ and $\vF$ are also replaced by the following step functions, i.e.\ $\overline{\OmFm}\equiv \Omega_{\rm F}-\bar{\om}$;
\begin{equation} \overline{\OmFm}= \left\{ \begin{array} {ll} \Omega_{\rm F}\equiv (\overline{\OmFm})_{\rm (out)} & ;\ \calDout\ (\Omega_{{\rm F}\omega}>0), \\[1mm] 0 \equiv (\overline{\OmFm})_{\rm (N)} & ;\ \SN\ (\Omega_{{\rm F}\omega}=0) , \\[1mm] -(\OmH- \Omega_{\rm F}) \equiv (\overline{\OmFm})^{\rm (in)} &;\ \calDin\ (\Omega_{{\rm F}\omega}<0 ) \end{array} \right. \lb{OLOmFm} \end{equation}
(see equation (\rf{OL-I}) and Figure \rff{GapI}), and then $\OLvF=\overline{\OmFm}\vp/\al$, where $\vp_{\rm N}$ is obtained by solving $\om(\vp,\Psi)=\Omega_{\rm F}(\Psi)$ (see section \rfs{SNshapAp}). Then, from equation (\rf{OLOmFm}), we have a discontinuity of $\overline{\OmFm}$ at $\SN$,
\begin{equation} \DN{\overline{\OmFm}}=(\overline{\OmFm})_{\rm (out)}- (\overline{\OmFm})^{\rm (in)}=\OmH \lb{DN-OmFm} \end{equation}
(see equation (\rf{iden-b})). The related electric field $\overline{\mbox{\boldmath $E$}}_{\rm p}$ and its discontinuity at $\SN$ become
\begin{subequations} \begin{eqnarray} \overline{\mbox{\boldmath $E$}}_{\rm p}=- \frac{\overline{\OmFm}}{2\pi\al c}\mbox{\boldmath $\nabla$}\Psi, \lb{OLEp} \\ \DN{\overline{\mbox{\boldmath $E$}}_{\rm p}}=- \frac{\DN{\overline{\OmFm}}}{2\pi c} \LPfrac{\mbox{\boldmath $\nabla$}\Psi}{\al}_{\rm N} = - \frac{\OmH}{2\pi c} \LPfrac{\mbox{\boldmath $\nabla$}\Psi}{\al}_{\rm N}. \lb{DNOLEp} \end{eqnarray} \lb{OLEpDN} \end{subequations}
Utilizing expressions (\rf{OLOmFm}) and (\rf{OLEp}), we obtain the same results for $\calEout$ and $\calEin$ in the Faraday path integrals along the circuits $\calCout$ and $\calCin$, respectively, as given in expressions (\rf{EMF-ab}a,b). Along with $\om$ and $\Omega_{{\rm F}\omega}$ respectively replaced by the step functions $\bar{\om}$ and $\overline{\OmFm}$, the related energy fluxes $\vcSEM$ and $\vcSsd$ are also replaced with the step functions $\overline{\vcSEM}$ and $\overline{\vcSsd}$, i.e.,
\begin{equation} \overline{\vcSEM}=\overline{\OmFm} \vcSJ, \quad \overline{\vcSsd} =\bar{\om} \vcSJ \lb{OLvcSEMsd} \end{equation}
(see Figure \rff{Flux-omPseudoF-S}; \citet{oka15a}).
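The step-function replacements (\rf{OLom}) and (\rf{OLOmFm}) are easily encoded; the following minimal Python sketch (with arbitrary illustrative values of $\OmH$ and $\Omega_{\rm F}$, not values from this paper) verifies the discontinuity (\rf{DN-OmFm}) directly:
\begin{verbatim}
# Minimal sketch of the step functions; Omega_H and Omega_F are
# arbitrary illustrative values with 0 < Omega_F < Omega_H.
Omega_H, Omega_F = 1.0, 0.5

def omega_bar(OmFom):     # argument: Omega_{F omega}; only its sign matters
    if OmFom > 0:         # outer domain D_out
        return 0.0
    if OmFom == 0:        # null surface S_N
        return Omega_F
    return Omega_H        # inner domain D_in

def OmFm_bar(OmFom):      # step function Omega_F - omega_bar
    return Omega_F - omega_bar(OmFom)

# discontinuity across S_N: (out) - (in) = Omega_H
assert OmFm_bar(+1.0) - OmFm_bar(-1.0) == Omega_H
\end{verbatim}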
When $\Iout=\Iin=-\IinU>0$ and $\vcSJout=\vcSJin=- \mbox{\boldmath $S$}_{\rm J}^{\rm (in)}=\vcSJ>0$ by the boundary condition in equation (\rf{DN/SN}), we compute the differences of the Poynting flux $\overline{\vcSEM}$ and the spin-down flux $\overline{\vcSsd}$ across $\SN$ from equations (\rf{DN-OmFm}) and (\rf{OLvcSEMsd}),
\begin{subequations} \begin{eqnarray} \DN{\overline{\vcSEM}}= \PN{\overline{\vcSEM}}^{(+)} - \PN{\overline{\vcSEM}}^{(-)} =\DN{\overline{\OmFm}}\vcSJ = \OmH\vcSJ, \ \ \quad \quad \lb{energyN} \\ \DN{\overline{\vcSsd}} = - \PN{\overline{\vcSsd}}^{(-)} =\DN{\bar{\om}}\vcSJ= - \OmH\vcSJ, \ \ \quad \quad \ \lb{energyG} \end{eqnarray} \lb{vcSEMsd} \end{subequations}
which combine to yield $\PN{\overline{\vcSsd}}^{(-)} = - \PN{\overline{\vcSEM}}^{(-)} + \PN{\overline{\vcSEM}}^{(+)}$, or
\begin{equation} \OLvcSsdin = - \overline{\vcSEM}^{\rm (in)} + \OLvcSEMout , \lb{energyH} \end{equation}
where $\PN{\overline{\vcSsd}}^{(+)} =0$. This corresponds to equation (\rf{energyC}) for the energy conservation within the ZAM-Gap. Integrating this over all open field lines from $\Psio$ to $\Psit$ yields expression (\rf{SDenergy}) or (\rf{Share}), while along each single field line it reduces to expression (\rf{iden-b}).
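The bookkeeping in equation (\rf{energyH}) can be checked in the same spirit; the sketch below (again with arbitrary placeholder values) confirms that the frame-dragging spin-down flux equals the outgoing Poynting flux plus the sign-reversed ingoing one:
\begin{verbatim}
# Minimal check of the flux identity S_sd^(in) = -S_EM^(in) + S_EM,(out),
# using the step-function fluxes; all numbers are placeholders.
Omega_H, Omega_F, S_J = 1.0, 0.4, 2.0   # 0 < Omega_F < Omega_H, S_J > 0
S_EM_out = Omega_F * S_J                # outgoing Poynting flux
S_EM_in = -(Omega_H - Omega_F) * S_J    # ingoing Poynting flux (< 0)
S_sd_in = Omega_H * S_J                 # frame-dragging spin-down flux
assert abs(S_sd_in - (-S_EM_in + S_EM_out)) < 1e-12
\end{verbatim}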
There is naturally no discontinuity in the `total' energy and angular momentum flux across the null surface $\SN$ with $\DN{\mbox{\boldmath $B$}_{\rm p}}=0$, i.e.,
\begin{equation} \DN{\vcSE}=\DN{\overline{\vcSEM}+\overline{\vcSsd}}=\Omega_{\rm F} \DN{\vcSJ}=0, \lb{DGvcSE} \end{equation}
which is similar to $\PN{I}=0$ from the boundary condition for $\Omega_{\rm F}$, although there are jumps proportional to $\DN{\overline{\OmFm}}=\OmH$ in the EMFs, i.e.\ $\DN{\calE}=-\Dl V$ in equation (\rf{Dl-V}), and in the non-conserved energy fluxes $\overline{\vcSEM}$ and $\overline{\vcSsd}$ in equations (\rf{vcSEMsd}a,b). Figures \rff{Flux-omPseudoF-S} and \rff{F-WS} show that the pulsar-type wind is slung outwardly from the outer magnetic rotator spinning with $(\overline{\OmFm})_{\rm (out)} =\Omega_{\rm F}$ through the outer domain $\calDout$, in which the related Poynting flux $\overline{\vcSEM}$ is equal to $\Omega_{\rm F} \vcSJout$ with no frame-dragging spin-down energy flux, whereas the anti-pulsar-type wind is conversely slung inwardly from the inner magnetic rotator spinning with $(\overline{\OmFm})^{\rm (in)} =-(\OmH-\Omega_{\rm F})$ through the inner domain $\calDin$, in which the Poynting flux is directed inward, i.e.\ $\overline{\vcSEM}^{\rm (in)} =(\OmH-\Omega_{\rm F}) \mbox{\boldmath $S$}_{\rm J}^{\rm (in)}<0$, while the frame-dragging spin-down flux is directed outward, i.e.\ $\OLvcSsdin =\OmH \vcSJin>0$; the latter may be understood as equivalent to an {\em in}-flow of {\em negative} energy, i.e.\ $\OLvcSsdinU =\OmH \mbox{\boldmath $S$}_{\rm J}^{\rm (in)}=- \OLvcSsdin<0$, related to the {\em in}-flow of {\em negative} angular momentum. Thus the energy fluxes in the curved space with $\om$ and $\Omega_{{\rm F}\omega}(\om,\Psi)$ are respectively reproduced by those in the pseudo-flat space with $\OLom$ and $\overline{\OmFm}(\OLom,\Psi)$. Thanks to the presence of the ZAM-Gap, and despite $\PN{\vcSE}=\PN{\vcSJ}=0$ therein, the ZAMOs see that the same relation $\vcSE=\Omega_{\rm F}\vcSJ$ holds in the Kerr hole force-free magnetosphere as in the pulsar force-free magnetosphere (see section \rfs{role-FD}).
\begin{figure*} \begin{center} \includegraphics[width=12cm, height = 7cm, angle=-0]{FIG5_NEW_V7.eps} \end{center}
\caption{A twin-pulsar model with three membranes with $\overline{\OmFm}$ (see Figure \rff{Flux-omPseudoF-S} and section \rfs{R-T-D}).
The {\em two} magnetospheres are anti-symmetric to each other with respect to the inductive membrane ${\cal G}_{\rm N}$; the outer domain $\calDout$ behaves like a normal pulsar-type magnetosphere with the FLAV $(\overline{\OmFm})_{\rm (out)}=\Omega_{\rm F}$, and the inner domain $\calDin$ behaves like an `abnormal' anti-pulsar-type magnetosphere with the FLAV $(\overline{\OmFm})^{\rm (in)}= -(\OmH-\Omega_{\rm F})$. The magnetized ZAM-Gap in $|\Omega_{{\rm F}\omega}| \lo\Dlom$ at $\PG{\overline{\OmFm}}=0$, where $\PG{\mbox{\boldmath $E$}_{\rm p}}=\PG{\vcSEM}=\PG{\vcSsd}=\PG{\vcSJ}=\PG{\vcj}=\PG{\mbox{\boldmath $v$}}=0$, may be an inevitable product of the widening of the rotational-tangential discontinuity. It will be a future challenge to elucidate how this has happened: the inductive Gap ${\cal G}_{\rm N}$ is equipped with {\em virtual} oppositely-directed magnetic spin axes and a pair of related batteries with EMFs at S$_{\rm G(out)}$ and S$_{\rm G(in)}$. A steady pair-production mechanism due to the voltage drop $\Dl V$ will be at work, supplying \zamp s with $\vre\approx 0$, dense enough to anchor the threading field lines, thereby ensuring $\Omega_{\rm F}=\omN$. The batteries supply electricity to the external resistances in the resistive membranes S$_{{\rm ff}\infty}$ and $\SffH$, where Joule heating leads to particle acceleration and entropy production. The Gap will be \emph{frame-dragged} with the angular velocity $\omN=\Omega_{\rm F}$. The outgoing flux of {\em positive} energy is equivalent to the ingoing flux of {\em negative} energy, because the outgoing flux of {\em positive} angular momentum is equivalent to the ingoing flux of {\em negative} angular momentum, i.e.\ $\OLvcSsdinU =\OmH \mbox{\boldmath $S$}_{\rm J}^{\rm (in)}=-\OmH \vcSJin=- \OLvcSsdin<0$. } \lbf{F-WS} \end{figure*}
\section{Discussion and conclusion} \lbs{Dis-Con} \setcounter{equation}{0} \renewcommand{\theequation}{{12}.\arabic{equation}}
\subsection{Pulsar thermo-electrodynamics and black hole gravito-thermo-electrodynamics} \lbs{PTE}
It is usually thought that an NS is strongly magnetized, behaving like a battery capable of radiating energy in the form of a Poynting flux through the force-free magnetosphere. This is because field lines emanating from the star are allowed to rotate with the star's angular velocity, i.e.\ $\Omega_{\rm F}=\Omega_{\rm NS}$. In pulsar {\em thermo}-electrodynamics, the absence of a difference between the two angular velocities gives rise to no energy dissipation in the surface layer, and there is no need of an `inner domain' where an ingoing Poynting flux may lead to entropy production in the star. This means that the pulsar magnetosphere is {\em adiabatically} attached to the stellar crust, and the whole spin-down energy of the star will be delivered directly to the magnetosphere, without dissipating in the surface layer. Let us now examine whether an adiabatic process is possible or not in the hole magnetosphere.
When the hole loses angular momentum $dJ<0$, any adiabatic process with $\bar{\epsilon}_{\rm GTED}=1$ requires no difference between the two key angular velocities, i.e.\ $\Omega_{\rm F}=\OmFb=\OmH$, because $\Th dS=0$ and $c^2 dM=\OmH dJ$ by equations (\rf{cEntr}a,b). Then the ZAMOs see $\IinU=\Iin=\calEin=0$ by equations (\rf{Iinab}) and (\rf{EMF-ab}a,b). Also, $\calEout=-\OmH\Dl\Psi/2\pi c=-\Dl V$ in equation (\rf{Dl-V}), which was interpreted as showing a potential drop between the horizon and infinity (cf.\ Eq.\ (4.39) in \citet{tho86}). If ``{\em this voltage drop could really be thought of as produced by a `battery' with $\Omega_{\rm F}=\OmH$ in the hole's horizon}'' \citep{tho86}, the adiabatic extraction, i.e., the `maximum extractable energy', might indeed be attainable, just as in the pulsar force-free magnetosphere with $\Omega_{\rm F}=\Omega_{\rm NS}$. If so, ``{\em the massive black hole may behave like a battery}'' (cf.\ the \CS (iv)-statement), with no internal resistance nor inner domain needed. In this case, the null surface $\SN$ or the inductive membrane would replace the stretched horizon $\SffH$, yet as a non-resistive one. But in order to behave like a battery just as an NS does, the Kerr hole as well must be strongly magnetized, and the magnetosphere must be free from the restriction of the FD effect, hence decoupled from the regulations of the first and second laws. In the adiabatic case with $\Omega_{\rm F}=\OmH$, the ZAMOs would see the outgoing spin-down energy flow outward, but no inner domain with an ingoing Poynting flux. On the other hand, we simply presume that the Kerr hole is not magnetized, with no field lines emanating from the hole, by the no-hair theorem. This means that, unlike for NSs, the force-free magnetosphere is {\em not} adiabatically attached to the hole, because $0\lo \Omega_{\rm F}\lo \OmH$ by restrictions (\rf{second}). It is rather impossible for the adiabatic process to take place; indeed, it was argued in \cite{bla77} that the perfect efficiency $\epsGTE=1$ is never achieved. There appears to be no compelling reason that the hole must possess any magnetosphere directly anchored beyond the event horizon in its body. When $0\lo \Omega_{\rm F}\approx\OmFb\lo\OmH$, `{\em this surface}' $\SN$ always exists, with $\Omega_{{\rm F}\omega}\lleg 0$, and indeed the `inner' domain with $\Omega_{{\rm F}\omega}< 0$ must exist, as opposed to the `outer' one with $\Omega_{{\rm F}\omega}>0$. In the BZ process, overlooking the FD effect on the two fluxes, $\vcSEM$ and $\vcSsd$, seemingly has led to the conclusion that `{\em a mechanism directly analogous to \cite{gj69}}' is applicable to a rotating hole (the \CS (iii)-statement). Then the first law and the efficiency $\epsGTE$ or $\bar{\epsilon}_{\rm GTED}$ no longer have a part to play in `pulsar thermo-electrodynamics' applied to extracting energy from the rotating hole.
However, the necessity of particle production somewhere above the horizon in the `force-free' magnetosphere has led to spark gaps between the two light surfaces (the \CS (v)-statement) under `{\em a negligible violation of the force-free condition and degeneracy}' (the \CS (vi)-statement); pair-created charged particles from non-stationary places will flow out into both the outer and inner domains, the criticality conditions at S$_{{\rm ff}\infty}$ and $\SffH$ for the flows will have yielded the two eigenvalues $\Iout$ and $\Iin$, respectively, and impedance matching may have given some value for the efficiency $\epsilon$. These seemingly plausible results will depend on the validity of a possible degree of {\em a negligible violation} of the force-free condition and degeneracy. But the efficiency $\epsilon$ obtained under a negligible violation will be quite different from the GTED efficiency $\bar{\epsilon}_{\rm GTED}$ based on the first law under the complete breakdown of those conditions.
\subsection{The role of the dragging of inertial frames} \lbs{role-FD}
Kerr BHs are a kind of thermodynamic object in the sense that their behavior and evolution are describable in terms of the four laws of classical thermodynamics. They are nevertheless capable of finding a way of radiating energy in the form of Poynting fluxes, just as a pair of pulsars does, by adjusting pulsar electrodynamics to BH thermodynamics. It is nothing but the dragging of inertial frames that plays an intangible and essential role in adjusting the two dynamics to extract energy from the Kerr hole through the surrounding magnetosphere. By nature, the FDAV $\om$ reduces to the constant value $\OmH$ at the horizon by the zeroth law; $\OmH$ is an intensive variable conjugate to the extensive variable $J$ with respect to the hole's mass-energy $c^2 M$, similarly to the surface temperature $\Th$ conjugate to the entropy $S$ as an extensive variable. Then, when the hole loses angular momentum through the flux $\vcSJ$, the FDAV $\om$ combines with the flux $\vcSJ$ to carry energy outward by the flux $\vcSsd =\om\vcSJ$, which connects at the horizon with the term $\OmH (dJ/dt)$ by $\om\to\OmH$. The role of $\om$ is also to split $\Omega_{\rm F}$ into $\Omega_{{\rm F}\omega}+\om$, fitting the flux relation $\vcSE=\Omega_{\rm F}\vcSJ$ to the first law of thermodynamics, as $\vcSE=\vcSEM+\vcSsd=\Omega_{\rm F}\vcSJ$. This starts from the combination of the kick-off equation (6.2) with the freezing-in condition (6.3a), thereby investing $\om$ with the role of the gravito-electric potential gradient, just as shown above. The ZAMOs traveling round the hole with $\om$ see that the FL-AV is given by $\Omega_{{\rm F}\omega}=\Omega_{\rm F}-\om$. The FL-RV $\vF=\Omega_{{\rm F}\omega}\vp/\al$ in equation (\rf{vp/vt}) shows the existence of not only the two light surfaces S$_{\rm oL}$ and S$_{\rm iL}$ ($\vF=\pm c$), but also the null surface $\SN$ ($\vF=0$) dividing the force-free magnetosphere into the two domains $\calDout$ and $\calDin$ by $\vF \propto \Omega_{{\rm F}\omega}\ggel 0$.
The electric field $\mbox{\boldmath $E$}_{\rm p}$ induced by the motion of the magnetic field lines also changes sign at `{\em this surface}' $\SN$, and the related Poynting fluxes are launched both outward and inward, $\vcSEM\ggel 0$. The vanishing of these quantities, $\Omega_{{\rm F}\omega}=\vF=\mbox{\boldmath $E$}_{\rm p}=\vcSEM=0$, indicates the breakdown of the freezing-in and force-free conditions, i.e.\ $\mbox{\boldmath $v$}=\vcj=0$ and hence $I=\Omega_{{\rm F}\omega}=0$ as well. This seems to indicate that the violation of the iso-rotation of field lines and the resultant breakdown of the freezing-in and force-free conditions at the null surface $\SN$ are inevitable results of a {\em reductio ad absurdum} applied to the force-free model of the BH magnetosphere, due to the general-relativistic effect of the dragging of inertial frames.
\subsection{The functions of the magnetized ZAM-Gap $\GN$} \lbs{func-Gap}
The `freezing-in $+$ force-free' theory for a pulsar magnetosphere possesses only two integral functions of $\Psi$, i.e.\ $\Omega_{\rm F}(\Psi)$ and $I(\Psi)$, but this is no longer the case in the general-relativistic setting, because the event horizon is decisively different from the classical neutron-star surface: the horizon does not allow the existence of particle and current sources. Instead, the breakdown of the freezing-in condition at the null surface $\SN$ with $\mbox{\boldmath $v$}\ggel 0$ leads to a severance of streamlines, thereby requiring a particle source and hence a mechanism of particle production at work there, and the breakdown of the force-free condition leads to a severance of current lines as well, similarly requiring a current source and electromotive forces driving the currents. The resolution of the {\em absurdum} must include the construction of a pair of unipolar induction batteries and their EMFs for the current circuits in the two domains $\calDout$ and $\calDin$. Then the difference of the two EMFs will give birth to the voltage drop needed for steady and lasting pair production between the two batteries and domains, i.e.\ $\Dl V=\calEout-\calEin= -(\OmH/2\pi c)\Dl\Psi$. The breakdown of the force-free condition $\PN{I}=0$ will lead to a ZAM-Gap with $I(\Omega_{{\rm F}\omega},\Psi)=0$ within $|\Omega_{{\rm F}\omega}|\lo \Dlom$. The pressure of pair-created particles will widen the null surface $\SN$ at the force-free limit to the pair-creation Gap $\GN$, which circulates with $\Omega_{\rm F}=\omN$, determined as the eigenvalue by the `boundary condition' requiring angular-momentum conservation across the Gap $\GN$ in the ZAM-state, from which the Poynting fluxes are launched both outward and inward. The pinning-down of the threading magnetic field lines on the pair-created plasma, and conversely the magnetization of the plasma, will ensure $\Omega_{\rm F}=\omN$, with the Gap $\GN$ thereby frame-dragged into rotation around the hole with $\omN=\Omega_{\rm F}$. The `boundary condition' $\PG{I}=\DG{I}=0$ will then be guaranteed, determining the eigenvalue $\Omega_{\rm F}=\omN$.
A somewhat puzzling point here is that the `continuous' variation of $\Omega_{{\rm F}\omega}(\ell,\Psi)$ across $\SN$ between the two force-free domains has produced a discontinuous voltage drop $\Dl V= -\DN{\calE}$ between the two EMFs for the two circuits in the outer and inner domains (see Figure \rff{DC-C}), which can be reproduced by the `steep' discontinuities $\DN{\overline{\OmFm}}=\OmH$ and $\DN{\overline{\mbox{\boldmath $E$}}_{\rm p}} \propto \OmH$. The FD effect certainly gives rise to the ZAMO-FLAV $\Omega_{{\rm F}\omega}$ varying continuously along each field line, thereby not only producing the null surface $\SN$ with the breakdown of the freezing-in and force-free conditions (\rf{ff-fz}a,b) and of the `degeneracy' in equation (\rf{deg}), but also splitting the `total' energy flux $\vcSE$ into the two fluxes $\vcSEM$ and $\vcSsd$, which are continuous functions of the `coordinates' $\Omega_{{\rm F}\omega}$ and $\om$. The degeneracy in equation (\rf{deg}) indicates that current-field-stream-lines are equipotentials in the force-free domains, with no place for depositing or dissipating energy-angular momentum anywhere except at $\SN$ as the particle-current source. This situation appears to allow us to conjecture that $\om$ and $\Omega_{{\rm F}\omega}$ in the force-free domains are respectively replaceable with $\bar{\om}$ and $\overline{\OmFm}$ in the electric field and the energy fluxes. The ZAMO-FLAV $\overline{\OmFm}$ will satisfy the {\em iso-rotation} law in each domain, $(\overline{\OmFm})_{\rm (out)}= \Omega_{\rm F}$ and $(\overline{\OmFm})^{\rm (in)}=-(\OmH- \Omega_{\rm F})$ in equation (\rf{OLOmFm}). In short, the Kerr hole does not seem to care whether the spin-down energy is accommodated to the Poynting fluxes in the force-free domains continuously or discontinuously, as long as the `actual' energy flux $\vcSE=\Omega_{\rm F}\vcSJ$ is conserved along each current-field-streamline.
\subsection{The twin-pulsar model} \lbs{TW-P-M}
It appears that the `outer wall' of $\SN$ or $\GN$ behaves like the `surface' of a rapidly spinning NS with $(\overline{\OmFm})_{\rm (out)}=\Omega_{\rm F}>0$, and the `inner wall' behaves like that of a similarly rapidly but reversely spinning one with $(\overline{\OmFm})^{\rm (in)}=-(\OmH-\Omega_{\rm F})<0$; each half of the two {\em virtual stars} is attached at the null surface $\SN$. The null surface $\SN$ may thus be regarded as a new type of {\em \RTDy} of the spin rates of the {\em virtual} magnetic rotators for the outer pulsar-type wind and the inner anti-pulsar-type wind, coexistent back to back in between. By the `boundary condition' for $\Omega_{\rm F}$ (see section \rfs{BC-SN}), the rates of angular momentum transport must be equal across the null surface $\SN$, as $\PN{I}=\DN{I}=0$ shown in equations (\rf{SGa}) and (\rf{DN/SN}). Importantly, $\PN{\overline{\OmFm}}=0$ and $\DN{\overline{\OmFm}}=\OmH$ in equation (\rf{DN-OmFm}) are the key relations leading to a comprehension of the present twin-pulsar model in terms of the rotational-tangential discontinuity. The null surface $\SN$, infinitely thin in the force-free limit, will develop into the non-force-free magnetized Gap $\GN$ of finite half-width $\Dlom$, filled with \zamp s of finite inertia, pair-produced with the voltage drop $\Dl V$ (section \rfs{m-mdGap}).
The {\em widening} from $\SN$ to a Gap ${\cal G}_{\rm N}$ with a finite $\Dlell$ or $\Dlom$ will be the realization of a kind of relaxation of the \RTDy\ due to, e.g., pair-particle production by the voltage drop $\Dl V=\DN{\calE} \propto \DN{\overline{\OmFm}} = \OmH$. Then one of the important questions remaining will be how to determine the Gap width $\Dlell$ or $\Dlom$ in terms of $\OmH$, the magnetic flux threading the Gap $\GN$, the details of pair production, etc. This new type of \RTDy\ in the general-relativistic setting is distinctly different from ordinary tangential or rotational discontinuities in classical magnetohydrodynamics (see e.g.\ \S\,70 of \cite{lan84}). Also, the present gap model with a pair of batteries and a strong voltage drop is fundamentally different from any existing pulsar outer-gap models deduced from the \CS (v,vi)-statements in section \rfs{static} for a charge-starved magnetosphere \citep{oka15a}. It seems that ``a mechanism directly analogous to \cite{gj69}'' (see the \CS (iii)-statement) is certainly applicable to the outer domain $\calDout$, but it is ``a mechanism {\em anti}-analogous to \cite{gj69}'' that is applicable to the inner domain $\calDin$. It is the ZAM-Gap $\GN$ covered by the inductive membrane $\SN$, where the force-free condition breaks down, i.e.\ $\PG{I} = 0$ in $|\Omega_{{\rm F}\omega}|\lo \Dlom$, that allows us to partition the force-free magnetosphere into the two domains with oppositely directed winds and Poynting fluxes, i.e.\ $\mbox{\boldmath $v$}\ggel 0$ and $\vcSEM\ggel 0$ for $\Omega_{{\rm F}\omega}\ggel 0$.
\subsection{Concluding remarks}
In this paper, we have attempted to unify pulsar electrodynamics with black-hole thermodynamics into black-hole gravito-thermo-electrodynamics, by coupling the dragging of inertial frames with unipolar induction. The fundamental concepts and expressions, as well as the basics of the $3+1$ formulation, had been given in almost complete form by \cite{bla77,phi83b,mac82,tho86} more than three decades ago. The theory of GTED must naturally contain the correct procedure of combining pulsar electrodynamics with the {\em additional} physics needed in the BZ process \citep{pun90}, i.e., BH thermodynamics. It is the dragging of inertial frames, despite the presence of the event horizon, that enables the Kerr hole to manipulate its magnetosphere to radiate the Poynting flux with the help of the first three laws of thermodynamics. We no longer need to rely on the concept of ``{\em magnetic field lines threading and anchored in the horizon}.'' We may instead consider ``poloidal magnetic field lines not only threading but also pinned down in the ZAM-Gap circulating round the hole with $\omN=\Omega_{\rm F}$, dragged by the hole's rotation.'' The resulting twin-pulsar model, based on {\em a modified} BZ process with {\em an extended} Membrane Paradigm, will be a natural outcome of the unification of pulsar electrodynamics with BH thermodynamics. It took three decades of trial and error \citep{oka92,oka06,oka09,oka12a,oka15a}, but the secrets of the physics of the magnetized ZAM-Gap, its structure, particle production therein, etc., are still awaiting to be unveiled as one of the ultimate central engines in the universe \citep{oka15b}.\footnote{The referee of \cite{oka92} asked the author to comment on the question of causality then arising about the BZ process.
Of course he could not devise a reasonable reply at that time (see \cite{oka92}). It unfortunately took more than three decades to find a good chemistry among relativity, thermodynamics, and electrodynamics, but this might fortunately have been done anyway before everything has been forgotten owing to dementia.} If the observed large-scale high-energy $\gamma$-ray jets from AGNs really originate from quite near the event horizon of the central super-massive BH, it appears plausible that these jets are a magnificent manifestation of the collaboration of relativity, thermodynamics and electrodynamics; more precisely speaking, of the frame-dragging effect, the first and second laws, and unipolar induction. The heart of the black-hole central engine may lie in the Gap $\GN$ between the two, outer and inner, domains, fairly above the horizon, and the embryo of a jet will be born in the Gap $\GN$ under the null surface $\SN$ in the range $2^{1/3}\rH \lo r_{\rm N} \lo 1.6433\rH$ quite near the horizon (see section \rfs{SNshapAp}). The confirmation of this postulate awaits a further illumination of Gap physics. Some of the key questions left to solve may include: \\ ({\bf a}) When the gravito-electric potential gradient of the hole $\OmH$ and the strength of $\mbox{\boldmath $B$}_{\rm p}$ in the Gap are given, e.g., from observations, how efficiently does the voltage drop due to the EMFs, $\Dl V=\DG{\calE}$, contribute to particle production? The pinning-down of the poloidal field $\mbox{\boldmath $B$}_{\rm p}$ by the \zamp s pair-produced in the Gap will yield complete magnetization of the plasma, ensuring that the field lines possess $\Omega_{\rm F}=\omN$. Then, how large a density of \zamp s is needed for the pinning-down and magnetization to ensure $\Omega_{\rm F}=\omN$? And how large must the Gap width $\Dlom=|\partial\om/\partial\ell| \Dlell$ be to provide neutral plasma particles with $\vre\approx 0$, which are charge-separated into the outflow and inflow in the force-free domains? ({\bf b}) While the force-free domains $\calDout$ and $\calDin$ are filled with `massless' particles, the Gap in between will be regarded as the extreme opposite, i.e., non-force-free. That is to say, pair-produced \zamp s will acquire a rest-mass energy probably larger than the field energy, so as to be able to anchor the poloidal field $\mbox{\boldmath $B$}_{\rm p}$ and ensure $\Omega_{\rm F}=\omN$. That being so, if the \zamp s are massive, they may not be free from the hole's gravitational and tidal forces. In that case, how can the \zamp s in the Gap be sustained against gravity from the hole? Or can we neglect the gravitational force of the hole on the \zamp s dragged by the hole's rotation into circulation around it? \acknowledgments The authors thank Professor Kip Thorne for strong encouragement to continue this research. I. O.\ is grateful to T.\ Uchida for useful discussions and to O.\ Kaburaki, the joint works with whom helped greatly in deepening his understanding of black hole thermodynamics. Y. S.\ thanks the National Astronomical Observatory of Japan for hospitality during his visits.
\section{Introduction} Even for weakly interacting particles, a full many-body treatment of Bose-Einstein condensates (BEC) is only possible for a small number $N$ of particles. Most often a mean-field approximation is used, which describes the system quite well for large $N$ at low temperatures. In this mean-field approach, the bosonic field operators are replaced by c-numbers, the condensate wavefunctions. This constitutes a classicalization, and therefore the result of the mean-field approximation, the Gross-Pitaevskii equation (GPE), is often denoted as `classical', despite the fact that the GPE is manifestly quantum, i.e.~it reduces to the usual linear Schr\"odinger equation for vanishing interparticle interaction. Therefore, in order to avoid misunderstanding, this inversion of the second quantization may be called a second classicalization. In a number of recent papers, consequences of the classical nature of the mean-field approximation are discussed and semiclassical aspects are introduced. For a two-mode Bose-Hubbard model, Anglin and Vardi \cite{Vard01b,Angl01} consider equations of motion which go beyond the standard mean-field theory by including higher terms in the Heisenberg equations of motion. The classical-quantum correspondence has been studied in terms of phase space (Husimi) distributions \cite{Mahm05} for such systems. Mossmann and Jung \cite{Moss06} demonstrate for a three-mode Bose-Hubbard model that the organization of the $N$-particle eigenstates follows the underlying classical, i.e.~mean-field, dynamics. A generalized Landau-Zener formula for the mean-field description of interacting BECs in a two-mode system has been derived by studying the many particle system \cite{06zener_bec}. In \cite{Wu06} the commutability between the classical and the adiabatic limit for the same system is studied and first steps towards a semiclassical treatment of the problem are reported. The purpose of the present paper is to show that the mean-field model is not only capable of approximating the interacting $N$-particle system in the limit of large $N$ and of allowing for an interpretation of the organization of the $N$-particle eigenvalues and eigenstates, but can also be used to reconstruct approximately the individual eigenvalues and eigenstates in a semiclassical Bohr-Sommerfeld (or EBK) manner with astounding accuracy, even for a small number of particles. This will be demonstrated for $N$ bosonic particles in a two-mode system, a many-particle Bose-Hubbard Hamiltonian, describing for example the low-energy dynamics of a BEC in a (possibly asymmetric) double-well potential. This system is related to a classical non-rigid pendulum in the mean-field approximation (see, e.g., \cite{Mahm05} and references therein). Both the many particle model and its classical version -- which is often denoted as the discrete self-trapping equation for reasons which will become obvious in the following -- have been studied for decades in different contexts (see \cite{semiMP_Bem2}). A detailed semiclassical analysis is, however, missing up to now. Previous work in this direction concentrated on the symmetric case, where the permutational symmetry of the system with respect to an interchange of the two modes simplifies the analysis.
Semiclassical expressions for the tunneling splittings of the eigenvalues have been derived \cite{Enz86,Scha87} in the context of the spin system in eqn.~(\ref{BH-hamiltonian-SR}) below (see also \cite{Bern90,Gara91} for a perturbative treatment of the splittings and \cite{Fran01} for a detailed analysis of the quantum spectrum). In the following we will first give a short overview of the many particle description of the model, the celebrated mean-field approximation and their correspondence. Afterwards we focus on the question to what extent many particle properties can be extracted from the mean-field system by an inversion of this `classical' approximation in a semiclassical way using the EBK-quantization method \cite{Brac97}.
\section{Two-mode Bose-Hubbard model and mean-field approximation}
In a two-mode approximation at low temperatures, a BEC in a double-well potential can be described by a second quantized many particle Hamiltonian of Bose-Hubbard type:
\begin{equation} \hat H= \varepsilon (\hat n_1- \hat n_2) + v(\hat a_1^\dagger \hat a_2 + \hat a_2^\dagger \hat a_1) +g\left(\hat n_1^2+\hat n_2^2 \right)\,. \label{BH-hamiltonian} \end{equation}
Here $\hat a_j$, $\hat a_j^\dagger$ are bosonic particle annihilation and creation operators for the $j$th mode and $\hat n_j = \hat a_j^\dagger\hat a_j$ are the mode number operators. The mode energies are $\pm\varepsilon$, $v$ is the coupling constant and $g$ is the strength of the onsite interaction. In order to simplify the discussion, we assume here that $v$ is positive and $g$ is negative \cite{semiMP_Bem1}. The Hamiltonian (\ref{BH-hamiltonian}) commutes with the total number operator $\hat N=\hat n_1 + \hat n_2$ and the number $N$ of particles, the eigenvalue of $\hat N$, is conserved. For a given value of $N$, we have $N+1$ eigenvalues of the Hamiltonian (\ref{BH-hamiltonian}). Alternatively, the system can be described in the Schwinger representation by a transformation to the angular momentum operators $\hat J_x=( \hat a_1^\dagger \hat a_2+\hat a_2^\dagger \hat a_1)/2$\,, $\hat J_y=( \hat a_1^\dagger \hat a_2-\hat a_2^\dagger \hat a_1)/2{ \rm i }$\,, $\hat J_z=( \hat a_1^\dagger \hat a_1-\hat a_2^\dagger \hat a_2)/2$\,. The Hamiltonian (\ref{BH-hamiltonian}) then takes the form
\begin{equation} \hat H=2\varepsilon\hat J_z+2v\hat J_x+2g(\hat J_z^2+N^2/4), \label{BH-hamiltonian-SR} \end{equation}
where the total angular momentum is $J=N/2$.
\begin{figure}[t] \centering \includegraphics[width=6.6cm]{phasenrcklein.eps} \includegraphics[width=6.6cm]{phasenrcgross.eps} \caption{\label{fig-phasespace} (Color online) Phase space portrait of the mean-field Hamiltonian ${\cal H}(p,q)$ in (\ref{klHam}) for $v=1$ and $\varepsilon=-0.5$ in the subcritical ($g=-1/N_s$, top) and supercritical ($g=-3/N_s$, bottom) region for $N=10$ and $\hbar=1$.} \end{figure}
The celebrated mean-field description can be most easily formulated as a replacement of the operators by c-numbers, $\hat a_j\rightarrow \psi_j$, $\hat a_j^\dagger \rightarrow \psi_j^*$. Since the c-numbers commute, in contrast to the quantum mechanical operators, the transition quantum $\to$ classical and vice versa is not one-to-one. To avoid ambiguities one has to replace symmetrized products of the operators by the corresponding products of c-numbers. Therefore we will start on the many particle side with a symmetrized Bose-Hubbard Hamiltonian in the following, where the $\hat n_j$ are replaced by $ \hat n^s_j=(\hat a_j^\dagger\hat a_j+\hat a_j\hat a_j^\dagger)/2$ (see also \cite{Moss06}).
This symmetrization affects only the nonlinear term in (\ref{BH-hamiltonian}) and the symmetrized $\hat H$ is related to (\ref{BH-hamiltonian}) by an additive constant term depending only on $\hat N$. Note that the number operator $\hat N=\hat n_1 +\hat n_2= \hat n^s_1+ \hat n^s_2 -\hat 1$ is thus replaced by $\vert \psi_1\vert^2+\vert\psi_2\vert^2-1$ and therefore the mean-field wavefunction is normalized as $\vert \psi_1\vert^2+\vert\psi_2\vert^2=N+1=N_s$. The mean-field time evolution is given by the two-level nonlinear Schr\"odinger equation, or GPE,
\begin{equation}\label{2stGPE} { \rm i } \hbar \,\frac{{ \rm d }}{{ \rm d } t} \begin{pmatrix} \psi_{1} \\ \psi_2 \end{pmatrix}= \begin{pmatrix} \varepsilon + 2g |\psi_1|^2 & v \\ v & -\varepsilon + 2g |\psi_2|^2 \end{pmatrix} \begin{pmatrix} \psi_{1} \\ \psi_2 \end{pmatrix}\,, \end{equation}
where $\psi_1$ and $\psi_2$ are the amplitudes of the two condensate modes. The Schr\"odinger equation, linear or nonlinear, has the canonical structure of classical dynamics \cite{Ablo04,Fadd87,Hesl85}: the time dependence of the complex valued mean-field amplitudes can be written as canonical equations of motion
\begin{equation}\label{can-eqn-psi} { \rm i }\hbar\frac{{ \rm d } \psi_j}{{ \rm d } t}=\frac{\partial {\cal H}}{\partial \psi_j^*}\quad \text{and}\quad { \rm i }\hbar\frac{{ \rm d } \psi_j^*}{{ \rm d } t}=-\frac{\partial {\cal H}}{\partial \psi_j} \end{equation}
with the Hamiltonian function ${\cal H}=\epsilon(|\psi_1|^2-|\psi_2|^2)+v(\psi_1^*\psi_2+\psi_1\psi_2^*)+g(|\psi_1|^4+|\psi_2|^4)$. The conservation of the particle number allows a reduction of the dynamics to an effectively one-dimensional Hamiltonian evolution by an amplitude-phase decomposition $\psi_j=\sqrt{n_j+1/2}\,{ \rm e }^{{ \rm i } q_j}$ in terms of the canonical coordinate $q=(q_1-q_2)/2$, an angle, and the (angular) momentum $p=(n_1-n_2)\hbar$, with the Hamiltonian function
\begin{equation}\label{klHam} {\cal H}(p,q) = \varepsilon \,\frac{p}{\hbar}+v\sqrt{N_s^2-\frac{p^2}{\hbar^2}}\,\cos(2q) +\frac{g}{2}\big(N_s^2+\frac{p^2}{\hbar^2}\big)\,, \end{equation}
where $N_s$ is the normalization of $\psi$. In the new variables, the canonical equations of motion~(\ref{can-eqn-psi}) take their usual form $\dot{q}=\partial {\cal H}/\partial p$ and $\dot{p}=-\partial {\cal H}/\partial q$:
\begin{align} \dot{p}=&2v\sqrt{N_s^2-\frac{p^2}{\hbar^2}}\sin(2q)\\ \dot{q}=&\frac{\varepsilon}{\hbar}-v\frac{p}{\hbar^2\sqrt{N_s^2-\frac{p^2}{\hbar^2}}}\,\cos(2q) +g \frac{p}{\hbar^2}. \end{align}
This describes the classical dynamics of a non-rigid pendulum whose phase space is finite, $-N_s\hbar\le p\le N_s\hbar$, $0\le q\le \pi$, if the lines $q=0$ and $q=\pi$ are identified. One of the prominent features of the two-mode system is the self-trapping effect, which leads to the emergence of additional stationary states favoring one of the wells above a critical particle interaction strength. For a discussion of the relation between mean-field and $N$-particle behavior see, e.g., \cite{Aubr96,Milb97} and references therein, and \cite{Holt01a} for its control by external driving fields. The self-trapping transition occurs at $g=-v/N_s$ and is connected to a bifurcation of the stationary states, the fixed points of the Hamiltonian (\ref{klHam}), in the mean-field approximation. Figure \ref{fig-phasespace} shows phase space portraits of ${\cal H}(p,q)$ for sub- and supercritical particle interaction strength.
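Such phase space portraits are straightforward to generate numerically. The following minimal Python sketch (parameters chosen to match the supercritical panel of Figure \ref{fig-phasespace}; grid size and number of contour levels are arbitrary choices) draws the contour lines of ${\cal H}(p,q)$:
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

# Contour lines of the pendulum Hamiltonian H(p,q) of eq. (klHam);
# parameters follow the supercritical panel of the phase-space figure.
hbar, N, v, eps = 1.0, 10, 1.0, -0.5
Ns = N + 1
g = -3.0 / Ns                       # supercritical: two minima and a saddle

q = np.linspace(0.0, np.pi, 201)
p = np.linspace(-0.999 * Ns * hbar, 0.999 * Ns * hbar, 201)
Q, P = np.meshgrid(q, p)
H = (eps * P / hbar
     + v * np.sqrt(Ns**2 - P**2 / hbar**2) * np.cos(2 * Q)
     + 0.5 * g * (Ns**2 + P**2 / hbar**2))
plt.contour(Q, P, H, levels=30)     # closed curves: librations around fixed points
plt.xlabel("q"); plt.ylabel("p")
plt.show()
\end{verbatim}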
In the subcritical region one has a maximum, $E^+$, at $q=0$ and a minimum, $E^-$, at $q=\pi/2$. For the symmetric case $\varepsilon=0$, both are located at $p=0$, which means that the population in both wells is the same. In the supercritical region the minimum bifurcates into two minima, $E^-_\pm$, and a saddle point, $E^-_{\rm saddle}>E^-_\pm$. Even for the case $\varepsilon=0$ the two minima are located at a finite value of the population imbalance. In these stationary states the condensate mainly populates one of the wells, which leads to the name {\it self-trapping}. In phase space, the regions with oscillations around one of the two minima are separated by a separatrix passing through the saddle point. The period of the separatrix motion is infinite.
\begin{figure}[b] \centering \includegraphics[width=6.6cm]{semi_vs_exact_vs_cl_sub_col.eps} \caption{\label{fig-levels_mp_mf-1} (Color online) Many particle energies $E_n$ and mean-field eigenenergies ${\cal H}_{stat}$ (\textcolor{red}{- -}) as a function of the onsite energy $\varepsilon$ in the subcritical region ($g=-0.5/N_s$) for $v=1$ and $N=10$ particles ($\hbar=1$). The spectrum is organized by the mean-field eigenenergies ${\cal H}_{stat}$ (\textcolor{red}{- -}). The exact quantum eigenvalues shown for $\varepsilon\leq 0$ (\textcolor{green}{---}) are almost exactly reproduced by the semiclassical ones, $E_n^{sc}$, shown for $\varepsilon\geq 0$ (\textcolor{blue}{---}).} \end{figure}
\begin{figure}[t] \centering \includegraphics[width=4.2cm]{meanfield_vgl_leveld.eps} \includegraphics[width=4.2cm]{leveld1_normiert.eps} \includegraphics[width=4.2cm]{leveld2_normiert.eps} \includegraphics[width=4.2cm]{leveld3_normiert.eps} \caption{\label{fig-density} Level density $\varrho (E)$ of the many particle system in comparison to the mean-field energies for $N=1500$ particles for $v=1$, $g=-3/N_s$ and $\varepsilon=0.5, 1, 1.5$ ($\hbar=1$).} \end{figure}
\begin{figure}[b] \centering \includegraphics[width=6.6cm]{semi_vs_exact_vs_cl_super_col.eps} \caption{\label{fig-levels_mp_mf-2} (Color online) Many particle energies $E_n$ and mean-field eigenenergies ${\cal H}_{stat}$ (\textcolor{red}{- -}) as a function of the onsite energy $\varepsilon$ in the supercritical region ($g=-3/N_s$) for $v=1$ and $N=10$ particles ($\hbar=1$). The spectrum is organized by the mean-field eigenenergies ${\cal H}_{stat}$ (\textcolor{red}{- -}). The exact quantum eigenvalues shown for $\varepsilon\leq 0$ (\textcolor{green}{---}) are almost exactly reproduced by the semiclassical ones, $E_n^{sc}$, shown for $\varepsilon\geq 0$ (\textcolor{blue}{---}).} \end{figure}
Figure~\ref{fig-levels_mp_mf-1} shows an example of the many particle eigenvalues $E_n$ as a function of $\varepsilon$ for a subcritical interaction strength. The pattern of eigenvalues varies smoothly with $\varepsilon$ and is bounded by the stationary mean-field energies shown as red curves. Because of the symmetry of the spectrum under $\varepsilon \rightarrow - \varepsilon$, the exact spectrum is only shown for $\varepsilon \le 0$, whereas for $\varepsilon \ge 0$ the semiclassical eigenvalues are shown, as discussed below. Figure \ref{fig-levels_mp_mf-2} shows a similar plot in the supercritical region. Here we observe a net of avoided crossings clearly organized by a skeleton provided by the stationary mean-field energies, as reported before by several authors \cite{Kark02,Buon04,06zener_bec}.
Again, for $\varepsilon \ge 0$ the semiclassical eigenvalues are shown, which agree closely with the quantum ones in all details. The mean-field eigenenergies (red curves) show a swallowtail structure which forms a caustic of the multi-particle eigenvalue curves in the limit $N\rightarrow \infty$. To illustrate this issue, one can calculate the level density $\varrho (E)$ (normalized to unity) as a function of the energy \cite{Aubr96}. Figure~\ref{fig-density} shows a histogram of the level density for $N=1500$ particles and different values of $\varepsilon$. The mean-field swallowtail curve between the cusps manifests itself as a peak in the density of the many particle energies. In the limit $N\to\infty$ the density $\varrho (E)$ approaches a smooth curve and the peak develops into a singularity. At the positions of the other mean-field eigenenergies one observes finite steps. Indeed the quantum level densities shown in Fig.~\ref{fig-density} for a large value of $N$ are directly related to the classical period $T$ of the motion by $T={ \rm d } S/{ \rm d } E$, where $S$ is the classical action, which we will focus on in more detail in the following. The height of the steps in the density plots is simply given by the period of harmonic oscillation in the vicinity of the extrema, and the singularity corresponds to the separatrix motion.
\section{Semiclassical quantization} \subsection{The classical action}
The most important ingredient of a semiclassical quantization is the action $S(E)$, i.e.~the phase space area enclosed by the directed curve ${\cal H}(p,q)=E$. The action $S(E)$ increases with $E$ from zero at the minimum energy of ${\cal H}(p,q)$ to $2\pi N_s\hbar$, the total available phase space area, at the maximum energy of ${\cal H}(p,q)$. For the generalized pendulum Hamiltonian (\ref{klHam}), one can express the position variable $q$ uniquely as a function of $p$ and $E$ and write down the action in momentum space in the form $S(E)=\oint q(p,E)\,{ \rm d } p$. It is convenient \cite{Brau93,Haak01} to introduce two functions $U_+(p)={\cal H}(p,0)$ and $U_-(p)={\cal H}(p,\pi/2)$, which join smoothly at $p=\pm \hbar N_s$ and act as a potential for the variable $p$. The classically allowed energy region is given by $U_-(p)\le E\le U_+(p)$, as illustrated in Fig.~\ref{fig-Upm} for the sub- and supercritical regions. For a given energy $E$ the classical turning points $p_\pm$ (with $p_- \le p_+$) are determined by $U_-(p_\pm)=E$ or $U_+(p_\pm)=E$. One has to distinguish three basic types of motion and, with $\widetilde S=\int_{p_-}^{p_+}q(p,E)\,{ \rm d } p$, we find: \begin{enumerate} \item Orbits encircling a minimum of ${\cal H}(p,q)$. The classical turning points both lie on $U_-$ and we have $S(E)=\pi(p_{+}-p_{-})-2\widetilde S$. \item Orbits encircling a maximum of ${\cal H}(p,q)$. The classical turning points both lie on $U_+$ and we have $S(E)=2\pi N_s\hbar-2\widetilde S$. \item Rotor orbits extending over all angles $q$. We can find $p_-$ on $U_+$ and $p_+$ on $U_-$ with $S(E)=\pi(N_s\hbar+p_{+})-2\widetilde S$, or $p_-$ on $U_-$ and $p_+$ on $U_+$ with $S(E)=\pi(N_s\hbar-p_{-})-2\widetilde S$.
\end{enumerate}
\begin{figure}[htb] \centering \includegraphics[width=6.6cm]{Potentialkurven_unterkr.eps} \includegraphics[width=6.6cm]{Potentialkurven_ueberkr.eps} \caption{\label{fig-Upm} (Color online) The potentials $U_-(p)$ (\textcolor{blue}{---}) and $U_+(p)$ (\textcolor{red}{- -}), for $N=10$ particles, $v=1$ and $\varepsilon=0.6$ in the subcritical ($g=-0.6/N_s$, top) and supercritical ($g=-4/N_s$, bottom) region, with $\hbar=1$.} \end{figure}
\subsection{Energy quantization}
In the case of a single classically accessible region, where there are two real turning points for any energy $E$, the semiclassical quantization condition is given by
\begin{equation} S(E)=h (n+{\textstyle \frac12})\ ,\quad n=0,\,1,\,\ldots, N \,.\label{sem-quant1} \end{equation}
This simple case is always met in the subcritical region $|g|<v/N_s$. A numerical solution of (\ref{sem-quant1}) determines the semiclassical energies $E_n$, $n=0,\ldots, N$, where the total available phase space area, $0\le S(E)\le hN_s$, restricts the number of semiclassical eigenvalues to $N_s$, exactly as for the quantum ones. The resulting semiclassical eigenvalues shown in Fig.~\ref{fig-levels_mp_mf-1} for $N=10$ particles ($g=-0.5/N_s$, $v=1$, $\hbar=1$) are in excellent agreement with the exact quantum ones. It should be pointed out that in the noninteracting case, $g=0$, the action $S(E)$ is a linear function of the energy $E$, and the semiclassical eigenvalues agree with the exact ones,
\begin{equation} E_n=\sqrt{\varepsilon^2+v^2}(2n-N)\ ,\quad n=0,\,1,\,\ldots, N \,. \end{equation}
This can be easily understood by recognizing that in this case the Hamiltonian (\ref{BH-hamiltonian}) describes nothing but a system of two coupled harmonic oscillators, which can be transformed to two uncoupled ones by introducing normal coordinates. It may also be of interest to note that (for $g=0$ and $N=2$ or $3$) the classical analog (\ref{klHam}) of the quantum Hamiltonian (\ref{BH-hamiltonian}) was suggested many years ago by Miller and coworkers and applied in a semiclassical description of electronic transitions in atom-molecule collisions \cite{Mill78,Meye79}. The supercritical region $|g|>v/N_s$ is more complicated. Here the energy surface has two minima, hence the potential function $U_-(p)$ has two minima as well, separated by a potential barrier. In this case one has to distinguish different regions of the energy. For energies below the upper minimum (region I in Fig.~\ref{fig-Upm}), the quantization can be carried out as in the subcritical case by equation (\ref{sem-quant1}). For energies between the upper minimum and the barrier $E_{\rm barr}$ (regions II in Fig.~\ref{fig-Upm}), there are four real turning points $p^{(l)}_{-}<p^{(l)}_{+}<p^{(r)}_{-}<p^{(r)}_{+}$. In this case one has to take tunneling through the barrier into account. The semiclassical quantization condition is then given by a more elaborate expression \cite{Chil74} (see also \cite{Froe72,Chil91}):
\begin{equation} \sqrt{1+\kappa^2}\,\cos (S_l+S_{r}-S_\phi)=-\kappa \,\cos (S_r-S_{l}+S_\theta)\label{sem-quant2} \end{equation}
where $S_l$ and $S_{r}$ are the action integrals in the left and right regions in Fig.~\ref{fig-Upm}, respectively (note that also here one has to distinguish the different cases 1 and 3 above).
The term \begin{equation} \kappa ={ \rm e }^{-\pi S_\epsilon}\ ,\ S_\epsilon=\frac{1}{\pi}\int_{p_+^{(l)}}^{p_-^{(r)}}|q(p,E)|\,{ \rm d } p \end{equation} accounts for tunneling through the barrier, \begin{equation} S_\phi=\arg \Gamma({\textstyle \frac12}+{ \rm i } S_\epsilon)-S_\epsilon\,\log |S_\epsilon| +S_\epsilon \end{equation} is a phase correction, and $S_\theta=0$ below the barrier. Deep below the barrier, tunneling can be neglected and the simple semiclassical single well quantization is recovered (see also \cite{Wu06}). Above the barrier, the inner turning points $p_+^{(l)}$, $p_-^{(r)}$ turn into a complex conjugate pair and different continuations of the semiclassical quantization have been suggested \cite{Chil74,Froe72,Chil91}. Following \cite{Chil74} we replace these turning points by the momentum at the barrier $p_{\rm barr}$ in the formulas for $S_{l,r}$, modify the tunneling integral $S_{\epsilon}$ as \begin{equation} S_\epsilon=\frac{{ \rm i }}{2}(p_-^{(r)}-p_+^{(l)})-\frac{{ \rm i }}{\pi}\int_{p_+^{(l)}}^{p_-^{(r)}}q(p,E)\,{ \rm d } p \end{equation} and introduce a non-vanishing action integral \begin{equation} S_{\theta}=\int_{p_+^{(l)}}^{p_{\rm barr}}q(p,E)\,{ \rm d } p + \int_{p_-^{(r)}}^{p_{\rm barr}}q(p,E)\,{ \rm d } p. \end{equation} The combined semiclassical approximation is continuous as the energy varies across the barrier (from region II to III in Fig.~\ref{fig-Upm}) and continuously approaches the simple version with only two turning points $p_-^{(l)}$ and $p_+^{(r)}$ in region III high above the barrier. Figure \ref{fig-levels_mp_mf-2} shows the semiclassical many particle energy eigenvalues as a function of the parameter $\varepsilon$ in the supercritical region for $N=10$ particles ($g=-3/N_s$, $v=1$, $\hbar=1$). Here, too, one observes almost precise agreement with the exact eigenvalues and the net of avoided level crossings in all details. In particular, the level distances at the avoided crossings are reproduced and furthermore allow a direct semiclassical evaluation. Figure \ref{fig-levels_mp_mf-N2} shows both exact and semiclassical eigenvalues as a function of $\varepsilon$ for a subcritical interaction with only $N=2$ particles. Even for this small particle number the semiclassical eigenvalues approximate the exact ones quite well. The deviation between the semiclassical and exact quantum eigenvalues decreases with increasing particle number $N$. For a more quantitative comparison, the exact and semiclassical eigenvalues are listed in Table~\ref{table1} for $N=20$ particles and selected $\varepsilon$-values. Here the relative error is only about $5\cdot10^{-4}$.
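Numerical results of this kind are straightforward to reproduce in the subcritical case. The following Python sketch solves the single-well condition (\ref{sem-quant1}) by computing $S(E)$ as the phase space area of $\{{\cal H}(p,q)\le E\}$, which grows monotonically from $0$ to $hN_s$ as described above, and then inverting $S(E_n)=h(n+\frac12)$ by bisection. Since equation (\ref{klHam}) is not reproduced in this section, the explicit generalized pendulum form ${\cal H}(p,q)=\varepsilon p+\frac{g}{2}(N_s^2+p^2)+v\sqrt{N_s^2-p^2}\,\cos 2q$ with $\hbar=1$, and the reading $N_s=N+1$, are assumptions of the illustration (chosen for consistency with the potentials $U_\pm$ above and the classical distribution $w^{cl}$ below), not a quotation of the paper's definitions.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

# Assumed generalized pendulum form (hbar = 1):
#   H(p,q) = eps*p + (g/2)*(Ns^2 + p^2) + v*sqrt(Ns^2 - p^2)*cos(2q),
# so that U_-(p) = H(p, pi/2) and U_+(p) = H(p, 0).
N, v, eps = 10, 1.0, 0.5
Ns = N + 1                    # assumption: Ns = N + 1 (the number of states)
g = -0.5 / Ns                 # subcritical interaction
h = 2.0 * np.pi

def U(p, sign):
    return eps*p + 0.5*g*(Ns**2 + p**2) + sign*v*np.sqrt(max(Ns**2 - p**2, 0.0))

def q_of(p, E):               # half-angle q(p,E) from inverting H(p,q) = E
    c = (E - eps*p - 0.5*g*(Ns**2 + p**2)) / (v*np.sqrt(Ns**2 - p**2))
    return 0.5*np.arccos(np.clip(c, -1.0, 1.0))

def m_of(p, E):               # q-measure of {q in [0,pi): H(p,q) <= E}
    if E <= U(p, -1.0):
        return 0.0
    if E >= U(p, +1.0):
        return np.pi
    return np.pi - 2.0*q_of(p, E)

def action(E):                # S(E) = area of {H <= E}, rising from 0 to h*Ns
    return quad(lambda p: m_of(p, E), -Ns, Ns, limit=200)[0]

Emin = minimize_scalar(lambda p: U(p, -1.0), bounds=(-Ns, Ns),
                       method='bounded').fun
Emax = -minimize_scalar(lambda p: -U(p, +1.0), bounds=(-Ns, Ns),
                        method='bounded').fun
for n in range(N + 1):        # solve S(E_n) = h*(n + 1/2) by bisection
    En = brentq(lambda E: action(E) - h*(n + 0.5), Emin + 1e-9, Emax - 1e-9)
    print(f"n = {n:2d}   E_n^sc = {En:9.4f}")
\end{verbatim}
For orbits around the minimum this area reduces exactly to the expression $S(E)=\pi(p_+-p_-)-2\widetilde S$ of case 1 above, and it continues monotonically through the rotor regime, so a single bisection covers the whole subcritical spectrum.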
\begin{figure}[htb] \centering \includegraphics[width=6.6cm]{semi_vs_exact_vs_cl_sub_N2.eps} \caption{\label{fig-levels_mp_mf-N2} (Color online) Exact (- -) many particle energies $E_n$ and semiclassical (\textcolor{red}{---}) energies $E_n^{sc}$ as a function of the onsite energy $\varepsilon$ in the subcritical region ($g=-0.9/N_s$) for $v=1$ and $N=2$ particles ($\hbar=1$).} \end{figure} \begin{table}[t] \begin{center} \begin{tabular}{cc|cc|cc|cc} \multicolumn{2}{c|}{$\varepsilon = 0$} &\multicolumn{2}{|c|}{ $\varepsilon = 0.5$} & \multicolumn{2}{|c|}{$\varepsilon = 1.0$} &\multicolumn{2}{|c}{ $\varepsilon = 1.5$}\\ \rule[-2mm]{0mm}{6mm} $E_n$ & $E_n^{sc}$ & $E_n$ & $E_n^{sc}$ & $E_n$ & $E_n^{sc}$ & $E_n$ & $E_n^{sc}$ \\ \hline 12.481 & 12.469 & 11.823 & 11.811 & 9.859 & 9.846 & 6.618 & 6.600 \\ 16.354 & 16.342 & 15.692 & 15.679 & 13.715 & 13.702 & 10.458 & 10.440\\ 20.097 & 20.085 & 19.429 & 19.417 & 17.437 & 17.424 & 14.161 & 14.143\\ 23.707 & 23.695 & 23.032 & 23.020 & 21.021 & 21.008 & 17.722 & 17.703\\ 27.178 & 27.167 & 26.496 & 26.484 & 24.462 & 24.449 & 21.135 & 21.115\\ 30.508 & 30.497 & 29.815 & 29.804 & 27.753 & 27.741 & 24.391 & 24.370\\ 33.690 & 33.679 & 32.985 & 32.974 & 30.888 & 30.875 & 27.481 & 27.458\\ 36.718 & 36.708 & 35.997 & 35.987 & 33.857 & 33.844 & 30.395 & 30.369 \\ 39.585 & 39.575 & 38.845 & 38.835 & 36.648 & 36.635 & 33.115 & 33.085\\ 42.281 & 42.272 & 41.516 & 41.507 & 39.246 & 39.234 & 35.622 & 35.583\\ 44.795 & 44.786 & 43.999 & 43.990 & 41.630 & 41.618 & 37.896 & 37.829\\ 47.111 & 47.104 & 46.273 & 46.265 & 43.758 & 43.745 & 40.090 & 40.070\\ 49.181 & 49.176 & 48.301 & 48.299 & 45.649 & 45.642 & 43.015 & 43.023\\ 51.112 & 51.107 & 50.031 & 50.024 & 46.729 & 46.739 & 46.847 & 46.853\\ 52.193 & 52.192 & 51.406 & 51.406 & 48.952 & 48.979 & 51.439 & 51.443 \\ 54.690 & 54.687 & 52.871 & 52.870 & 52.760 & 52.771 & 56.717 & 56.720\\ 54.783 & 54.781 & 54.680 & 54.678 & 57.413 & 57.419 & 62.641 & 62.643\\ 58.828 & 58.825 & 56.738 & 56.750 & 62.782 & 62.786 & 69.187 & 69.188\\ 58.829 & 58.826 & 61.512 & 61.518 & 68.813 & 68.815 & 76.340 & 76.341\\ 63.766 & 63.763 & 67.009 & 67.013 & 75.475 & 75.477 & 84.090 & 84.091\\ 63.766 & 63.763 & 73.171 & 73.173 & 82.751 & 82.752 & 92.432 & 92.433\\ \end{tabular} \caption{\label{table1} Exact quantum $E_{n=0...20}$ and semiclassical eigenvalues $E_n^{sc}$ for $\varepsilon = 0,\,0.5,\,1.0,\,1.5$ for $v=1$ and $N=20$ particles ($\hbar=1$) in the supercritical region ($g=-3/N_s$).} \end{center} \end{table} \subsection{Eigenfunctions} The mean-field approximation also allows a semiclassical construction of the eigenstates $\hat{H}|\Psi_n\rangle=E_n|\Psi_n\rangle$ of the Bose-Hubbard Hamiltonians (\ref{BH-hamiltonian}) and (\ref{BH-hamiltonian-SR}). In the quantum case, the main interest centers on the population imbalance of these states, i.e. the $p$-representation \begin{equation} |\Psi_n\rangle=\sum_{p=-N:2:N}\Psi_n(p)\,|p\rangle\,, \end{equation} where $p$ runs from $-N$ to $N$ in steps of 2. Based on the (classical) mean-field dynamics, we have to construct a semiclassical approximation in momentum space, which is discussed in some detail in \cite{00mom}. The purely classical momentum probability distribution is easily calculated as $w^{cl}(p)=(2T|\partial{\cal H}/\partial q|)^{-1}$, where $T$ is the period of oscillation. Note that the factor 2 arises from the two classical contributions, i.e.~the direct path and the path once reflected at the opposite turning point.
For our mean-field Hamiltonian (\ref{klHam}) this leads to \begin{equation} \textstyle w^{cl}(p)=C\Big[v^2\big(N_s^2-\frac{p^2}{\hbar^2}\big)- \big(E^{sc}_n-\varepsilon \frac{p}{\hbar}-\frac{g}{2}(N_s^2 +\frac{p^2}{\hbar^2})\big)^2\Big]^{-\frac{1}{2}} \end{equation} in the classically allowed region, where $C=1/2T$ takes care of the normalization. The so-called primitive semiclassical wavefunction includes interference of the two classical paths: \begin{equation} |\Psi_n^{sc}(p)|^2=2\,|w^{cl}(p)|\,\cos ^2\big(S(p)/\hbar- \pi/4\big) \end{equation} where $S(p)=S(p,E^{sc}_n)$ is the classical action for an energy equal to the semiclassical eigenenergy $E^{sc}_n$ of state number $n$, i.e.~the oriented momentum-integral over $q(p)=q(p,E^{sc}_n)$ \begin{equation}\label{eqn-action-wave} S(p)=-\int_{p_-}^p{q(p')\,dp'}+\frac{\pi}{2}\big(p-p_-\big), \end{equation} if $p_-$ lies on the lower potential curve $U_-$ or \begin{equation} S(p)=\int_{p_-}^p{q(p')\,dp'} \end{equation} if $p_-$ lies on $U_+$. In the classically forbidden region $q(p)$ is complex valued and we can use the usual complex continuation \cite{Chil91} \begin{equation} |\Psi_n^{sc}(p)|^2=\frac{1}{2}\left|w^{cl}_n(p)\exp{(-2{ \rm i } S(p)/\hbar)}\right| \end{equation} where \begin{equation} S(p)=\left\lbrace\begin{matrix} \mp \int_p^{p_{-}}{q(p)\,dp},\qquad p<p_-,\ p_-\ \text{on}\ U_\pm\\[2mm] \mp \int_{p_+}^{p}{q(p)\,dp},\qquad p>p_+,\ p_+\ \text{on}\ U_\pm \end{matrix}\right.\,. \end{equation} Note that these distributions should be renormalized to unity. \begin{figure}[htb] \centering \includegraphics[width=6.6cm]{wavefun_0.eps} \caption{\label{fig-wavefun0} (Color online) Momentum ``potentials'' $U_\pm(p)$ and -- plotted at the energy $E_n$ -- exact (blue circles), primitive semiclassical (green) and uniform semiclassical (red crosses) wavefunctions $|\Psi_n(p)|^2$ of the lowest eigenstate $n=0$ for $N=14$ particles ($g=-0.6/N_s$, $\varepsilon=0.6$, $v=1$ and $\hbar=1$). The solid lines are drawn to guide the eye.} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=6.6cm]{wavefun_2.eps} \caption{\label{fig-wavefun2} (Color online) Momentum ``potentials'' $U_\pm(p)$ and -- plotted at the energy $E_n$ -- exact (blue circles), primitive semiclassical (green) and uniform semiclassical (red crosses) wavefunctions $|\Psi_n(p)|^2$ of eigenstate $n=2$ for $N=14$ particles ($g=-0.9/N_s$, $\varepsilon=0$, $v=1$ and $\hbar=1$). The solid lines are drawn to guide the eye.} \end{figure} The divergence at the classical turning points $p_\pm$ is finally cured by a uniform semiclassical approximation (see e.g.~\cite{Chil91}). Here the different turning point scenarios discussed above must be distinguished. In the following we only state the result for the case that $p_\pm$ both lie on the lower potential curve $U_-(p)$. A mapping onto harmonic oscillator wave functions then yields \cite{Chil91} \begin{equation} |\Psi_n^{sc}(p)|^2\sim \left|w^{cl}_n(p)\sqrt{2n+1-\xi^2}\right|\text{H}_n(\xi)^2\exp{(-\xi^2)} \label{uniform} \end{equation} where H$_n$ denotes the Hermite polynomial of $n$th order and $\xi$ is determined by \begin{equation} \frac{1}{2}\xi\,\sqrt{\xi_0^2-\xi^2}+\frac{1}{2}\xi_0^2\Big(\frac{\pi}{2}+ \arcsin{\big(\frac{\xi}{\xi_0}\big)}\Big)=S(p) \end{equation} with $\xi_0=\sqrt{2n+1}$. Up to now, the semiclassical momentum variable $p$ could be treated as continuous in the mean-field approximation, whereas in the quantum system, $p$ is a discrete variable, $p=-N,-N+2,\ldots,+N$.
Semiclassically, this is a consequence of the even symmetry and the $\pi$-periodicity of the mean-field Hamiltonian (\ref{klHam}) in the coordinate $q$. As in Fourier transformation, this allows only even or odd integer values of $p$. The final uniform semiclassical wave functions in momentum space are therefore given by (\ref{uniform}) at $p=-N,\ldots,N$, normalized as $\sum_{p=-N:2:N}|\Psi_n^{sc}(p)|^2=1$. Figures \ref{fig-wavefun0} and \ref{fig-wavefun2} show a comparison of the primitive semiclassical approximation (normalized to fit the central maximum) and the uniform one with exact quantum results, both in the subcritical region for $N=14$ particles. Shown are the ground state $n=0$ for a biased Bose-Hubbard model ($\varepsilon=0.6$) and the second excited state $n=2$ for a symmetric one ($\varepsilon=0$). As expected, the quantum distributions mainly populate the classically allowed region inside the ``potential'' curves $U_\pm(p)$ and are very well approximated by the primitive semiclassical distributions. Moreover, the uniform approximation is almost indistinguishable from the exact values. \section{Conclusion} It is demonstrated for a two-mode Bose-Hubbard model that the mean-field approximation can be used to reconstruct approximately the individual eigenvalues in a semiclassical Bohr-Sommerfeld (or EBK) manner with astounding accuracy even for a small number of particles. The same holds for the primitive semiclassical approximation of corresponding eigenstates, which was shown for the subcritical case. Furthermore, the possibility of a uniform approximation was demonstrated for a special case. For the two-mode Bose-Hubbard system considered here, the classical description provided by the mean-field model has one degree of freedom and is therefore integrable. For three and more modes, the classical dynamics is chaotic (see, e.g., the studies of the three-mode system \cite{Moss06,05level3} or tilted optical lattices \cite{Thom03}). Chaoticity also appears in periodically driven two-mode systems \cite{Holt01a,07kicked} or the related kicked tops \cite{Haak01}. A semiclassical description of the quasienergy spectrum in these cases is a challenge for future studies. Finally it should be noted that the semiclassical analysis used in the present paper is based on well-known results which allow, e.g., a straightforward treatment of tunneling corrections. Basically these theories are, however, valid for a flat phase space. More recent developments directly address semiclassical quantization of spin Hamiltonians with a compact phase space (see, e.g., \cite{Shan80,Garg04,Nova05} and references given there). This research is, however, still in progress and applications to Hamiltonians like (\ref{BH-hamiltonian-SR}) including tunneling corrections will be the topic of future investigations. \begin{acknowledgments} Support from the Deutsche Forschungsgemeinschaft via the Graduiertenkolleg ``Nichtlineare Optik und Ultrakurzzeitphysik'' is gratefully acknowledged. \end{acknowledgments}
\section{Introduction} The Hamiltonian formulation is basic to quantum theory. Despite this, the quantum theory of {\em fields} always begins with a Lagrangian. The main reason for this is, in the words of P. A. M. Dirac \cite{dirac}, ``it is not at all easy to formulate the conditions for a theory to be relativistic in terms of the Hamiltonian''. The conditions for relativistic invariance are satisfied by choosing a Lagrangian function to be a relativistic scalar. This function can be constructed as a scalar by balancing indices of vector, tensor or spinor fields and their four-dimensional derivatives. The purpose of this series of papers is to show how one can directly set up a Hamiltonian formalism for relativistic fields, including fields in arbitrary curved background, without first writing a Lagrangian and then proceeding to the Hamiltonian through the Legendre transformation. A Hamiltonian formalism can be set up in terms of fields and their canonical momenta quite as easily as a Lagrangian is written in terms of fields and their derivatives, provided we treat fields and canonical momenta as differential forms (with values in spaces that characterize them). The canonical momenta in our formalism are differential forms of {\em one degree higher than the fields}. Thus, mathematically, coordinates and their momenta are not quantities of the same type. This fundamental change in the mindset allows us to set up a covariant coordinate-free formalism which is Hamiltonian from the very beginning and does not require a Lagrangian for its definition. Preliminary work in this direction already exists in formalisms variously known as `finite-dimensional covariant formalism' or `multisymplectic' or `polysymplectic' formalism. The basic idea was given by Weyl and de Donder in the so-called `multiple integral problem in the calculus of variations' and was developed by Kastrup, Kanatchikov, Gotay et al., and Rovelli and others. See references and discussion at the end of this section. It is commonly believed that the Hamiltonian formalism singles out time as a special variable and this spoils the relativistic invariance which would have required space and time to be treated on the same footing. This is true if one regards the derivative of fields with respect to the time coordinate as fundamentally different from that with respect to a space coordinate. But if we treat all the four derivatives $\partial_\mu\phi$ of a scalar field $\phi$ as one quantity then it follows that we should allow four components $p_\mu$ of momenta to be associated with this one field variable. We should not pair one coordinate with one momentum degree of freedom. Such a pairing is a peculiarity of the Hamiltonian mechanics based on a single evolution parameter, time, whereas fields extend in space as well as time. The fundamental principle in classical mechanics is that variation of a quantity called action is zero. The laws of nature allow only those configurations of physical variables which achieve an extremum for action. And this requirement of extremum is the classical limit when $\hbar$ is regarded as small. For classical mechanics time can be regarded as a `base manifold' and coordinates and momenta are in the `fibre'. This is the extended phase space. Action is an integrated value $\int \Theta$ of a one-form called the Poincare-Cartan form $\Theta=pdq-Hdt$ on a supposed trajectory in extended phase space. The variation of the trajectory is determined by a vector field of infinitesimal displacements.
The condition that the Lie derivative of the action along the proposed trajectory with respect to the field of variation is zero when the coordinates are fixed at the ends of the trajectory determines the trajectory. The tangent vectors to allowed trajectories determine Hamiltonian vector fields. The variational principle can be reformulated by saying that the Hamiltonian vector fields of allowed trajectories annihilate the differential 2-form $\Omega=-d\Theta$ where $\Theta$ is the Poincare-Cartan (PC) form.\cite{bundle} In order to set up a purely Hamiltonian formalism for fields, we must first try to define a suitable PC-form for fields. The PC-form $\Theta$ has two parts: the so-called fundamental form $pdq$ which governs the geometry of the phase space, and the Hamiltonian part $-Hdt$ which determines the dynamics for the given system. The logic for writing a PC-like form for a scalar field goes like this. In field theory, the field $\phi$ is the configuration variable analogous to $q$. Time and space are four ``time'' variables $t^\mu,\mu=0,1,2,3$. We purposely use the letter $t$ also for space coordinates $t^i,i=1,2,3$ to emphasize this point. We expect the PC-form for fields to be a differential {\em four}-form whose spacetime integral will give the quantity we call action. For a single scalar field, the momenta are related to velocities by $p_\mu=\partial_\mu\phi$. Thus we can keep them together as a 1-form $p=p_\mu dt^\mu$. To get a fundamental 4-form similar in appearance to $pdq$ we need a 3-form (and not a 1-form $p$) to be multiplied by $d\phi$. There is a natural way to produce a 3-form out of a 1-form, namely, by using the metric of the spacetime through the star-dual $*p$. We are led naturally to introduce the following expression for the PC 4-form: \begin{eqnarray} \Theta=(*p)\ww d\phi-H \end{eqnarray} where $H$ is a differential 4-form \begin{eqnarray} H &=& \frac{1}{2}(*p)\ww p+\frac{1}{2}m^2\phi^2(*1)\nonumber\\ &=& \left(-\frac{1}{2}\la p,p\ra+\frac{1}{2}m^2\phi^2\right)(*1). \end{eqnarray} We have used the definition of the star operator relating it to the inner product determined by $g_{\mu\nu}$ which has a signature corresponding to $(-,+,+,+)$. Our convention for the star operator is the same as Sharan\cite{sharan} or Choquet-Bruhat and DeWitt-Morette\cite{AMP}, and is very briefly summarized in Appendix A. Observe that the Hamiltonian 4-form $H$ is defined solely in terms of the field variable $\phi$ and the momenta $p$ (or $*p$). It is a coordinate-independent definition. $H$ is a 4-form and it should not be confused with the Hamiltonian density or the energy density of the usual Lagrangian field theory. (That density is a 3-form which will be seen to be the conserved quantity for time translations in static spacetimes.) A field theory involves infinitely many degrees of freedom. The traditional view is to think of each value $\phi(\vv{x},t)$ for space points $\vv{x}$ on a plane of constant time $t$ as a separate degree of freedom for a scalar field. This is the usual `3+1' Hamiltonian point of view. See Chernoff and Marsden\cite{chernoff} for a rigorous account of Hamiltonian systems of infinitely many degrees of freedom. There is another, more interesting way to look at this. One can regard a solution of the field equations as a section or a surface in the {\em finite} dimensional extended phase space, four of whose coordinates are the spacetime coordinates.
The infinitely many ways in which this surface can be embedded in the extended phase space is a reflection of the infinitely many degrees of freedom of the field system. For our example, the extended phase space for a single scalar field is a nine-dimensional manifold (four spacetime variables $t^\mu$, one field variable $\phi$ and four momentum variables in $p$). A possible configuration of the field (that is, a solution of the field equations) is a four dimensional surface in this nine dimensional space ``above'' the four dimensional spacetime. The fiber bundle picture is helpful because we are interested in `sections' or functions from the spacetime base into the fields and momenta. Mathematically, there may be more general submanifolds or surfaces in the extended phase space but they do not seem to be physically relevant. As mentioned above the mathematical formalism of the present paper is similar to the ``multisymplectic'' Lagrangian approach to field theory in the works of Le Page, as reviewed and developed by Kastrup\cite{kastrup}, the De Donder-Weyl\cite{rund} approach of Kanatchikov\cite{kanat} and the covariant Hamilton-Jacobi formalism of Rovelli\cite{rovelli}. Recent contributions to multisymplectic formalism are by Gotay and collaborators\cite{gotay}. Our approach is different from these because we use the background spacetime metric in an essential way through the Hodge star operator. Also, we treat the spacetime degrees of freedom $t^\mu$ which specify the base differently from the field or momentum degrees of freedom which are in the fibre above the base. We require the PC-form to be a 4-form whose first term is linear in $d\phi$ to imitate the $pdq$ term and the second term is a 4-form $-H$ proportional to the volume form $(*1)$. If, for instance, there are two fields $\phi_1$ and $\phi_2$, a 4-form involving a factor $d\phi_1\ww d\phi_2$ is possible in principle but that does not seem to be allowed in the formalism for matter fields. Similarly other `non-canonical' expressions are possible in place of the standard $pdq-Hdt$ like expression. For gravity, the Einstein-Hilbert PC form does seem to have a non-standard expression as we shall see in a later paper. But gravity is a special case anyway. For gravity the `internal' degrees of freedom in the fibre related to arbitrary choice of local inertial frames and spacetime bases which define the transformation of all field and momenta differential forms happen to coincide. It is natural and tempting to put our formalism in the fibre bundle language, but we avoid that for the sake of clarifying the physical concepts. For the most part we assume the bundle to be a direct product of spacetime and the fibre manifold. Our aim is to develop a purely Hamiltonian approach and define a suitable bracket to help build a quantum theory. The only reliable way to convert a classical theory into a quantum theory is to define a suitable antisymmetric (or symmetric) bracket for observables of the theory which can be re-interpreted in quantum theory as a commutator (or anticommutator). Our phase space has a very different character from the traditional phase space and our coordinates and momenta are differential forms of different degrees. In the traditional formalism the observables are real valued functions on the phase space and the definition of the Poisson bracket uses the pairing of one coordinate with one canonical momentum degree of freedom. But that is special to the one-time formalism of mechanics.
But in mechanics there is another way to look at the Poisson bracket. The bracket $\{B,A\}$ of two observables $A$ and $B$ refers to the rate of change of one observable $B$ when the other observable $A$ acts as the Hamiltonian. In the one-time formalism the rate of change of a quantity is mathematically the same type of quantity as the original quantity. When space {\em and} time are evolution parameters then the rate of change can only mean the rate of change along a vector field. This rate of change is the Lie derivative. Thus we need the Lie derivative of one quantity with respect to the Hamiltonian vector field determined on the phase space by the other quantity. Whereas the Hamiltonian vector field for any observable exists in mechanics, the same may not be so for fields. We find that the concept of a covariant bracket introduced by Peierls\cite{peierls} in 1952 (and promoted extensively by De Witt\cite{dewitt}) is a natural object to use in our Hamiltonian theory of fields. Here the rate of change of one quantity is taken when {\em the other quantity is added to the Hamiltonian as an infinitesimal perturbation} and vice-versa. The Poisson brackets of mechanics can be defined without reference to any Hamiltonian whereas the Peierls bracket requires the existence of a suitable governing Hamiltonian. Roughly speaking, the Poisson bracket can be described as the ``equal time'' Peierls bracket with zero Hamiltonian. This gives us added insight into the Hamiltonian mechanics of the one-time formalism, particularly the concept of causality in systems with time dependent Hamiltonians. The interesting features of the one-time formalism of classical mechanics relating to causality and the Peierls bracket, which are revealed by our formalism of fields, will be published elsewhere. In section II we define the Poincare-Cartan form. We set up the variational principle and Noether's theorem in sections III and IV. We define our observables as smeared 4-forms and their Peierls bracket in section V. Symmetries and conserved quantities are discussed in sections VI and VII and the Hamilton-Jacobi formalism is discussed briefly in section VIII. Notation is summarized in appendix A. A calculation for the solution manifold in section II is outlined in appendix B. \section{Poincare-Cartan form for a scalar field} For fields the extended phase space is a bundle with the four-dimensional spacetime $T$ as base space. We denote the spacetime points by $t=(t^0,t^1,t^2,t^3)\in T$. Let us consider the one-dimensional fibre of 0-forms with coordinate $\phi$ and the four-dimensional fibre of 1-forms whose points are labelled by $p=p_\mu dt^\mu$. We can think of the extended phase space $\Gamma$ to be the base (of spacetime) with a five-dimensional fibre at each point which is a direct sum of 0-forms and 1-forms. We require the momentum canonical to a scalar field $\phi$ to be a 1-form $p=p_\mu dt^\mu$ where the coefficients $p_\mu$ are independent variables. The PC-form on this nine-dimensional extended phase space (with coordinates $t^\mu,\phi,p_\mu$) is chosen as \begin{eqnarray} \Theta=(*p)\ww d\phi- H \end{eqnarray} where $H$ is a 4-form constructed from $p$ and $\phi$. The simplest choice is a Hamiltonian with a `kinetic energy term' and a `mass term': \begin{eqnarray} H &=& \frac{1}{2}(*p)\ww p+\frac{1}{2}m^2\phi^2(*1)\nonumber\\ &=& \left(-\frac{1}{2}\la p,p\ra+\frac{1}{2}m^2\phi^2\right)(*1).
\end{eqnarray} It is necessary to point out here that although our star operator is limited to the four-dimensional spacetime, the exterior derivative works in the nine-dimensional extended phase space. Thus $d\phi$ is linearly independent of $dt^\mu$ and so also independent of $p=p_\mu dt^\mu$. The coefficients $p_\mu$ are independent coordinates. Therefore $dp_\mu$ are linearly independent of $d\phi$ and $dt^\mu$. It is also worth pointing out that the definition of the star operator requires the existence of a set of orthonormal basis fields with a given orientation. This is where gravity sneaks in as a universal field. In the present paper the gravitational field will be fixed as an external field defining the star operator. Dynamics is determined by the 5-form \begin{eqnarray} \Omega = -d\Theta= -(d*p)\ww d\phi + dH, \end{eqnarray} and the variational principle can be stated as follows: \begin{quote} The solution manifold $\sigma$ in the extended phase space is a section whose tangent vectors annihilate $\Omega$. \end{quote} This statement is explained below. The relation of this statement of the variational principle to the usual statement for variation of the action is discussed in the next section. In mechanics we look for phase trajectories. Here, in field theory we look for a four-dimensional image of a {\em section}, that is, a mapping $\sigma$ from the four dimensional base into a 4-dimensional submanifold of the nine-dimensional extended phase space: \begin{eqnarray} \sigma : t=\{t^\mu\} \to \{t^\mu,\phi=F(t), p=G_\mu(t)dt^\mu\}. \end{eqnarray} By abuse of language we will denote the mapping as well as its image of the base by the same letter $\sigma$. The context will make it clear what the symbol corresponds to. $\sigma$ defines a surface or sub-manifold such that if $X_0,X_1,X_2,X_3$ are four linearly independent vectors in the tangent space of this submanifold at any point then the 1-form obtained by the interior product of all these with $\Omega$ should be zero: \begin{eqnarray} i(X_3)i(X_2)i(X_1)i(X_0)\Omega = 0.\end{eqnarray} Recall that the interior product of a vector field $X$ with an $r$-form $\alpha$ is defined as the $(r-1)$-form $i(X)\alpha$ so that $i(X)\alpha(Y_1,...,Y_{r-1})=\alpha(X,Y_1,...,Y_{r-1})$. Depending on typographical convenience we shall denote the interior product of a vector field $X$ with a form $\alpha$ by $i(X)\alpha$ or $i_X\alpha$. The meaning of the variational principle above is that for an arbitrary vector field $Y$ on $\Gamma$, \begin{eqnarray} \Omega(X_3,X_2,X_1,X_0,Y)=0.\end{eqnarray} In the following we call the $\sigma$ determined by this condition a ``solution submanifold''. We can choose $X_\mu$ to be just the push-forwards by $\sigma$ of the coordinate basis vectors $\partial_\mu\equiv \partial/\partial t^\mu$: \begin{eqnarray*} X_\mu=\sigma_*(\partial_\mu)=\partial_\mu+F_{,\mu}\partial_\phi+G_{\nu,\mu}\partial_{p_\nu}. \end{eqnarray*} For our case $\Omega$ can be calculated easily. Using \begin{eqnarray*} d(*p\ww p)=d(*p)\ww p+*p\ww(dp)=2d(*p)\ww p \end{eqnarray*} we get \begin{eqnarray*} dH=(d*p)\ww p+m^2\phi\, d\phi\ww(*1). \end{eqnarray*} Substituting in $\Omega$ we see that it {\em factorizes} \begin{eqnarray} \Omega = (d*p-m^2\phi(*1))\ww(p-d\phi), \end{eqnarray} where we use the fact that the 5-form $(*1)\ww p$ in four variables $t$ is zero because there are five factors of $dt$'s. We give details of the calculation for flat space in Appendix B.
The condition on $F$ and $G_\mu$ to define a solution manifold is \begin{eqnarray} G_\mu=F_{,\mu},\qquad d*dF-m^2F(*1)=0 \end{eqnarray} which states that $\phi=F(t)$ is a solution of the Klein-Gordon equation for the field $\phi$. There is a less rigorous but physically straightforward way to see what the solution manifold should be. Vector fields annihilating $p-d\phi$ imply $p_\mu=\partial_\mu\phi$. Similarly, for $p=d\phi$, the first factor gives zero if \begin{eqnarray} d*d\phi-m^2\phi(*1)=0 \end{eqnarray} Now \begin{eqnarray} d*d\phi &=& \partial_\mu(\sqrt{|g|}g^{\mu\nu}\partial_\nu\phi) dt^0\ww\dots\ww dt^3\nonumber\\ &=& \frac{1}{\sqrt{|g|}}\partial_\mu(\sqrt{|g|}g^{\mu\nu}\partial_\nu\phi)(*1) \end{eqnarray} and thus $\phi$ satisfies the Klein-Gordon equation with the Laplace-Beltrami operator of the curved space. We close this section with a few remarks. \begin{enumerate} \item Since $\phi$ and $t^\mu$ are coordinates in the extended phase space, $d\phi$ and $dt^\mu$ are linearly independent. Therefore $p_\mu dt^\mu-d\phi=0$ is meaningless as it stands. What it implies is that there exists a subspace or submanifold $\sigma$ of the nine-dimensional extended phase space such that any of the independent vector fields $X_\mu$ tangent to $\sigma$ satisfies \begin{eqnarray*} (p_\mu dt^\mu-d\phi)(X)=0.\end{eqnarray*} The 1-form $p_\mu dt^\mu$ has non-zero coefficients for the $dt^\mu$'s and zero for $d\phi$ and $dp_\mu$. These coefficients $p_\mu$ themselves are independent coordinates. Thus, although $*p\ww p=-\la p,p\ra (*1)$ is proportional to the 4-form $*1$, its exterior derivative $d(*p\ww p)$ need not be zero.\\ \item If there are several fields $\phi^a$ then we can construct the PC-form similarly as \begin{eqnarray} \Theta = *p_a\ww d\phi^a-H \end{eqnarray} where $p_a=p_{a\mu}dt^\mu$ are canonical momenta for the fields $\phi^a$ and $H$ is a 4-form depending on all the fields and the momenta. \\ \item If $(\phi_1,p_1)$ and $(\phi_2,p_2)$ are two solutions for the scalar field, then the 4-form \begin{eqnarray*} &&d(\phi_1 *p_2-\phi_2 *p_1) \\ &=& d\phi_1\ww *p_2+\phi_1 d*p_2-(1\leftrightarrow 2)\\ &=& (d\phi_1)\ww(*d\phi_2) +\phi_1(m^2\phi_2)(*1)-(1\leftrightarrow 2)\\ &=& (d\phi_1)\ww(*d\phi_2)-(d\phi_2)\ww(*d\phi_1)\\ &=& 0 \end{eqnarray*} because (using the identity $(*t)\ww s=(-1)^{r(n-r)}t\ww(*s)$ for any $r$-forms $t$ and $s$ in an $n$-dimensional space) we conclude that in our case \begin{eqnarray*} (d\phi_1)\ww(*d\phi_2)=-(*d\phi_1)\ww(d\phi_2)=(d\phi_2)\ww(*d\phi_1).\end{eqnarray*} Thus, by Stokes theorem the integral \begin{eqnarray*} \oint (\phi_1 *p_2-\phi_2 *p_1) \end{eqnarray*} over any closed surface is zero. This leads to a linear space of solutions on which there is a time-independent scalar product (a minimal numerical illustration follows this list). \end{enumerate}
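The time independence of this scalar product is easy to exhibit numerically. The following sketch evolves two solutions of the flat 1+1-dimensional Klein-Gordon equation and monitors the discrete analog of $\oint(\phi_1 *p_2-\phi_2 *p_1)$ over a constant-time slice, where $\pi=\partial\phi/\partial t^0$ plays the role of the momentum component entering $*p$ on that slice. The periodic grid, the initial data and the symplectic Euler integrator are illustrative assumptions, not part of the formalism.
\begin{verbatim}
import numpy as np

# Check that Q(t) = Int dx (phi1*pi2 - phi2*pi1) is time independent for
# two solutions of phi_tt = phi_xx - m^2 phi on a periodic grid.
m, L, Nx = 1.0, 2*np.pi, 256
dx = L / Nx
dt = 0.2 * dx
x = np.arange(Nx) * dx

def lap(f):                     # periodic discrete Laplacian
    return (np.roll(f, -1) - 2.0*f + np.roll(f, 1)) / dx**2

def step(phi, pi):              # symplectic Euler step (linear, symplectic)
    pi = pi + dt * (lap(phi) - m**2 * phi)
    phi = phi + dt * pi
    return phi, pi

phi1, pi1 = np.sin(x), np.zeros(Nx)
phi2, pi2 = np.zeros(Nx), np.sin(x)    # Q(0) = Int sin^2 dx = pi

for k in range(2001):
    if k % 500 == 0:
        Q = np.sum(phi1*pi2 - phi2*pi1) * dx
        print(f"t = {k*dt:6.3f}   Q = {Q:+.12f}")
    phi1, pi1 = step(phi1, pi1)
    phi2, pi2 = step(phi2, pi2)
\end{verbatim}
Because the update map is linear and symplectic, the bilinear quantity $Q$ is preserved to machine precision here; a non-symplectic scheme would conserve it only up to discretization error.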
\section{Stationary Action} We have seen that a specific solution to the field equations can be realized as a four-dimensional submanifold $\sigma$ of the nine-dimensional extended phase space. Hamilton's variational principle involves comparing the integral of the PC-form on a proposed four-dimensional solution submanifold with a similar integral on a neighboring submanifold. Let $\sigma : t\to \{ \phi=F(t),p_\mu=F_{,\mu}\}\in \Gamma$ be the submanifold corresponding to some given solution. Let $D$ be a region of spacetime and $\partial D$ its boundary. Calculate the PC-form $\Theta$ on the region $\sigma(D)$ of the extended phase-space mapped by $\sigma$. Let $Y$ be a vector field of variation. We can paraphrase Arnold's elegant argument\cite{arnold} for mechanics and apply it to fields. Calculate the Lie derivative using the formula $L_Y=i_Y\circ d+d\circ i_Y$ (see for example \cite{michor}): \begin{eqnarray*} \delta_Y \int_{\sigma(D)}\Theta & \equiv & L_Y\int_{\sigma(D)}\Theta\\ &=& \int_{\sigma(D)}L_Y\Theta\\ &=& \int_{\sigma(D)}(i_Y\circ d+d\circ i_Y)\Theta\\ &=& -\int_{\sigma(D)}i_Y\Omega +\int_{\sigma(D)}d[i_Y\Theta]\\ &=& \oint_{\partial \sigma(D)}i_Y\Theta \end{eqnarray*} where the integral of $i_Y\Omega$ on the submanifold $\sigma$ is zero because it evaluates $i_Y\Omega$ on tangent vectors $\sigma_*(\partial_\mu)$ to the proposed solution submanifold, where it vanishes. Thus the variational principle can also be expressed as \begin{eqnarray} \delta_Y \int_{\sigma(D)}\Theta=\left.\oint_{\partial \sigma(D)}i_Y\Theta\right|_{0}.\end{eqnarray} Here we use the symbol $0$ to denote a quantity ``on-shell'', that is, evaluated on a solution submanifold. It needs to be emphasized that since the variation field $Y$ is not restricted to the solution surface, it would be a mistake to use $\phi=F,p_\mu=F_{,\mu}$ {\em before} the evaluation of $i_Y\Theta$. Since $\Theta$ involves $d\phi$ and $dt^\mu$ (and no $dp_\mu$), the surface integral of the 3-form $i(Y)\Theta$ gives zero if the infinitesimal field $Y$ is zero along the directions $\partial/\partial t^\mu$ and $\partial/\partial \phi$. But there is no restriction on the variation in the momentum directions. We can re-express the variational principle (or principle of stationary action) in the extended phase space as: \begin{quote} \textit{ Under variation by a field $Y$ with $\phi,t^\mu$ held fixed at the boundary, the action evaluated at the solution submanifold $\sigma$ is stationary:} \begin{eqnarray} \delta_Y \int_{\sigma(D)}\Theta=0.\end{eqnarray} \end{quote} \section{Noether's Theorem} Let us consider a variation $Y$ not necessarily zero at the boundary of $\sigma(D)$, where $\sigma$ is a solution manifold. Equation (14) for variations is \begin{eqnarray} \delta_Y \int_{\sigma(D)}\Theta=\int_{\sigma(D)} L_Y\Theta =\left.\oint_{\partial \sigma(D)}i_Y\Theta\right|_{0} \end{eqnarray} If we know that for some given type of variation $Y$, \begin{eqnarray} L_Y\Theta =0 \end{eqnarray} then we say that the action is invariant under the infinitesimal mapping represented by the field $Y$, and $Y$ is called a `symmetry field'. Usually, the symmetry fields satisfy the conditions $L_Y(*p\ww d\phi)=0$ and $L_Y H=0$ separately. The surface integral \begin{eqnarray} \left.\oint_{\partial \sigma(D)}i_Y\Theta\right|_{0}=0 \end{eqnarray} gives a conservation law for the 3-form $i_Y\Theta$. In the particular case when the boundary $\partial D$ is constituted by two spacelike surfaces, the 3-form $i_Y\Theta$, restricted to either surface, represents the volume density of the conserved ``charge'' on that surface.
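Anticipating the explicit densities computed in sections VI and VII, the Noether charge associated with time translations of a flat Klein-Gordon field is the total energy of a constant-time slice. A minimal numerical illustration of its conservation, mirroring the discretization choices of the previous sketch (all of which are assumptions of the illustration):
\begin{verbatim}
import numpy as np

# E(t) = Int dx (1/2)[(phi_t)^2 + (phi_x)^2 + m^2 phi^2] over a t^0 = const
# slice of the flat 1+1D Klein-Gordon field; it should stay constant along
# the evolution up to the O(dt) fluctuations of the integrator.
m, L, Nx = 1.0, 2*np.pi, 256
dx = L / Nx
dt = 0.2 * dx
x = np.arange(Nx) * dx

phi = np.exp(-10*(x - L/2)**2)      # illustrative initial data
pi = np.zeros(Nx)

def grad(f):
    return (np.roll(f, -1) - np.roll(f, 1)) / (2*dx)

def energy(phi, pi):
    return 0.5*np.sum(pi**2 + grad(phi)**2 + m**2*phi**2) * dx

for k in range(4001):
    if k % 1000 == 0:
        print(f"t = {k*dt:6.3f}   E = {energy(phi, pi):.8f}")
    pi = pi + dt*((np.roll(phi,-1) - 2*phi + np.roll(phi,1))/dx**2 - m**2*phi)
    phi = phi + dt*pi
\end{verbatim}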
\section{Observables and Peierls bracket} Our formalism treats coordinate $\phi$ and its canonical momentum $p$ respectively as 0- and 1-forms. In classical mechanics they seem to be quantities of the same type because on a one-dimensional base manifold representing time, 0-forms and 1-forms are both 1-dimensional spaces. This situation changes for field theory in four dimensions. There 0- and 1-forms are respectively spaces of one and four dimensions. The observables of our theory are quantities like action: integrated quantities over a four dimensional submanifold. A typical observable is an integrated 4-form $A=\int \alpha$. The support of $\alpha$, that is, the set over which it has non-zero values, could be suitably restricted to allow for local quantities as observables. For example, the scalar field $\phi$ is related to the observable $\int \phi j (*1)$ where $j(t)$ is a scalar `switching function' which is non-zero in a small spacetime region. For simplicity we call both the integrated and the non-integrated quantity by the same name `observable'; this leads to no confusion. The Peierls bracket is the natural bracket-like quantity in this formalism. When the Hamiltonian 4-form $H$ is perturbed by an observable $\lambda B$ (where $\lambda$ is an infinitesimal parameter) the solution manifold shifts, and, after taking causality into account, the difference between the two solutions at different points in the limit of $\lambda\to 0$ determines a `vertical' vector field $X_B$. This field changes all other observables. The change in an observable $A$ is equal to the Lie derivative $D_BA\equiv L_{X_B}A$ of $A$ with respect to $X_B$. Switching the roles of $B$ and $A$ we can calculate $D_AB$. The Peierls bracket $[A,B]$ is defined as the difference $D_BA-D_AB$. For illustration we outline the calculation of the Peierls bracket for the scalar field with itself in Minkowski space. The observable in question is the integrated 4-form \begin{eqnarray*} B=\int \beta=\int \phi j (*1) \end{eqnarray*} where $j$ is a switching function in spacetime with which the field $\phi$ is `smeared'. The Hamiltonian is changed to $H+\lambda B$ and the solution manifold given by $t\to \phi=F_0(t),p_\nu=F_{0,\nu}$ gets modified to a solution manifold which is determined by the 5-form \begin{eqnarray*} \Omega_B &=& -d(*p)\ww d\phi +dH +\lambda d\phi j (*1)\\ &=& [d(*p)-m^2\phi(*1)-\lambda j(*1)]\ww [p-d\phi]. \end{eqnarray*} No derivative of $j$ appears because that would involve five factors of $dt$'s and there can be only four such factors in a wedge product. The equations for a solution $t\to \phi=F(t),p_\nu=G_\nu$ become \begin{eqnarray*} G_\nu=F_{,\nu},\qquad (\partial^\mu\partial_\mu -m^2)F = \lambda j. \end{eqnarray*} The modification caused by $\lambda B$ as $\lambda \to 0$ to the solution $F_0$ is given by the retarded solution to the inhomogeneous Klein-Gordon equation, \begin{eqnarray*} F(t)=F_0(t)+\lambda K(t),\qquad G_\nu=F_{,\nu} \end{eqnarray*} where \begin{eqnarray*} K(t)=\int G_R(t-s)j(s)d^4s. \end{eqnarray*} The retarded and advanced Green's functions $G_R(t),G_A(t)$ are the unique solutions \begin{eqnarray*} G_{R,A}(t)=\frac{1}{(2\pi)^4}\int d^4k\, \frac{\exp(-ik^0t^0+i\vv{k}\cdot\vv{t})}{(k^0\pm i\epsilon)^2-\vv{k}^2-m^2}\end{eqnarray*} of \begin{eqnarray*} (\partial^\mu\partial_\mu -m^2)G_R(t) =\delta^4(t)\end{eqnarray*} with the boundary condition that $G_R(t)$ is non-zero only in the forward light-cone and $G_A(t)$ in the backward light-cone. Thus the vertical field is determined to be (a factor $\lambda$ is divided out in the limit $\lambda\to 0$ to give the tangent vector field) \begin{eqnarray*} X_B=K(t)\dydx{}{\phi}+K_{,\nu}\dydx{}{p_\nu} \end{eqnarray*} Consider the observable \begin{eqnarray*} A =\int \alpha=\int \phi k (*1) \end{eqnarray*} where $k(t)$ is another switching function. The change in $A$ due to $B$ is given by $D_BA =L_{X_B}(A)$. Now, \begin{eqnarray*} L_{X_B}(A) &=& \int [i_{X_B} (d\phi k (*1))+d(\phi k \,i(X_B)(*1))]\\ &=& \int k K (*1), \end{eqnarray*} because $i(X_B)(*1)=0$.
Thus \begin{eqnarray*} D_BA &=&\int d^4t k(t)K(t)\\ &=& \int\int d^4t d^4s k(t)G_R(t-s)j(s) \end{eqnarray*} Reversing the roles of $B$ and $A$ we get the Peierls bracket \begin{eqnarray*} [A,B]=D_BA-D_AB=\int\int d^4t d^4s k(t)\Delta(t-s)j(s) \end{eqnarray*} where $\Delta$ is the Pauli-Jordan function $\Delta=G_R-G_A$. This is equivalent to the commutator \begin{eqnarray*} [\phi(t),\phi(s)]=\Delta(t-s) \end{eqnarray*} when $k$ and $j$ are Dirac deltas with support at $t$ and $s$, respectively. The Peierls bracket for the field $\phi$ and momentum $p$ can be calculated by considering the observable \begin{eqnarray*} C=\lambda (*p)\ww l=-\lambda p_\mu l^\mu (*1)\end{eqnarray*} where in this case we must employ a 1-form switching function $l$ to smear the momentum. The 5-form is \begin{eqnarray*} \Omega_C= [d(*p)-m^2\phi(*1)]\ww[p+\lambda l-d\phi]. \end{eqnarray*} The relevant equation for the modified solution is \begin{eqnarray*} (\partial^\mu\partial_\mu -m^2)F = \lambda \partial^\mu l_\mu \end{eqnarray*} because $d(*p)$ becomes $d(*(d\phi-l))=\partial^\mu\partial_\mu\phi-\partial^\mu l_\mu$. The change in $B$ is \begin{eqnarray*} D_CB &=& \int\int d^4t d^4s j(t)G_R(t-s)(\partial^\mu l_\mu)(s). \end{eqnarray*} On the other hand we have already calculated the vertical field for $B$, which gives \begin{eqnarray*} D_BC &=&-\int K_{,\mu}l^\mu (*1)\\ &=& -\int\int d^4t d^4s\, l^\mu(t) \partial_{t^\mu} G_R(t-s)j(s)\\ &=& \int\int d^4t d^4s (\partial_\mu l^\mu)(t) G_R(t-s)j(s)\\ &=& \int\int d^4t d^4s j(t)G_A(t-s)(\partial_\mu l^\mu)(s)\end{eqnarray*} after integrating by parts in the third step. Therefore, \begin{eqnarray*} [B,C]=\int\int d^4t d^4s j(t)\Delta(t-s)(\partial_\mu l^\mu)(s)\end{eqnarray*} which, for $j(t)=\delta^4(t)$ and $l_\mu=(1,0,0,0)\delta^4(s)$, gives the equal-time ($t^0=s^0$) canonical Poisson bracket of the ``3+1'' version of field theory \begin{eqnarray*} [\phi(t,\vv{t}),p_0(t,\vv{s})]=\delta(\vv{t}-\vv{s})\end{eqnarray*} because \begin{eqnarray*} \partial_0\Delta(t)=-\delta^3(\vv{t}). \end{eqnarray*}
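In 1+1 dimensions the analogous relation is $\partial_0\Delta(0,x)=-\delta(x)$, and the structure of $\Delta=G_R-G_A$ is easy to exhibit numerically. The following sketch builds $\Delta$ as a mode sum on a periodic box; the box size, the resolution and the normalization matching the sign convention above are assumptions of the illustration.
\begin{verbatim}
import numpy as np

# 1+1D Pauli-Jordan function on a periodic box of length L:
#   Delta(t,x) = -(1/L) * sum_k sin(w_k t)/w_k * exp(i k x),  w_k^2 = k^2 + m^2,
# so that Delta(0,x) = 0 and dDelta/dt(0,x) = -delta(x) on the grid.
m, L, Nx = 1.0, 40.0, 2048
dx = L / Nx
x = np.arange(Nx) * dx
k = 2*np.pi*np.fft.fftfreq(Nx, d=dx)
w = np.sqrt(k**2 + m**2)

def Delta(t):
    return (Nx / L) * np.real(np.fft.ifft(-np.sin(w*t)/w))

t = 8.0
D = Delta(t)
print("antisymmetry: max|Delta(t)+Delta(-t)| =", np.abs(D + Delta(-t)).max())
# equal-time derivative: a discrete delta, -1/dx at the source point x = 0
eps = 1e-6
dD0 = (Delta(eps) - Delta(-eps)) / (2*eps)
print("dDelta/dt(0,0) =", dD0[0], " (expected -1/dx =", -1/dx, ")")
# causality: Delta vanishes outside |x| <= t, up to mode-truncation ringing
r = np.minimum(x, L - x)          # periodic distance from the source
print("typical |Delta| outside the cone:", np.abs(D[r > t + 2.0]).max())
\end{verbatim}
Smearing this $\Delta$ against sharply peaked $j$ and $l$ reproduces, on the grid, the discrete delta underlying the canonical bracket just derived.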
\section{$i_Y\Theta$ for $Y=v^\mu\partial/\partial t^\mu$} As an illustration of the Noether theorem in our formalism let us evaluate $i_Y\Theta$ for the present scalar field case for spacetime translations. The vector field for constant infinitesimal displacement $v^\mu$ is \begin{eqnarray*} Y=v^\mu\dydx{}{t^\mu}\end{eqnarray*} We are not assuming that spacetime is flat or that $Y$ is a Killing field of translation symmetry. We know that \begin{eqnarray*} *p &=& p_\mu *(dt^\mu)\\ &=& \frac{1}{3!}\sqrt{-g}p_\mu g^{\mu\alpha}\varepsilon_{\alpha\nu\sigma\tau} (\nu\sigma\tau)\\ &\equiv & \frac{1}{3!}\sqrt{-g}p^\alpha\varepsilon_{\alpha\nu\sigma\tau} (\nu\sigma\tau) \end{eqnarray*} where we introduce a convenient notation \begin{eqnarray*} (\nu\sigma\tau) \equiv dt^\nu\ww dt^\sigma\ww dt^\tau, \end{eqnarray*} with similar notation for two or four factors of $dt^\mu$ and we have defined the contravariant canonical momentum \begin{eqnarray*} p^\mu=g^{\mu\nu}p_\nu .\end{eqnarray*} A simple calculation using \begin{eqnarray*} i_Y(dt^\mu\ww dt^\nu\ww dt^\sigma) &=& v^\mu(dt^\nu\ww dt^\sigma) -v^\nu(dt^\mu\ww dt^\sigma)\\&&+ v^\sigma(dt^\mu\ww dt^\nu)\end{eqnarray*} gives \begin{eqnarray*} i_Y(*p)=\frac{1}{2!}\sqrt{-g}p^\alpha v^\beta \varepsilon_{\alpha\beta\sigma\tau}(\sigma\tau).\end{eqnarray*} We can write this also as \begin{eqnarray*} i_Y(*p)= p_\mu v_\nu *(dt^\mu\ww dt^\nu)= *(p\ww Y^\flat)\end{eqnarray*} where $v_\mu=g_{\mu\nu}v^\nu$ and $Y^\flat=v_\nu dt^\nu$ is the covariant field corresponding to $Y$ after lowering the index by the metric. As $Y$ involves $\partial/\partial t^\mu$ whose action on $d\phi$ is zero \begin{eqnarray*} i_Y(*p\ww d\phi)=(i_Y*p)\ww d\phi, \end{eqnarray*} and \begin{eqnarray*} i_Y[*p\ww p\,]&=&[i_Y*p]\ww p-*p(i_Yp)\\ &=&*(p\ww Y^\flat)\ww p-p(Y)*p. \end{eqnarray*} The formula $i_Y(*p)=*(p\ww Y^\flat)$, although elegant, is not very useful for calculations. A straightforward expression for $i_Y(*p)\ww p$ is \begin{eqnarray*} i_Y(*p)\ww p=[p_\mu(p.v)-v_\mu(p.p)]*(dt^\mu) \end{eqnarray*} where \begin{eqnarray*} p.v=p_\mu v^\mu=\la p,Y^\flat\ra,\qquad p.p= p_\mu p^\mu=\la p,p\ra. \end{eqnarray*} Thus the calculation of $i_Y\Theta$ proceeds as follows, \begin{eqnarray*} i_Y\Theta&=&i_Y\left[*p\ww d\phi-\frac{1}{2}*p\ww p -\frac{1}{2}m^2\phi^2(*1)\right]\\ &=&(i_Y*p)\ww\left(d\phi-\frac{1}{2}p\right)+\frac{1}{2}p(Y)*p\\ &&-\frac{1}{2}m^2\phi^2i_Y(*1) \end{eqnarray*} Evaluating it ``on-shell'' means we can put $p=d\phi$. Using the expression for $i_Y(*p)\ww p$, $p(Y)=p.v$ and the fact that \begin{eqnarray*} i_Y(*1)&=&\sqrt{-g}(v^0(123)-v^1(023)+v^2(013)-v^3(012))\\ &=& v_{\mu}*(dt^\mu), \end{eqnarray*} we get \begin{eqnarray*} i_Y\Theta &=&\left(\frac{1}{2}[p_\mu(p.v)-v_\mu(p.p)]+\frac{1}{2}(p.v)p_\mu\right) *(dt^\mu)\\ &&-\frac{1}{2}m^2\phi^2v_\mu*(dt^\mu)\\ &=&\left.\left(p_\mu(p.v)-\frac{1}{2}\left[(p.p) +m^2\phi^2\right]v_\mu \right) *(dt^\mu)\right|_{0}\\ &=& \la d\phi,Y^\flat\ra (*d\phi) -\frac{1}{2}\left[\la d\phi,d\phi\ra +m^2\phi^2\right](*Y^\flat) \end{eqnarray*} which can also be written in the useful form \begin{eqnarray} i_Y\Theta &=& \left[\phi_{,\mu}\phi_{,\nu} -\frac{1}{2}g_{\mu\nu}\left(g^{\alpha\beta}\phi_{,\alpha}\phi_{,\beta} +m^2\phi^2\right)\right]v^\mu*(dt^\nu) \end{eqnarray} \section{Examples of conserved quantities} As an illustration we calculate the conserved quantities for the Klein-Gordon field in a Minkowski background. In this case $L_Y\Theta=0$ (actually $L_Y(*p\ww d\phi)=0$ and $L_Y H=0$ independently) for any of the ten Killing vector fields $Y$ corresponding to Poincare transformations. For spacetime translations we have derived a formula in the last section. Since we usually integrate on the spacelike surface $t^0=$ constant, it is enough to calculate the term $*(dt^0)=-(123)$, which alone will give a non-zero contribution on a $t^0=$ constant surface.
The following table gives the expected conserved quantities (energy and momentum densities) for time- and space-translations: \vskip 5mm \begin{center} \begin{tabular}{ccl} $Y$ & $v^\mu$ & $-(123)$ part of $i_Y\Theta$\\ &&\\ $\partial/\partial t^0$ & $(1,0,0,0)$ & $(1/2)[(\phi_{,0})^2+(\nabla\phi)^2+m^2\phi^2]d^3t$\\ &&\\ $\partial/\partial t^1$ & $(0,1,0,0)$ & $[\phi_{,1}\phi_{,0}]d^3t$\\ \end{tabular} \end{center} \begin{appendix} \section{Notation} The spacetime is a Riemannian space with coordinates $t^\mu,\mu=0,1,2,3$. Basis vectors in a tangent space are written $\partial_\mu=\partial/\partial t^\mu$. The metric is given by the inner product $\la \partial_\mu,\partial_\nu\ra=g_{\mu\nu}$. The cotangent spaces have basis elements $dt^\mu$ with $\la dt^\mu,dt^\nu\ra=g^{\mu\nu}$. The metric has signature $(-1,1,1,1)$. The wedge product is defined so that $\alpha\ww \beta=\alpha\tp\beta-\beta\tp\alpha$ for one-forms $\alpha$ and $\beta$. The exterior derivative is defined so that for an $r$-form $\alpha=a_{\mu_1\dots\mu_r}dt^{\mu_1}\ww\dots\ww dt^{\mu_r}$ the derivative is the $(r+1)$-form \begin{eqnarray*} d\alpha=a_{\mu_1\dots\mu_r,\nu}dt^\nu\ww dt^{\mu_1}\ww\dots\ww dt^{\mu_r}.\end{eqnarray*} The Hodge star is a linear operator that maps $r$-forms into $(4-r)$-forms in our four-dimensional space. The definition is \begin{eqnarray*} *(dt^{\mu_1}\ww\dots\ww dt^{\mu_r})&=&[(4-r)!]^{-1}\sqrt{-g}g^{\mu_1\nu_1}\dots\\ &&g^{\mu_r\nu_r}\varepsilon_{\nu_1\dots\nu_r\nu_{r+1}\dots\nu_4}dt^{\nu_{r+1}}\dots dt^{\nu_4}\end{eqnarray*} where $g$ denotes the determinant of $g_{\mu\nu}$ and $\varepsilon$ is the antisymmetric tensor defined with $\varepsilon_{0123}=1$. The one-dimensional space of 0-forms has the unit vector equal to the real number $1$. The one-dimensional space of 4-forms has the chosen orientation given by the unit vector $\varepsilon=n^0\ww n^1\ww n^2\ww n^3$ where $n^\mu$ are the orthonormal basis vectors. In a coordinate basis $\varepsilon=\sqrt{-g}dt^0\ww dt^1\ww dt^2\ww dt^3$. The star operator acting on the zero form equal to the constant number $1$ is denoted by $*1=\varepsilon=\sqrt{-g}dt^0\ww dt^1\ww dt^2\ww dt^3$. We have the simple result that $dt^\mu\ww *dt^\nu=-*dt^\nu\ww dt^\mu=g^{\mu\nu}(*1)$. Note carefully that $*1$ is not the same as $*(1)$ where the shorthand notation $(\mu)$ is used for $dt^\mu$. Similarly we use $(12)$ for $dt^1\ww dt^2$, $(013)$ for $dt^0\ww dt^1\ww dt^3$ etc. The interior product $i(X)$ of a vector $X$ with an $r$-form $\alpha$ gives an $(r-1)$-form $i(X)\alpha$ defined by \begin{eqnarray*} (i(X)\alpha)(Y_1,\dots,Y_{r-1})=\alpha(X,Y_1,\dots,Y_{r-1})\end{eqnarray*} When it is more convenient we will denote the interior product operator by $i_X$ in place of $i(X)$. Two successive applications of interior products on a form will be denoted by \begin{eqnarray*} i(X,Y)\alpha\equiv [i(X)\circ i(Y)]\alpha = i(X)[i(Y)\alpha] \end{eqnarray*} Note that $i(X,Y)=-i(Y,X)$. Similarly successive applications $i(XY\dots Z)$ of many such interior products can be defined. If $\alpha$ is an $r$-form then \begin{eqnarray*} i(X)(\alpha\ww\beta)=[i(X)\alpha]\ww\beta+(-1)^r\alpha\ww i(X)\beta\end{eqnarray*} In order to abbreviate expressions we use $i(12)$ for $i(X_1X_2)=i(X_1)\circ i(X_2)$ etc. when there is no confusion. \section{Solution submanifold of $\Omega$} We give the calculation of $i(X_3X_2X_1X_0)\Omega$ for $H=(*p)\ww p/2+m^2\phi^2(*1)/2$ in Minkowski space for illustration.
We take the independent tangent vectors to the section \begin{eqnarray*} \sigma : t\to (t^\mu, \phi=F(t),p_\nu=G_\nu(t)) \end{eqnarray*} to be the push-forwards \begin{eqnarray*} X_\mu\equiv \sigma_*(\partial_\mu)=\partial_\mu+F_{,\mu}\partial_\phi+G_{\nu,\mu}\partial_{p_\nu}\end{eqnarray*} The calculation involves the following expressions (we use abbreviations of Appendix A) : \begin{eqnarray*} *1 &=& (0123)\\ *dt^\mu &=& [-(123),-(023),+(013),-(012)]\end{eqnarray*} \begin{eqnarray*} d(p_\mu *dt^\mu)&=& dp_\mu *dt^\mu=-dp_0(123)-dp_1(023)+dp_2(013)-dp_3(012)\\ i(0)(0123)&=& (123),\\ i(1)(0123) &=& -(023),\\ i(2)(0123) &=& (013),\\ i(3)(0123)&=&-(012)\\ &&\\ i(0)(d*p)& = & dp_1(23)-dp_2(13)+dp_3(12)-G_{0,0}(123)-G_{1,0}(023)\\&&+G_{2,0}(013)-G_{3,0}(012)\\ i(1)(d*p)& = & dp_0(23)+dp_2(03)-dp_3(02)-G_{0,1}(123)-G_{1,1}(023)\\&&+G_{2,1}(013)-G_{3,1}(012)\\ i(2)(d*p)& = & -dp_0(13)-dp_1(03)+dp_3(01)-G_{0,2}(123)-G_{1,2}(023)\\&&+G_{2,2}(013)-G_{3,2}(012)\\ i(3)(d*p)& = & dp_0(12)+dp_1(02)-dp_2(01)-G_{0,3}(123)-G_{1,3}(023)\\&&+G_{2,3}(013)-G_{3,3}(012)\\ i(10)(d*p)&=&dp_2(3)-dp_3(2)-G_{0,0}(23)-G_{2,0}(03)+G_{3,0}(02)\\&&+G_{1,1}(23)-G_{2,1}(13)+G_{3,1}(12)\\ i(20)(d*p)&=&-dp_1(3)+dp_3(1)+G_{0,0}(13)+G_{1,0}(03)-G_{3,0}(01)\\&&+G_{1,2}(23)-G_{2,2}(13)+G_{3,2}(12)\\ i(30)(d*p)&=&dp_1(2)-dp_2(1)-G_{0,0}(12)-G_{1,0}(02)+G_{2,0}(01)\\&&+G_{1,3}(23)-G_{2,3}(13)+G_{3,3}(12)\\ i(21)(d*p)&=&-dp_0(3)-dp_3(0)+G_{0,1}(13)+G_{1,1}(03)-G_{3,1}(01)\\&&+G_{0,2}(23)+G_{2,2}(03)-G_{3,2}(02)\\ i(31)(d*p)&=&dp_0(2)+dp_2(0)-G_{0,1}(12)-G_{1,1}(02)+G_{2,1}(01)\\&&+G_{0,3}(23)+G_{2,3}(03)-G_{3,3}(02)\\ i(32)(d*p)&=&-dp_0(1)-dp_1(0)-G_{0,2}(12)-G_{1,2}(02)+G_{2,2}(01)\\&&-G_{0,3}(13)-G_{1,3}(03)+G_{3,3}(01)\\ i(321)(d*p)&=&dp_0-G_{0,1}(1)-G_{1,1}(0)-G_{0,2}(2)-G_{2,2}(0)\\&&-G_{0,3}(3)-G_{3,3}(0)\\ i(320)(d*p)&=&dp_1-G_{0,0}(1)-G_{1,0}(0)-G_{1,2}(2)+G_{2,2}(1)\\&&-G_{1,3}(3)+G_{3,3}(1)\\ i(310)(d*p)&=&-dp_2+G_{0,0}(2)+G_{2,1}(0)-G_{1,1}(2)+G_{2,1}(1)\\&&+G_{2,3}(3)-G_{3,3}(2)\\ i(210)(d*p)&=&dp_3-G_{0,0}(3)-G_{3,0}(0)+G_{1,1}(3)-G_{3,1}(1)\\&&+G_{2,2}(3)-G_{3,2}(2)\\ i(3210)(d*p)&=&-G_{0,0}+G_{1,1}+G_{2,2}+G_{3,3}\\ \end{eqnarray*} If $A$ is a 4-form and $B$ a 1-form then \begin{eqnarray*} i(3210)[A\ww B]&=&[i(3210)A]B+[i(321)A]i(0)B-[i(320)A]i(1)B\\&&+[i(310)A]i(2)B-[i(210)A]i(3)B \end{eqnarray*} For $A=d*p-m^2\phi *1$ and $B=p-d\phi$ the expression for $i(3210)[A\ww B]=i(3210)\Omega$ is a 1-form in the extended phase space which should be equated to zero. The coefficients of $dp_\mu$ equated to zero give $G_\mu-F_{,\mu}=0$ and the coefficient of $d\phi$ gives $-G_{0,0}+G_{1,1}+G_{2,2}+G_{3,3}-m^2F=0$. These imply the Klein-Gordon equation for $F$. \end{appendix}
\section{Introduction} A networked multiagent system consists of heterogeneous agents that aspire to achieve a common objective by choosing their individual actions in the absence of a central coordinator. The common objective, which may represent a power control problem in wireless communications \cite{candogan2010near}, a distributed estimation problem \cite{chenSayed}, or a task given to a team of robots \cite{fink2013robust}, depends on an unknown environment variable in addition to the actions of \emph{all} agents. Here, we present a distributed algorithm for the scenario when agents disagree on their estimate of the environment, and thus of the objective. In such a setting, unless agents wait or exchange information for multiple rounds, they cannot be sure about what other agents are optimizing. When information about the environment is streaming or the system is large-scale, waiting or communicating for multiple rounds before taking an action may be undesirable as it will incur long coordination delays. Here we propose a distributed algorithm for such scenarios where coordination delay is unreasonable. In the algorithm, agents keep an estimate of the environment and a model of how other agents take their actions. Then, each agent \emph{best-responds}, i.e., takes the action that maximizes its expectation of the objective with respect to its estimate and behavior model. The model of other agents' behavior assumes that each agent selects its actions from a stationary distribution given by the histogram of its past actions. This model is based on the fictitious play (FP) algorithm \cite{Brown_1951,Monderer_Shapley_1996a}. However, in a large-scale system, agents cannot observe the past actions of all the agents. Instead, here we consider a decentralized update scheme based on weighted averaging that allows agents to keep track of the histograms of all other agents when the communication network is time-varying. The proposed decentralized scheme generalizes prior work on distributed FP \cite{eksin2018distributed,Swenson_et_al_2014} to time-varying communication networks. We provide the convergence rate of the decentralized weighted averaging updates to the true empirical frequencies when the weights matrix is row stochastic (see Proposition \ref{proposition_a}). Here, we build on distributed optimization algorithms that rely on reaching consensus fast enough \cite{tsitsiklis1984problems,NedicOzdaglar}. Unlike these prior works, we do not require the averaging weights to be coordinated in order to satisfy a doubly stochastic assumption. The intuition behind our result is that each agent is \emph{stubborn} when it comes to keeping track of its own histogram of past actions, and other agents are following the stubborn agent's updates through information exchanges with their peers. As long as the time-varying network is connected over a union of past edges for some fixed finite time, the stubborn agents' updates cascade down to the follower agents. Given the fast enough convergence of the estimates on others' empirical frequencies, and eventual agreement on the state of the environment, the distributed FP algorithm converges to the Nash equilibrium (NE) of the game where agents have identical payoffs computed by integrating the common objective with respect to the consensus estimates on the state of the environment. At an NE action profile, all agents act optimally with respect to the actions of other agents.
Our convergence result relates to the literature on NE seeking algorithms \cite{Li_Basar_1987,Shamma_Arslan_2005,salehisadaghiani2016distributed,koshal2016distributed}. This work differs from these NE seeking algorithms in that it makes no structural assumptions on the objective function and considers unknown and time-varying payoffs due to evolving estimates of the environment. \section{Networked Multiagent Systems with Uncertainty} A group of agents $\ccalN = \{1, \ldots, n\}$ aims to maximize a common objective $u(a,\theta)$ that is a function of the joint action profile of all agents $a:=[a_1, \dots, a_n]$ and the state of the environment $\theta$ by selecting their individual actions $a_i$ belonging to a finite action space $A_i$. We define the space of joint action profiles as $A = \prod_{i\in\ccalN} A_i$. The state of the environment $\theta$ is unknown. At subsequent points in time $t=0,1,\ldots$, agents simultaneously decide on an action $a_{i}(t)\in A_i$ that they deem optimal with respect to their current belief about the environment $\mu_{i}(t)$. Agent $i$'s belief about the environment $\mu_{i}(t)$ assigns probabilities to possible states of the environment $\Theta$, i.e., it belongs to the space of probability distributions over $\Theta$, denoted with $\Delta(\Theta)$. If agents hold different beliefs about the environment, and these beliefs are unknown to agent $i$, then agent $i$ cannot be sure of the actions of other agents $a_{-i}(t):=\{a_{j}(t)\}_{j \in \ccalN\setminus i}$, hence it cannot be sure whether its action $a_{i}(t)\in A_i$ is optimal or not. In such a scenario, we assume agent $i$ keeps a belief about the choices of other agents $v^i_{-i}(t):=\{v^i_{j}(t)\}_{j\ne i}$ where $v^i_{j}(t)\in \Delta(A_j)$ is the belief of agent $i$ on agent $j$'s next action. Using its beliefs, agent $i$ takes the action that maximizes the expectation of the common objective computed with respect to its beliefs about the state and the actions of other agents, \begin{align} \label{eqD} a_{i}(t)\in\underset{a_i\in {A_i}}{\argmax}\:u(a_i,v^i_{-i}(t);\mu_{i}(t)) \end{align} where $u(a_i,v^i_{-i}(t);\mu_{i}(t))$ is the expectation of the objective with respect to the beliefs $v^i_{-i}(t)$ and $\mu_{i}(t)$. \subsection{Communication} Agents update their beliefs about the actions of other agents, $v^i_{-i}(t)$, by interacting with a subset of the agents in $\ccalN$. The subset of the agents that $i$ can interact with at time $t$ is determined by a network $\ccalG(t)$ with node set $\ccalN$ and a symmetric edge set $\ccalE(t)$. If the edge $(i,j)$ belongs to $\ccalE(t)$, agents $i$ and $j$ can exchange information with each other after decision epoch $t$. We denote the set of neighboring agents that interact with $i$ at time $t$ as $\ccalN(i,t):=\{j: (i,j)\in \ccalE(t)\}$. We make the following assumption on the connectivity of time-varying networks. \begin{assumption} \label{connectivity} The graph $(\ccalN,\ccalE(\infty))$ is connected, where $\ccalE(\infty)$ is the set of edges $(i,j)$ that appear infinitely many times, i.e., $\ccalE(\infty)=\{(i,j)\,|\, (i,j)\in \ccalE(t)$ for infinitely many $t\}$ \cite{NedicOzdaglar}. \end{assumption} \begin{assumption} \label{BoundedIntercommunicationInterval} There exists an integer $T\geq 1$ such that for every $(i,j)\in \ccalE(\infty)$ and $k\geq 0$, $(i,j)\in \ccalE(k)\cup\ccalE(k+1)\cup...\cup\ccalE(k+T-1)$ \cite{NedicOzdaglar}.
\end{assumption} Assumptions \ref{connectivity} and \ref{BoundedIntercommunicationInterval} are called \emph{connectivity} and \emph{bounded intercommunication interval}, respectively, in \cite{NedicOzdaglar}. Together they imply that information generated by any agent $j\in\ccalN$ reaches any other agent $i\in\ccalN$ in finite time. \begin{remark} \label{assump_1} Define the network $\ccalG(t,T):=(\ccalN,\ccalE(t,T))$ where $\ccalE(t,T):=\cup^{T-1}_{\tau=0} \ccalE(t+\tau)$ for $t>T$ with $t,T\in\naturals_+$. According to Assumptions \ref{connectivity} and \ref{BoundedIntercommunicationInterval}, $\ccalG(t,T)$ is strongly connected for each $t$. \end{remark} \subsection{Information Exchange and Belief Updates} Agent $i$ assumes other agents select their actions according to a stationary distribution, namely the empirical histogram of their past actions. The empirical histogram of agent $i$ at time $t$, denoted by $f_{i}(t)$, can be recursively updated as \cite{Monderer_Shapley_1996} \begin{equation} \label{eq1} f_{i}(t+1)=f_{i}(t)+\frac{1}{t}(\Psi (a_{i}(t))-f_{i}(t)), \end{equation} where $\Psi(a_{i}(t))$ denotes the $|A_i|\times 1$ vector whose $k$th element is one if $a_{i}(t)=k$ with $k\in A_i$, and zero otherwise. Given the communication limitations, agent $i$ cannot observe the past actions of all the agents, so it cannot keep track of the exact empirical histograms of other agents. Instead, agent $i$ shares and keeps estimates of others' empirical frequencies in $v^i_{-i}(t)$. Specifically, at each step agent $i$ receives its current neighbors' estimates of agent $j$'s empirical frequency $\{v^{k}_{j}(t)\}_{k\in\mathcal{N}(i,t)\bigcup\{i\}}$ and updates its own estimate as follows, \begin{equation} \label{eq2} v^i_{j}(t+1)=\sum_{k\in\mathcal{N}}w^i_{j,k}(t) v^{k}_{j}(t), \end{equation} where $w^i_{j,k}(t)$ denotes the weight that agent $i$ puts on $k$'s estimate of agent $j$ at time $t$. We make the following assumptions on the weights. \begin{assumption}\label{assumption_weights} Assume there exists a scalar $0<\eta<1$ such that for all $i\in \ccalN$ and $j\in\ccalN$,\\ {\it (i)} $w^i_{j,k}(t)\geq\eta$ only if $k\in\mathcal{N}(i,t)\bigcup\{i\}$, otherwise $w^i_{j,k}(t)=0$. \\ {\it (ii)} $w^i_{i,i}(t)=1$ for all $t$.\\ {\it (iii)} $\sum_{k \in\mathcal{N}}w^i_{j,k}(t)=1$ for all $t$. \end{assumption} To discuss the implications of these assumptions, we define the weights matrix $W_{j}(t)$ used for estimating agent $j$'s empirical frequency at time $t$, whose element in the $i$th row and $k$th column is $[W_j(t)]_{i,k}=w^i_{j,k}(t)$. Assumption \ref{assumption_weights}{\it (i)} ensures that agents can only put positive weights on their current neighbors' estimates in \eqref{eq2}. Assumption \ref{assumption_weights}{\it (ii)} means that agent $i$ only listens to itself (it is \emph{stubborn}) where its own empirical frequency is concerned, that is, $v^i_{i}(t)= f_{i}(t)$. This implies that the $j$th row of $W_j(t)$ is given by $\bbe_j^T$, the $1\times n$ row vector of all zeros except a 1 in the $j$th element. Assumption \ref{assumption_weights}{\it (ii)} also means that the weights matrix $W_j(t)$ is different for each agent $j$. Assumption \ref{assumption_weights}{\it (iii)} means that $W_j(t)$ is row stochastic at all times. Note that we do not require $W_j(t)$ to be doubly stochastic; in a time-varying network, requiring $W_j(t)$ to be column stochastic would be unrealistic, because it would necessitate that agents coordinate their weights at each step.
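For concreteness, the two updates above admit a direct implementation; the following Python sketch is our own illustration (the array layout and helper names are assumptions, not part of the formal development). For a fixed agent $j$, the $n\times|A_j|$ matrix stacks the estimates $v^i_j(t)$ as rows:
\begin{verbatim}
import numpy as np

def update_histogram(f_i, a_i, t, n_actions):
    # Recursive empirical histogram update of agent i at step t:
    # move 1/t of the mass toward the indicator of the action played.
    psi = np.zeros(n_actions)
    psi[a_i] = 1.0
    return f_i + (psi - f_i) / t

def update_estimates(V_j, W_j, f_j_next, j):
    # Weighted averaging of neighbors' estimates of agent j's histogram.
    # V_j is n x |A_j| with row i holding v^i_j(t); W_j is row stochastic
    # with row j equal to e_j (Assumption 3), so agent j stays "stubborn"
    # and its own row always reproduces its true histogram.
    V_j = V_j.copy()
    V_j[j] = f_j_next          # agent j refreshes its own entry first
    return W_j @ V_j
\end{verbatim}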
We will be agnostic to the individual updates on the state of the environment $\mu_{i}(t)$ as long as the state learning process satisfies the following assumption. \begin{assumption}\label{assump_state} The local beliefs on the state $\mu_{i}(t)$ converge to a common belief $\mu\in \Delta(\Theta)$ in terms of total variation, \begin{equation}\label{eqn_state_belief_convergence} \lim_{t\to\infty} {\bf TV}(\mu_{i}(t),\mu) = 0 \quad \forall i \in \ccalN, \end{equation} where the total variation distance between distributions $\mu_{i}(t)$ and $\mu$ is defined as the maximum absolute difference between the respective probabilities assigned to elements $B$ of the Borel set $\ccalB(\Theta)$ of the space $\Theta$, i.e., ${\bf TV}(\mu_{i}(t),\mu):= \sup_{B\in\ccalB(\Theta)} |\mu_{i}(t)(B) - \mu(B)|$. \end{assumption} This assumption is equivalent to the one made in \cite{eksin2018distributed}. Next, we summarize the algorithm. \subsection{Decentralized Fictitious Play (D-FP) Algorithm} \begin{algorithm}[D-FP algorithm]\label{algo_dfp_general} \normalfont $~$\\ \noindent \textit{Initialize}\\ (i) For each $i$, let $a_{i}(0)$ be chosen arbitrarily, and let the estimate $v^i_{j}(1)$ be initialized as $f_{j}(1) = \Psi (a_{j}(0))$ for all $j\in \ccalN\setminus i$. Let $\mu_{i}(0)\in \Delta(\Theta)$ be arbitrarily chosen. \medskip \noindent \textit{Iterate} ($t\geq 1$)\\ (ii) Agents simultaneously choose their next-stage actions according to the rule in \eqref{eqD}. \noindent (iii) Agents update their empirical frequencies $f_{i}(t)$ as in \eqref{eq1}, and set $v^i_{i}(t) = f_{i}(t)$. \noindent (iv) Each agent $i$ engages in one round of information exchange with its neighboring agents $j\in \ccalN(i,t)$, receives $v^j(t):=[f_{j}(t), v^j_{-j}(t)]$, and updates its estimate of the joint empirical distribution $v^i_{-i}(t)$ according to \eqref{eq2}. \noindent (v) Agents update their beliefs about the environment $\mu_{i}(t)$ according to some state learning process. \end{algorithm} Step (ii) determines the actions, and steps (iii--v) determine how agents update their beliefs $v^i_{-i}(t)$ and $\mu_{i}(t)$. We assume that agents select their actions and update their beliefs synchronously. However, the time-varying connectivity loosens this assumption to scenarios where some agents randomly wake up and send their beliefs to each other.
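As an illustration of step (ii), for finite action spaces and a finite set of candidate states the best response \eqref{eqD} can be computed by brute-force enumeration. The sketch below is ours and assumes the common objective is available as a Python function \texttt{u(a, theta)}; it is practical only for small systems:
\begin{verbatim}
import itertools
import numpy as np

def best_response(i, u, v_i, mu, action_sizes, thetas):
    # Maximize the expected objective over agent i's own actions.
    # v_i[j] is agent i's estimated mixed strategy of agent j, and
    # mu[k] is agent i's belief that the state equals thetas[k].
    n = len(action_sizes)
    best_a, best_val = 0, -np.inf
    for a_i in range(action_sizes[i]):
        val = 0.0
        for a_rest in itertools.product(
                *(range(action_sizes[j]) for j in range(n) if j != i)):
            a = list(a_rest)
            a.insert(i, a_i)                  # full joint action profile
            p = np.prod([v_i[j][a[j]] for j in range(n) if j != i])
            val += p * sum(m * u(a, th) for m, th in zip(mu, thetas))
        if val > best_val:
            best_a, best_val = a_i, val
    return best_a
\end{verbatim}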
\section{Convergence} We define the strategy profile in which each agent's action is a best response to the actions of the others as the Nash equilibrium (NE) action profile---see Section \ref{sec_prelim} for a definition. We show that the empirical frequencies of actions $f_{i}(t)$ generated by the D-FP algorithm converge to an NE (Theorem \ref{thm_NE}). The key technical contribution is in showing the convergence of the beliefs $v^i_{j}(t)$ to the true empirical frequency $f_{j}(t)$ under the updates \eqref{eq2} at a fast enough rate, given non-doubly stochastic weights (Proposition \ref{proposition_a}). Given this convergence rate, the convergence to an NE follows by results in \cite{eksin2018distributed}. Next, we introduce some preliminary technical concepts. \subsection{Preliminaries: Game Theory} \label{sec_prelim} When the expectations of the objective differ, the agents $\ccalN$ are playing a game $\Gamma$ with utility functions $u_{i,t}(\cdot):A \to \reals$, that is, $\Gamma=\{\ccalN, A, \{u_{i,t}\}_{i \in \ccalN}\}$. A mixed strategy $\sigma_i$ corresponds to a probability distribution over the action space $A_i$, i.e., $\sigma_i\in\Delta(A_i)$. We use $\sigma_i(a_i)$ to denote the probability that agent $i$ takes action $a_i\in A_i$. The joint mixed strategy profile is the product distribution of the individual mixed strategies $\sigma:=\{\sigma_1,\sigma_2,...,\sigma_n\}$. We express the expected objective value with respect to the strategy profile $\sigma$ as \begin{equation} \label{eqA} u(\sigma,\theta)=\sum_{a\in A} u(a,\theta)\sigma(a) \end{equation} where $A:=\prod_{i\in\ccalN} A_i$. We define agent $i$'s expectation of the common objective given its belief $\mu_{i}(t)$ about the environment as \begin{equation}\label{eq_utility} u_{i,t}(\sigma):=u(\sigma; \mu_{i}(t)) = \int_{\theta \in \Theta} u(\sigma, \theta) \, \mu_{i}(t)(d\theta). \end{equation} A (mixed) strategy profile $\sigma^*\in \prod_{i\in\ccalN}\Delta(A_i)$ is a Nash equilibrium of $\Gamma$ if no agent has a unilaterally profitable deviation, i.e., for all $i\in\ccalN$, \begin{equation} \label{eqC} u_{i,t}(\sigma^*_i,\sigma^*_{-i})\geq u_{i,t}(\sigma_i,\sigma^*_{-i})\:\; \forall \sigma_i \in \Delta(A_i). \end{equation} \subsection{Convergence of Beliefs on Empirical Frequencies} Let $x(t):=[[v^1_{n}(t)]_l,\dots,[v^n_{n}(t)]_l]^T\in\mathbb{R}^{n\times 1}$ collect the agents' estimates of the frequency of agent $n$'s $l$th action, where the $k$th element of the vector is denoted by $x_k(t) = [v^k_{n}(t)]_l$. Recall that $[v^n_{n}(t)]_l= [f_n(t)]_l$ by Assumption \ref{assumption_weights}(ii); thus $x_n(t)$ is updated according to the dynamics in \eqref{eq1}. Given the belief updates in \eqref{eq2}, we can write the linear dynamics for $x(t)$ as \begin{equation} \label{eq_belief} x(t+1) = W(t) \big(x(t)+ (x_n(t+1)-x_n(t)) e_n\big) \end{equation} where $W(t)$ is the weights matrix for agent $n$ defined after Assumption \ref{assumption_weights} with the subindex $n$ dropped, and $e_n$ is the $n$th canonical basis vector in $\mathbb{R}^n$. The following result gives the convergence rate of the beliefs in \eqref{eq_belief} to the true empirical frequency $f_{n}(t)$. \begin{proposition}\label{proposition_a} Let $x(t)\in \reals^{n\times 1}$ be a belief vector evolving according to \eqref{eq_belief} with the weights matrix $W(t)$ satisfying Assumption \ref{assumption_weights}. If the communication network satisfies Assumptions \ref{connectivity} and \ref{BoundedIntercommunicationInterval}, and $x_i(0)=x_n(0)$, then $||x_i(t)-x_n(t)|| = O(\frac{\log t}{t})$ for all $i\in \ccalN\setminus n$. \end{proposition} \begin{myproof} Define $y(t) := x(t) - x_n(t) \bbone$ where $\bbone$ is the column vector of all ones. Subtracting $x_n(t+1) \bbone$ from both sides of \eqref{eq_belief} and using the row stochasticity of $W(t)$, i.e., $\bbone = W(t)\bbone$, we get \begin{equation} \label{eq5} y(t+1)=W(t)(y(t)+\delta(t)), \end{equation} where $\delta(t):=(x_n(t+1)-x_n(t))(e_n-\bbone)$. Substituting the previous values of $y(s)$ for $s=0,\dots,t$ into \eqref{eq5}, and using the assumption $x_i(0)=x_n(0)$, i.e., $y(0)=0$, to drop the initial term, we have \begin{equation}\label{eq_recurion_y} y(t+1)=\sum^{t}_{s=0}\big(\prod^{s}_{\tau=0} W(t-\tau)\big) \delta(t-s). \end{equation} Taking norms of both sides and moving the norm inside the summation, we upper-bound the right hand side, \begin{equation}\label{eq_bound_y} ||y(t+1)||\leq\sum^{t}_{s=0}||\big(\prod^{s}_{\tau=0} W(t-\tau)\big) \delta(t-s)||. \end{equation} Lemma \ref{lemma1} states that the products of weight matrices converge to $\bbone e_n^T$ with some rate $\rho$.
Thus we can bound the right hand side as follows, \begin{equation}\label{eq_bound_y_3} ||y(t+1)||\leq \sum^{t}_{s=0}\rho^{s} ||\delta(t-s)||. \end{equation} Note that $\|\delta(t)\| \leq n/t$. Defining $\delta_{avg}(t):=\frac{1}{t}\sum^t_{s=1}\frac{n}{s}$, we can conclude that $||y(t+1)||\leq\frac{\delta_{avg}(t)\rho}{1-\rho}$. The result follows by noting that $\delta_{avg}(t)=O(\frac{\log t}{t})$. \end{myproof} The result shows that agents are able to successfully track the empirical frequency of an arbitrarily selected agent $n$. When agents can correctly estimate the empirical frequencies of other agents, the algorithm is close to a centralized FP algorithm, from which convergence to an NE follows, as we state next. \begin{theorem} \label{thm_NE} Let Assumptions \ref{connectivity}, \ref{BoundedIntercommunicationInterval}, \ref{assumption_weights}, and \ref{assump_state} hold. Define the game with common state belief $\mu$ and identical payoffs $u_{i,\infty}$ as $\Gamma(\mu)$. The empirical frequencies of actions generated by Algorithm \ref{algo_dfp_general} converge to an NE strategy of $\Gamma(\mu)$, \begin{align} \label{histogram_convergence} \lim_{t\to\infty} \min_{\sigma^* \in K(\mu)} \| f_t - \sigma^*\| = 0 \end{align} where $K(\mu)$ represents the set of Nash equilibria of $\Gamma(\mu)$, i.e., all $\sigma^*$ that satisfy \eqref{eqC}. \end{theorem} The proof of the above result follows by first observing that $\Gamma(\mu)$ is an identical-interest potential game \cite{Monderer_Shapley_1996}. Second, we observe that Proposition \ref{proposition_a} establishes the same convergence rate as its counterpart (Lemma 1) in \cite{eksin2018distributed} for fixed connected communication networks. Thus the proof of Theorem \ref{thm_NE} follows verbatim the proof of Theorem 1 in \cite{eksin2018distributed}. \section{Simulation} $n=5$ agents are tasked with covering $n$ targets. The global objective is given as \begin{equation} \label{eq_task_assignment} u(a,\theta)=\sum_{i=1}^n 1\bigg(\sum_{j\ne i}1(a_j=a_i)=0 \bigg)||x_i-\theta_{a_i}||^{-2} \end{equation} where $x_i$ and $\theta_k$ are the locations of agent $i$ and target $k$, respectively. As per \eqref{eq_task_assignment}, agents receive a zero payoff from a target if more than one agent is covering it; the payoff agent $i$ can receive from selecting a target is inversely proportional to the squared distance to the target. Target locations are unknown. Agents receive private noisy signals about the target locations at each step. In the target assignment game with the common utility function in \eqref{eq_task_assignment} and common beliefs on the state, there are multiple Nash equilibria. In particular, any action profile that covers all targets is an NE.
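For reference, the objective \eqref{eq_task_assignment} is straightforward to evaluate; the following sketch (our illustration) assumes agent and target locations are given as coordinate arrays and that no agent sits exactly on a target:
\begin{verbatim}
import numpy as np

def coverage_utility(a, agent_pos, target_pos):
    # Agent i earns ||x_i - theta_{a_i}||^{-2} only if it is the sole
    # agent covering target a_i; conflicting selections earn zero.
    a = np.asarray(a)
    total = 0.0
    for i, k in enumerate(a):
        if np.count_nonzero(a == k) == 1:
            total += np.linalg.norm(agent_pos[i] - target_pos[k]) ** -2
    return total
\end{verbatim}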
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.47\linewidth] {estimation_error.pdf}& \includegraphics[width=.47\linewidth] {distance_equilibrium.pdf} \end{tabular} \caption{(Left) Convergence of estimates $\sum_{j\in\ccalN}\sum_{i\in\ccalN}||\hat v^i_j(t)-f_j(t)||$. (Right) Convergence of empirical frequencies to an NE ($\sum_{i\in\ccalN}||f_{i}(t)-\sigma^*||$), where $\sigma^*$ is an NE strategy.} \label{fig3} \end{figure} Fig. \ref{fig3} compares convergence rates for fixed and time-varying communication networks (ring and star). In the time-varying networks, each edge of the star (or ring) network appears one at a time, similar to gossiping schemes, satisfying the connectivity condition in Remark \ref{assump_1} for $T=5$. Fig. \ref{fig3}(Left) shows that the total error in the estimates of the empirical frequencies converges at the same rate when the network is time-varying as when the network is fixed. This plot confirms the $O(\frac{\log t}{t})$ rate shown in Proposition \ref{proposition_a}. Fig. \ref{fig3}(Right) shows the rate of convergence to an NE action profile $\sigma^*$ for the run considered. Beyond the convergence of the empirical frequencies to $\sigma^*$, the agents start acting according to the NE action profile $\sigma^*$, i.e., each agent selects a different target, after $t=50$ and $t=57$ for the time-varying star and ring networks, respectively. \section{Conclusion} In this paper, we proposed a variant of distributed fictitious play for time-varying communication networks. In the algorithm, agents keep estimates of the empirical frequencies of others' actions by sharing their estimates with their current neighbors and updating them using weighted averaging. We showed that the convergence rate of the estimates is fast enough to guarantee convergence of the empirical frequencies of actions to an NE of the game in which agents eventually have identical expectations of the common objective. The key technical novelty is that the weights matrix is only row stochastic (not doubly stochastic), which means that there is no need for coordination of the weights. \bibliographystyle{IEEEtran}
\section{Introduction} \IEEEPARstart{S}{ynthetic} aperture radar (SAR) is an active remote sensing imaging system that transmits microwave signals in a side-looking direction towards the earth's surface.
The benefits of SAR imaging systems over optical ones include high-resolution images that are independent of daylight and weather-related occlusions, i.e., clouds, dust, and snow \cite{fetterer_sea_1994,bovolo_hierarchical_2013}. This all-day and all-weather acquisition capability often makes the SAR imaging system more reliable than the optical one for analyzing earth resources, e.g., in ground and sea monitoring and disaster assessment \cite{touzi_review_2002}. However, a major problem of SAR images is the presence of speckle \cite{gao_statistical_2010}. Speckle is a granular noise caused by the constructive or destructive interference of backscattered microwave signals, and it degrades the quality of the acquired image, strongly impairing the performance of the aforementioned tasks. Thus, speckle removal is a key and indispensable step in SAR image preprocessing, and the need for it has driven intense research activity on SAR despeckling in the past decade \cite{argenti_tutorial_2013}. Broadly speaking, existing despeckling methods can be grouped into three categories: local window filters, non-local means (NLM)-based methods, and convolutional neural network (CNN)-based methods. The most commonly applied filtering technique restores the center pixel of a moving window with an average of all the pixels in the window. In early years, scholars proposed many despeckling methods based on local window filtering, such as the Lee filter \cite{lee_digital_1980}, the refined Lee filter \cite{jong-sen_l_scattering-model-based_2006}, the Frost filter \cite{frost_model_1982}, the Kuan filter \cite{kuan_adaptive_1987}, and the Gamma-MAP filter \cite{lopes_structure_1993}. These filters can smooth the speckle noise; however, the filtering quality varies with the window size and the number of looks: as they increase, more speckle noise is suppressed but more edge information is lost. To overcome the deficiencies of local window filters, NLM-based methods have been applied to SAR images in recent years \cite{chen_nonlocal_2011, hua_zhong_robust_2014, liu_nonlocal_2014,deledalle_nl_sar_2015}. In the NLM-based methods, filtering is carried out through a weighted mean of all the pixels in a certain neighborhood. However, differing from the local window filters, the weight does not depend on the geometric distance between the target pixel and each given pixel, but on their similarity, measured by the Euclidean distance between the patches surrounding the selected and target pixels. This principle has inspired several methods, such as BM3D \cite{dabov_image_2007}, where the nonlocal approach is combined with wavelet shrinkage and Wiener filtering in a two-step process, and SAR-BM3D \cite{parrilli_nonlocal_2012}, a SAR-oriented version of BM3D that takes the peculiarities of SAR despeckling into account. Local window filters and NLM-based methods both suffer from a heavy computational burden, which may hinder their usage in practical applications. Moreover, for a given SAR image, these methods require the number of looks to be known; in other words, they cannot achieve ``blind despeckling''. However, in many cases, the number of looks of a SAR image is unavailable due to the uncertainties of sensors. Recently, deep learning has become a hot topic in image processing \cite{lecun_deep_2015}.
Benefiting from the new ideas and theories in this area, CNN-based despeckling methods have gradually emerged in recent years~\cite{wang_sar_2017, gu_residual_2017, chierchia_sar_2017, kim_despeckling_2018, zhang_learning_2018, tang_sar_2018, gui_sar_2018}. Their basic goal is to model the nonlinear relationship between speckled and clean images, i.e., supervised learning, by training CNNs with numerous pairs of noisy inputs and clean targets. Differing from the natural image denoising task, the significant problem of SAR despeckling is that no clean targets exist in the real world. SAR-CNN \cite{chierchia_sar_2017} solves this problem by using multi-temporal SAR data of the same scene, where the original data are used as noisy inputs and the corresponding multi-look data as clean targets. Since it is not completely reliable to treat the multi-look images as clean targets, the despeckling effect of SAR-CNN is limited. Differing from SAR-CNN, SAR-IDN \cite{wang_sar_2017} and SAR-DRN \cite{zhang_learning_2018} use optical photographs as simulated clean SAR images and artificially add speckle to obtain the noisy images. To achieve blind despeckling, the common approach of CNN-based methods is to add speckle noise of different levels to the training data. Recent work, Noise2Noise \cite{Lehtinen2018Noise2NoiseLI}, demonstrates that clean targets are unnecessary for the denoising task. Under certain distributional assumptions, such as additive Gaussian noise or Poisson noise, CNNs can learn to predict the clean signal by training on pairs of independent noisy measurements of the same target. Although Noise2Noise only sees noisy images, it can achieve results as good as, or even better than, those obtained with clean targets. This kind of self-supervised learning brings new ideas for SAR image despeckling. In this paper, we propose a self-supervised framework for blind despeckling of SAR images, termed BDSS. The main contributions can be summarized as follows: \begin{itemize} \item[1)] We propose a new state of the art for SAR image blind despeckling, which employs a self-supervised learning approach, i.e., no clean targets are used in the training process. Since the statistical distribution of speckle noise has unit mean, BDSS can still learn to suppress speckle noise when optimized for the $L2$ loss with speckled images used as ground truth. The noise level of the speckle, i.e., the number of looks of the SAR images, does not matter; in other words, BDSS can achieve blind despeckling effectively. \item[2)] The structure of BDSS mainly consists of three enhanced dense blocks, where dilated convolution is used instead of common convolution to enlarge the field of view. In addition, to improve the despeckling ability and reduce computational complexity and memory usage, we remove the batch normalization layers and replace ReLU with parametric ReLU in each dense block. \item[3)] We confirm that the PSNR and SSIM achieved on synthetic speckled images are the new state of the art. In addition, we conduct a despeckling experiment on real SAR images from four different sensors. Compared with other methods in terms of visual results and quantitative evaluation indexes, BDSS better preserves image features, such as edges, point targets, and radiometry, while removing speckle noise effectively. \end{itemize} The rest of this paper is organized as follows: In Section \uppercase\expandafter{\romannumeral2}, the self-supervised blind despeckling theory and the structure of the proposed BDSS model are described.
In Section \uppercase\expandafter{\romannumeral3}, the synthetic- and real-data experimental results and analysis are presented. Finally, our conclusion is given in Section \uppercase\expandafter{\romannumeral4}. \section{Methodology} \subsection{SAR Speckle Noise Degradation Model}\label{sub:SAR Speckle} Let $y \in \mathbb{R}^{W \times H}$ be the observed signal, $x \in \mathbb{R}^{W \times H}$ be the original signal (i.e., the clean image), and $n \in \mathbb{R}^{W \times H}$ be the uncorrelated multiplicative speckle. Then, assuming that the SAR image is an average of $L$ looks, the observed signal $y$ is related to $x$ by the following multiplicative model \cite{lee1989noise}: \begin{equation} y=nx. \end{equation} It is well known that, for a SAR image, $n$ follows a Gamma distribution with the probability density function \begin{equation} p(n)=\frac{1}{\Gamma(L)} L^{L} n^{L-1} e^{-L n}, \end{equation} where $\Gamma(\cdot)$ is the Gamma function; the noise $n\geq 0$ has unit mean and variance $\frac{1}{L}$, with $L\geq 1$. Hence, the process of SAR image despeckling is to estimate the original signal (clean image) $x$ from the observed signal (speckled image) $y$. \subsection{Self-Supervised Blind Despeckling Theory} \begin{figure*}[htb] \centering \includegraphics{Flowchart.pdf} \caption{Flowchart of the proposed BDSS for blind despeckling} \label{fig.Flowchart} \end{figure*} Assume that we have a set of speckled measurements $\left( {{{y}_1},{{y}_2}, \ldots } \right)$ of the SAR data. In existing CNN-based supervised methods, such as SAR-CNN, ID-CNN and SAR-DRN, a set of training pairs $\left( {{y_i},{x_i}} \right)$ is used to minimize the network loss function, where ${{y_i}}$ is the noisy input image and ${{x_i}}$ is the corresponding clean target image. The network function ${{f_\theta }(x)}$ is parameterized by $\theta$: \begin{equation} \underset{\theta}{\operatorname{arg min}} \mathbb{E}_{(y, x)}\left\{L\left(f_{\theta}(y), x\right)\right\}, \label{equ:FS} \end{equation} where $L$ denotes a loss function, normally the ${{L_2}}$ loss $L({f_\theta }(y),x) = {({f_\theta }(y) - x)^2}$. The optimal $\theta$ is found when ${f_\theta }(y)$ has the smallest average deviation from $x$. For the ${{L_2}}$ loss, the minimum of Eq.~(\ref{equ:FS}) is found at: \begin{equation} {f_\theta }\left( \hat y \right) = x. \label{equ:FSMIN} \end{equation} In our self-supervised SAR despeckling approach, we instead replace the clean targets ${x_i}$ with corresponding speckled measurements ${y'_i}$, as illustrated in Fig.~\ref{fig.Flowchart}. The network objective Eq.~(\ref{equ:FS}) becomes: \begin{equation} \mathop {{\mathop{\rm argmin}\nolimits} }\limits_\theta {{\mathop{\mathbb{E}}\nolimits} _{(y,y')}}\left\{ {L\left( {{f_\theta }(y),{y'}} \right)} \right\}, \label{equ:SS} \end{equation} where $y'$ denotes another speckled SAR image; both the input $y$ and the target $y'$ are drawn from corrupted distributions (not the same realization) conditioned on the underlying clean signal. For the ${{L_2}}$ loss, the minimum of Eq.~(\ref{equ:SS}) is found at the arithmetic mean of the observations: \begin{equation} {f_\theta }\left( {y} \right) = {\mathbb{E}}\{y'\}. \label{equ:SSMIN} \end{equation} As described in Section~\ref{sub:SAR Speckle}, the multiplicative speckle noise in a SAR image has unit mean, so the arithmetic mean of the observations equals the clean target: \begin{equation} {\mathbb{E}}\{ y'\} = x, \end{equation} which means that Eq.~(\ref{equ:FSMIN}) and Eq.~(\ref{equ:SSMIN}) are equivalent. In other words, the optimal network parameters $\theta$ remain unchanged. In addition, note that when the clean targets are replaced with speckled targets whose expected value is the same, what the network learns does not change; the noise level of the speckle, i.e., the number of looks, does not matter. This means that the proposed BDSS can achieve reliable blind despeckling.
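To make the construction concrete, a minimal sketch of the training-pair generation is given below (our illustration; the network and optimizer are omitted). Two independent Gamma-distributed speckle realizations of the same underlying image provide the input $y$ and the target $y'$:
\begin{verbatim}
import numpy as np

def speckle_pair(x, L, rng=None):
    # Two independent observations y = n1*x and y' = n2*x of the same
    # clean image x, with n ~ Gamma(shape=L, scale=1/L): unit mean and
    # variance 1/L, matching the speckle model above.
    if rng is None:
        rng = np.random.default_rng()
    n1 = rng.gamma(L, 1.0 / L, size=x.shape)
    n2 = rng.gamma(L, 1.0 / L, size=x.shape)
    return n1 * x, n2 * x

# Training then minimizes the L2 loss between f_theta(y) and the
# *speckled* target y'; since E[y'] = x, the minimizer coincides with
# the one obtained from clean targets.
\end{verbatim}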
\subsection{Network Architecture} \begin{figure*}[htb] \centering \includegraphics{Structure_1.pdf} \caption{Structure of BDSS} \label{fig.BDSS framework} \end{figure*} \begin{figure*}[tb] \centering \includegraphics{Structure_2.pdf} \caption{Structure of enhanced dense block} \label{fig.Structure_2} \end{figure*} \begin{table}[] \renewcommand{\arraystretch}{1.3} \caption{Detailed Configuration of BDSS} \label{Configuration} \centering \begin{tabular}{m{1.7cm}<{\centering} m{0.4cm}<{\centering} m{5.4cm}<{\centering}} \toprule[2pt] Name & ${N_{out}}$ & Configuration \\ \hline Input & 1 & \\ \hline LowLevel & 128 & CONV 3$ \times $3, padding=1, PReLU \\ \hline \multirow{15}{*}{\shortstack{Enhanced\\DenseBlock-A}} & 16 & DCONV 3$ \times $3, dilation=1, padding=1, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=2, padding=2, PReLU \\ \cline{2-3} & 32 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=3, padding=3, PReLU \\ \cline{2-3} & 48 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=4, padding=4, PReLU \\ \cline{2-3} & 64 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=4, padding=4, PReLU \\ \cline{2-3} & 80 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=3, padding=3, PReLU \\ \cline{2-3} & 96 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=2, padding=2, PReLU \\ \cline{2-3} & 112 & Concat, PReLU \\ \cline{2-3} & 16 & DCONV 3$ \times $3, dilation=1, padding=1, PReLU \\ \cline{2-3} & 128 & Concat, PReLU \\ \hline Concat-A & 256 & LowLevel and enhanced DenseBlock-A \\ \hline Enhanced DenseBlock-B & 128 & Same as enhanced DenseBlock-A \\ \hline Concat-B & 384 & Concat-A and Enhanced DenseBlock-B \\ \hline Enhanced DenseBlock-C & 128 & Same as enhanced DenseBlock-A \\ \hline Concat-C & 512 & Concat-B and enhanced DenseBlock-C \\ \hline Bottleneck & 256 & CONV 1$ \times $1, padding=0 \\ \hline Reconstruction & 1 & CONV 3$ \times $3, padding=1 \\ \toprule[2pt] \end{tabular} \end{table} The overall architecture of the BDSS framework is displayed in Fig.~\ref{fig.BDSS framework}, and the detailed configuration of the proposed model is provided in Table~\ref{Configuration}. BDSS can be decomposed into several parts: a convolution layer for extracting low-level features, three dense blocks for extracting high-level features, and the bottleneck and reconstruction layers for generating the output. In each dense block, every dilated convolution layer is followed by a parametric ReLU (PReLU) \cite{he_delving_2015} activation function; the bottleneck and reconstruction layers have no activation. The details of the proposed network structure are described in the following. \subsubsection{Enhanced Dense Block} Inspired by the DenseNet structure \cite{huang_densely_2017}, we propose an enhanced dense block, whose structure is displayed in Fig.~\ref{fig.Structure_2}. After applying a convolution layer to the input speckled images to learn low-level features, three such blocks are applied to extract the high-level features.
Differing from ResNets \cite{he_deep_2016}, the feature maps extracted by different convolution layers are concatenated rather than directly summed. In each dense block, the $i$th convolution layer (a dilated convolution, described in Section~\ref{Dilated}) receives the feature maps of all preceding layers as input: \begin{equation} {X_i} = {H_i}\left( {\left[ {{X_0},{X_1}, \ldots {X_{i - 1}}} \right]} \right), \end{equation} where $\left[ {{X_0},{X_1}, \ldots {X_{i - 1}}} \right]$ refers to the concatenation of the feature maps produced in layers $0,1, \ldots, i - 1$, and ${H_i}\left( \cdot \right)$ denotes the composite function. In \cite{huang_densely_2017}, the composite function is decomposed into three layers: batch normalization (BN) \cite{ioffe_batch_nodate}, followed by a rectified linear unit (ReLU) and a $3 \times 3$ convolution (CONV). BN normalizes the features using the mean and variance of a batch during training, and uses the estimated mean and variance of the whole training dataset during testing. However, applying BN in image-to-image tasks is not optimal: BN tends to introduce unpleasant artifacts and to limit the generalization ability, as has been shown for low-level computer vision problems such as PSNR-oriented super-resolution and deblurring \cite{lim_enhanced_2017} \cite{nah_deep_2017}. Therefore, we remove BN to improve the despeckling ability and reduce computational complexity and memory usage. In addition, we replace ReLU with PReLU, and the common convolution is replaced with a dilated one (Section~\ref{Dilated}). In the structure of the proposed model, short connections are added between a layer and every other layer. This kind of connectivity strengthens the flow of information through deep networks, thus alleviating the vanishing-gradient problem. In addition, the reuse of features substantially reduces the number of parameters; therefore, we can keep memory usage and computation low while increasing the network depth for high performance.
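As a rough PyTorch sketch of one enhanced dense block (our illustration; the channel bookkeeping follows our reading of Table~\ref{Configuration}, with growth rate 16 and the dilation schedule of the next subsection):
\begin{verbatim}
import torch
import torch.nn as nn

class EnhancedDenseBlock(nn.Module):
    # Eight 3x3 dilated convolutions (dilations 1,2,3,4,4,3,2,1) with
    # PReLU activations, no batch normalization, and dense concatenation
    # of all preceding feature maps; output is 8 x 16 = 128 channels.
    def __init__(self, in_channels=128, growth=16):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for d in (1, 2, 3, 4, 4, 3, 2, 1):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, 3, padding=d, dilation=d),
                nn.PReLU()))
            channels += growth   # the next layer also sees this output

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats[1:], dim=1)
\end{verbatim}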
\subsubsection{Dilated Convolution} \label{Dilated} \begin{figure}[] \centering \includegraphics{DCONV.pdf} \caption{Illustration of the dilated convolution. (a) corresponds to 1-dilated convolution, i.e., the common convolution; (b) corresponds to 2-dilated convolution; (c) corresponds to 3-dilated convolution; and (d) corresponds to 4-dilated convolution. The kernel sizes of their convolutions are all $3 \times 3$. The FOV becomes significantly larger as the dilation factor increases.} \label{fig.DCONV} \end{figure} For SAR despeckling, context information can facilitate the reconstruction of the corrupted pixels. In a CNN, enlarging the field of view (FOV) is the main way to augment the context information. Generically, there are two ways to achieve this purpose: increasing the size of the convolution kernels, or increasing the depth of the network. However, both lead to more network parameters, which in turn increases computational complexity and memory usage. Thus, we introduce the dilated convolution (DCONV) \cite{yu_multi-scale_2015} into our model. The main idea of DCONV is to insert ``holes'' (zeros) between filter elements; thereby, DCONV can increase the network's FOV while keeping the merits of convolution. Let $I$ be an input discrete 2-dimensional matrix, i.e., a speckled image, and let $k$ be a discrete filter of size ${{r}^2}$. The discrete convolution operator $ * $ can be defined as: \begin{equation} \left( {I*k} \right)\left( p \right) = \sum\nolimits_{s + t = p} {I\left( s \right)} k\left( t \right). \end{equation} Replacing CONV with DCONV, the $l$-dilated convolution operator ${{ * _l}}$ can be defined as: \begin{equation} \left( {I{*_l}k} \right)\left( p \right) = \sum\nolimits_{s + lt = p} {I\left( s \right)} k\left( t \right), \end{equation} where $l$ is the dilation factor. For the common convolution, the field of view $FO{V_c}$ is the same as the size of the filter: \begin{equation} FO{V_c} = {r^2}, \end{equation} while for the dilated convolution, the field of view $FO{V_d}$ is: \begin{equation} FO{V_d} = {\left( {\left( {r + 1} \right)l - 1} \right)^2}. \end{equation} Fig.~\ref{fig.DCONV} shows the four kinds of $3 \times 3$ DCONV used in BDSS; their dilation factors are set to 1, 2, 3, and 4, respectively. For the 1-dilated convolution in Fig.~\ref{fig.DCONV} (a), i.e., the common convolution, the FOV is $3 \times 3$. For the other dilated convolutions in Fig.~\ref{fig.DCONV} (b), (c), and (d), the FOVs are $7 \times 7$, $11 \times 11$, and $15 \times 15$, respectively. \begin{figure*}[tb] \centering \includegraphics{SAR-like.pdf} \caption{Examples of simulated SAR-like images and SAR images.} \label{fig.SAR-like} \end{figure*} \section{Experiments and Analysis} To verify the effectiveness of the proposed method, both synthetic and real-data experiments are performed, as described below. \subsection{Setup} \subsubsection{SAR-like dataset} The training data consists of 0.21 million images from the ILSVRC 2017 ImageNet dataset \cite{ILSVRC15}. Referring to the histograms of real SAR images, we transform the ImageNet images into SAR-like images with a similar intensity distribution and use them as the training dataset. As shown in Fig.~\ref{fig.SAR-like}, in terms of both human visual observation and the histogram features of the corresponding images, the transformed images and the SAR images are extremely similar. To achieve blind despeckling with the proposed BDSS, speckle noise with a random number of looks $L = rand\left[ {1, + \infty } \right)$ is applied to the training data. All images sent to the network for training are cropped to $112 \times 112$ pixels. \subsubsection{Parameters Setting and Network Training} The proposed model is trained using the Adam \cite{kingma_adam_2014} algorithm as the gradient descent optimization method, with momentum ${\beta _1} = 0.9$, ${\beta _2} = 0.999$, and $\varepsilon = {10^{ - 8}}$. The learning rate $\alpha$ is initialized to 0.001 for the whole network and is halved every three epochs. The training process of BDSS takes 16 epochs; an epoch equals about $1.3\times{10^4}$ iterations with a batch size of 16. We employ the PyTorch \cite{paszke2017automatic} framework to train the proposed BDSS on a PC with 128-GB RAM, an Intel Core i7-6900K CPU, and an NVIDIA 1080Ti GPU. \subsubsection{Assessment Indexes} For the experiment on synthetic speckled images, the clean reference $x$ is available, so the performance assessment is straightforward. We choose two full-reference indexes, the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) \cite{wang_image_2004}. A higher PSNR value means a better recovery of the underlying signal, while a higher SSIM value means a better recovery of the underlying structural information. For the experiment on real SAR images, no clean reference is available, so the performance assessment relies on no-reference measures.
In this paper, we evaluate the performance of despeckling methods in four respects \cite{di_martino_benchmarking_2014} \cite{ma_review_2018}: Case 1: Speckle Reduction. The equivalent number of looks (ENL) \cite{oliver2004understanding} is usually used to assess the amount of speckle in SAR images; it is generally computed as: \begin{equation} ENL = {{{\mu ^2}} \over {{\sigma ^2}}}, \end{equation} where $\mu$ and $\sigma$ respectively represent the mean and standard deviation of a homogeneous area. The larger the ENL value, the better the speckle-suppression performance. Case 2: Point Target Feature Preservation. A strong point target is usually characterized by a cluster of pixels whose reflectivity values are much higher than the mean reflectivity of the surrounding scene. Based on this condition, a target-to-clutter ratio (TCR) is employed in \cite{argenti_tutorial_2013}, which measures the difference in the intensity ratios between point targets and the surrounding areas before and after despeckling: \begin{equation} TCR=\left|20 \log _{10} \frac{\max _{p}\left(I_{d}\right)}{\operatorname{mean}_{p}\left(I_{d}\right)}-20 \log _{10} \frac{\max _{p}\left(I_{s}\right)}{\operatorname{mean}_{p}\left(I_{s}\right)}\right|, \end{equation} where ${I_s}$ and ${I_d}$ are respectively the speckled and the despeckled images. The subscript $p$ denotes the patch containing a point target, and ${{{\max }_p}}$ and ${{{{\mathop{\rm mean}\nolimits} }_p}}$ are computed over the patch. \begin{table*}[tb] \renewcommand{\arraystretch}{1.3} \caption{Numerical Indexes for Synthetic Speckled Images} \centering \begin{tabular}{cccccccccc} \toprule[2pt] Looks&Index&Speckled Images&PPB-nonit&PPB-it25&SAR-BM3D&FANS&SAR-IDN&SAR-DRN&BDSS\\ \hline \multirow{2}{*}{L=1}&PSNR&16.10&18.97&18.99&21.07&19.85&20.31&\color{blue}27.91&\color{red} 28.45\\ \cline{2-10} &SSIM&0.3584&0.3556&0.3998&0.5332&0.5508&0.4972&\color{blue}0.8103&\color{red}0.8260\\ \hline \multirow{2}{*}{L=2}&PSNR&18.76&20.99&21.17&23.52&22.79&22.29&\color{blue}29.27&\color{red}29.69\\ \cline{2-10} &SSIM&0.4715&0.4653&0.5056&0.6391&0.6202&0.5942&\color{blue}0.8431&\color{red}0.8545\\ \hline \multirow{2}{*}{L=4}&PSNR&21.52&23.21&23.55&25.80&24.98&24.46&\color{blue}30.55&\color{red}30.94\\ \cline{2-10} &SSIM&0.5802&0.5692&0.6367&0.7169&0.6796&0.6966&\color{blue}0.8719&\color{red}0.8811\\ \hline \multirow{2}{*}{L=8}&PSNR&24.33&25.40&25.87&27.80&26.96&26.23&\color{blue}31.90&\color{red}32.32\\ \cline{2-10} &SSIM&0.6789&0.6630&0.7020&0.7753&0.7427&0.7793&\color{blue}0.8991&\color{red}0.9060\\ \toprule[2pt] \end{tabular} \label{Tabel:Simulated} \end{table*} Case 3: Edge Feature Preservation. The edge-preservation degree based on the ratio of average (EPD-ROA) is given by \cite{xiangli_nie_variational_2015}: \begin{equation} EPD - ROA = \frac{\sum_{i \in I} \left| {E_{DH}(i)}/{E_{DV}(i)} \right|}{\sum_{i \in I} \left| {E_{SH}(i)}/{E_{SV}(i)} \right|}, \end{equation} where $I$ is the index set of a grey image, and ${E_{DH}}\left( i \right)$ and ${E_{DV}}\left( i \right)$ represent the adjacent pixel values of the despeckled image along the horizontal and vertical directions, respectively.
Similarly, ${E_{SH}}\left( i \right)$ and ${E_{SV}}\left( i \right)$ represent the corresponding adjacent pixel values of the speckled image. The closer the EPD-ROA value is to 1, the better the edge-preservation ability. Case 4: Radiometric Preservation. A successful despeckling method should not significantly change the mean within a homogeneous region. A typical index for measuring the radiometric preservation capability is the mean of ratio (MOR); an MOR value between the speckled image and the despeckled image significantly different from 1 indicates radiometric distortion. \subsubsection{Compared Methods} Six despeckling methods are used for comparison: the non-iterated and 25-iterated PPB filters \cite{deledalle_iterative_2009}, SAR-BM3D \cite{parrilli_nonlocal_2012}, fast adaptive nonlocal SAR despeckling (FANS) \cite{cozzolino_fast_2014}, SAR-IDN \cite{wang_sar_2017}, and SAR-DRN \cite{zhang_learning_2018}. The first four methods can only despeckle a SAR image with a specified number of looks, while the other two CNN-based methods, SAR-IDN and SAR-DRN, can achieve blind despeckling. Among these methods, SAR-IDN and SAR-DRN are considered the state of the art. In the following experiments, the parameter settings of the PPB filter, SAR-BM3D, and FANS are set as suggested in \cite{deledalle_iterative_2009}, \cite{parrilli_nonlocal_2012}, and \cite{cozzolino_fast_2014}, respectively. For SAR-IDN and SAR-DRN, the learning rate, the number of training iterations, and the batch size are the same as for the proposed BDSS, and the other parameter settings follow \cite{wang_sar_2017} and \cite{zhang_learning_2018}. The input training data of SAR-IDN and SAR-DRN are the same as for the proposed BDSS, and their ground truth data are the original clean images without speckle noise. \subsection{Experiment on Synthetic Speckled Images} \label{Synthetic} The use of synthetic speckled images allows an objective assessment of the speckle-suppression efficiency. To this end, the common approach in the literature is to use a virtually noiseless optical image as the clean reference and to inject speckle onto it. In this experiment, we use the UC Merced land use dataset \cite{yang_bag--visual-words_2010} for the test, whose images are manually extracted from large images in the USGS National Map Urban Area Imagery collection for various urban areas around the country, with a pixel resolution of 1 foot. The dataset has 21 land-use classes, and each class has 100 images with a size of $256 \times 256$ pixels. We choose 360 images (18 classes, 20 images per class) for the test. To verify the despeckling effectiveness with a known speckle look, we set four different speckle noise levels of looks $L = 1,2,4$ and $8$. To obtain an integrated comparison of the other methods and the proposed BDSS, quantitative evaluation indexes (PSNR and SSIM) and a visual comparison are used to analyze the results of the different methods. In Table~\ref{Tabel:Simulated}, we present the averages of the evaluation indexes for the four different speckle noise levels. To give detailed visual results, a Runway image with $L = 1$, a Building image with $L = 2$, a Forest image with $L = 4$, and a Beach image with $L = 8$ are chosen, corresponding to Figs.~\ref{Runway}, \ref{Building}, \ref{Forest} and \ref{Beach}.
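(As an aside before the results, the simplest of the no-reference indexes defined above reduce to a few lines of code; the following numpy sketch is our illustration, and the MOR ratio orientation is our assumption.)
\begin{verbatim}
import numpy as np

def enl(region):
    # Equivalent number of looks over a homogeneous region:
    # mean squared divided by variance.
    region = np.asarray(region, dtype=float)
    return region.mean() ** 2 / region.var()

def mor(speckled_region, despeckled_region):
    # Mean of ratio over a homogeneous region; values far from 1
    # indicate radiometric distortion (ratio orientation assumed).
    r = (np.asarray(speckled_region, dtype=float)
         / np.asarray(despeckled_region, dtype=float))
    return float(r.mean())
\end{verbatim}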
In Table~\ref{Tabel:Simulated}, the best performance for each quality index is marked in red and the second-best in blue. As shown in Table~\ref{Tabel:Simulated}, the proposed BDSS obtains the best PSNR and SSIM results at all four speckle noise levels, and SAR-DRN obtains the second-best results. The PSNR and SSIM results of all methods improve as the number of looks increases. For both the non-iterated and 25-iterated PPB filters, the PSNR results improve to a certain degree compared with the input speckled images; however, their SSIM results are not significantly improved, and are even reduced. Specifically, the SSIM results of PPB-nonit underperform the input speckled images by about 0.01, and those of PPB-it25 outperform them by only about 0.04. SAR-BM3D, FANS, and SAR-IDN perform moderately, among which SAR-BM3D is the best. Compared with SAR-BM3D, the supervised SAR-DRN and our proposed BDSS greatly improve the performance, by at least 5 dB in PSNR and 0.1 in SSIM. In addition, the speckle noise level has less impact on BDSS and SAR-DRN: compared with $L=8$, the PSNR and SSIM results of BDSS at $L=1$ are reduced by only about 3.9 dB and 0.08, respectively, while those of SAR-BM3D are reduced by about 6.8 dB and 0.24, respectively. Though it never sees a clean reference, our proposed BDSS performs even better than the state-of-the-art SAR-DRN. \begin{figure*}[htb] \centering \includegraphics{Visio-Similated_1.pdf} \caption{Results for the Runway image contaminated by 1-look speckle. (a) Original clean image. (b) Speckled image. (c) PPB-nonit. (d) PPB-it25. (e) SAR-BM3D. (f) FANS. (g) SAR-IDN. (h) SAR-DRN. (i) BDSS.} \label{Runway} \end{figure*} \begin{figure*}[htb] \centering \includegraphics{Visio-Similated_2.pdf} \caption{Results for the Building image contaminated by 2-look speckle. (a) Original clean image. (b) Speckled image. (c) PPB-nonit. (d) PPB-it25. (e) SAR-BM3D. (f) FANS. (g) SAR-IDN. (h) SAR-DRN. (i) BDSS.} \label{Building} \end{figure*} \begin{figure*}[htb] \centering \includegraphics{Visio-Similated_3.pdf} \caption{Results for the Forest image contaminated by 4-look speckle. (a) Original clean image. (b) Speckled image. (c) PPB-nonit. (d) PPB-it25. (e) SAR-BM3D. (f) FANS. (g) SAR-IDN. (h) SAR-DRN. (i) BDSS.} \label{Forest} \end{figure*} \begin{figure*}[htb] \centering \includegraphics{Visio-Similated_4.pdf} \caption{Results for the Beach image contaminated by 8-look speckle. (a) Original clean image. (b) Speckled image. (c) PPB-nonit. (d) PPB-it25. (e) SAR-BM3D. (f) FANS. (g) SAR-IDN. (h) SAR-DRN. (i) BDSS.} \label{Beach} \end{figure*} Figs.~\ref{Runway}, \ref{Building}, \ref{Forest} and \ref{Beach} give some visual samples. Fig.~\ref{Runway} presents the Runway image contaminated by 1-look speckle. This image contains ground signs with distinct edge features, such as an arrow. The two PPB filters, SAR-BM3D, and FANS all result in a certain degree of distortion. SAR-IDN preserves edge features well but does not despeckle effectively. As shown in Fig.~\ref{Runway} (h) and (i), SAR-DRN and BDSS keep the ground signs clear while despeckling effectively. A similar situation is reflected, even more obviously, in Fig.~\ref{Building}. The PSNR and SSIM values of the two PPB filters are lower than those of the input speckled image. The SSIM values of SAR-BM3D and FANS are improved obviously, while their PSNR values are not.
Fig.~\ref{Forest} presents the Forest image contaminated by 4-look speckle, which contains a lot of texture detail. As shown in Fig.~\ref{Forest} (c), (d), (e), and (f), the two PPB filters, SAR-BM3D, and FANS lose much texture detail for the sake of smoothing. This is reflected in the fact that their PSNR values improve compared with Fig.~\ref{Forest} (b) (the input speckled image), while their SSIM values decrease. Fig.~\ref{Beach} presents the Beach image contaminated by 8-look speckle, which contains a homogeneous sea area, a homogeneous land area, and a wave area with much texture detail. Similarly, we can see that the two PPB filters, SAR-BM3D, and FANS remove much speckle noise in the two homogeneous areas, while losing much texture detail in the wave area. SAR-IDN shows the opposite behavior. SAR-DRN and the proposed BDSS perform well on both despeckling and texture-detail preservation, and BDSS performs better. \subsection{Experiment on Real SAR Images} \begin{figure*}[htbp] \centering \includegraphics{Visio-SAR1.pdf} \caption{Results for the Sentinel-1 image contaminated by 1-look speckle. (a) Original speckled image. (b) PPB-nonit. (c) PPB-it25. (d) SAR-BM3D. (e) FANS. (f) SAR-IDN. (g) SAR-DRN. (h) BDSS.} \label{Sentinel-1} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics{Visio-SAR2.pdf} \caption{Results for the TerraSAR image contaminated by 1-look speckle. (a) Original speckled image. (b) PPB-nonit. (c) PPB-it25. (d) SAR-BM3D. (e) FANS. (f) SAR-IDN. (g) SAR-DRN. (h) BDSS.} \label{TerraSAR-X} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics{Visio-SAR3.pdf} \caption{Results for the ALOS-2 image contaminated by 2-look speckle. (a) Original speckled image. (b) PPB-nonit. (c) PPB-it25. (d) SAR-BM3D. (e) FANS. (f) SAR-IDN. (g) SAR-DRN. (h) BDSS.} \label{ALOS} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics{Visio-SAR4.pdf} \caption{Results for the AIRSAR image contaminated by 4-look speckle. (a) Original speckled image. (b) PPB-nonit. (c) PPB-it25. (d) SAR-BM3D. (e) FANS. (f) SAR-IDN. (g) SAR-DRN.
(h) BDSS.} \label{AIRSAR} \end{figure*} \begin{table*}[htbp] \renewcommand{\arraystretch}{1.3} \caption{ENL Indexes for Real SAR Images} \centering \begin{tabular}{cccccccccc} \toprule[2pt] \multicolumn{2}{c}{Data} & Original & PPB-nonit & PPB-it25 & SAR-BM3D & FANS & SAR-IDN & SAR-DRN & BDSS \\ \hline \multicolumn{2}{c}{TerraSAR} & 16.79 & \color{blue}976.91 & \color{red}{1511.19} & 899.04 & 441.58 & 157.49 & 268.95 & 295.31 \\ \hline \multicolumn{2}{c}{ALOS-2} & 22.57 & \color{blue}16440.13 & \color{red}{23932.63} & 24055.05 & 959.28 & 382.66 & 193.43 & 583.69 \\ \hline \multirow{2}{*}{AIRSAR} & Region 1 & 213.07 & \color{blue}{55156.8} & 16652.38 & \color{red}{564467.56} & 6962.56 & 420.02 & 1355.79 & 2415.12 \\ \cline{2-10} & Region 2 & 72.86 & \color{blue}{21312.28} & 8468.95 & \color{red}{152451.39} & 4165.91 & 185.46 & 612.41 & 1863.71 \\ \hline \multicolumn{2}{c}{Mean} & 84.14 & \color{blue}{24191.28} & 14032.07 & \color{red}{196473.88} & 2787.80 & 320.06 & 606.06 & 1098.04 \\ \toprule[2pt] \label{Tabel:ENL} \end{tabular} \end{table*} \begin{table*}[htbp] \renewcommand{\arraystretch}{1.3} \caption{EPD-ROA Indexes for Real SAR Images} \centering \begin{tabular}{ccccccccc} \toprule[2pt] \multicolumn{2}{c}{Data}& PPB-nonit & PPB-it25 & SAR-BM3D & FANS & SAR-IDN & SAR-DRN & BDSS \\ \hline \multicolumn{2}{c}{Sentinel-1} & 0.7915 & 0.7917 & 0.8060 & 0.7918 & 0.8658 & \color{blue}{0.8824} & \color{red}{0.9145 }\\ \hline \multicolumn{2}{c}{TerraSAR} & 0.8337 & 0.8960 & 0.9016 & 0.8606 & 0.9188 & \color{blue}{0.9583} & \color{red}{0.9803} \\ \hline \multirow{2}{*}{ALOS-2}& Region 1 & 0.9191 & 0.9343 & 0.9288 & 0.9215 & 0.9348 & \color{blue}{0.9652} & \color{red}{0.9774} \\ \cline{2-9} &Region 2 & 0.9137 & 0.9474 & 0.9288 & 0.9292 & 0.9343 & \color{blue}{0.9656} & \color{red}{0.9774} \\ \hline \multicolumn{2}{c}{Mean} & 0.8645 & 0.8923 & 0.8913 & 0.8758 & 0.9134 & \color{blue}{0.9429} & \color{red}{0.9624}\\ \toprule[2pt] \end{tabular} \label{Tabel:EPD-ROA} \end{table*} \begin{table*}[htbp] \renewcommand{\arraystretch}{1.3} \caption{TCR Indexes for Real SAR Images} \centering \begin{tabular}{ccccccccc} \toprule[2pt] \multicolumn{2}{c}{Data}& PPB-nonit & PPB-it25 & SAR-BM3D & FANS & SAR-IDN & SAR-DRN & BDSS \\ \hline \multirow{2}{*}{Sentinel-1} & Point Target 1 & 0.5849 & 0.2499 & \color{blue}{0.0914} & 0.7039 & 0.4603 & 0.1434 & \color{red}{0.0873} \\ \cline{2-9} & Point Target 2 & 5.4117 & 0.4245 & 0.2748 & 2.3786 & 0.6111 & \color{blue}{0.1296 } & \color{red}{0.1052} \\ \hline \multirow{3}{*}{TerraSAR} & Point Target 3 & 0.6628 & 0.1821 & \color{blue}{0.1338} & 0.6509 & 0.9046 & 0.1870 & \color{red}{0.0100} \\ \cline{2-9} & Point Target 4 & 0.6097 & 0.0768 & \color{blue}{0.1011} & 0.5657 & 0.5850 & 0.1857 & \color{red}{0.0320} \\ \cline{2-9} & Point Target 5 & 2.1103 & 0.3009 & \color{blue}{0.1242} & 1.1786 & 1.2492 & 0.2788 & \color{red}{0.0173} \\ \hline \multicolumn{2}{c}{Mean} & 1.3319 & 0.1713 & 0.1000 & 0.7467 & 0.3952 & \color{blue}{0.0920} & \color{red}{0.0405}\\ \toprule[2pt] \end{tabular} \label{Tabel:TCR} \end{table*} \begin{table*}[htbp] \renewcommand{\arraystretch}{1.3} \caption{MOR Indexes for Real SAR Images} \centering \begin{tabular}{ccccccccc} \toprule[2pt] \multicolumn{2}{c}{Data}& PPB-nonit & PPB-it25 & SAR-BM3D & FANS & SAR-IDN & SAR-DRN & BDSS \\ \hline \multicolumn{2}{c}{TerraSAR} & 1.1706 & 1.1791 & 1.1134 & 1.2243 & 1.2198 & \color{blue}{1.0440} & \color{red}{1.0238} \\ \hline \multicolumn{2}{c}{ALOS-2} & 1.1123 & 1.1069 & 1.0882 & 1.1409 & 1.0483 & 
\color{blue}{1.0307} & \color{red}{1.0092} \\ \hline \multirow{2}{*}{AIRSAR} & Region 1 & 1.1141 & 1.0838 & 1.1085 & 1.0973 & 1.0544 & \color{blue}{1.0457} & \color{red}{1.0220} \\ \cline{2-9} & Region 2 & 1.0272 & 1.0301 & \color{blue}{1.0291} & 1.0533 & 1.0450 & 1.0353 & \color{red}{1.0241} \\ \hline \multicolumn{2}{c}{Mean} & 1.1323 & 1.1233 & 1.1034 & 1.1542 & 1.1075 & \color{blue}{1.0401} & \color{red}{1.0183}\\ \toprule[2pt] \end{tabular} \label{Tabel:MOR} \end{table*} In this section, to further verify the effectiveness of the proposed BDSS, four real SAR images of different scenes acquired by different remote sensors are used for evaluation: 1) a harbor near Lianyungang, China (downloaded from https://sentinel.esa.int/web/sentinel/missions/sentinel-1/data-products, obtained by Sentinel-1, C-band, one look, as shown in Fig.~\ref{Sentinel-1} (a)); 2) a harbor near Rotterdam, Netherlands (downloaded from https://www.intelligence-airbusds.com, obtained by TerraSAR, X-band, one look, as shown in Fig.~\ref{TerraSAR-X} (a)); 3) an agricultural area in Brazil (downloaded from https://www.eorc.jaxa.jp, obtained by ALOS-2, L-band, 2 looks, as shown in Fig.~\ref{ALOS} (a)); and 4) an agricultural area near Flevoland, Netherlands (downloaded from https://earth.esa.int, obtained by AIRSAR, L-band, 4 looks, as shown in Fig.~\ref{AIRSAR} (a)). These images are all cropped to $600\times600$ pixels. The corresponding visual results are given in Figs.~\ref{Sentinel-1}, \ref{TerraSAR-X}, \ref{ALOS} and \ref{AIRSAR}. Due to the lack of a true signal for real SAR images, the indexes ENL, EPD-ROA, TCR, and MOR are adopted for evaluation. The numerical results are given in Tables~\ref{Tabel:ENL}, \ref{Tabel:EPD-ROA}, \ref{Tabel:TCR}, and \ref{Tabel:MOR}. The ENL and MOR indexes are computed over four homogeneous regions manually selected in Figs.~\ref{TerraSAR-X} (a), \ref{ALOS} (a) and \ref{AIRSAR} (a) (identified by yellow rectangles). The EPD-ROA indexes are computed over four regions with many edge features in Figs.~\ref{Sentinel-1} (a), \ref{TerraSAR-X} (a), and \ref{ALOS} (a) (identified by green rectangles). The TCR indexes are computed for five point targets on the sea in Figs.~\ref{Sentinel-1} (a) and \ref{TerraSAR-X} (a) (identified by red arrows). The best performance for each quality index is marked in red and the second-best in blue. As shown in Table~\ref{Tabel:ENL}, the ENL values of the two PPB filters, SAR-BM3D, and FANS are huge, the largest even reaching hundreds of thousands, while the ENL values of the original input SAR images are only about 200 or lower. This phenomenon is also visible in Figs.~\ref{TerraSAR-X}, \ref{ALOS} and \ref{AIRSAR} (b)-(e): these methods appear to over-smooth. In the above experiment on synthetic speckled images, their lower SSIM values also confirm this. Such over-smoothed images are obviously not desirable in practical applications such as target detection and classification. An ideal method should despeckle while preserving features such as point targets, edges, and radiometry. SAR-IDN, SAR-DRN, and our proposed BDSS handle the trade-off between despeckling and feature preservation well. From the mean EPD-ROA, TCR, and MOR values shown in Tables~\ref{Tabel:EPD-ROA}, \ref{Tabel:TCR} and \ref{Tabel:MOR}, BDSS performs best on edge-feature, point-target, and radiometric preservation.
SAR-DRN performs second-best, and SAR-IDN performs third-best on edge and radiometric preservation and fourth-best on point-target preservation. From the ENL values shown in Table~\ref{Tabel:ENL}, we can see that the proposed BDSS performs best on speckle reduction compared with SAR-IDN and SAR-DRN. Based on the analysis above, we can conclude that the proposed BDSS method notably suppresses speckle while preserving features such as edges, point targets, and radiometry, and that SAR-DRN has comparable performance with BDSS. \subsection{Blind Despeckling Performance Analysis} To verify the blind despeckling effectiveness of the proposed method, we design the following experiment. We corrupt clean optical images, the same as those used in Section~\ref{Synthetic}, with speckle noise at a random number of looks $L = \mathrm{random}\left[1,10\right]$. Table~\ref{Blind} gives the PSNR and SSIM values between the original clean images and the images despeckled by the different methods. For a given SAR image, the number of looks must be known for PPB, SAR-BM3D, and FANS; therefore, we set four numbers of looks, $L = 1, 2, 4, 8$, for each of these methods. From Table~\ref{Blind}, we can see that, as the number of looks set for these methods increases, the PSNR and SSIM values also increase. For PPB-nonit with $L=1, 2$, PPB-it25 with $L=1$, SAR-BM3D with $L=1$, FANS with $L=1, 2$ and SAR-IDN, the PSNR values are not improved compared with the input speckled images. Similarly, for PPB-nonit with $L=1, 2, 4$, PPB-it25 with $L=1, 2$, SAR-BM3D with $L=1, 2$, FANS with $L=1, 2$ and SAR-IDN, the SSIM values are not improved. Compared with the input speckled images, the PSNR and SSIM values of the proposed BDSS improve significantly, by up to 10.83 and 0.3799, respectively. SAR-DRN achieves the second-best performance. Even though no clean references are used in its training process, our proposed BDSS achieves better blind despeckling performance than SAR-DRN. A possible reason is that, in order to despeckle at different speckle noise levels, SAR-DRN, which is based on supervised learning, needs to learn multiple mappings (from different levels of speckle noise to clean images), whereas BDSS, which is based on self-supervised learning, only needs to learn a single kind of mapping (speckle noise to speckle noise).
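For concreteness, the following minimal Python sketch (illustrative only, not the authors' implementation; the constant test image and the random seed are placeholders) reproduces the speckle model of this blind test, namely unit-mean multiplicative Gamma noise with a random number of looks, together with the PSNR and ENL statistics used above.
\begin{verbatim}
# Minimal sketch of the blind-despeckling test setup (illustrative only).
import numpy as np

rng = np.random.default_rng(seed=0)

def add_speckle(clean, num_looks):
    # L-look multiplicative speckle: Gamma(L, 1/L) noise has unit mean,
    # so the expectation of the speckled image equals the clean image.
    noise = rng.gamma(shape=num_looks, scale=1.0 / num_looks,
                      size=clean.shape)
    return clean * noise

def psnr(reference, estimate, peak=1.0):
    # Peak signal-to-noise ratio (in dB) with respect to a clean reference.
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def enl(region):
    # Equivalent number of looks of a homogeneous region: mean^2 / variance.
    return region.mean() ** 2 / region.var()

clean = np.full((256, 256), 0.5)  # constant, hence homogeneous, test image
L = int(rng.integers(1, 11))      # random number of looks in [1, 10]
speckled = add_speckle(clean, L)
print(L, psnr(clean, speckled), enl(speckled))  # enl(speckled) close to L
\end{verbatim}
The unit-mean property of this noise is precisely what allows a network trained on speckled input/reference pairs to output the underlying clean image.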
\begin{table}[htbp] \renewcommand{\arraystretch}{1.3} \caption{Results of Blind Despeckling Test} \centering \begin{tabular}{cccc} \toprule[2pt] \multicolumn{2}{c}{Indexes} & \multicolumn{1}{c}{PSNR} & \multicolumn{1}{c}{SSIM} \\\hline \multicolumn{2}{c}{Speckled images} & 18.95 & 0.4752 \\\hline \multirow{4}{*}{PPB-nonit} & L=1 & 17.29 & 0.2175 \\\cline{2-4} & L=2 & 18.20 & 0.3250 \\\cline{2-4} & L=4 & 19.10 & 0.4297 \\\cline{2-4} & L=8 & 19.91 & 0.5286 \\\hline \multirow{4}{*}{PPB-it25} & L=1 & 19.13 & 0.3459 \\\cline{2-4} & L=2 & 20.27 & 0.4446 \\\cline{2-4} & L=4 & 21.36 & 0.5422 \\\cline{2-4} & L=8 & 22.54 & 0.6430 \\\hline \multirow{4}{*}{SAR-BM3D} & L=1 & 18.80 & 0.3760 \\\cline{2-4} & L=2 & 19.26 & 0.4239 \\\cline{2-4} & L=4 & 20.14 & 0.5053 \\\cline{2-4} & L=8 & 22.28 & 0.6572 \\\hline \multirow{4}{*}{FANS} & L=1 & 17.18 & 0.3650 \\\cline{2-4} & L=2 & 18.88 & 0.4444 \\\cline{2-4} & L=4 & 20.61 & 0.5347 \\\cline{2-4} & L=8 & 23.20 & 0.6636 \\\hline \multicolumn{2}{c}{SAR-IDN} & 19.13 & 0.3459 \\\hline \multicolumn{2}{c}{SAR-DRN} & \color{blue}29.35 & \color{blue}0.8439 \\\hline \multicolumn{2}{c}{BDSS} & \color{red}29.78 & \color{red}0.8551\\ \toprule[2pt] \end{tabular} \label{Blind} \end{table} \section{Conclusion} In this paper, self-supervised learning is introduced for the first time to achieve blind despeckling of SAR images. Differing from other deep learning-based methods, both the inputs and the references in BDSS's training process are drawn from a distribution corrupted by speckle noise. Because the multiplicative speckle noise in a SAR image follows a unit-mean distribution, BDSS can output the arithmetic mean of the speckled images, i.e., clean images. Dense connections are employed to reduce memory usage as much as possible while increasing the network depth for high performance. Dilated convolutions are used to enlarge the receptive field. To train BDSS, a SAR-like dataset based on ImageNet is created, which is similar to real SAR images both under human visual observation and in its statistical distribution. We design two experiments, on synthetic speckled images and on real SAR images. Compared with several state-of-the-art despeckling methods, our proposed BDSS achieves a reasonable performance in terms of speckle reduction and the preservation of edges, point targets, and radiometry. In addition, we report experimental results at random speckle levels (numbers of looks) for the different methods; BDSS performs best in blind despeckling. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction}\label{Intro} \setcounter{equation}{0} In recent years, there has been an increasing interest in resolvent expansions near thresholds and their various applications. These developments were partially initiated by the paper of A. Jensen and G. Nenciu \cite{JN01} in which a general framework for asymptotic expansions is presented and then applied to potential scattering in dimensions $1$ and $2$. The key point of that paper is an inversion formula which provides an efficient iterative method for inverting a family of operators $A(z)$ as $z\to0$ even if $\ker\big(A(0)\big)\ne\{0\}$. Corrections or improvements of this inversion formula can be found in \cite[Lemma 4]{ES04}, \cite[Prop.~3.2]{IJ13} and \cite[Prop.~1]{JN04}. However, in all these papers either it is assumed that $A(0)$ is self-adjoint, or the construction relies on a Riesz projection which is not always convenient to deal with. These features are harmless in these works, since the threshold considered always lies at an endpoint of the spectrum of the underlying operator. However, when dealing with embedded thresholds, these features turn out to be critical (see the comment at the end of Section \ref{seco}). Our aim in the present paper is thus twofold. On the one hand, we revisit the mentioned inversion formula, and on the other hand we show how its revised version can be used for proving the continuity of a scattering matrix at embedded thresholds. The abstract part of our results is presented in Section \ref{Sec_Inv}, and consists first in a reformulation of the inversion formula which does not require that $A(0)$ is self-adjoint or that the projection is a Riesz projection (see Proposition \ref{propourversion}). We then discuss two natural choices for the projection\;\!: either the Riesz projection defined in terms of the resolvent of $A(0)$ if $0$ is an isolated point in the spectrum of $A(0)$, or the orthogonal projection on $\ker\big(A(0)\big)$ if $A(0)$ has a non-negative imaginary part. If both conditions hold, we also discuss the relations between these two projections, and provide sufficient conditions for their equality. This situation often takes place in applications even without the assumption that $A(0)$ is self-adjoint (see Corollary \ref{Corol_So}). In the second part of the paper (Section \ref{secwave}), we present an application of our abstract results to scattering theory for quantum waveguides. Quantum waveguides provide a particularly good model for such a study since their Hamiltonians possess an infinite number of embedded thresholds (with a change of multiplicity at each threshold) but give rise to a simple scattering theory taking place in a one-Hilbert space setting. We refer to \cite{Tie06} for basic results and earlier references on the spectral and scattering theory for quantum waveguides. For a straight quantum waveguide with a compactly supported potential $V$, we derive an asymptotic expansion of the resolvent in a neighbourhood of each embedded threshold. More precisely, if the potential is written as $V=vuv$ with $v$ non-negative and $u$ unitary and self-adjoint, and if $H_0$ is the Dirichlet Laplacian for the waveguide, then we give an expansion of the operator $\big(u+v(H_0-z)^{-1}v\big)^{-1}$ as $z$ converges to any threshold $z_0$ (see Proposition \ref{Prop_Asymp}). Note that the operator $v(H_0-z_0)^{-1}v$ (once properly defined) has a non-trivial imaginary part.
This fact automatically prevents the use of any approach assuming the self-adjointness of $A(0)$, as mentioned above. We then deduce two consequences of this asymptotic expansion. First, we prove in Corollary \ref{noaccp} that the point spectrum of the operator $H:=H_0+V$ does not accumulate at thresholds. Since the thresholds are the only possible accumulation points for such a model, we thus rule out this possibility. Second, for all scattering channels corresponding to the transverse modes of the waveguide, we characterize the behavior of the scattering matrix for the pair $\{H_0,H\}$ at embedded thresholds. More precisely, we show that the scattering matrix is continuous at the thresholds if the channels we consider are already open, and that the scattering matrix has a limit from the right at the thresholds if a channel opens precisely at these thresholds (see Proposition \ref{propcon} for a precise formulation of this result). To the best of our knowledge, these types of results are completely new since the analysis of the behavior of a scattering matrix at embedded thresholds has apparently never been performed. We also show the continuity of the scattering matrix at embedded eigenvalues which are not located at thresholds. But in this case, similar results were already known for other models, see for example \cite[Prop.~10]{IR12} or \cite[Prop.~6.7.11]{Y} (see also \cite{GJY04} where propagation estimates at embedded thresholds are obtained for a Schr\"odinger operator with time periodic potential). As a final comment, we stress that we fully describe all possible behaviors at thresholds since we do not assume any condition on the absence of bound states or resonances at thresholds. Based on the expressions obtained in this paper, a Levinson-type theorem for quantum waveguides could certainly be derived, and deserves further investigation.\\ \noindent {\bf Acknowledgements.} The authors thank A. Jensen for useful discussions. \section{Inversion formula}\label{Sec_Inv} \setcounter{equation}{0} In this section, we adapt the inversion formula \cite[Prop.~1]{JN04} to the case of an arbitrary projection, and then discuss two possible choices for this projection. The symbol $\H$ stands for an arbitrary Hilbert space with norm $\|\cdot\|$ and scalar product $\langle\;\!\cdot\;\!,\;\!\cdot\;\!\rangle$, and $\B(\H)$ denotes the algebra of bounded operators on $\H$ with norm also denoted by $\|\cdot\|$. \begin{Proposition}\label{propourversion} Let $O\subset\C$ be a subset with $0$ as an accumulation point. For each $z\in O$, let $A(z)\in\B(\H)$ satisfy $$ A(z)=A_0+zA_1(z), $$ with $A_0\in\B(\H)$ and $\|A_1(z)\|$ uniformly bounded as $z\to0$. Let also $S\in\B(\H)$ be a projection such that\;\!: \begin{enumerate} \item[(i)] $A_0 + S$ is invertible with bounded inverse, \item[(ii)] $S(A_0+S)^{-1}S = S$. \end{enumerate} Then, for $|z|>0$ small enough the operator $B(z): S\H \to S\H$ defined by \begin{equation}\label{eqB(z)} B(z) :=\frac1z\left(S-S\big(A(z)+S\big)^{-1}S\right) \equiv S(A_0+S)^{-1}\Bigg(\sum_{j\ge0}(-z)^j\big(A_1(z)(A_0+S)^{-1}\big)^{j+1}\Bigg)S \end{equation} is uniformly bounded as $z\to0$. Also, $A(z)$ is invertible in $\H$ with bounded inverse if and only if $B(z)$ is invertible in $S\H$ with bounded inverse, and in this case one has $$ A(z)^{-1} =\big(A(z)+S\big)^{-1}+\frac1z\big(A(z)+S\big)^{-1}SB(z)^{-1}S\big(A(z)+S\big)^{-1}.
$$ \end{Proposition} \begin{proof} For $z\in O$ with $|z|>0$ small enough, one has the equality \begin{align*} B(z) &=\frac1z\big(S-S(A_0+S)^{-1}S\big) +S(A_0+S)^{-1}\Bigg(\sum_{j\ge0}(-z)^j\big(A_1(z)(A_0+S)^{-1}\big)^{j+1}\Bigg)S. \end{align*} So, the condition (ii) implies the second equality in \eqref{eqB(z)}. The second part of the claim is a direct application of the inversion formula \cite[Lemma~2.1]{JN01}. \end{proof} The choice of the projection $S$ plays an important role in the previous proposition. For example, if $0$ is an isolated point in the spectrum $\sigma(A_0)$ of $A_0$, a natural candidate for $S$ is the Riesz projection associated with this value, which is the choice made in \cite{ES04,JN01,JN04}. Another natural candidate is the orthogonal projection on the kernel of $A_0$. However, for both choices additional conditions are necessary in order to verify conditions (i) and (ii). Below, we first discuss the case of the Riesz projection and then the case of the orthogonal projection. \subsection{Riesz projection} In this section, we assume that $0$ is an isolated point in $\sigma(A_0)$ and write $S_r$ for the corresponding Riesz projection. In that case, $A_0S_r=S_rA_0=S_rA_0S_r$ and $A_0+S_r$ is invertible with bounded inverse (see \cite[Chap.~III.6.4]{Kato}). The condition (ii) above, namely $S_r(A_0+S_r)^{-1}S_r=S_r$, is more complicated to check. However, if one assumes that $A_0S_r=0$, or the stronger condition that $A_0$ is self-adjoint, then the equalities $S_r(A_0+S_r)^{-1}=S_r=(A_0+S_r)^{-1}S_r$ hold, and thus condition (ii) is satisfied (note that in that case a small simplification takes place on the r.h.s. of \eqref{eqB(z)}). However, the condition $A_0S_r=0$ does not always hold since $A_0S_r$ is in general only quasi-nilpotent \cite[Sec.~III.6.5]{Kato}. Fortunately, the condition $A_0S_r=0$ holds if $A_0$ has a particular form, as shown in the following lemma (which is an extension of \cite[Prop.~2]{JN04}). \begin{Lemma}\label{lemme_Riesz} Assume that $A_0=X+i\;\!Y$, with $X,Y$ bounded self-adjoint operators and $Y\ge0$, and suppose that $0$ is an isolated point in $\sigma(A_0)$. Let $S_r$ be the corresponding Riesz projection, and assume that $S_rA_0S_r$ is a trace-class operator. Then, $A_0S_r=S_rA_0=0$. \end{Lemma} Note that the trace-class condition is satisfied if, for instance, $S_r\H$ is finite-dimensional. \begin{proof} Since $S_r$ is a projection which commutes with $A_0$, one has $A_0S_r=S_rA_0=S_rA_0S_r$. Therefore, if $J$ is the operator in $S_r\H$ given by $J:=S_rA_0S_r$, then $$ \im\big\langle S_r\varphi,JS_r\varphi\big\rangle =\im\big\langle S_r\varphi,S_rA_0S_rS_r\varphi\big\rangle =\im\big\langle S_r\varphi,A_0S_r\varphi\big\rangle \ge0\quad\hbox{for all }\varphi\in\H, $$ or equivalently $\im(J)\ge0$ in $S_r\H$. Since $J$ is quasi-nilpotent \cite[Eq.~(III.6.28)]{Kato} and trace-class, and since quasi-nilpotent trace-class operators have trace $0$ \cite[p.~32]{Sim05}, it follows that $$ 0=\Tr(J)=\Tr\big(\re(J)\big)+i\;\!\Tr\big(\im(J)\big). $$ This equality together with the inequality $\im(J)\ge0$ imply that $\im(J) =0$. Thus, $J$ is self-adjoint and quasi-nilpotent, which means that $J=0$. \end{proof} We now list a series of consequences of the previous result. \begin{Corollary}\label{Corol_Sr} Suppose that the assumptions of Lemma \ref{lemme_Riesz} are satisfied, then the conditions (i) and (ii) of Proposition \ref{propourversion} are verified for $S=S_r$. 
\end{Corollary} \begin{Corollary}\label{Cor_image} Suppose that the assumptions of Lemma \ref{lemme_Riesz} are satisfied. Then, $S_r\H=\ker(A_0)$. \end{Corollary} \begin{proof} The inclusion $S_r\H\subset\ker(A_0)$ follows from the equality $A_0S_r =0$, and the inclusion $S_r\H\supset\ker(A_0)$ is standard. \end{proof} We finally present a simple result which holds under the assumptions of Lemma \ref{lemme_Riesz}, but can be proved in a slightly more general context. The norms and scalar products of the different Hilbert spaces are written with the same symbols. \begin{Lemma}\label{Cor_magique} Let $\G$ be an auxiliary Hilbert space, take $Z_n\in\B(\H,\G)$, and assume that the sum $\sum_nZ_n^*Z_n$ is weakly convergent. Let also $A_0=X+i\sum_nZ_n^*Z_n$, with $X$ a bounded self-adjoint operator in $\H$, and suppose that $S$ is a projection satisfying $A_0S=0$ and $SA_0=0$. Then, $Z_mS=0$ and $S Z_m^*=0$ for each $m$. \end{Lemma} \begin{proof} Let $\varphi\in\H$. Then, the first identity follows from the equalities $$ \textstyle \big\|Z_mS\varphi\big\|^2 \le\big\langle S\varphi,\big(\sum_n Z_n^*Z_n\big)S\varphi\big\rangle =\im\big\langle S\varphi,\big(X+i\sum_nZ_n^*Z_n\big)S\varphi\big\rangle =\im\big\langle S\varphi,A_0S\varphi\big\rangle =0, $$ and the second identity follows from the equalities $$ \textstyle \big\|Z_mS^*\varphi\big\|^2 \le\big\langle S^*\varphi,\big(\sum_n Z_n^*Z_n\big)S^*\varphi\big\rangle =-\im\big\langle S^*\varphi,\big(X-i\sum_nZ_n^*Z_n\big)S^*\varphi\big\rangle =-\im\big\langle S^*\varphi,A_0^*S^*\varphi\big\rangle =0. $$ \end{proof} \subsection{Orthogonal projection on the kernel}\label{seco} In this section, we assume from the beginning that $A_0=X+i\;\!Y$, with $X,Y$ bounded self-adjoint operators and $Y\ge0$. In that case, one has $\ker(A_0)=\ker(X)\cap\ker(Y)=\ker(A_0^*)$. Also, if $S_o$ denotes the orthogonal projection on $\ker(A_0)$, the relations $XS_o=0=S_oX$, $YS_o=0=S_oY$ and $A_0S_o=0=S_oA_0$ hold. Thus, if one shows that $A_0+S_o$ is invertible with bounded inverse, then the conditions (i) and (ii) of Proposition \ref{propourversion} would follow. So, we concentrate in the sequel on this invertibility condition. Since $A_0$ is reduced by the orthogonal decomposition $\H=S_o\H\oplus(1-S_o)\H$ and since $A_0$ is trivial in the subspace $S_o\H$, the operator $A_0+S_o$ is invertible with bounded inverse if the restriction of $A_0$ to $S_o^\bot\H:=(1-S_o)\H$ is invertible with bounded inverse. However, since $A_0|_{S_o^\bot \H}$ has an inverse on $\Ran\big(A_0|_{S_o^\bot\H}\big)=\Ran(A_0)$, and since $\Ran (A_0)$ is dense in $S_o^\bot\H$ (because $\overline{\Ran(A_0)}=\ker(A_0^*)^\bot=\ker(A_0)^\bot=S_o^\bot \H$), the only remaining question concerns the boundedness of the inverse $A_0^{-1}$ on $\Ran(A_0)$. In the following two lemmas, we exhibit conditions under which this question can be answered affirmatively. \begin{Lemma}\label{Lemma_trace} Assume that $A_0=X+i\;\!Y$, with $X,Y$ bounded self-adjoint operators and $Y\ge0$, and suppose that $0$ is an isolated point in $\sigma(A_0)$. Let $S_r$ denote the corresponding Riesz projection, and assume that $S_rA_0S_r$ is a trace-class operator. Then, $A_0$ is invertible in $\ker(A_0)^\bot$ with bounded inverse if and only if $S_r$ is an orthogonal projection. \end{Lemma} Before giving the proof, we recall that if $S_r$ is an orthogonal projection, then it automatically follows from Corollary \ref{Cor_image} that $S_r=S_o$.
\begin{proof} Sufficient condition\;\!: Assume that $S_r$ is an orthogonal projection (and thus equal to $S_o$). Since $A_0$ is invertible in $S_r^\bot\H$ with bounded inverse by \cite[Thm.~III.6.17]{Kato}, one infers that $A_0$ is invertible in $S_o^\bot\H=\ker(A_0)^\bot$ with bounded inverse. Necessary condition\;\!: Suppose, by way of contradiction, that $S_r$ is not an orthogonal projection, or more precisely that $S_r^\bot\H\ne S_o^\bot\H$ (since we already know that $S_r\H=\ker(A_0)=S_o\H$ by Corollary \ref{Cor_image}). Then, if there exists $\varphi\in S_r^\bot\H\setminus\{0\}$ with $\varphi\not\in S_o^\bot\H$, one has $S_o\varphi\ne0$ and $S_o^\bot\varphi\ne0$, and for any $z\in\C\setminus\{0\}$ with $|z|$ small enough $$ (A_0-z)^{-1}\varphi=(A_0-z)^{-1}S_o\varphi+(A_0-z)^{-1}S_o^\bot\varphi. $$ Now, we know from \cite[Thm.~III.6.17]{Kato} that the l.h.s. has a limit in $\H$ as $z\to0$. But since $S_o\varphi\in\ker(A_0)$, the first term on the r.h.s. does not have a limit as $z \to 0$. Therefore, the second term on the r.h.s. does not have a limit as $z \to 0$ either, and thus the operator $A_0$ is not invertible in $S_o^\bot\H=\ker(A_0)^\bot$. On the other hand, if there exists $\varphi\in S_o^\bot\H\setminus\{0\}$ with $\varphi\notin S_r^\bot\H$, one has $S_r\varphi\ne0$ and $S_r^\bot\varphi\ne0$, and for any $z\in\C\setminus\{0\}$ with $|z|$ small enough $$ (A_0-z)^{-1}\varphi=(A_0-z)^{-1}S_r\varphi+(A_0-z)^{-1}S_r^\bot\varphi. $$ In this case, the second term on the r.h.s. does have a limit in $\H$ as $z\to0$, but the first term on the r.h.s. does not. Therefore, the l.h.s. does not have a limit in $\H$ as $z \to 0$, and thus the operator $A_0$ is not invertible in $S_o^\bot\H=\ker(A_0)^\bot$. Summing up, if $S_r^\bot\H\ne S_o^\bot\H$, then $A_0$ is not invertible in $S_o^\bot\H=\ker(A_0)^\bot$, which concludes the proof of the claim. \end{proof} \begin{Lemma}\label{lemourcase} Assume that $A_0=X+i\;\!Y$, with $X,Y$ bounded self-adjoint operators and $Y\ge0$. Suppose also that $A_0=U+K$ with $U$ unitary and $K$ compact, or that $A_0$ is a finite-rank operator. Then, $A_0$ is invertible in $\ker(A_0)^\bot$ with bounded inverse. \end{Lemma} \begin{proof} Recall that $\Ran\big(A_0|_{\ker(A_0)^\bot}\big)\equiv\Ran(A_0)$ is dense in $\ker(A_0)^\bot$. So, the boundedness of the inverse of $A_0$ in $\ker(A_0)^\bot$ follows from the closed graph theorem \cite[Thm.~III.5.20]{Kato} if $\Ran(A_0)$ is closed. But, this is verified under both conditions. Under the first condition, one has $A_0=U+K=(1+KU^{-1})U$ with $KU^{-1}$ compact. So, $(1+KU^{-1})$ is Fredholm, and the image of $U\H=\H$ by $(1+KU^{-1})$ is closed \cite[Thm.~4.3.4]{Davies}. Under the second condition, $\Ran(A_0)$ is finite-dimensional and thus closed. \end{proof} Under the assumptions of Lemma \ref{lemourcase}, the value $0$ is an isolated point in $\sigma(A_0)$. Thus, the Riesz projection $S_r$ is well defined, and one obtains the following by combining the two previous lemmas\;\!: \begin{Corollary}\label{Corol_So} Suppose that the assumptions of Lemma \ref{lemourcase} are satisfied. Then, $S_r=S_o$, and the conditions (i) and (ii) of Proposition \ref{propourversion} are verified for $S=S_r=S_o$. \end{Corollary} \begin{proof} We know from Lemma \ref{lemourcase} that $A_0$ is invertible in $\ker(A_0)^\bot$ with bounded inverse. Thus, it follows from Lemma \ref{Lemma_trace} that $S_r=S_o$ and that the conditions (i) and (ii) of Proposition \ref{propourversion} are verified for $S=S_r=S_o$ if $S_rA_0S_r$ is a trace-class operator.
But, the operator $S_rA_0S_r$ is clearly trace-class if $A_0$ is a finite-rank operator. On the other hand, if $A_0=U+K$ with $U$ unitary and $K$ compact, then the isolated eigenvalue $0$ is of finite multiplicity, $S_r\H$ is finite-dimensional \cite[Remark III.6.23]{Kato}, and $S_rA_0S_r$ is also trace-class. \end{proof} We close this section with a comment on the usefulness of Corollary \ref{Corol_So} for the iterative procedure of the next section. If we use a Riesz projection $S_r$ without knowing that it is orthogonal, this is harmless at the first step of the iteration (as illustrated in \cite{JN04}), but this becomes more and more cumbersome at each step of the iteration. Indeed, conjugation by Riesz projections does not preserve positivity, and thus any argument based on positivity can hardly be invoked. Therefore, Corollary \ref{Corol_So} leads to various simplifications in the iterative procedure since it provides conditions guaranteeing that $S_r$ is orthogonal. \section{Quantum waveguides}\label{secwave} \setcounter{equation}{0} We introduce in this section the model of quantum waveguide we use and recall some of its basic properties. Much of the material is borrowed from \cite{Tie06} to which we refer for further information. We consider a bounded open connected set $\Sigma\subset\R^{d-1}$ with $d\ge 2$, and let $-\Delta^\Sigma_{\rm D}$ be the Dirichlet Laplacian on $\Sigma$ acting in $\ltwo(\Sigma)$. This operator has a purely discrete spectrum $\tau:=\{\lambda_n\}_{n\ge1}$ consisting of eigenvalues $\lambda_1\le\lambda_2\le\cdots$ repeated according to multiplicity. The corresponding set of eigenvectors is denoted by $\{f_n\}_{n\ge1}$ and the corresponding set of one-dimensional orthogonal projections is denoted by $\{\P_n\}_{n\ge1}$. For simplicity, we sometimes omit mentioning that $n\ge1$. We also consider the straight waveguide $\Omega:=\Sigma\times\R$ with coordinates $(\omega,x)$, the Hilbert space $\H:=\ltwo(\Omega)$, and the Dirichlet Laplacian $H_0:=-\Delta^\Omega_{\rm D}$ on $\Omega$ acting in $\H$. This operator decomposes as $H_0=-\Delta^\Sigma_{\rm D}\otimes1+1\otimes P^2$ in $\H\simeq\ltwo(\Sigma)\otimes\ltwo(\R)$, with $P:=-i\hspace{1pt}\partial_x$ the usual self-adjoint operator of differentiation in $\ltwo(\R)$. The spectrum $\sigma(H_0)$ of $H_0$ is purely absolutely continuous with $\sigma(H_0)=[\lambda_1,\infty)$, and each value $\lambda\in\tau$ is a threshold in $\sigma(H_0)$ with a change of multiplicity. Moreover, for $z\in\C\setminus\R$, the resolvents $R^0(z):=(P^2-z)^{-1}$ and $R_0(z):=(H_0-z)^{-1}$ satisfy the relation \begin{equation}\label{factor_res} R_0(z)=\sum_n\P_n\otimes R^0(z-\lambda_n),\quad z\in\C\setminus\R, \end{equation} and the resolvent $R^0(z)$ has integral kernel \begin{equation}\label{eq_noyau_1} R^0(z)(x,x')=\frac i{2\sqrt z}\e^{i\sqrt z\;\!|x-x'|}\;\!, \quad z\in\C\setminus\R,~x,x'\in\R, \end{equation} with the convention that $\im(\sqrt z)>0$ for $z\in\C\setminus[0,\infty)$. In the following lemma, we recall some weighted estimates for $R^0(z)$ which complement the asymptotic expansion given in \cite[Lemma~5.1]{JN01}. We use the notations $\C_+:=\{z\in\C\mid\im(z)>0\}$ and $\langle x\rangle:=(1+x^2)^{1/2}$, and we let $Q$ denote the self-adjoint multiplication operator by the variable in $\ltwo(\R)$. \begin{Lemma}\label{lemsauve} Fix $\varepsilon>0$, take $\lambda\in\R\setminus(-\varepsilon,\varepsilon)$ and let $\zeta\in\overline{\C_+}$ with $|\zeta|<\varepsilon/2$.
\begin{enumerate} \item[(a)] If $s>1/2$, then the limit $$ \langle Q\rangle^{-s}R^0(\lambda+\zeta)\langle Q\rangle^{-s} :=\lim_{\zeta'\to\zeta,\,\zeta'\in\C_+} \langle Q\rangle^{-s}R^0(\lambda+\zeta')\langle Q\rangle^{-s} $$ exists in $\B\big(\ltwo(\R)\big)$ and is independent of the sequence $\zeta'\to\zeta$. Moreover, the limit is a Hilbert-Schmidt operator with Hilbert-Schmidt norm $$ \big\|\langle Q\rangle^{-s}R^0(\lambda+\zeta)\langle Q\rangle^{-s}\big\|_{\rm HS} \le{\rm Const.}\;\!|\lambda|^{-1/2}. $$ \item[(b)] If $s >3/2$, then $$ \big\|\langle Q\rangle^{-s}\big(R^0(\lambda+\zeta)-R^0(\lambda)\big) \langle Q\rangle^{-s}\big\|_{{\rm HS}} \le{\rm Const.}\;\!|\zeta|\;\!|\lambda|^{-1/2}, $$ where the constant may depend on $\varepsilon$ but not on $\lambda$ and $\zeta$. \end{enumerate} \end{Lemma} \begin{proof} The first claim follows from \eqref{eq_noyau_1}. For the second one, one has to compute the integral kernel of $ \langle Q\rangle^{-s}\big(R^0(\lambda+\zeta)-R^0(\lambda)\big)\langle Q\rangle^{-s} $, taking into account the following equalities with $y=|x-x'|$ and $x,x'\in\R:$ $$ \frac{\e^{i\sqrt{\lambda+\zeta}\;\!y}}{\sqrt{\lambda+\zeta}} -\frac{\e^{i\sqrt\lambda\;\!y}}{\sqrt\lambda} =\frac{-\zeta}{\sqrt\lambda\;\!\sqrt{\lambda+\zeta} \;\!(\sqrt{\lambda+\zeta}+\sqrt\lambda)}\;\! \e^{i\sqrt{\lambda+\zeta}\;\!y} +\frac1{\sqrt\lambda}\big(\e^{i\sqrt{\lambda+\zeta}\;\!y}-\e^{i\sqrt\lambda\;\!y}\big) $$ and $$ \frac1{\sqrt\lambda}\big(\e^{i\sqrt{\lambda+\zeta}\;\!y}-\e^{i\sqrt\lambda\;\!y}\big) =\frac{i\;\!\zeta\;\!y}{2\sqrt\lambda} \int_0^1\frac{\e^{i\sqrt{\lambda+s\;\!\zeta}\;\!y}}{\sqrt{\lambda+s\zeta}}\,\d s. $$ \end{proof} Now, we consider a self-adjoint operator $H:=H_0+V$, where $V\in\linf(\Omega;\R)$ is measurable with bounded support. We impose the boundedness of the support for simplicity, but we note that our results would also hold for potentials $V$ decaying sufficiently fast at infinity (see for example the seminal papers \cite{JK79,JN01} for precise conditions on the decay of $V$ at infinity). Following the standard idea of decomposing the perturbation into factors, we define the functions $$ v:\Omega\to\R,\quad(\omega,x)\mapsto|V(\omega,x)|^{1/2} \qquad\hbox{and}\qquad u:\Omega\to\{-1,1\},\quad(\omega,x)\mapsto \begin{cases} 1 & \hbox{if}~~V(\omega,x)\ge0\\ -1 & \hbox{if}~~V(\omega,x)<0. \end{cases} $$ Then, the operator $u+vR_0(z)\;\!v$ has a bounded inverse in $\H$ for each $z\in\C\setminus\R$ and the resolvent equation may be written as $$ (H-z)^{-1}=R_0(z)-R_0(z)\;\!v\big(u+vR_0(z)\;\!v\big)^{-1}vR_0(z), \quad z\in\C\setminus\R. $$ Since the following equality holds: \begin{equation}\label{resolv_eq} uv(H-z)^{-1}vu=u-\big(u+v R_0(z)\;\!v\big)^{-1},\quad z\in\C\setminus\R, \end{equation} deriving expansions in $z$ for the resolvent $(H-z)^{-1}$ amounts to deriving expansions in $z$ for the operator $\big(u+v R_0(z)v\big)^{-1}$, as we shall do in the following subsection. \subsection{Asymptotic expansion at embedded thresholds or eigenvalues} We derive in this section an asymptotic expansion in $z$ for the operator $\big(u+v R_0(z)v\big)^{-1}$. As a by-product, we show the absence of accumulation of eigenvalues of $H$.
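Before starting, let us record for the reader's convenience the short computation behind \eqref{resolv_eq}, which follows from the resolvent equation displayed above\;\!: setting $A:=vR_0(z)\;\!v$ and using $u^2=1$, one gets $$ uv(H-z)^{-1}vu =u\big(A-A(u+A)^{-1}A\big)u =u\;\!A(u+A)^{-1}u^2 =\big(u(u+A)-1\big)(u+A)^{-1} =u-(u+A)^{-1}, $$ where the second equality follows from $A-A(u+A)^{-1}A=A(u+A)^{-1}\big((u+A)-A\big)=A(u+A)^{-1}u$, and the third one from $u^2=1$ and $uA=u(u+A)-1$. We now turn to the derivation of the expansion.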
For this, we first adapt a convention of \cite{JN01} by considering values $z=\lambda-\kappa^2$ with $\kappa$ belonging to the sets $$ O(\varepsilon) :=\big\{\kappa\in\C\mid|\kappa|\in(0,\varepsilon),~\re(\kappa)>0\hbox{ and } \im(\kappa)<0\big\},\quad\varepsilon>0, $$ and $$ \widetilde O(\varepsilon) :=\big\{\kappa\in\C\mid|\kappa|\in(0,\varepsilon),~\re(\kappa)\ge0\hbox{ and } \im(\kappa)\le0\big\},\quad\varepsilon>0. $$ Also, we note that if $\kappa\in O(\varepsilon)$, then $-\kappa^2\in\C_+$, while if $\kappa\in\widetilde O(\varepsilon)$, then $-\kappa^2\in \overline{\C_+}$. Then, the main result of this section reads as follows\;\!: \begin{Proposition}\label{Prop_Asymp} Suppose that $V\in\linf(\Omega;\R)$ has bounded support, let $\lambda\in\tau\cup\sigma_{\rm p}(H)$, and take $\kappa\in O(\varepsilon)$ with $\varepsilon>0$ small enough. Then, the operator $\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}$ belongs to $\B(\H)$ and is continuous in $\kappa\in O(\varepsilon)$. Moreover, the continuous function $$ O(\varepsilon)\ni\kappa\mapsto \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\in\B(\H) $$ extends continuously to a function $\widetilde O(\varepsilon)\ni\kappa\mapsto\M(\lambda,\kappa)\in\B(\H)$, and for each $\kappa\in\widetilde O(\varepsilon)$ the operator $\M(\lambda,\kappa)$ admits an asymptotic expansion in $\kappa$. The precise form of this expansion is given in equations \eqref{eq_expansion_1} and \eqref{eq_expansion_2} below. \end{Proposition} \begin{proof} For each $\lambda\in\R$, $\varepsilon>0$ and $\kappa\in O(\varepsilon)$, one has $\im(\lambda-\kappa^2)\ne0$. Thus, \eqref{resolv_eq} implies that the operator $\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}$ belongs to $\B\big(\H)$ and is continuous in $\kappa\in O(\varepsilon)$. For the other claims, we distinguish the cases $\lambda\in\tau$ and $\lambda\in\sigma_{\rm p}(H)\setminus\tau$, treating first the case $\lambda\in\tau$. All the operators defined below depend on the choice of $\lambda$, but for simplicity we do not always mention these dependencies. (i) Assume that $\lambda\in\tau$, take $\varepsilon>0$, set $N:=\{n\ge1\mid\lambda_n=\lambda\}$, and write $\P:=\sum_{n\in N}\P_n$ for the corresponding orthogonal projection (of dimension greater or equal to $1$). Then, \eqref{factor_res} implies for $\kappa\in O(\varepsilon)$ that $$ \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1} =\left\{v\big(\P\otimes R^0(-\kappa^2)\big)v+u +\sum_{n\notin N}v\big(\P_n\otimes R^0(\lambda-\kappa^2-\lambda_n)\big)v\right\}^{-1}. $$ Moreover, the expansion $ R^0(-\kappa^2)(x,x') =\frac1{2\kappa}-\frac{|x-x'|}2+\kappa\;\!\frac{|x-x'|^2}4+\O(\kappa^2) $ for $\kappa\in\widetilde O(\varepsilon)$ (see \eqref{eq_noyau_1}) implies that the continuous function $$ O(\varepsilon)\ni\kappa\mapsto v\big(\P\otimes R^0(-\kappa^2)\big)v\in\B(\H) $$ extends continuously to a function $ \widetilde O(\varepsilon)\ni\kappa\mapsto \frac1{2\kappa}\;\!N_0+N_1(\kappa)\in\B(\H) $ with $N_0,N_1(\kappa)\in\B(\H)$ integral operators which kernels satisfy \begin{align*} N_0(\omega,x,\omega',x') &=\sum_{n\in N}f_n(\omega)\;\!v(\omega,x)\;\!v(\omega',x')\;\! \overline{f_n(\omega')},\quad(\omega,x),(\omega',x')\in\Omega,\\ N_1(0)(\omega,x,\omega',x') &=-\frac1{2}\sum_{n\in N}f_n(\omega)\;\!v(\omega,x)\;\!|x-x'|\;\!v(\omega',x') \;\!\overline{f_n(\omega')},\quad(\omega,x),(\omega',x')\in\Omega. 
\end{align*} Also, Lemma \ref{lemsauve}(a) implies the existence and the uniqueness in $\B(\H)$ of the limits $$ \sum_{n\notin N}v\big(\P_n\otimes R^0(\lambda-\kappa^2-\lambda_n)\big)v :=\lim_{\kappa'\to\kappa,\,\kappa'\in O(\varepsilon)} \sum_{n\notin N}v\big(\P_n\otimes R^0(\lambda-\kappa'^2-\lambda_n)\big)v, \quad\kappa\in\widetilde O(\varepsilon). $$ Therefore, one has for $\kappa\in O(\varepsilon)$ that $$ \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}=2\kappa\;\!I_0(\kappa)^{-1}, $$ with the operators \begin{equation}\label{form_I_0} I_0(\kappa):=N_0+2\kappa\;\!M_1(\kappa) \quad\hbox{and}\quad M_1(\kappa):=N_1(\kappa)+u+\sum_{n\notin N} v\big(\P_n \otimes R^0(\lambda-\kappa^2-\lambda_n)\big)\;\!v \end{equation} continuous as functions from $\widetilde O(\varepsilon)$ to $\B(\H)$. Furthermore, one infers from \cite[Lemma 5.1(i)]{JN01} and Lemma \ref{lemsauve}(a) that $\|M_1(\kappa)\|_{\B(\H)}$ is uniformly bounded as $\kappa\to0$. Our goal thus reduces to deriving an asymptotic expansion for $I_0(\kappa)^{-1}$ as $\kappa\to0$. Since $I_0(0)=N_0$ is a finite-rank operator, $0$ is not a limit point of $\sigma(N_0)$. Also, $N_0$ is self-adjoint, and therefore the orthogonal projection $S_0$ on $\ker(N_0)$ is equal to the Riesz projection of $N_0$ associated with the value $0$. We can thus apply Proposition \ref{propourversion}, and obtain for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough that the operator $I_1(\kappa):S_0\H\to S_0\H$ defined by \begin{equation}\label{defI1} I_1(\kappa) :=\sum_{j\ge0}(-2\kappa)^jS_0\;\!\big\{M_1(\kappa)\big(I_0(0)+S_0\big)^{-1}\big\}^{j+1}S_0 \end{equation} is uniformly bounded as $\kappa\to0$. Furthermore, $I_1(\kappa)$ is invertible in $S_0\H$ with bounded inverse satisfying the equation $$ I_0(\kappa)^{-1} =\big(I_0(\kappa)+S_0\big)^{-1}+\frac 1{2\kappa}\;\!\big(I_0(\kappa)+S_0\big)^{-1} S_0I_1(\kappa)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}. $$ It follows that for $\kappa\in O(\varepsilon)$ with $\varepsilon>0$ small enough, one has \begin{equation}\label{eq18} \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1} =2\kappa\;\!\big(I_0(\kappa)+S_0\big)^{-1}+ \big(I_0(\kappa)+S_0\big)^{-1}S_0I_1(\kappa)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}, \end{equation} with the first term vanishing as $\kappa \to0$. To describe the second term of $\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}$ as $\kappa\to0$, we recall the equality $\big(I_0(0)+S_0\big)^{-1}S_0=S_0$, which (together with \eqref{defI1}) implies for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough that \begin{equation}\label{form_I_1} I_1(\kappa)=S_0M_1(0)S_0+\kappa\;\!M_2(\kappa), \end{equation} with \begin{align} M_2(\kappa) &:=\frac1\kappa\;\!S_0\big(M_1(\kappa)-M_1(0)\big)S_0 +\frac1\kappa\sum_{j\ge1}(-2\kappa)^jS_0\;\! \big\{M_1(\kappa)\big(I_0(0)+S_0\big)^{-1}\big\}^{j+1}S_0\nonumber\\ &\equiv S_0N_2(\kappa)S_0+\frac1\kappa\;\!S_0\sum_{n\notin N} v\;\!\big\{\P_n\otimes\big(R^0(\lambda-\kappa^2-\lambda_n) -R^0(\lambda-\lambda_n)\big)\big\}\;\!vS_0\nonumber\\ &\quad-2\sum_{j\ge0}(-2\kappa)^jS_0\;\! \big\{M_1(\kappa)\big(I_0(0)+S_0\big)^{-1}\big\}^{j+2}S_0\label{terme2} \end{align} and $$ N_2(\kappa):=\frac1\kappa\big(N_1(\kappa)-N_1(0)\big). $$ Then, we observe that \cite[Lemma 5.1(i)]{JN01} implies that $N_2(\kappa)$ admits a finite limit as $\kappa\to0$. Also, we note that Lemma \ref{lemsauve}(b) implies that the second term in \eqref{terme2} vanishes as $\kappa\to0$. Therefore, $\|M_2(\kappa)\|_{\B(S_0\H)}$ is uniformly bounded as $\kappa\to0$.
Now, we recall that \begin{equation*} M_1(0)=N_1(0)+u+\sum_{n\notin N}v\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)\;\!v, \end{equation*} with $u$ unitary and self-adjoint, $N_1(0)$ self-adjoint and compact, and with the last term compact with non-negative imaginary part (the last property holds for weighted resolvents on the real axis). So, since $S_0$ is an orthogonal projection with finite-dimensional kernel, the operator $I_1(0)=S_0M_1(0)S_0$ acting in the Hilbert space $S_0\H$ can also be written as the sum of a unitary and self-adjoint operator, a self-adjoint and compact operator, and a compact operator with non-negative imaginary part. Thus, Corollary \ref{Corol_So} applies with $S_1$ the finite-rank orthogonal projection on $\ker\big(I_1(0)\big)$, and the iterative procedure of Section \ref{Sec_Inv} can be applied to $I_1(\kappa)$ as it was done for $I_0(\kappa)$. Thus, for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough, the operator $I_2(\kappa):S_1\H\to S_1\H$ defined by $$ I_2(\kappa) :=\sum_{j\ge0}(-\kappa)^jS_1\big\{M_2(\kappa)\big(I_1(0)+S_1\big)^{-1}\big\}^{j+1}S_1 $$ is uniformly bounded as $\kappa\to0$. Furthermore, $I_2(\kappa)$ is invertible in $S_1\H$ with bounded inverse satisfying the equation $$ I_1(\kappa)^{-1} =\big(I_1(\kappa)+S_1\big)^{-1}+\frac1\kappa\;\!\big(I_1(\kappa)+S_1\big)^{-1} S_1I_2(\kappa)^{-1}S_1\big(I_1(\kappa)+S_1\big)^{-1}. $$ This expression for $I_1(\kappa)^{-1}$ can now be inserted in \eqref{eq18} in order to get for $\kappa\in O(\varepsilon)$ with $\varepsilon>0$ small enough \begin{align} &\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\nonumber\\ &=2\kappa\;\!\big(I_0(\kappa)+S_0\big)^{-1} +\big(I_0(\kappa)+S_0\big)^{-1}S_0\big(I_1(\kappa)+S_1\big)^{-1}S_0 \big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1\kappa\;\!\big(I_0(\kappa)+S_0\big)^{-1} S_0\big(I_1(\kappa)+S_1\big)^{-1}S_1I_2(\kappa)^{-1}S_1\big(I_1(\kappa)+S_1\big)^{-1} S_0\big(I_0(\kappa)+S_0\big)^{-1},\label{eq_F_second} \end{align} with the first two terms bounded as $\kappa\to0$. Let us concentrate on the last term and check once more that the assumptions of Proposition \ref{propourversion} are satisfied. For that purpose, we recall that $\big(I_1(0)+S_1\big)^{-1}S_1=S_1$, and observe that for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough \begin{equation}\label{form_I_2} I_2(\kappa)=S_1M_2(0)S_1+\kappa\;\!M_3(\kappa), \end{equation} with \begin{equation}\label{devM20} M_2(0)=S_0N_2(0)S_0 -2 S_0M_1(0)\big(I_0(0)+S_0\big)^{-1}M_1(0)S_0 \qquad\hbox{and}\qquad M_3(\kappa)\in\O(1). \end{equation} The inclusion $M_3(\kappa)\in\O(1)$ follows from standard estimates and from the fact that $\frac1\kappa\big(N_2(\kappa)-N_2(0)\big)$ admits a finite limit as $\kappa\to0$ (see \cite[Lemma 5.1(i)]{JN01}). Note also that the kernel of $N_2(0)$ is given by \begin{equation}\label{N20} N_2(0)(\omega,x,\omega',x') =\frac14\sum_{n\in N}f_n(\omega)\;\!v(\omega,x)\;\!|x-x'|^2\;\!v(\omega',x') \;\!\overline{f_n(\omega')},\quad(\omega,x),(\omega',x')\in\Omega. \end{equation} Now, as already observed, one has $M_1(0)=X+iZ^*Z$, with $X,Z$ bounded self-adjoint operators in $\H$. Therefore it follows that $I_1(0)=S_0M_1(0)S_0=S_0XS_0+i(ZS_0)^*(Z S_0)$, and one infers from Corollary \ref{Cor_magique} that $Z S_0S_1=0$ and $S_1S_0 Z^*=0$. 
Since $S_1S_0=S_1=S_0S_1$, it follows that $ZS_1=0$, that $S_1Z^*=0$, and also that \begin{align*} S_1M_1(0)\big(I_0(0)+S_0\big)^{-1}M_1(0)S_1 &=S_1(X+iZ^*Z)\big(I_0(0)+S_0\big)^{-1}(X+iZ^*Z)S_1\\ &=S_1X\big(I_0(0)+S_0\big)^{-1}XS_1. \end{align*} So, this operator is self-adjoint, and thus one infers from \eqref{devM20} and \eqref{N20} that $I_2(0)=S_1M_2(0)S_1$ is the sum of two bounded self-adjoint operators in $S_1\H$. Since $S_1\H$ is finite-dimensional, $0$ is not a limit point of the spectrum of $I_2(0)$. So, the orthogonal projection $S_2$ on $\ker\big(I_2(0)\big)$ is a finite-rank operator, and Proposition \ref{propourversion} applies to $I_2(0)+\kappa\;\!M_3(\kappa)$. Thus, for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough, the operator $I_3(\kappa):S_2\H\to S_2\H$ defined by $$ I_3(\kappa) :=\sum_{j\ge0}(-\kappa)^jS_2\;\!\big\{M_3(\kappa)\big(I_2(0)+S_2\big)^{-1}\big\}^{j+1}S_2 $$ is uniformly bounded as $\kappa\to0$. Furthermore, $I_3(\kappa)$ is invertible in $S_2\H$ with bounded inverse satisfying the equation $$ I_2(\kappa)^{-1} =\big(I_2(\kappa)+S_2\big)^{-1} +\frac1\kappa\;\!\big(I_2(\kappa)+S_2\big)^{-1}S_2I_3(\kappa)^{-1}S_2 \big(I_2(\kappa)+S_2\big)^{-1}. $$ This expression for $I_2(\kappa)^{-1}$ can now be inserted in \eqref{eq_F_second} in order to get for $\kappa\in O(\varepsilon)$ with $\varepsilon>0$ small enough \begin{align} &\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\nonumber\\ &=2\kappa\big(I_0(\kappa)+S_0\big)^{-1} +\big(I_0(\kappa)+S_0\big)^{-1}S_0\big(I_1(\kappa)+S_1\big)^{-1}S_0 \big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1\kappa\;\!\big(I_0(\kappa)+S_0\big)^{-1}S_0 \big(I_1(\kappa)+S_1\big)^{-1}S_1\big(I_2(\kappa)+S_2\big)^{-1}S_1 \big(I_1(\kappa)+S_1\big)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1{\kappa^2}\;\!\big(I_0(\kappa)+S_0\big)^{-1}S_0 \big(I_1(\kappa)+S_1\big)^{-1}S_1\big(I_2(\kappa)+ S_2\big)^{-1}S_2 I_3(\kappa)^{-1} S_2\big(I_2(\kappa)+S_2\big)^{-1}S_1\nonumber\\ &\qquad\times\big(I_1(\kappa)+S_1\big)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}. \label{sol1} \end{align} Fortunately, the iterative procedure stops here. The argument is based on the relation $$ uv\;\!(H-\lambda+\kappa^2)^{-1}vu= u-\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1} $$ and the fact that $H$ is a self-adjoint operator. Indeed, if we choose $\kappa=\frac\varepsilon2(1-i)\in O(\varepsilon)$, then the inequality $\big\|\kappa^2(H-\lambda+\kappa^2)^{-1}\big\|_{\B(\H)}\le1$ holds, and thus \begin{equation}\label{notresauveur} \limsup_{\kappa\to0} \big\|\kappa^2\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\big\|_{\B(\H)}<\infty. \end{equation} So, if we replace $\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}$ by the expression \eqref{sol1} and if we take into account that all factors of the form $\big(I_j(\kappa)+S_j\big)^{-1}$ have a finite limit as $\kappa\to0$, we infer from \eqref{notresauveur} that \begin{equation}\label{notresecondsauveur} \limsup_{\kappa\to0}\big\|I_3(\kappa)^{-1}\big\|_{\B(S_2\H)}<\infty. \end{equation} Therefore, it only remains to show that this relation holds not just for $\kappa=\frac\varepsilon2(1-i)$ but for all $\kappa\in\widetilde O(\varepsilon)$. For that purpose, we consider $I_3(\kappa)$ once again, and note that \begin{equation}\label{souffrance1} I_3(\kappa)=S_2M_3(0)S_2+\kappa\;\!M_4(\kappa) \quad\hbox{with}\quad M_4(\kappa)\in\O(1). \end{equation} The precise form of $M_3(0)$ can be computed explicitly, but is irrelevant. 
Now, since $I_3(0)$ acts in a finite-dimensional space, $0$ is an isolated eigenvalue of $I_3(0)$ if $0\in\sigma\big(I_3(0)\big)$, in which case we write $S_3$ for the corresponding Riesz projection. Then, the operator $I_3(0)+S_3$ is invertible with bounded inverse, and \eqref{souffrance1} implies that $I_3(\kappa)+S_3$ is also invertible with bounded inverse for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough. In addition, one has $\big(I_3(\kappa)+S_3\big)^{-1}=\big(I_3(0)+S_3\big)^{-1}+\O(\kappa)$. By the inversion formula given in \cite[Lemma 2.1]{JN01}, one infers that $S_3-S_3\big(I_3(\kappa)+S_3\big)^{-1}S_3$ is invertible in $S_3\H$ with bounded inverse and that the following equalities hold \begin{align*} I_3(\kappa)^{-1} &=\big(I_3(\kappa)+S_3\big)^{-1}+\big(I_3(\kappa)+S_3\big)^{-1}S_3 \big\{S_3-S_3\big(I_3(\kappa)+S_3\big)^{-1}S_3\big\}^{-1}S_3 \big(I_3(\kappa)+S_3\big)^{-1}\\ &=\big(I_3(\kappa)+S_3\big)^{-1}+\big(I_3(\kappa)+S_3\big)^{-1}S_3 \big\{S_3-S_3\big(I_3(0)+S_3\big)^{-1}S_3+\O(\kappa)\big\}^{-1}S_3 \big(I_3(\kappa)+S_3\big)^{-1}. \end{align*} This implies that \eqref{notresecondsauveur} holds for some $\kappa\in\widetilde O(\varepsilon)$ if and only if the operator $S_3-S_3\big(I_3(0)+S_3\big)^{-1}S_3$ is invertible in $S_3\H$ with bounded inverse. But, we already know from what precedes that \eqref{notresecondsauveur} holds for $\kappa=\frac\varepsilon2(1-i)$. So, the operator $S_3-S_3\big(I_3(0)+S_3\big)^{-1}S_3$ is invertible in $S_3\H$ with bounded inverse, and thus \eqref{notresecondsauveur} holds for all $\kappa\in\widetilde O(\varepsilon)$. Therefore, \eqref{sol1} implies that the function $$ O(\varepsilon)\ni\kappa\mapsto \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\in\B(\H) $$ extends continuously to a function $\widetilde O(\varepsilon)\ni\kappa\mapsto\M(\lambda,\kappa)\in\B(\H)$, with $\M(\lambda,\kappa)$ given by \begin{align} \M(\lambda,\kappa) &=2\kappa\big(I_0(\kappa)+S_0\big)^{-1} +\big(I_0(\kappa)+S_0\big)^{-1}S_0\big(I_1(\kappa)+S_1\big)^{-1}S_0 \big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1\kappa\;\!\big(I_0(\kappa)+S_0\big)^{-1}S_0 \big(I_1(\kappa)+S_1\big)^{-1}S_1\big(I_2(\kappa)+S_2\big)^{-1}S_1 \big(I_1(\kappa)+S_1\big)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1{\kappa^2}\;\!\big(I_0(\kappa)+S_0\big)^{-1}S_0 \big(I_1(\kappa)+S_1\big)^{-1}S_1\big(I_2(\kappa)+ S_2\big)^{-1}S_2 I_3(\kappa)^{-1} S_2\big(I_2(\kappa)+S_2\big)^{-1}S_1\nonumber\\ &\qquad\times\big(I_1(\kappa)+S_1\big)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}. \label{eq_expansion_1} \end{align} (ii) Assume now that $\lambda\in\sigma_{\rm p}(H)\setminus\tau$, take $\varepsilon>0$, let $\kappa\in\widetilde O(\varepsilon)$, and set $J_0(\kappa):=T_0+\kappa^2\;\!T_1(\kappa)$ with $$ T_0:=u+\sum_nv\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)\;\!v $$ and $$ T_1(\kappa) :=\frac1{\kappa^2}\sum_nv\;\!\big\{\P_n\otimes\big(R^0(\lambda-\kappa^2-\lambda_n) -R^0(\lambda-\lambda_n)\big)\big\}\;\!v. $$ Then, one infers from Lemma \ref{lemsauve}(b) that $\|T_1(\kappa)\|_{\B(\H)}$ is uniformly bounded as $\kappa\to0$. Also, the assumptions of Corollary \ref{Corol_So} hold for the operator $T_0$, the Riesz projection $S$ associated with the value $0\in\sigma(T_0)$ is an orthogonal projection, and Proposition \ref{propourversion} applies for $J_0(\kappa)$. 
It follows that for $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough, the operator $J_1(\kappa):S\H\to S\H$ defined by $$ J_1(\kappa) :=\sum_{j\ge0}(-\kappa^2)^jS\;\!\big\{T_1(\kappa)(T_0+S)^{-1}\big\}^{j+1}S $$ is uniformly bounded as $\kappa\to0$. Furthermore, $J_1(\kappa)$ is invertible in $S\H$ with bounded inverse satisfying the equation $$ J_0(\kappa)^{-1} =\big(J_0(\kappa)+S\big)^{-1} +\frac1{\kappa^2}\;\!\big(J_0(\kappa)+S\big)^{-1}SJ_1(\kappa)^{-1}S \big(J_0(\kappa)+S\big)^{-1}. $$ It follows that for $\kappa\in O(\varepsilon)$ with $\varepsilon>0$ small enough one has \begin{equation}\label{sol2} \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1} =\big(J_0(\kappa)+S\big)^{-1} +\frac1{\kappa^2}\;\!\big(J_0(\kappa)+S\big)^{-1}SJ_1(\kappa)^{-1}S \big(J_0(\kappa)+S\big)^{-1}. \end{equation} Fortunately, the iterative procedure already stops here. Indeed, the argument is similar to the one presented above once we observe that $$ J_1(\kappa)=ST_1(0)S+\kappa\;\!T_2(\kappa)\quad\hbox{with}\quad T_2(\kappa)\in\O(1). $$ Therefore, \eqref{sol2} implies that the function $$ O(\varepsilon)\ni\kappa\mapsto \big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}\in\B(\H) $$ extends continuously to a function $\widetilde O(\varepsilon)\ni\kappa\mapsto\M(\lambda,\kappa)\in\B(\H)$, with $\M(\lambda,\kappa)$ given by \begin{equation}\label{eq_expansion_2} \M(\lambda,\kappa) =\big(J_0(\kappa)+S\big)^{-1} +\frac1{\kappa^2}\big(J_0(\kappa)+S\big)^{-1}SJ_1(\kappa)^{-1}S \big(J_0(\kappa)+S\big)^{-1}. \end{equation} \end{proof} We now give a result on the possible embedded eigenvalues. Since it is already known that the eigenvalues of $H$ in $\sigma(H)\setminus\tau$ are of finite multiplicity and can accumulate at points of $\tau$ only (see \cite[Thm.~3.4(b)]{Tie06}), we show that such accumulations do not take place\;\!: \begin{Corollary}\label{noaccp} Suppose that $V\in\linf(\Omega;\R)$ has bounded support. Then, the point spectrum of $H$ has no accumulation point (except possibly at $+\infty$). \end{Corollary} \begin{proof} To show the absence of local accumulation of eigenvalues, suppose, by way of contradiction, that there is an accumulation of eigenvalues at some point $\lambda\in\tau$. Then, the validity of the expansion \eqref{eq_expansion_1} at the point $\lambda$ contradicts the validity of the expansion \eqref{eq_expansion_2}, which would take place at each of these eigenvalues. Indeed, the former provides a continuous (hence locally bounded) extension of $\big(u+vR_0(\lambda-\kappa^2)\;\!v\big)^{-1}$ on $\widetilde O(\varepsilon)$, which is incompatible with the singular behavior that \eqref{eq_expansion_2} would produce at each of these eigenvalues. Thus, there is no accumulation of eigenvalues at points of $\tau$, and the claim is proved. \end{proof} We end this section with some auxiliary results which will be useful later on. All notations and definitions are borrowed from the proof of Proposition \ref{Prop_Asymp}. The only change is that we extend by $0$ the operators defined originally on subspaces of $\H$ to get operators defined on all of $\H$. \begin{Lemma}\label{com_simples} Take $2\ge j\ge k\ge0$ and $\kappa\in\widetilde O(\varepsilon)$ with $\varepsilon>0$ small enough. Then, one has in $\B(\H)$ $$ \big[S_j,\big(I_k(\kappa)+S_k\big)^{-1}\big]\in\O(\kappa). $$ \end{Lemma} \begin{proof} The fact that $S_j$ is the orthogonal projection on the kernel of $I_j(0)$ and the relations $S_kS_j=S_j=S_jS_k$ imply that $[S_k,S_j]=0$ and $[I_k(0),S_j]=0$.
Thus, one has the equalities \begin{align*} \big[S_j,\big(I_k(\kappa)+S_k\big)^{-1}\big] &=\big(I_k(\kappa)+S_k\big)^{-1}\big[I_k(\kappa)+S_k,S_j\big] \big(I_k(\kappa)+S_k\big)^{-1}\\ &=\big(I_k(\kappa)+S_k\big)^{-1}\big[I_k(0)+\O(\kappa)+S_k,S_j\big] \big(I_k(\kappa)+S_k\big)^{-1}\\ &=\big(I_k(\kappa)+S_k\big)^{-1}\big[\O(\kappa),S_j\big] \big(I_k(\kappa)+S_k\big)^{-1}, \end{align*} which implies the claim. \end{proof} Given $\lambda\in\tau$, we recall that $N=\big\{n\ge1\mid\lambda_n=\lambda\big\}$ and $\P=\sum_{n\in N}\P_n$. \begin{Lemma}\label{relations_simples} Let $\lambda\in\tau$ and let $\G$ be an auxiliary Hilbert space. \begin{enumerate} \item[(a)] For each $n\in N$, one has $(\P_n\otimes1)\;\!vS_0=0$. \item[(b)] For each $n\notin N$ and $B_n\in\B(\H,\G)$ such that $B_n^*B_n=\im\big\{v\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)v\big\}$, one has $S_1B_n^*=0$ and $B_nS_1=0$. \end{enumerate} \end{Lemma} \begin{proof} The first claim follows from the fact that $S_0$ is the orthogonal projection on $\ker\big(v\;\!(\P\otimes1)\;\!v\big)$. The second claim follows from Lemma \ref{Cor_magique} applied with $Z_n=B_nS_0$ and $$ A_0 =S_0M_1(0)S_0 =S_0\left\{N_1(0)+u +\sum_{n\notin N}v\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)v\right\}S_0 $$ if one takes into account the relations $S_0S_1=S_1=S_1S_0$. \end{proof} For what follows, we recall that $Q$ is the multiplication operator by the variable in $\ltwo(\R)$. \begin{Lemma}\label{la_cle_des_champs} One has \begin{enumerate} \item[(a)] $XS_2=0=S_2X$, with $X$ the real part of the operator $M_1(0)$, \item[(b)] $S_2\;\!(1\otimes Q)\;\!v\;\!(f_n\otimes1)=0\,$ for all $n\in N$, \item[(c)] $M_1(0)S_2=0=S_2M_1(0)$. \end{enumerate}\end{Lemma} \begin{proof} First, we recall from the proof of Proposition \ref{Prop_Asymp} that $$ I_2(0) =S_1M_2(0)S_1 =S_1N_2(0)S_1-2S_1X\big(I_0(0)+S_0\big)^{-1}X S_1, $$ with $N_2(0)$ given (in the usual bra-ket notation) by \begin{align*} N_2(0) =\frac14\sum_{n\in N}\big\{ &\big|(1\otimes Q^2)\;\!v\;\!(f_n\otimes1)\big\rangle\big\langle v\;\!(f_n\otimes1)\big| +\big|v\;\!(f_n\otimes1)\big\rangle\big\langle(1\otimes Q^2)\;\!v\;\!(f_n\otimes1)\big|\\ &-2\;\!\big|(1\otimes Q)\;\!v\;\!(f_n\otimes1)\big\rangle \big\langle(1\otimes Q)\;\!v\;\!(f_n\otimes1)\big|\big\}. \end{align*} Now, let $\varphi\in S_2\H$. Then, we have $I_2(0)\varphi=0$ and \begin{align}\label{eq_rhume} \big\langle\varphi,N_2(0)\varphi\big\rangle =2\;\!\big\langle\varphi,X\big(I_0(0)+S_0\big)^{-1}X\varphi\big\rangle. \end{align} In addition, one infers from the relation $S_2=S_0S_2$ and Lemma \ref{relations_simples}(a) that $$ \big\langle\varphi,\big\{\big|(1\otimes Q^2)\;\!v\;\!(f_n\otimes1)\big\rangle \big\langle v\;\!(f_n\otimes1)\big|\big\}\varphi\big\rangle =\big\langle\varphi,(1\otimes Q^2)v\;\!(f_n\otimes1)\big\rangle \big\langle S_0\;\!v\;\!(f_n\otimes1),\varphi\big\rangle =0, $$ and thus \eqref{eq_rhume} reduces to $$ -\left\langle\varphi,\sum_{n\in N}\big\{\big|(1\otimes Q)\;\!v\;\!(f_n\otimes1)\big\rangle \big\langle(1\otimes Q)\;\!v\;\!(f_n\otimes1)\big|\big\}\varphi\right\rangle =4\left\langle\varphi,X\big(I_0(0)+S_0\big)^{-1}X\varphi\right\rangle. $$ Since the l.h.s. is non-positive and the r.h.s. is non-negative, both sides of the equality are equal to $0$. This implies that $$ \big\langle(1\otimes Q)\;\!v\;\!(f_n\otimes1),\varphi\big\rangle=0 ~\hbox{for each}~n\in N\quad\hbox{and}\quad \big\|\big(I_0(0)+S_0\big)^{-1/2}X\varphi\big\|^2=0, $$ from which the points (a) and (b) are easily deduced.
Finally, we note that $M_1(0)S_2=XS_2$ and $S_2M_1(0)=S_2X$ due to the proof of Proposition \ref{Prop_Asymp}. So, the point (c) follows from the point (a). \end{proof} \subsection{Scattering theory and spectral representation} In this section, we recall some basics on the scattering theory for the pair $\{H_0,H\}$ and on the spectral decomposition for $H_0$. As before, we assume that $V\in\linf(\Omega;\R)$ has bounded support. Under this assumption, it is well known that the wave operators $$ W_\pm:=\slim_{t\to\pm\infty}\e^{itH}\e^{-itH_0} $$ exist and are complete (see \cite[Cor.~3.5(b)]{Tie06}). As a consequence, the scattering operator $S:=W_+^*W_-$ is a unitary operator in $\H$ which commutes with $H_0$, and thus $S$ is decomposable in the spectral representation of $H_0$. So, in order to proceed, we start by recalling the spectral representation of $H_0$. For that purpose, we define for each $\lambda\in[\lambda_1,\infty)$ the finite set $$ \N(\lambda):=\big\{n\ge1 \mid \lambda_n\le\lambda\big\} $$ and the finite-dimensional space $$ \Hrond(\lambda) :=\bigoplus_{n\in\N(\lambda)} \big\{\P_n\;\!\ltwo(\Sigma)\oplus\P_n\;\!\ltwo(\Sigma)\big\}, $$ with $\lambda_n$ and $\P_n$ as in Section \ref{secwave}. Note that $\Hrond(\lambda)$ is naturally embedded in $ \Hrond(\infty) :=\bigoplus_{n\ge1}\big\{\P_n\;\!\ltwo(\Sigma)\oplus\P_n\;\!\ltwo(\Sigma)\big\} $. Now, for any $\xi\in \R$, we let $\gamma(\xi):{\mathscr S}(\R)\to\C$ be the trace operator given by $\gamma(\xi)f=f(\xi)$, with ${\mathscr S}(\R)$ the Schwartz space on $\R$. Also, we define for each $\lambda\in[\lambda_1,\infty)\setminus\tau$ the operator $T(\lambda):\ltwo(\Sigma)\odot{\mathscr S}(\R)\to\Hrond(\lambda)$ by $$ \big(T(\lambda)\;\!\varphi\big)_n :=(\lambda-\lambda_n)^{-1/4} \big\{\big(\P_n\otimes\gamma(-\sqrt{\lambda-\lambda_n})\big)\varphi, \big(\P_n\otimes\gamma(\sqrt{\lambda-\lambda_n})\big)\varphi\big\}, \quad n\in\N(\lambda). $$ Some regularity properties of the map $\lambda\mapsto T(\lambda)$ have been established in \cite[Lemma~2.4]{Tie06}, and additional properties are derived below for the related map $\lambda\mapsto\F_0(\lambda)$ which we now define. Let $\F:\ltwo(\R)\to\ltwo(\R)$ be the Fourier transform and let $\Hrond:=\int_{[\lambda_1,\infty)}^\oplus\Hrond(\lambda)\,\d\lambda$. Then, the operator $\F_0:\H\to\Hrond$ given by $$ (\F_0\;\!\varphi)(\lambda) \equiv\F_0(\lambda)\;\!\varphi :=2^{-1/2}\;\!T(\lambda)(1\otimes \F)\;\!\varphi, \quad\lambda\in[\lambda_1,\infty)\setminus\tau, ~\varphi\in\ltwo(\Sigma)\odot{\mathscr S}(\R), $$ is unitary and satisfies $\F_0H_0\F_0^*=\int_{[\lambda_1,\infty)}^\oplus\lambda\,\d\lambda$ (see \cite[Prop.~2.5]{Tie06}). We shall need some expansions for the map $\lambda\mapsto\F_0(\lambda)$ in neighbourhoods of points $\lambda\in\tau\cup\sigma_{\rm p}(H)$. For this, we define for each $\lambda>\lambda_1$, each $n\ge1$ such that $\lambda_n<\lambda$, and each $\sigma\in\{+,-\}$ $$ \F_0(\lambda;n,\sigma)\;\!\varphi := 2^{-1/2}(\lambda-\lambda_n)^{-1/4} \big(\P_n\otimes \gamma(\sigma\sqrt{\lambda-\lambda_n})\F\big)\varphi, \quad\varphi\in\ltwo(\Sigma)\odot{\mathscr S}(\R). $$ The operator $\F_0(\lambda;n,\sigma):\ltwo(\Sigma)\odot{\mathscr S}(\R)\to\P_n\;\!\ltwo(\Sigma)$ is defined on a slightly larger set of $\lambda$ than the operator $\F_0(\lambda):\ltwo(\Sigma)\odot{\mathscr S}(\R)\to\Hrond(\lambda)$.
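In passing, let us note the following direct consequence of the definitions\;\!: for $\lambda\in[\lambda_1,\infty)\setminus\tau$, $n\in\N(\lambda)$ and $\varphi\in\ltwo(\Sigma)\odot{\mathscr S}(\R)$, one has, componentwise, $$ \big(\F_0(\lambda)\;\!\varphi\big)_n =\big\{\F_0(\lambda;n,-)\;\!\varphi,\F_0(\lambda;n,+)\;\!\varphi\big\}, $$ which relates the channel operators $\F_0(\lambda;n,\sigma)$ to the operator $\F_0(\lambda)$.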
Also, we define the sets $$ \partial O(\varepsilon) :=\big\{\kappa\in\C\mid\kappa\in(0,\varepsilon)\cup(0,-i\varepsilon)\big\} \subset\widetilde O(\varepsilon), \quad\varepsilon>0, $$ for which $-\kappa^2\in(-\varepsilon^2,\varepsilon^2)\setminus\{0\}$ if $\kappa\in\partial O(\varepsilon)$, and we let $\ltwo_s(\R)$ be the domain of $\langle Q\rangle^s$, $s\in\R$, endowed with the graph norm. Then, given $\lambda\in\tau\cup\sigma_{\rm p}(H)$, we consider for each $\kappa\in\partial O(\varepsilon)$ with $\varepsilon>0$ small enough the asymptotic expansion in $\kappa$ of the operator $\F_0(\lambda-\kappa^2;n,\sigma)$. If $\lambda_n<\lambda$, one has for $\kappa\in\partial O(\varepsilon)$ with $\varepsilon>0$ small enough $$ (\lambda-\kappa^2-\lambda_n)^{-1/4} =(\lambda-\lambda_n)^{-1/4}\left(1+\frac{\kappa^2}{4(\lambda-\lambda_n)} +\O(\kappa^4)\right). $$ Similarly, if $s>0$ is big enough and if $\sigma\in\{+,-\}$, one has in $\B\big(\ltwo_s(\R),\C\big)$ $$ \gamma(\sigma\sqrt{\lambda-\kappa^2-\lambda_n})\;\!\F =\gamma(\sigma\sqrt{\lambda-\lambda_n})\;\!\F \left(1+\frac{i\sigma\kappa^2}{2\sqrt{\lambda-\lambda_n}}\;\!Q\right)+\O(\kappa^4). $$ As a consequence, we have in $\B\big(\ltwo(\Sigma)\otimes\ltwo_s(\R);\P_n\;\!\ltwo(\Sigma)\big)$ \begin{equation}\label{dev1} \F_0(\lambda-\kappa^2;n,\sigma) =\F_0(\lambda;n,\sigma)\left(1+\frac{\kappa^2}{4(\lambda-\lambda_n)} +\frac{i\sigma\kappa^2}{2\sqrt{\lambda-\lambda_n}}\;\!Q\right)+\O(\kappa^4). \end{equation} On the other hand, if $\lambda=\lambda_n\in\tau$ and $-\kappa^2>0$, then one obtains in $\B\big(\ltwo(\Sigma)\otimes\ltwo_s(\R),\P_n\;\!\ltwo(\Sigma)\big)$ \begin{equation}\label{dev2} \F_0(\lambda-\kappa^2;n,\sigma) =(-\kappa^2)^{-1/4}\;\!\gamma_0(n)-i\sigma(-\kappa^2)^{1/4}\;\!\gamma_1(n) +\O(|\kappa|^{3/2}) \end{equation} with $\gamma_j(n):\ltwo(\Sigma)\otimes\ltwo_s(\R)\to\P_n\;\!\ltwo(\Sigma)$ the operator given by $$ \big(\gamma_j(n)\varphi\big)(\omega) :=\frac1{2\;\!j!\sqrt\pi}\int_\R x^j\big((\P_n\otimes1)\varphi\big)(\omega,x)\,\d x \quad\hbox{for almost every }\omega\in\Sigma. $$ With these expansions at hand, we can start the study of the regularity properties of the scattering matrix at thresholds or at embedded eigenvalues. Before that, we just need to give a final auxiliary result. Recall that the orthogonal projections $S_0$ and $S_1$ have been introduced in the proof of Proposition \ref{Prop_Asymp}. \begin{Lemma}\label{help1} Take $\lambda\in\tau$, $\sigma\in\{+,-\}$, and $\kappa\in\partial O(\varepsilon)$ with $\varepsilon>0$ small enough. \begin{enumerate} \item[(a)] For $n\ge1$ such that $\lambda_n<\lambda$, one has $\F_0(\lambda-\kappa^2;n,\sigma)\;\!vS_1\in\O(\kappa^2)$. \item[(b)] For $n\ge1$ such that $\lambda_n=\lambda$ and for $-\kappa^2>0$, one has $\F_0(\lambda-\kappa^2;n,\sigma)\;\!vS_0=0$. \end{enumerate} \end{Lemma} \begin{proof} (a) Due to the expansion \eqref{dev1}, it is sufficient to show the equality $\F_0(\lambda;n,\sigma)vS_1=0$. For that purpose, we define the operator $B_n:\H\to\P_n\;\!\ltwo(\Sigma)\oplus\P_n\;\!\ltwo(\Sigma)$ by $$ B_n\;\!\varphi :=\pi^{1/2}\big\{\F_0(\lambda;n,-)\;\!v\;\!\varphi, \F_0(\lambda;n,+)\;\!v\;\!\varphi\big\}, $$ and note that $B_n^*B_n=\im\big\{v\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)v\big\}$. The mentioned equality then follows from Lemma \ref{relations_simples}(b). (b) The claim is a direct consequence of the identity $$ \F_0(\lambda-\kappa^2;n,\sigma)\;\!vS_0 =\F_0(\lambda-\kappa^2;n,\sigma)(\P_n\otimes1)\;\!vS_0 $$ and Lemma \ref{relations_simples}(a). 
\end{proof} \subsection{Continuity of the scattering matrix} Since the scattering operator $S$ commutes with $H_0$, it follows from the spectral decomposition of $H_0$ that $$ \F_0\;\!S\;\!\F_0^*=\int_{[\lambda_1,\infty)}^\oplus S(\lambda)\,\d\lambda, $$ where $S(\lambda)$, the scattering matrix at energy $\lambda$, is defined and is a unitary operator in $\Hrond(\lambda)$ for almost every $\lambda\in[\lambda_1,\infty)$. In addition, one can obtain a convenient stationary formula for $S(\lambda)$ using time-dependent scattering theory. For instance, if one uses the results of \cite[Sec.~3.1]{Tie06} and relation \eqref{resolv_eq}, one obtains for each $\lambda\in[\lambda_1,\infty)\setminus\{\tau\cup\sigma_{\rm p}(H)\}$ the equality in $\B\big(\Hrond(\lambda)\big)$ $$ S(\lambda) =1-2\pi i\;\!\F_0(\lambda)\;\!v\big(u+vR_0(\lambda)\;\!v\big)^{-1}v\;\!\F_0(\lambda)^*, $$ and that the map $$ [\lambda_1,\infty)\setminus\{\tau\cup\sigma_{\rm p}(H)\}\ni\lambda\mapsto S(\lambda) \in\B\big(\Hrond(\infty)\big) $$ is $k$-times continuously differentiable for any $k\ge0$. Since the regularity of the map $\lambda\mapsto S(\lambda)$ is already known when $\lambda\in[\lambda_1,\infty)\setminus\{\tau \cup \sigma_{\rm p}(H)\}$, we now describe the behavior of $S(\lambda)$ as $\lambda$ approaches points of $\tau\cup\sigma_{\rm p}(H)$. To do this, we decompose the scattering matrix $S(\lambda)$ into a collection of channel scattering matrices corresponding to the transverse modes of the waveguide. Namely, for $\lambda\in[\lambda_1,\infty)\setminus\{\tau\cup\sigma_{\rm p}(H)\}$, for $n,n'\ge1$ such that $\lambda_n<\lambda$ and $\lambda_{n'}<\lambda$, and for $\sigma,\sigma'\in\{+,-\}$, we define the operators $ S(\lambda;n,\sigma,n',\sigma') \in\B\big(\P_{n'}\;\!\ltwo(\Sigma),\P_n\;\!\ltwo(\Sigma)\big) $ by $$ S(\lambda;n,\sigma,n',\sigma') :=\delta_{n\sigma n'\sigma'}-2\pi i\;\!\F_0(\lambda;n,\sigma)\;\!v \big(u+vR_0(\lambda)\;\!v\big)^{-1}v\;\!\F_0(\lambda;n',\sigma')^* $$ with $\delta_{n\sigma n'\sigma'}:=1$ if $(n,\sigma)=(n',\sigma')$, and $\delta_{n\sigma n'\sigma'}:=0$ otherwise. We consider separately the continuity at thresholds and the continuity at embedded eigenvalues, starting with the thresholds. Note that for each $\lambda\in\tau$, a channel can either be already open (in which case one has to show the existence and the equality of the limits from the right and from the left), or can open at the energy $\lambda$ (in which case one has only to show the existence of the limit from the right). \begin{Proposition}\label{propcon} Suppose that $V\in\linf(\Omega;\R)$ has bounded support and take $\lambda\in\tau$, $\kappa\in\partial O(\varepsilon)$ with $\varepsilon>0$ small enough, $n,n'\ge1$, and $\sigma,\sigma'\in\{+,-\}$. \begin{enumerate} \item[(a)] If $\lambda_n<\lambda$ and $\lambda_{n'}<\lambda$, then the limit $\lim_{\kappa\to0}S(\lambda-\kappa^2;n,\sigma,n',\sigma')$ exists. \item[(b)] If $\lambda_n\le\lambda$, $\lambda_{n'}\le\lambda$ and $-\kappa^2>0$, then the limit $\lim_{\kappa\to0}S(\lambda-\kappa^2 ;n,\sigma,n',\sigma')$ exists. \end{enumerate} \end{Proposition} Before giving the proof, we define for $2\ge j\ge k\ge0$ the operators $$ C_{jk}(\kappa):=\big[S_j,\big(I_k(\kappa)+S_k\big)^{-1}\big]\in\B(\H). $$ We know from Lemma \ref{com_simples} that $C_{jk}(\kappa)\in\O(\kappa)$, but the formulas \eqref{form_I_0}, \eqref{form_I_1} and \eqref{form_I_2} imply in fact that $C_{jk}'(0):=\lim_{\kappa\to0}\frac1\kappa\;\!C_{jk}(\kappa)$ exists in $\B(\H)$.
In the sequel, we use the notation $F(\kappa)\in\Oa(\kappa^n)$ for an operator $F(\kappa)\in\O(\kappa^n)$ such that $\lim_{\kappa \to 0}\kappa^{-n}F(\kappa)$ exists in $\B(\H)$. Finally, we note that \eqref{eq_expansion_1} can be rewritten as \begin{align} &\M(\lambda,\kappa)\nonumber\\ &=2\kappa\big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\Big(S_0\big(I_0(\kappa)+S_0\big)^{-1}-C_{00}(\kappa)\Big) S_0\big(I_1(\kappa)+S_1\big)^{-1}S_0 \Big(\big(I_0(\kappa)+S_0\big)^{-1}S_0+C_{00}(\kappa)\Big)\nonumber\\ &\quad+\frac1\kappa\big(I_0(\kappa)+S_0\big)^{-1} \Big(S_1\big(I_1(\kappa)+S_1\big)^{-1}-S_0C_{11}(\kappa)\Big) S_1\big(I_2(\kappa)+S_2\big)^{-1}S_1\nonumber\\ &\qquad\times\Big(\big(I_1(\kappa)+S_1\big)^{-1}S_1+C_{11}(\kappa)S_0\Big) \big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\frac1{\kappa^2}\big(I_0(\kappa)+S_0\big)^{-1}S_0 \big(I_1(\kappa)+S_1\big)^{-1} \Big(S_2\big(I_2(\kappa)+S_2\big)^{-1}-S_1C_{22}(\kappa)\Big)S_2I_3(\kappa)^{-1}S_2\nonumber\\ &\qquad\times\Big(\big(I_2(\kappa)+S_2\big)^{-1}S_2+C_{22}(\kappa)S_1\Big) \big(I_1(\kappa)+S_1\big)^{-1}S_0\big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &=2\kappa\big(I_0(\kappa)+S_0\big)^{-1}\nonumber\\ &\quad+\Big(S_0\big(I_0(\kappa)+S_0\big)^{-1}-C_{00}(\kappa)\Big)S_0 \big(I_1(\kappa)+S_1\big)^{-1}S_0 \Big(\big(I_0(\kappa)+S_0\big)^{-1}S_0+C_{00}(\kappa)\Big)\nonumber\\ &\quad+\frac1\kappa\bigg\{\Big(S_1\big(I_0(\kappa)+S_0\big)^{-1}-C_{10}(\kappa)\Big) \big(I_1(\kappa)+ S_1\big)^{-1}-\Big(S_0\big(I_0(\kappa)+S_0\big)^{-1} -C_{00}(\kappa)\Big)C_{11}(\kappa)\bigg\}\nonumber\\ &\qquad\times S_1\big(I_2(\kappa)+S_2\big)^{-1}S_1\bigg\{\big(I_1(\kappa)+S_1\big)^{-1} \Big(\big(I_0(\kappa)+S_0\big)^{-1}S_1+C_{10}(\kappa)\Big)\nonumber\\ &\qquad+C_{11}(\kappa)\Big(\big(I_0(\kappa)+S_0\big)^{-1}S_0 +C_{00}(\kappa)\Big)\bigg\}\nonumber\\ &\quad+\frac1{\kappa^2}\Bigg\{\bigg[\Big(S_2\big(I_0(\kappa)+S_0\big)^{-1} -C_{20}(\kappa)\Big)\big(I_1(\kappa)+S_1\big)^{-1}\nonumber\\ &\qquad-\Big(S_0\big(I_0(\kappa)+S_0\big)^{-1}-C_{00}(\kappa)\Big)C_{21}(\kappa)\bigg] \big(I_2(\kappa)+S_2\big)^{-1}\nonumber\\ &\qquad-\bigg[\Big(S_1\big(I_0(\kappa)+S_0\big)^{-1}-C_{10}(\kappa)\Big) \big(I_1(\kappa)+S_1\big)^{-1}\nonumber\\ &\qquad-\Big(S_0\big(I_0(\kappa)+S_0\big)^{-1} -C_{00}(\kappa)\Big)C_{11}(\kappa)\bigg]C_{22}(\kappa)\Bigg\}S_2I_3(\kappa)^{-1}S_2\nonumber\\ &\qquad\times\Bigg\{\big(I_2(\kappa)+S_2\big)^{-1}\bigg[\big(I_1(\kappa)+S_1\big)^{-1} \Big(\big(I_0(\kappa)+S_0\big)^{-1}S_2+C_{20}(\kappa)\Big)\nonumber\\ &\qquad+C_{21}(\kappa)\Big(\big(I_0(\kappa)+S_0\big)^{-1}S_0+C_{00}(\kappa)\Big)\bigg]\nonumber\\ &\qquad+C_{22}(\kappa)\bigg[\big(I_1(\kappa)+S_1\big)^{-1} \Big(\big(I_0(\kappa)+S_0\big)^{-1}S_1+C_{10}(\kappa)\Big)\nonumber\\ &\qquad+C_{11}(\kappa)\Big(\big(I_0(\kappa)+S_0\big)^{-1}S_0 +C_{00}(\kappa)\Big)\bigg]\Bigg\}.\label{grosse_formule} \end{align} The advantage of this formulation is that the projections $S_j$ (which lead to simplifications in the proof) appear explicitly at the beginning or at the end of each term.
\begin{proof} (a) Some lengthy, but direct, computations taking into account the expansion \eqref{grosse_formule}, the relation $\big(I_j(0)+S_j\big)^{-1}S_j=S_j$, the expansion \eqref{dev1} for $\F_0(\lambda-\kappa^2;n,\sigma)$ and $\F_0(\lambda-\kappa^2;n',\sigma')$ and Lemma \ref{help1}(a) lead to the equality \begin{align*} &\lim_{\kappa\to0}\F_0(\lambda-\kappa^2;n,\sigma)v\;\!\M(\lambda,\kappa) v\F_0(\lambda-\kappa^2;n',\sigma')^*\\ &=\F_0(\lambda;n,\sigma)\;\!vS_0\big(I_1(0)+S_1\big)^{-1}S_0v\F_0(\lambda;n',\sigma')^*\\ & \quad-\F_0(\lambda;n,\sigma)\;\!v \big(C_{20}'(0)+S_0 C_{21}'(0)\big)S_2I_3(0)^{-1}S_2 \big(C_{20}'(0)+C_{21}'(0)S_0\big)v\F_0(\lambda;n',\sigma')^*. \end{align*} Since \begin{equation}\label{start} S(\lambda-\kappa^2 ;n,\sigma,n',\sigma')-\delta_{n\sigma n'\sigma'} =-2\pi i\F_0(\lambda-\kappa^2;n,\sigma)v\;\!\M(\lambda,\kappa)v \F_0(\lambda-\kappa^2;n',\sigma')^*, \end{equation} this proves the claim. (b.1) We first consider the case $\lambda_n<\lambda$, $\lambda_{n'}=\lambda$ (the case $\lambda_n=\lambda$, $\lambda_{n'}<\lambda$ is not presented since it is similar). An inspection taking into account the expansion \eqref{grosse_formule}, the relation $\big(I_j(\kappa)+S_j\big)^{-1}=\big(I_j(0)+S_j\big)^{-1}+\Oa(\kappa)$ and the relation $\big(I_j(0)+S_j\big)^{-1}S_j=S_j$ leads to the equation \begin{align} &\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\M(\lambda,\kappa)\;\!v\;\! \F_0(\lambda-\kappa^2;n',\sigma')^*\nonumber\\ &=\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\! \bigg\{\Oa(\kappa)+S_0\big(I_1(\kappa)+S_1\big)^{-1}S_0\nonumber\\ &\quad+\frac1\kappa\big(S_1+\Oa(\kappa)\big)S_1 \big(I_2(\kappa)+S_2\big)^{-1}S_1\big(S_1+\Oa(\kappa)\big)\nonumber\\ &\quad+\frac1{\kappa^2}\Big[\Oa(\kappa^2)+S_2\big(I_0(\kappa)+ S_0\big)^{-1} \big(I_1(\kappa)+S_1\big)^{-1}\big(I_2(\kappa)+S_2\big)^{-1} -C_{20}(\kappa)-S_0C_{21}(\kappa)\nonumber\\ &\qquad-S_1C_{22}(\kappa)\Big]S_2 I_3(\kappa)^{-1}S_2 \Big[\Oa(\kappa^2)+\big(I_2(\kappa)+S_2\big)^{-1}\big(I_1(\kappa)+S_1\big)^{-1} \big(I_0(\kappa)+S_0\big)^{-1}S_2\nonumber\\ &\qquad+C_{20}(\kappa)+C_{21}(\kappa)S_0+C_{22}(\kappa)S_1\Big]\bigg\} \;\!v\;\!\F_0(\lambda-\kappa^2;n',\sigma')^*.\label{eq_bordeaux} \end{align} Applying Lemma \ref{help1} to the previous equation gives \begin{align*} &\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\M(\lambda,\kappa)\;\!v\;\! \F_0(\lambda-\kappa^2;n',\sigma')^*\\ &=\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\! \bigg\{\Oa(\kappa)-\frac1{\kappa^2}\big(\O(\kappa^2)+C_{20}(\kappa)+S_0C_{21}(\kappa)\big) S_2 I_3(\kappa)^{-1}S_2\big(\Oa(\kappa^2)+C_{20}(\kappa)\big)\bigg\}\\ &\qquad\times v\;\!\F_0(\lambda-\kappa^2;n',\sigma')^*. \end{align*} Finally, taking into account the expansion \eqref{dev1} for $\F_0(\lambda-\kappa^2;n,\sigma)$ and the expansion \eqref{dev2} for $\F_0(\lambda-\kappa^2;n',\sigma')$, one ends up with \begin{align} &\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\M(\lambda,\kappa)\;\!v\;\! \F_0(\lambda-\kappa^2;n',\sigma')^*\nonumber\\ &=(-\kappa^2)^{-5/4}\F_0(\lambda;n,\sigma)\;\!v\;\!\big(\O(\kappa^2)+C_{20}(\kappa) +S_0C_{21}(\kappa)\big)S_2I_3(\kappa)^{-1}S_2\big(\Oa(\kappa^2)+C_{20}(\kappa)\big) \;\!v\;\!\gamma_0(n')^*\nonumber\\ &\quad+\O(|\kappa|^{1/2})\label{fin1}, \end{align} where $\gamma_0(n')^*$ is given by $\gamma_0(n')^*\psi=\frac1{2\sqrt\pi}\;\!\psi\otimes1$ for any $\psi\in\P_{n'}\;\!\ltwo(\Sigma)$. 
Now, Lemma \ref{la_cle_des_champs}(c) implies that $[M_1(0),S_2]=0$, and thus that \begin{equation}\label{eq_C20} C_{20}(\kappa) =2\kappa\;\!\big(I_0(0)+S_0\big)^{-1}[M_1(0),S_2]\big(I_0(0)+S_0\big)^{-1} +\O(\kappa^2) =\O(\kappa^2). \end{equation} As a consequence, one infers from \eqref{fin1} that $ \F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\M(\lambda,\kappa)\;\!v\;\! \F_0(\lambda-\kappa^2;n',\sigma')^* $ vanishes as $\kappa\to0$, and thus that the limit $\lim_{\kappa\to0}S(\lambda-\kappa^2;n,\sigma,n',\sigma')$ also vanishes by \eqref{start}. (b.2) We are left with the case $\lambda_n=\lambda=\lambda_{n'}$. An inspection of the expansion \eqref{grosse_formule} taking into account the relation $\big(I_\ell(\kappa)+S_\ell\big)^{-1}=\big(I_\ell(0)+S_\ell\big)^{-1}+\Oa(\kappa)$, the relation $\big(I_\ell(0)+S_\ell\big)^{-1}S_\ell=S_\ell$ and Lemma \ref{help1}(b) leads to the equation \begin{align*} &\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\M(\lambda,\kappa)\;\!v\;\! \F_0(\lambda-\kappa^2;n',\sigma')^*\\ &=\F_0(\lambda-\kappa^2;n,\sigma)\;\!v\;\!\bigg\{\Oa(\kappa^2) +\kappa\big(I_0(\kappa)+S_0\big)^{-1} -\frac1\kappa\;\!C_{10}(\kappa)S_1\big(I_2(\kappa)+S_2\big)^{-1}S_1C_{10}(\kappa)\\ &\quad-\frac1{\kappa^2}\;\!\big(\Oa(\kappa^2)+C_{20}(\kappa)\big)S_2 I_3(\kappa)^{-1}S_2 \big(\Oa(\kappa^2)+C_{20}(\kappa)\big)\bigg\}\;\!v\;\!\F_0(\lambda-\kappa^2;n',\sigma')^*. \end{align*} Therefore, the expansion \eqref{dev2} for $\F_0(\lambda-\kappa^2;n,\sigma)$ and $\F_0(\lambda-\kappa^2;n',\sigma')$ and the inclusion $C_{20}(\kappa)\in\O(\kappa^2)$ (see \eqref{eq_C20}) imply that the limit $$ \lim_{\kappa\to0}\F_0(\lambda-\kappa^2;n,\sigma) \;\!v\;\!\M(\lambda,\kappa)\;\!v\;\!\F_0(\lambda-\kappa^2;n',\sigma')^* $$ exists, and thus that the limit $\lim_{\kappa\to0}S(\lambda-\kappa^2;n,\sigma,n',\sigma')$ also exists by \eqref{start}. \end{proof} We finally consider the continuity of the scattering matrix at embedded eigenvalues not located at thresholds. \begin{Proposition} Suppose that $V\in\linf(\Omega;\R)$ has bounded support and take $\lambda\in\sigma_{\rm p}(H)\setminus\tau$, $\kappa\in\partial O(\varepsilon)$ with $\varepsilon>0$ small enough, $n,n'\ge1$, and $\sigma,\sigma'\in\{+,-\}$. Then, if $\lambda_n<\lambda$ and $\lambda_{n'}<\lambda$, the limit $\lim_{\kappa\to0}S(\lambda-\kappa^2;n,\sigma,n',\sigma')$ exists. \end{Proposition} \begin{proof} We know from \eqref{eq_expansion_2} that $$ \M(\lambda,\kappa) =\big(J_0(\kappa)+S\big)^{-1}+\frac1{\kappa^2}\;\!\big(J_0(\kappa)+S\big)^{-1}S J_1(\kappa)^{-1}S\big(J_0(\kappa)+S\big)^{-1}, $$ with $S$ the Riesz projection associated with the value $0$ of the operator $ T_0=u+\sum_nv\big(\P_n\otimes R^0(\lambda-\lambda_n)\big)\;\!v. $ Now, a commutation of $S$ with $\big(J_0(\kappa)+S\big)^{-1}$ gives $$ \M(\lambda,\kappa) =\big(J_0(\kappa)+S\big)^{-1} +\frac1{\kappa^2}\;\!\big\{S\big(J_0(\kappa)+S\big)^{-1}+\Oa(\kappa)\big\} SJ_1(\kappa)^{-1}S\;\!\big\{\big(J_0(\kappa)+S\big)^{-1}S+\Oa(\kappa)\big\}, $$ and a computation as in the proof of Lemma \ref{help1}(a) (but which directly takes Lemma \ref{Cor_magique} into account) shows that $\F_0(\lambda-\kappa^2;n,\sigma)\;\!vS\in\O(\kappa^2)$ and $Sv\;\!\F_0(\lambda-\kappa^2;n',\sigma')^* \in \O(\kappa^2)$. These estimates, together with the expansion \eqref{dev1} for $\F_0(\lambda-\kappa^2;n,\sigma)$ and $\F_0(\lambda-\kappa^2;n',\sigma')^*$ and the equation \eqref{start}, imply the claim. \end{proof}
\section{Introduction} Rain image restoration produces a visually pleasing background (i.e., scene content) and benefits recognition systems (e.g., autonomous driving). Existing attempts at image deraining~\cite{jiang-cvpr17-novel,fu-cvpr17-removing} formulate a rain image as the combination of rain streaks and background. The restoration performance of these methods is limited when the rain is heavy. The limitation occurs because heavy rain, consisting of rain streaks and vapors, causes severe visual degradation. When the rain streaks are clearly visible, a part of them accumulates to form vapors. The vapors produce a veiling effect that decreases image contrast and causes haze. Fig.~\ref{fig:intro} shows an example. Without considering vapors, existing deraining methods do not perform well on heavy rain images, as shown in Fig.~\ref{fig:intro}(b) and Fig.~\ref{fig:intro}(c). \renewcommand{\tabcolsep}{1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{ccc} \includegraphics[width=0.33\linewidth]{figures/intro/input.jpg}& \includegraphics[width=0.33\linewidth]{figures/intro/wang.jpg} & \includegraphics[width=0.33\linewidth]{figures/intro/yang.jpg} \\ (a) Input & (b) SPANet~\cite{wang-cvpr19-spatial} & (c) JORDER~\cite{yang-pami19-joint} \\ \includegraphics[width=0.33\linewidth]{figures/intro/li.jpg} & \includegraphics[width=0.33\linewidth]{figures/intro/ours.jpg}& \includegraphics[width=0.33\linewidth]{figures/intro/gt.jpg} \\ (d) PYM+GAN~\cite{li-cvpr19-heavy} & (e) Ours & (f) Ground Truth \\ \end{tabular} \end{center} \caption{Rain image restoration results. The input image is shown in (a). Results of SPANet~\cite{wang-cvpr19-spatial}, JORDER~\cite{yang-pami19-joint} and PYM+GAN~\cite{li-cvpr19-heavy} are shown in (b)-(d). The ground truth is shown in (f). The proposed visual model is effective in formulating rain streaks and vapors, which brings a high quality deraining result, as shown in (e).} \label{fig:intro} \end{figure*} A recent study~\cite{li-cvpr19-heavy} reformulates rain image generation via the following model: \begin{equation}\label{eq:rain-py1} \mathbf{I} = \mathbf{T} \odot (\mathbf{J} + \sum^{n}_{i=1} \mathbf{S}_{i}) + (\mathds{1}-\mathbf{T}) \odot \mathbf{A} \end{equation} where $\mathbf{I}$ is the rain image, $\mathbf{T}$ is the transmission map, $\mathbf{J}$ is the background to be recovered, $\mathbf{S}_{i}$ is the rain streak layer, and $\mathbf{A}$ is the atmosphere light of the scene. In addition, $\mathds{1}$ is a matrix whose entries are all 1 and $\odot$ indicates element-wise multiplication. The transmission map $\mathbf{T}$ encodes the influence of vapors in generating rain images. Based on this model, deraining methods propose various CNNs to predict $\mathbf{T}$, $\mathbf{A}$, and $\mathbf{S}$, from which the background image $\mathbf{J}$ is calculated. Rain streaks and vapors are entangled with each other in practice. Removing them separately is not feasible~\cite{li-cvpr19-heavy}. Meanwhile, this entanglement makes it difficult for Eq.~\eqref{eq:rain-py1} to explicitly model both. As a result, the transmission map and rain streaks are not estimated well. The incorrect estimation introduces unnatural illumination and color contrasts into the background. Although a generative adversarial network~\cite{li-cvpr19-heavy} is employed to refine the background beyond the model constraint, the illumination and color contrasts are not completely corrected, as shown in Fig.~\ref{fig:intro}(d). In this work, we rethink rain image formation by delving into the rain model itself.
We observe that in Eq.~\eqref{eq:rain-py1}, both rain streaks and background are modeled as having the same properties. This is due to the meaning that the two terms in Eq.~\eqref{eq:rain-py1} convey. The first term $\mathbf{T} \odot (\mathbf{J} + \sum^{n}_{i=1} \mathbf{S}_{i})$ indicates that both $\mathbf{S}_{i}$ and $\mathbf{J}$ are transmitted via $\mathbf{T}$. The rain streaks are regarded as part of the background to be transmitted. The second term $(\mathds{1}-\mathbf{T}) \odot \mathbf{A}$ shows that rain streaks do not contribute to the atmosphere light transmission, because only vapors are considered in $\mathbf{T}$. As rain streaks and vapors are entangled with each other, modeling rain streaks as background is not accurate. Based on this observation, we propose a visual model which formulates rain streaks as transmission medium. The entanglement of rain streaks and vapors is then modeled properly from the transmission medium perspective. We show the proposed model in the following: \begin{equation}\label{eq:rain-py2} \mathbf{I} = (\mathbf{T_s}+\mathbf{T_v})\odot \mathbf{J} + [\mathds{1}-(\mathbf{T_s}+\mathbf{T_v})]\odot \mathbf{A} \end{equation} where $\mathbf{T_s}$ and $\mathbf{T_v}$ are the transmission maps of rain streaks and vapors, respectively. In our model, all the variables are extended to the same size, so that element-wise multiplication can be used to describe the relationship between them. A minimal numerical sketch of this formation model is given after the summary of contributions below. Rain streaks appear in various shapes and directions. This phenomenon is more obvious in heavy rain. In order to effectively predict $\mathbf{T_s}$, we propose an encoder-decoder CNN with ShuffleNet units~\cite{zhang-cvpr18-shufflenet}, named SNet. The group convolutions and channel shuffle improve the network's robustness to diverse rain streaks. The multiple learned groups in the ShuffleNet units are able to capture the anisotropic appearance of rain streaks. Furthermore, we predict the transmission map of vapors (i.e., $\mathbf{T_v}$) using a VNet that contains a spatial pyramid pooling (SPP) structure. VNet takes the concatenation of $\mathbf{I}$ and $\mathbf{T_s}$ as input and uses SPP to capture global and local features at multiple scales for a compact representation. In addition, we propose an encoder CNN named ANet to predict the atmosphere light $\mathbf{A}$. ANet is pretrained using training data in a simplified low-transmission condition, under which estimated labels of $\mathbf{A}$ are obtained from the rainy image $\mathbf{I}$. After pretraining ANet, we jointly train SNet, VNet and ANet by measuring the difference between the calculated background $\mathbf{J}$ and the ground truth background. The learned networks predict $\mathbf{T_s}$, $\mathbf{T_v}$, and $\mathbf{A}$ well, and these predictions are further transformed to generate background images. We evaluate the proposed method on standard benchmark datasets. The proposed visual model is shown to be effective in modeling the transmission maps of rain streaks and vapors, which are removed in the generated background images. We summarize the contributions of this work as follows: \begin{itemize}[noitemsep,nolistsep] \item We remodel the rain image formation by formulating rain streaks as transmission medium. The rain streaks and vapor contribute together to transmit both scene content and atmosphere light into input rain images. \item We propose SNet, VNet and ANet to learn the rain streak transmission map, the vapor transmission map and the atmosphere light. These three CNNs are jointly trained to facilitate the rain image restoration process. \item Experiments on the benchmark datasets show the proposed model is effective in predicting rain streaks and vapors. The proposed deraining method performs favorably against state-of-the-art approaches. \end{itemize}
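As an illustration, the following minimal NumPy sketch composes a rainy image from a clean background according to Eq.~\eqref{eq:rain-py2}. It is written for this exposition only and is not the training code; the constant transmission maps and atmosphere light are arbitrary assumptions, whereas in real scenes $\mathbf{T_s}$ drops towards $0$ at streak pixels so that $\mathbf{I}$ approaches $\mathbf{A}$ there. \begin{verbatim}
import numpy as np

def synthesize_rain(J, T_s, T_v, A):
    # Eq. (2): I = (T_s + T_v) . J + (1 - (T_s + T_v)) . A,
    # with all operations element-wise.
    T = T_s + T_v
    return T * J + (1.0 - T) * A

# Toy example; the constant maps below are illustrative only.
H, W = 64, 64
J   = np.random.rand(H, W, 3)       # clean background in [0, 1]
T_s = np.full((H, W, 3), 0.85)      # streak transmission (low at streaks)
T_v = np.full((H, W, 3), 0.10)      # vapor transmission
A   = np.full((H, W, 3), 0.90)      # bright atmospheric light
I   = synthesize_rain(J, T_s, T_v, A)
\end{verbatim} The restoration pipeline described in the next section simply inverts this composition.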
\section{Related Work} Single image deraining originates from dictionary learning \cite{mairal-jmlr10-online}, which was used to counter the negative impact of various rain streaks on the background \cite{wang-tip17-a,wang-icip16-a,kang-tip12-automatic,chen-tcsvt14-visual,zhang-icme06-rain,huang-icme12-context,huang-tmm14-tmm,chen-iccv13-a,luo-iccv15-removing,li-cvpr16-rain}. Recently, deep learning has achieved better deraining performance than the conventional methods. Prevalent deep learning based deraining methods can be categorized into direct mapping methods, residual based methods and scattering model based methods. Direct mapping based methods directly estimate the rain-free background from the observed rainy images via novel CNNs. This category includes the work \cite{wang-cvpr19-spatial}, in which a dataset is first built by incorporating temporal priors and human supervision, and a novel SPANet is then proposed to handle the random distribution of rain streaks in a local-to-global manner. In residual based methods, a residual rain model is proposed to formulate a rainy image as a summation of the background layer and rain layers. This covers the majority of existing deraining methods. For example, Fu et al. train their DerainNet in the high-frequency domain instead of the image domain to extract image details and improve the visual quality of deraining \cite{fu-tip17-clearing}. In the meantime, inspired by the deep residual network (ResNet) \cite{he-cvpr15-deep}, a deep detail network, also trained in the high-pass domain, was proposed to reduce the mapping range from input to output and thus make the learning process easier \cite{fu-cvpr17-removing}. Yang et al. create a new model which introduces atmospheric light and transmission to model various rain streaks and the veiling effect, but the rainy image is finally decomposed into a rain layer and a background layer by their JORDER network. During the training, a binary map is learnt to locate rain streaks and guide the deraining network \cite{yang-cvpr17-deep,yang-pami19-joint}. In \cite{zhang-cvpr18-density}, the density of rain streaks is divided into three classes and automatically estimated to guide the training of a multi-stream densely connected DID-MDN structure, which can better characterize rain streaks with various shapes and sizes. Li et al. regard the rain in rainy images as a summation of multiple rain streak layers, and then use a recurrent neural network to remove rain streaks stage by stage \cite{li-eccv18-recurrent}. Hu et al. study the visual effect of rain in relation to scene depth, based on which fog is introduced to model the formation of rainy images, and a depth feature is learned to guide their end-to-end network to obtain the rain layer \cite{hu-cvpr19-depth}. In \cite{ren-cvpr19-progressive}, a better and simpler deraining baseline is proposed by reconsidering the network structure, the input and output of the network, and the loss functions. In scattering model based methods, the atmospheric light and the transmission of vapor are rendered and learned to remove rain streaks as well as the vapor effect, but rain streaks are treated the same as the background rather than as a transmission medium \cite{li-cvpr19-heavy}.
Different from existing approaches, we reformulate rainy image generation by modeling rain streaks as transmission medium instead of background content, and use two transmission maps to model the influence of rain streaks and vapor on the background. This formulation naturally models the entanglement of rain streaks and vapors and produces more robust results. \section{Proposed Algorithm} We show an overview of the pipeline in Fig. \ref{fig:pipeline}. It consists of SNet, VNet and ANet to estimate the transmission maps and the atmosphere light. The background image can then be computed as follows: \begin{equation}\label{eq:background} \mathbf{J} = \{\mathbf{I}-[\mathds{1}-(\mathbf{T_s}+\mathbf{T_v})]\odot \mathbf{A}\} \oslash (\mathbf{T_s}+\mathbf{T_v}), \end{equation} where $\oslash$ is the element-wise division operation. In the following, we first illustrate the network structures of SNet, VNet, and ANet, and then show how we train these three networks in practice and how they function in rain image restoration. \renewcommand{\tabcolsep}{1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=1\linewidth]{figures/proalgo/whole_net.jpg}\\ \end{tabular} \end{center} \caption{This figure shows our network structure. Pool denotes the adaptive average pooling operation. The Upsample operation after the triangle-shaped network extends the atmospheric light $\mathbf{A}$ to the image size. The notations $\odot$ and $\oslash$ are the pixel-wise multiplication and division, respectively.} \label{fig:pipeline} \end{figure*} \renewcommand{\tabcolsep}{1pt} \begin{figure}[t] \begin{center} \begin{tabular}{c} \includegraphics[width=0.5\linewidth]{figures/proalgo/ShuffleUnit.jpg}\\ \end{tabular} \end{center} \caption{This figure shows our revised ShuffleNet units. ShuffleUnit(add) keeps the feature map size unchanged, and ShuffleUnit(cat) downsamples it once. $+$ and C denote addition and concatenation, respectively.} \label{fig:shuffle} \end{figure} \subsection{SNet}\label{sec:snet} We propose SNet, which takes the rain image as input and predicts the rain streak transmission map $\mathbf{T_s}$. SNet is an encoder-decoder CNN with ShuffleNet units that consist of group convolutions and shuffling operations. The input CNN features are partially captured by different groups and then shuffled and fused together. We extend the ShuffleNet unit to capture anisotropic representations of rain streaks, as shown in Fig. \ref{fig:pipeline}. Our extension is shown in Fig.~\ref{fig:shuffle}, where we increase the number of group convolutions and depthwise separable convolutions. The features of different groups within a single unit become more discriminative through the two grouping stages, which boosts the global feature grouping. Moreover, the depthwise convolution is symmetrically padded (SDWConv) to decrease the influence of zero padding at the image edges. Finally, we upsample the feature map to the original size, followed by convolution layers that fuse the multi-group features. The prediction of $\mathbf{T_s}$ by SNet can be written as: \begin{equation}\label{eq:SNet} \mathbf{T_s}=\mathcal{S}(\mathbf{I}) \end{equation} where $\mathbf{I}$ is the rain image and $\mathcal{S}(\cdot)$ is the SNet inference. A minimal sketch of the grouped convolution and channel shuffle operations is given below.
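The following PyTorch sketch illustrates the core of such a shuffle unit. It is a simplified illustration rather than the exact SNet block: the channel and group sizes are arbitrary choices, and reflect padding is used here as a stand-in for the symmetric padding of SDWConv. \begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_shuffle(x, groups):
    # Rearrange channels so that the next group convolution
    # mixes information across the previous groups.
    b, c, h, w = x.size()
    x = x.view(b, groups, c // groups, h, w)
    return x.transpose(1, 2).contiguous().view(b, c, h, w)

class ShuffleUnitAdd(nn.Module):
    # Additive shuffle unit: group conv -> shuffle -> padded
    # depthwise conv -> group conv, with a residual connection,
    # so the spatial size of the input is preserved.
    def __init__(self, channels, groups=3):
        super().__init__()
        self.groups = groups
        self.conv1 = nn.Conv2d(channels, channels, 1, groups=groups)
        self.dwconv = nn.Conv2d(channels, channels, 3, groups=channels)
        self.conv2 = nn.Conv2d(channels, channels, 1, groups=groups)

    def forward(self, x):
        out = F.relu(self.conv1(x))
        out = channel_shuffle(out, self.groups)
        # Reflect padding instead of zeros, in the spirit of SDWConv.
        out = self.dwconv(F.pad(out, (1, 1, 1, 1), mode='reflect'))
        out = F.relu(self.conv2(out))
        return x + out

unit = ShuffleUnitAdd(channels=24, groups=3)
y = unit(torch.randn(1, 24, 32, 32))  # output keeps the 32 x 32 size
\end{verbatim}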
\subsection{VNet} We propose VNet, which captures multi-scale features to predict the vapor transmission map $\mathbf{T_v}$. VNet takes the concatenation of the rain image $\mathbf{I}$ and $\mathbf{T_s}$ as input, where $\mathbf{T_s}$ provides global intensity information for $\mathbf{T_v}$ and $\mathbf{I}$ supplies local background information, as different local areas have different vapor intensities. Compared with the anisotropic rain streaks, vapor is locally homogeneous and the values of different areas are highly correlated. VNet utilizes an SPP structure to capture global and local features, which provides a compact feature representation for $\mathbf{T_v}$, as shown in Fig. \ref{fig:pipeline}. The prediction of $\mathbf{T_v}$ by VNet can be written as: \begin{equation}\label{eq:VNet} \mathbf{T_v}=\mathcal{V}(\operatorname{cat}(\mathbf{I}, \mathbf{T_s})). \end{equation} \subsection{ANet} We propose an encoder network ANet to predict the atmosphere light. Its structure is shown in Fig.~\ref{fig:pipeline}. The network inference can be written as: \begin{equation}\label{eq:ANet} \mathbf{A} = \mathcal{A}(\mathbf{I}), \end{equation} where $\mathcal{A}(\cdot)$ is the ANet inference. As the atmosphere light is usually considered constant over a rain image, the output of the encoder is a $3\times 1$ vector. We use an ideal form of our rain model Eq. \eqref{eq:rain-py2} to create labels of the atmospheric light from rain images to pretrain ANet. Then, we integrate ANet into the whole pipeline for joint training. The details are presented in the following. \subsection{Network Training} \label{sec:nettrain} The pipeline of the whole network consists of SNet, VNet, and ANet to predict $\mathbf{T_s}$, $\mathbf{T_v}$, and $\mathbf{A}$, respectively. We then generate $\mathbf{J}$ according to Eq. \eqref{eq:background}. We first pretrain ANet using labels obtained under a simplified condition and then perform joint training of the three networks. \begin{algorithm}[t] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \caption{Pretraining ANet} \label{alg:train_ANet} \begin{algorithmic}[1] \REQUIRE Rainy images $\{ \mathbf{I}^{\{t\}} \}$. \FOR{$i=1$ to epoch} \FOR{$j=1$ to batchnum} \STATE Locate rain pixels for $\mathbf{I}^{\{t\}}$ via \cite{yang-cvpr17-deep}. \STATE Based on Eq. \eqref{eq:real}, find the highest-intensity rain pixel as the ground truth atmospheric light $\mathbf{A}^{\{t \}}$. \STATE Calculate $\mathcal{A}(\mathbf{I}^{\{ t\}})$ via Eq. \eqref{eq:ANet}. \STATE Update $\mathcal{A}(\cdot)$ via loss \eqref{eq:lossANet}. \ENDFOR \ENDFOR \renewcommand{\algorithmicensure}{\textbf{Output:}} \ENSURE Learned atmospheric light $\mathbf{A}$. \end{algorithmic} \end{algorithm} \subsubsection{Pretraining ANet.} \label{sec:pretrainanet} Sample collection is crucial for pretraining ANet, as the ground truth value of the atmosphere light is difficult to obtain. Instead of empirically modeling $\mathbf{A}$ as a uniform distribution~\cite{li-cvpr19-heavy}, we generate labels under a simplified condition based on our rain model Eq. \eqref{eq:rain-py2}, in which the transmission maps of both rain streaks and vapors are 0. For a pixel $x$ in the rain image $\mathbf{I}$ with $\mathbf{T_s}(x)+\mathbf{T_v}(x)=0$, our visual model of rain image formation reduces to: \begin{equation}\label{eq:ideal} \mathbf{I}(x)=\mathbf{A}(x) \end{equation} where the pixel value of the atmosphere light is equal to that of the rain image. In practice, the transmission values at high-intensity rain pixels $x$ approach 0 (i.e., $\mathbf{T_s}(x)+\mathbf{T_v}(x) \approx 0$).
Our model in Eq.~\eqref{eq:rain-py2} can therefore be approximated by: \begin{equation}\label{eq:real} \mathbf{I}(x)=[1-(\mathbf{T_s}(x)+\mathbf{T_v}(x))]\,\mathbf{A}(x) \end{equation} where the maximum value of $\mathbf{I}(x)$ over rain streak pixels is $\mathbf{A}(x)$. We use \cite{yang-cvpr17-deep} to detect rain pixels in $\mathbf{I}$ and identify the maximum intensity value as the ground truth atmospheric light $\mathbf{A}$. With labels obtained in this simplified form, we train ANet using the following loss: \begin{equation}\label{eq:lossANet} \mathcal{L}_\mathbf{A}=\frac{1}{N}\sum_{t=1}^{N}||\mathcal{A}(\mathbf{I}_t)-\mathbf{A}_t||^2 \end{equation} where $N$ is the number of training samples. The procedure is summarized in Algorithm \ref{alg:train_ANet}. \subsubsection{Pretraining SNet.} We pretrain SNet by assuming an ideal case where vapors do not contribute to the transmission (i.e., $\mathbf{T_v}=0$). We use the input rain image $\mathbf{I}$ and the ground truth restored image $\mathbf{J}$ to train SNet. The objective function can be written as follows: \begin{equation}\label{eq:lossSNet} \mathcal{L}_\mathbf{S}=\frac{1}{N}\sum_{t=1}^{N}||\mathcal{J}(\mathbf{I}_{t})-\mathbf{J}_{t}||^2_{F} \end{equation} where $\mathcal{J}(\mathbf{I}_t)=\{\mathbf{I}_{t}-[\mathds{1}-\mathcal{S}(\mathbf{I}_t)]\odot\mathcal{A}(\mathbf{I}_t)\}\oslash\mathcal{S}(\mathbf{I}_t)$ derives from Eq.~\eqref{eq:background}. More details are shown in Algorithm \ref{alg:train_SNet}. \subsubsection{Joint Training.} After pretraining ANet and SNet, we perform joint training of the whole network. The overall objective function can be written as: \begin{equation}\label{eq:loss-VNet} \mathcal{L}_{\rm total}=\lambda_1\cdot \frac{1}{N}\sum_{t=1}^{N}||\triangledown\mathcal{J}(\mathbf{I}_t)-\triangledown \mathbf{J}_t||^2_F +\lambda_2\cdot \frac{1}{N}\sum_{t=1}^{N} ||\mathcal{J}(\mathbf{I}_t)-\mathbf{J}_t||_{1} \end{equation} where $\triangledown$ is the gradient operator in the horizontal and vertical directions, and $\lambda_1$ and $\lambda_2$ are constant weights. The value $\mathcal{J}(\mathbf{I}_t)$ follows from Eq. \eqref{eq:background} and consists of $\mathbf{T_s}$, $\mathbf{T_v}$, and $\mathbf{A}$, which are predicted by SNet, VNet, and ANet, respectively. We perform joint training of these networks. As VNet takes the concatenation of $\mathbf{I}$ and $\mathbf{T_s}$ as input, we backpropagate the network gradient to SNet through VNet. The details of our joint training are shown in Algorithm \ref{alg:jonit_train}. A minimal sketch of the background reconstruction in Eq.~\eqref{eq:background}, which all three training stages share, is given below. \begin{algorithm}[t] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \caption{Pretraining SNet} \label{alg:train_SNet} \begin{algorithmic}[1] \REQUIRE Rainy images $\{ \mathbf{I}^{\{t\}} \}$ and ground truth background $\{ \mathbf{J}^{\{t\}} \}$. \renewcommand{\algorithmicrequire}{\textbf{Initialization:}} \REQUIRE $\mathbf{T_v}=0$ in Eq. \eqref{eq:background}, $\mathcal{A}(\cdot)$ is initialized with the pretrained model from Alg. \ref{alg:train_ANet}. \FOR{$i=1$ to epoch} \FOR{$j=1$ to batchnum} \STATE Calculate $\mathcal{A}(\mathbf{I}^{\{ t\}})$ for $\{ \mathbf{I}^{\{t\}} \}$ via Eq. \eqref{eq:ANet}. \STATE Calculate $\mathcal{S}(\mathbf{I}^{\{ t\}})$ via Eq. \eqref{eq:SNet}. \STATE Calculate $\mathcal{J}(\mathbf{I}^{\{ t\}})$ via Eq. \eqref{eq:background}. \STATE Update $\mathcal{S}(\cdot)$ and fine-tune $\mathcal{A}(\cdot)$ via loss \eqref{eq:lossSNet}. \ENDFOR \ENDFOR \renewcommand{\algorithmicensure}{\textbf{Output:}} \ENSURE Learned transmission map $\mathbf{T_s}$ of rain streaks. \end{algorithmic} \end{algorithm}
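The reconstruction of Eq.~\eqref{eq:background} can be sketched in a few lines of NumPy. This is an illustration only; the small clipping constant guarding the division is our assumption for numerical stability and is not part of the model. \begin{verbatim}
import numpy as np

def restore_background(I, T_s, T_v, A, eps=1e-3):
    # Eq. (3): J = (I - (1 - (T_s + T_v)) . A) / (T_s + T_v),
    # element-wise; T is clipped away from 0 to avoid blow-ups.
    T = np.clip(T_s + T_v, eps, 1.0)
    return (I - (1.0 - T) * A) / T

# During SNet pretraining the vapor term is dropped (T_v = 0):
# J_hat = restore_background(I, S_of_I, 0.0, A_of_I)
\end{verbatim}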
\begin{algorithm}[t] \renewcommand{\algorithmicrequire}{\textbf{Input:}} \caption{Joint training} \label{alg:jonit_train} \begin{algorithmic}[1] \REQUIRE Rainy images $\{ \mathbf{I}^{\{t\}} \}$ and ground truth background $\{ \mathbf{J}^{\{t\}} \}$. \renewcommand{\algorithmicrequire}{\textbf{Initialization:}} \REQUIRE $\mathcal{A}(\cdot)$ is initialized with the fine-tuned model from Alg. \ref{alg:train_SNet}, $\mathcal{S}(\cdot)$ is initialized with the pretrained model from Alg. \ref{alg:train_SNet}. \FOR{$i=1$ to epoch} \FOR{$j=1$ to batchnum} \STATE Calculate $\mathcal{A}(\mathbf{I}^{\{ t\}})$ for $\{ \mathbf{I}^{\{t\}} \}$ via Eq. \eqref{eq:ANet}. \STATE Calculate $\mathcal{S}(\mathbf{I}^{\{ t\}})$ via Eq. \eqref{eq:SNet}. \STATE Calculate $\mathcal{V}(\operatorname{cat}(\mathbf{I}^{\{ t\}}, \mathcal{S}(\mathbf{I}^{\{ t\}})))$ via Eq. \eqref{eq:VNet}. \STATE Calculate $\mathcal{J}(\mathbf{I}^{\{ t\}})$ via Eq. \eqref{eq:background}. \STATE Update $\mathcal{V}(\cdot)$ and fine-tune $\mathcal{A}(\cdot)$ and $\mathcal{S}(\cdot)$ via loss \eqref{eq:loss-VNet}. \ENDFOR \ENDFOR \renewcommand{\algorithmicensure}{\textbf{Output:}} \ENSURE Learned transmission maps $\mathbf{T_s}$, $\mathbf{T_v}$ and atmospheric light $\mathbf{A}$. \end{algorithmic} \end{algorithm} \renewcommand{\tabcolsep}{1pt} \begin{figure}[t] \begin{center} \begin{tabular}{cccc} \includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-9_512.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-9_g1.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-9_g2.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-9_g3.jpg}\\ \includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-10_512.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-10_g1.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-10_g2.jpg}&\includegraphics[width=0.24\linewidth]{figures/proalgo/visual/rain-10_g3.jpg}\\ (a) & (b) & (c) & (d) \\ \end{tabular} \end{center} \caption{Feature maps of different convolution groups. (a) Input rainy images. (b) Features in the 1st group. (c) Features in the 2nd group. (d) Features in the 3rd group.
The rain streak features in the first group are always slim, the second group extracts rain streaks of relatively large size, and the third group contains features of homogeneous vapor.} \label{fig:groupfea} \end{figure} \renewcommand{\tabcolsep}{1pt} \begin{figure*}[h] \begin{center} \begin{tabular}{ccccccc} \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_transmission.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_vapor.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_derain_Ts.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_derain_Tv.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_streaks.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-62_veiling.jpg}\\ \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_transmission.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_vapor.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_derain_Ts.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_derain_Tv.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_streaks.jpg}& \includegraphics[width=0.14\linewidth]{figures/proalgo/visual/rain-70_veiling.jpg}\\ (a) & (b) & (c) & (d) & (e) & (f) & (g) \\ \end{tabular} \end{center} \caption{Transmission maps of rain streaks and vapor. Input rainy images are shown in (a). Transmission maps $\mathbf{T_s}$ of rain streaks are in (b). Transmission maps $\mathbf{T_v}$ of vapor are in (c). Deraining results using only $\mathbf{T_s}$ are in (d). Deraining results with $\mathbf{T_v}$ involved are in (e). The removed rain streaks are shown in (f) and the removed vapors are shown in (g). $\mathbf{T_s}$ is shown to capture anisotropic rain streaks while $\mathbf{T_v}$ models homogeneous vapors.} \label{fig:transmap} \end{figure*} \subsection{Visualizations} We visualize the intermediate results of our method to verify the effectiveness of our network. In Section \ref{sec:snet}, we extract the features of the rainy image with three separate convolution groups. We show the learned feature maps of the different convolution groups in Fig. \ref{fig:groupfea}. Panels (b)-(d) show that different groups contain different features of rain streaks with various shapes and sizes. The first group extracts slim rain streaks whose shapes are similar, and the second group contains wide rain features with diversified shapes. The third group captures homogeneous feature representations resembling vapors. Our rain model allows for the anisotropic transmission map of rain streaks, the homogeneous transmission map of vapor and the atmospheric light of rainy scenes. In Fig. \ref{fig:transmap}, we display the learned transmission map $\mathbf{T_s}$ of rain streaks, the transmission map $\mathbf{T_v}$ of vapor and the atmospheric light $\mathbf{A}$. We can see that $\mathbf{T_s}$ captures the varied rain streak information and reflects the anisotropy of rainy scenes. Meanwhile, $\mathbf{T_v}$ models the influence of vapor; it possesses similar values within local areas, and different areas are separated by object contours.
$\mathbf{A}$ takes relatively high values, which reflects the fact that the atmospheric light is bright in rainy scenes. \section{Experiments} To assess the performance of our deraining method quantitatively, we adopt the commonly used PSNR and SSIM \cite{wang-tip04-image} as metrics. In order to evaluate our deraining network more robustly, we also measure the quality of the deraining results by calculating their Frechet Inception Distance (FID) \cite{heusel-nips17-gans} to the ground truth background. FID is defined via the deep features extracted by Inception-V3 \cite{szegedy-cvpr16-rethinking}; smaller FID values indicate deraining results more similar to the ground truth. For visual quality evaluation, we show restored results of real-world and synthetic rainy images. Existing methods \cite{li-eccv18-recurrent,yang-pami19-joint,li-cvpr19-heavy,wang-cvpr19-spatial} are selected for comprehensive comparisons in our paper. Comparisons with two further methods \cite{zhang-cvpr18-density,fu-cvpr17-removing} are provided in the supplementary file. Except for \cite{yang-pami19-joint,li-cvpr19-heavy,zhang-cvpr18-density}, which need additional ground truth configuration, these methods are retrained on the same dataset for fair comparison. In the training process, we crop $256 \times 256$ patches from the training samples, and Adam \cite{kingma-iclr15-adam} is used to optimize our network. The learning rate for pretraining ANet is $0.001$. When learning $\mathbf{T_s}$, loss $\mathcal{L}_{\mathbf{S}}$ is used to train SNet and fine-tune ANet jointly; the learning rate for SNet is $0.001$ and that for ANet is $10^{-6}$. Similarly, in the stage of jointly learning $\mathbf{T_v}$, the learning rate for VNet is $0.001$ and that for SNet and ANet is $10^{-6}$. The hyper-parameters $\lambda_{1}$ and $\lambda_{2}$ in Eq. \eqref{eq:loss-VNet} are $0.01$ and $1$, respectively; a minimal sketch of the resulting total loss is given below. Our network is trained on a PC with an NVIDIA 1080Ti GPU using the PyTorch framework. The training converges at the 20th epoch. Our code will be released publicly.
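The total loss of Eq.~\eqref{eq:loss-VNet} can be sketched as follows. This PyTorch fragment is illustrative only: the finite-difference gradient operator and the mean reductions of the built-in losses are our assumptions, not the exact implementation. \begin{verbatim}
import torch
import torch.nn.functional as F

def image_gradients(x):
    # Finite-difference gradients in the horizontal
    # and vertical directions for a (B, C, H, W) tensor.
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    return dx, dy

def total_loss(J_hat, J_gt, lam1=0.01, lam2=1.0):
    # Eq. (11): a squared gradient term plus an L1 fidelity
    # term between the restored and ground truth images.
    dxh, dyh = image_gradients(J_hat)
    dxg, dyg = image_gradients(J_gt)
    grad_term = F.mse_loss(dxh, dxg) + F.mse_loss(dyh, dyg)
    return lam1 * grad_term + lam2 * F.l1_loss(J_hat, J_gt)
\end{verbatim}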
\renewcommand{\tabcolsep}{10pt} \begin{table}[t] \centering \caption{PSNR and SSIM of our ablation studies} \begin{tabular}{ccccc} \hline \hline Datasets & \multicolumn{2}{c}{Rain-I} & \multicolumn{2}{c}{Rain-II} \\ \hline Metric & PSNR & SSIM & PSNR & SSIM \\ \hline $C_1$ & 27.15 & 0.772 & 25.48 & 0.793 \\ $C_2$ & 27.49 & 0.806 & 28.57 & 0.844 \\ $C_3$ & 31.30 & 0.897 & 33.86 & 0.930 \\ \hline Ours & \textbf{31.34} & \textbf{0.908} & \textbf{34.42} & \textbf{0.938} \\ \hline \hline \end{tabular} \label{tab:abla} \end{table} \renewcommand{\tabcolsep}{1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{ccccc} \includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-3.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-3_no_atnet.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-3_no_joint.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-3_derain.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-3_derain_vapor.jpg}\\ \includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-32.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-32_no_atnet.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-32_no_joint.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-32_derain.jpg}&\includegraphics[width=0.18\linewidth]{figures/expe/abla/rain-32_derain_vapor.jpg}\\ (a) Input & (b) $C_1$ & (c) $C_2$ & (d) $C_3$ & (e) Ours \\ \end{tabular} \end{center} \caption{Visual results of ablation studies. (a) Input rainy images. (b)-(e) Deraining results under $C_1$, $C_2$, $C_3$ and the whole pipeline, respectively.} \label{fig:abla} \end{figure*} \renewcommand{\tabcolsep}{5pt} \begin{table*}[t] \centering \caption{PSNR/SSIM comparisons on our three datasets} \label{tab:quanti} \begin{tabular}{cccccc} \hline \hline Methods & \cite{li-eccv18-recurrent} & \cite{yang-pami19-joint} & \cite{li-cvpr19-heavy} & \cite{wang-cvpr19-spatial} & Ours \\ \hline Rain-I & 27.51/0.897 & 27.69/0.898 & 17.96/0.675 & 28.43/0.848 & \textbf{31.34/0.908} \\ Rain-II & 26.68/0.830 & 29.97/0.893 & 17.99/0.605 & 30.53/0.905 & \textbf{34.42/0.938} \\ Rain-III & 34.78/0.943 & 28.39/0.902 & 18.48/0.747 & 35.10/0.948 & \textbf{35.91/0.951} \\ \hline \hline \end{tabular} \end{table*} \renewcommand{\tabcolsep}{5pt} \begin{table*}[t] \centering \caption{FID comparisons on our three datasets} \label{tab:quanti-fid} \begin{tabular}{cccccc} \hline \hline Methods & \cite{li-eccv18-recurrent} & \cite{yang-pami19-joint} & \cite{li-cvpr19-heavy} & \cite{wang-cvpr19-spatial} & Ours \\ \hline Rain-I & 62.71 & 101.74 & 104.08 & 81.54 & \textbf{50.66} \\ Rain-II & 97.30 & 134.54 & 118.10 & 88.15 & \textbf{67.18} \\ Rain-III & 81.42 & 89.63 & 134.34 & 80.68 & \textbf{79.86} \\ \hline \hline \end{tabular} \end{table*} \renewcommand{\tabcolsep}{1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{ccccccc} \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_li.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_yang.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_Li_rt.jpg}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_Wang_ty.jpg}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_derain.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-208_gt.pdf}\\
\includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_li.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_yang.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_Li_rt.jpg}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_Wang_ty.jpg}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_derain.pdf}& \includegraphics[width=0.14\linewidth]{figures/expe/comp/t2srain-394_gt.pdf}\\ (a) Input & (b) \cite{li-eccv18-recurrent} & (c) \cite{yang-pami19-joint} & (d) \cite{li-cvpr19-heavy} & (e) \cite{wang-cvpr19-spatial} & (f) Ours & (g) GT \\ \end{tabular} \end{center} \caption{Qualitative comparisons of selected methods and our method on synthetic rainy images. (a) Input rainy images. (b)-(f) Deraining results of RESCAN \cite{li-eccv18-recurrent}, JORDER \cite{yang-pami19-joint}, PYM+GAN \cite{li-cvpr19-heavy}, SPANet \cite{wang-cvpr19-spatial} and our method. (g) Ground truth. These two samples are failure cases for the state-of-the-art methods.} \label{fig:synvisual} \end{figure*} \renewcommand{\tabcolsep}{10pt} \begin{table*}[t] \centering \caption{Average time cost of the compared methods with a fixed image size of $512 \times 512$.} \begin{tabular}{cccccc} \hline \hline Methods & \cite{li-eccv18-recurrent} & \cite{yang-pami19-joint} & \cite{li-cvpr19-heavy} & \cite{wang-cvpr19-spatial} & Ours \\ \hline Time & $0.47s$ & $1.39s$ & $0.45s$ & $0.66s$ & $0.03s$ \\ \hline \hline \end{tabular} \label{tab:speed} \end{table*} \subsection{Dataset Constructions} We follow \cite{li-cviu18-fast} to prepare the training dataset, which contains 20800 training pairs. The rainy image in each pair is synthesized from the ground truth and a rendered rain layer using the screen blend mode. Our evaluation datasets consist of three parts. First, we randomly select 100 images from each dataset in \cite{zhang-cvpr18-density,li-eccv18-recurrent,fu-cvpr17-removing,yang-cvpr17-deep}, which gives 400 images in total, named Rain-I. Second, we synthesize 400 images\footnote{http://www.photoshopessentials.com/photo-effects/rain/} in which the synthetic rainy images possess apparent vapor; this set is named Rain-II. Third, we adopt the real-world dataset of \cite{wang-cvpr19-spatial} and name it Rain-III. The real-world rainy images are collected from either existing works or Internet data. The independence between our training and testing datasets ensures the generalization of the proposed method. \subsection{Ablation Studies} Our network consists of SNet, ANet, and VNet. We show how these networks work together to gradually improve image restoration results. We first remove ANet and VNet. The atmosphere light is estimated via the simplified condition illustrated in Sec.~\ref{sec:pretrainanet} to train SNet. This configuration is denoted as $C_1$. Next, we incorporate a pretrained ANet and use its output for SNet training, which is denoted as $C_2$. Also, we perform joint training of ANet and SNet, which is denoted as $C_3$. Finally, we jointly train ANet, SNet, and VNet, where ANet and SNet are initialized with pretrained models. This configuration is the whole pipeline of our method. Fig. \ref{fig:abla} and Table \ref{tab:abla} show the qualitative and quantitative results. We observe that the results from $C_2$ are of higher quality than those from $C_1$.
The higher quality indicates that estimating the atmosphere light under the ideal condition alone is not stable enough for effective image restoration, as shown in Fig. \ref{fig:abla}(b). Compared to $C_2$, the results from $C_3$ remove rain streaks more effectively, which indicates the importance of joint training. However, the vapors are not well removed in the results from $C_3$. In comparison, by adding VNet to model vapors, we observe that haze is further reduced in Fig. \ref{fig:abla}(e). The numerical evaluations in Table \ref{tab:abla} also indicate the effectiveness of joint training and vapor modeling. \subsection{Evaluations with State-of-the-art} We compare our method with existing deraining methods on three rain datasets (i.e., Rain-I, Rain-II, and Rain-III). The comparisons are categorized as numerical and visual evaluations. The details are presented in the following: \subsubsection{Quantitative evaluation.} Table \ref{tab:quanti} shows the comparison to existing deraining methods under the PSNR and SSIM metrics. Overall, our method achieves favorable results. The PSNR of our method is about 4 dB higher than \cite{wang-cvpr19-spatial} on the Rain-II dataset. In Table \ref{tab:quanti-fid}, we show the evaluations under the FID metric. This comparison shows that our method achieves the lowest FID scores on all three datasets, which indicates that our results most resemble the ground truth images. The time cost of online inference for the compared methods is shown in Table \ref{tab:speed}. Our method is able to produce results efficiently. \subsubsection{Qualitative evaluation.} We show visual comparisons on both synthetic and real-world data. Fig. \ref{fig:synvisual} shows two synthetic rain images that existing methods are not able to restore effectively. In comparison, our method effectively removes both rain streaks and vapors. Besides the synthetic evaluations, we show visual comparisons on real-world images in Fig. \ref{fig:pracvisual}. When the rain streaks are heavy, as shown in the first row of (a), existing methods do not remove these streaks completely. When the rain streaks are mild, as shown in the fourth row, all the compared methods are able to remove them. When the streak edges are blurred, as shown in the second row, RESCAN and our method restore the scene faithfully, while PYM+GAN tends to change the overall color perception in Fig. \ref{fig:pracvisual}(d). Meanwhile, artifacts and blocking effects appear in the third row of Fig. \ref{fig:pracvisual}(d). Limitations also arise for JORDER and SPANet, where details are missing, as shown in Fig. \ref{fig:pracvisual}(c), and heavy rain streaks remain, as shown in Fig. \ref{fig:pracvisual}(e). Compared to existing methods, our method is able to effectively model both rain streaks and vapors. By jointly training the three subnetworks, the parameters of our visual model are accurately predicted, which produces visually pleasing results.
\renewcommand{\tabcolsep}{1pt} \begin{figure*}[t] \begin{center} \begin{tabular}{cccccc} \includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0_li.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0_yang.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0_Li_rt.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0_Wang_ty.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-0_derain_lambda_-_0_1.jpg}\\ \includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179_li.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179_yang.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179_Li_rt.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179_Wang_ty.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-179_derain_lambda_-0_06.jpg}\\ \includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14_li.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14_yang.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14_Li_rt.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14_Wang_ty.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-14_derain_lambda_-_0_1.jpg}\\ \includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74_li.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74_yang.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74_Li_rt.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74_Wang_ty.jpg}&\includegraphics[width=0.16\linewidth]{figures/expe/comp/rain-74_derain_lambda_-0_06.jpg}\\ (a) Input & (b) \cite{li-eccv18-recurrent} & (c) \cite{yang-pami19-joint} & (d) \cite{li-cvpr19-heavy} & (e) \cite{wang-cvpr19-spatial} & (f) Ours \\ \end{tabular} \end{center} \caption{Qualitative comparisons of the compared methods on real-world rainy images. Input images are shown in (a). Deraining results of RESCAN \cite{li-eccv18-recurrent}, JORDER \cite{yang-pami19-joint}, PYM+GAN \cite{li-cvpr19-heavy}, SPANet \cite{wang-cvpr19-spatial} and our method are shown from (b) to (f), respectively.} \label{fig:pracvisual} \end{figure*} \section{Concluding Remarks} \begin{flushleft} ``Rain is grace; rain is the sky condescending to the earth; without rain, there would be no life.'' \end{flushleft} \begin{flushright} --- John Updike \end{flushright} Rain nourishes daily life, but not visual recognition systems. Recent studies on rain image restoration propose models to calculate background images according to rain image formation. A limitation arises because the appearance of rain consists of rain streaks and vapors, which are entangled with each other in rain images. We rethink rain image formation by formulating both rain streaks and vapors as transmission media. We propose three networks to learn the transmission maps and the atmosphere light that constitute rain image formation. These essential elements of the proposed model are effectively learned via joint network training. Experiments on the benchmark datasets indicate that the proposed method performs favorably against state-of-the-art approaches.
{\flushleft \bf Acknowledgement.} This work was supported by NSFC (60906119) and Shanghai Pujiang Program. \bibliographystyle{splncs04}
\section{Caloric Curves} In Fig. 1 several snapshots of the one--body density of a hot nuclear system with 8 neutrons and 8 protons are shown. On the left hand side the \element{16}{O} nucleus has been given an excitation energy per particle of 3.5~MeV by randomly displacing the wave packets of the ground state. After equilibration this corresponds to a temperature of about 4~MeV. One sees that the two--body interaction yields an alpha--particle substructure in $^{16}$O. There is no gas around the vibrating nucleus because the excitation energy is not high enough to evaporate particles. In the center column of Fig.~1 the excitation energy is 7~$A$MeV. Bright areas which indicate the liquid are surrounded by a cloud of gas (for details see figure caption). Moreover, the nuclear system very often falls apart into several smaller drops which are embedded in vapor. The right hand side displays the same system but for an excitation energy of 11~$A$MeV. Here half of the time no high density areas are visible (first and third frame) and if a drop is formed it is rather small. As we shall see later, the two excitation energies 7 and 11 $A$MeV both correspond to a temperature around 6 MeV in the coexistence region. It is quite obvious that the additional excitation energy of 4 MeV per particle is used to transform liquid to vapor, so that we see a clear first order liquid--gas phase transition. This is remarkable as we are dealing with only 16 nucleons and the dynamical model evolves in time a pure state with a very limited number of degrees of freedom, actually only eight per particle: three for the mean position, three for the mean momentum and two for the width. Furthermore, we have a fermion system in which the level density due to antisymmetrization is much smaller than in classical mechanics. The container is very wide so that the vapor pressure is rather small. Estimates yield 10$^{-4}$ to 10$^{-2}$~MeV/fm$^3$, which should be compared to a critical pressure of about 0.5~MeV/fm$^3$. At the surface of the indicated cubes the container potential itself is only 1.2 MeV higher than in the center. To quantify the relation between energy, temperature and container size we display in Fig. 2 the caloric curve for the external parameter $\hbar\omega$ = 1, 6 and 18 MeV, which controls the thermodynamic properties of the nucleonic system in a similar way as the volume. A pronounced plateau is seen in the plot on the left hand side, where the oscillator does not influence the self--bound nucleus very much. In the middle part the narrower container potential is already squeezing the ground state; its energy goes up to $E/A\approx -5~\mbox{MeV}$. The plateau is shifted to $T\approx 7~\mbox{MeV}$ and the latent heat is decreased. On the right hand side, for $\hbar\omega=18~\mbox{MeV}$, the coexistence region has almost vanished and the critical temperature $T_c$ is reached. The solid line represents the relation between temperature and energy for an ideal Fermi gas in a harmonic oscillator potential with $\omega_{\mbox{\tiny{eff}}}=(\omega^2+\omega_0^2)^{1/2}$, where $\hbar \omega_0=10$ MeV corresponds to the selfconsistent mean--field of $^{16}$O. The energy zero--point is shifted so that the ground state of the oscillator is at the FMD value. The dashed line shows the relation for the external container, also with the ground state shifted, because even in the gas phase the particles still feel attraction. Despite the strong interaction, the liquid and the gas phase follow approximately the picture of an ideal gas in a mean--field. The coexistence region, however, cannot be approximated by a mean--field picture like the liquid in a selfconsistent potential or the gas in the external field. A short numerical sketch of this ideal--gas comparison is given below.
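The comparison curves for an ideal Fermi gas in a common oscillator can be generated with a few lines of code. The sketch below is purely illustrative: the spin--isospin degeneracy of four per orbital, the fixed level spacing and the omitted zero--point shift are simplifying assumptions on our part, not the exact procedure used for Fig. 2. \begin{verbatim}
import numpy as np
from scipy.optimize import brentq

N_PART = 16             # nucleons in the cluster
DEG = 4                 # spin-isospin degeneracy per orbital (assumption)
HBAR_OMEGA = 10.0       # effective level spacing in MeV (order of omega_0)

k = np.arange(80)                      # oscillator shells
g = DEG * (k + 1) * (k + 2) / 2.0      # shell degeneracies
e = (k + 1.5) * HBAR_OMEGA             # shell energies in MeV

def fermi(x):
    # Overflow-safe Fermi function 1 / (exp(x) + 1).
    return 0.5 * (1.0 - np.tanh(0.5 * x))

def energy(T):
    # Fix the chemical potential mu so that <N> = N_PART, then sum E.
    n_of = lambda mu: np.sum(g * fermi((e - mu) / T)) - N_PART
    mu = brentq(n_of, -1000.0, 2000.0)
    return np.sum(e * g * fermi((e - mu) / T))

for T in (1.0, 3.0, 6.0, 10.0):
    print(f"T = {T:4.1f} MeV   E/A = {energy(T) / N_PART:6.2f} MeV")
\end{verbatim}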
Despite the strong interaction, the liquid and the gas phase approximately follow the picture of an ideal gas in a mean--field. The coexistence region cannot be approximated by a mean--field picture like the liquid in a selfconsistent potential or the gas in the external field. The ``error bars'' in temperature and energy represent r.m.s. deviations from the time averaged mean. There is always an exchange of energy between thermometer and nuclear system, but the fluctuations remain rather small. The temperature fluctuations, which through relation (\ref{TversusE}) are actually fluctuations in the energy of the thermometer particles, are larger because the thermometer has a smaller heat capacity than the nucleons. \begin{figure} \unitlength1mm \begin{picture}(125,46) \put(0,0){\epsfig{file=Calorie-O16-All.eps,height=45mm}} \end{picture} \caption{Caloric curve of \element{16}{O} for the frequencies $\hbar\omega=1, 6, 18~\mbox{MeV}$ of the container potential. The solid lines show the low temperature behaviour of an ideal gas of 16 fermions in a common harmonic oscillator with level spacing $\hbar\omega_{\mbox{\tiny{eff}}}$, the dashed lines denote their high temperature behaviour in the confining oscillator ($\hbar\omega$).} \end{figure} The critical temperature $T_c$ can only be estimated from the disappearance of the coexistence phase in Fig.~2 because the fluctuations in $T$ and $E$ are rather large. Its value is about $10~\mbox{MeV}$ and has to be compared to the results of refs.\ \cite{JMZ84,BLV84,CaY96} for finite nuclei including Coulomb and surface effects. All authors report a weak dependence of the critical temperature on the mass number in the region from calcium to lead. The result of Jaqaman et al. with the Skyrme ZR3 interaction \cite{JMZ84} can be extrapolated to \element{16}{O} to give $T_c\approx8~\mbox{MeV}$; Bonche et al. \cite{BLV84} arrive at the same number using the SKM interaction, but obtain $T_c\approx11~\mbox{MeV}$ with the SIII interaction. Close to the last result is the value extrapolated from ref. \cite{CaY96}, where $T_c\approx11.5~\mbox{MeV}$ for Gogny's D1 interaction. \begin{figure}[htbp] \unitlength1mm \begin{picture}(125,55) \put(0,0){\epsfig{file=CaloricCurve2.eps,height=50mm,width=46mm}} \put(70,0){\epsfig{file=Pochodzalla_bw.eps,height=50mm}} \end{picture} \caption{L.h.s.: caloric curve of \element{24}{Mg}, \element{27}{Al} and \element{40}{Ca} at $\hbar\omega=1~\mbox{MeV}$; r.h.s.: caloric curve determined by the ALADIN group from the decay of spectator nuclei.} \end{figure} We also determined the relation between excitation energy and temperature for the larger nuclei \element{24}{Mg}, \element{27}{Al} and \element{40}{Ca}, using the same container potential with $\hbar\omega=1~\mbox{MeV}$, and summarize the results on the left hand side of Fig. 3. In order to put them on the same scale we subtract from the averaged energy, defined in eq. (\ref{meanE}), the respective ground state energies and show the temperature as a function of the excitation energy $E^*$. As for \element{16}{O}, all caloric curves clearly exhibit three different parts. Beginning at small excitation energies, the temperature rises steeply with increasing energy, as expected for the shell model. The nucleons remain bound in the excited nucleus, which behaves like a drop of liquid. At an excitation energy of $3~\mbox{MeV}$ per nucleon the curve flattens and stays almost constant up to about $11~\mbox{MeV}$.
This coexistence plateau at $T\approx$ 5 to 6 MeV reaches from $E^*/A\approx3~\mbox{MeV}$ to about $E^*/A\approx11~\mbox{MeV}$, where all nucleons are unbound and the system has reached the vapor phase. The latent heat at pressure close to zero is hence about 8~MeV. One has to keep in mind that the plateau, which is rounded due to finite size effects, is not the result of a Maxwell construction as in nuclear matter calculations. In the excitation energy range between 3 and $11~\mbox{MeV}$ per particle an increasing number of nucleons is found in the vapor phase outside the liquid phase. This has been shown in Fig. 1. The caloric curve shown in Fig. 3 bears a striking similarity to the caloric curve determined by the ALADIN group \cite{Poc95}, which is displayed in the same figure. The position and the extension of the plateau agree with the FMD calculation using a containing oscillator potential of $\hbar\omega=1~\mbox{MeV}$. Nevertheless, there are important differences. The measurement addresses an expanding non--equilibrium system, whereas the calculation deals with a contained equilibrium system. In addition, the thermometers used differ: the experiment employs an isotope thermometer based on chemical equilibrium, while the calculation uses an ideal gas thermometer. One explanation why the thermodynamic description of the experimental situation works and compares nicely to the equilibrium result might be that the excited spectator matter equilibrates into the coexistence region \cite{FeS97a} faster than it expands and cools. The assumption of such a transient equilibrium situation \cite{PaN95,BDM85,Gro90} seems to work rather well, at least in the plateau region. \section{Fermionic Molecular Dynamics} \label{FMDModel} This section contains a brief outline of Fermionic Molecular Dynamics (FMD). Details can be found in ref. \cite{FBS95}. The model describes the many--body system with a parameterized antisymmetric many--body state \begin{eqnarray} \ket{Q(t)}=\sum_{\mathrm{all}\ P} \mathrm{sign}(P)\, \ket{q_{P(1)}(t)} \otimes \cdots \otimes \ket{q_{P(A)}(t)} \end{eqnarray} composed of single--particle Gaussian wave packets \begin{eqnarray} \braket{\vec{x}}{q(t)} &=& \exp\left\{-\frac{(\,\vec{x}-\vec{b}(t)\,)^2}{2\,a(t)} \right\} \otimes\ket{m_s}\otimes\ket{m_t} \ , \\ && \vec{b}(t) = \vec{r}(t) + i\, a(t)\, \vec{p}(t) \nonumber \ , \label{gaussian} \end{eqnarray} which are localized in phase space at $\vec{r}$ and $\vec{p}$ and have a complex width $a$. Spin and isospin are chosen to be time--independent in these calculations; they are represented by their $z$--components $m_s$ and $m_t$, respectively. Given the Hamilton operator $\Operator{H}$, the equations of motion for all parameters are derived from the time--dependent variational principle (operators are underlined with a tilde) \begin{eqnarray} \delta \int_{t_1}^{t_2} \! \! dt \; \bra{Q(t)}\; i \frac{d}{dt} - \Operator{H}\; \ket{Q(t)} \ &=&\ 0 \ . \label{var} \end{eqnarray} In the present investigation the effective two--body nucleon--nucleon interaction $\Operator{V}$ in the Hamilton operator consists of a short--range repulsive and a long--range attractive central potential with spin and isospin admixtures, and includes the Coulomb potential \cite{Sch96}. The parameters of the interaction have been adjusted to minimize deviations between calculated and measured binding energies for nuclei with mass numbers $4\le A\le40$.
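To make the parameterization (\ref{gaussian}) concrete, the following minimal sketch (an added illustration, reduced to one spatial dimension and $\hbar=1$; not the FMD code itself) evaluates a single Gaussian wave packet with real width $a$ and checks that the labels $r$ and $p$ are recovered as position and momentum expectation values:
\begin{verbatim}
# One-dimensional illustration of a Gaussian wave packet
# <x|q> = exp(-(x-b)^2/(2a)) with b = r + i*a*p   (hbar = 1).
import numpy as np

r, p, a = 1.5, 0.7, 2.0             # demo values: position, momentum, width
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
b = r + 1j * a * p
psi = np.exp(-(x - b)**2 / (2.0 * a))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # normalize numerically

x_mean = np.sum(x * np.abs(psi)**2) * dx
p_mean = (np.sum(np.conj(psi) * (-1j) * np.gradient(psi, dx)) * dx).real

print(f"<x> = {x_mean:.3f}  (label r = {r})")
print(f"<p> = {p_mean:.3f}  (label p = {p})")
\end{verbatim}
For complex $a$ the packet in addition carries a position--momentum correlation, which accounts for the two width degrees of freedom per particle mentioned above.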
Besides the kinetic energy $\Operator{T}$ and the nucleon--nucleon interaction $\Operator{V}$, the Hamilton operator $\Operator{H}$ includes an external field \begin{eqnarray} \Operator{U}(\omega) = \frac{1}{2}\; m \omega^2 \sum_{i=1}^A \vec{\Operator{x}}_i^2 \end{eqnarray} which serves as a container. The container is an important part of the setup because it keeps the evaporated nucleons (vapor) in the vicinity of the remaining liquid drop, so that the drop equilibrates with the surrounding vapor. The vapor pressure is controlled by the external parameter $\omega$, which determines the accessible volume. In our model the nuclear system is quantal and strongly interacting. The quantal nature does not allow one to deduce the temperature from the kinetic energy distribution, as is the case for classical systems with momentum independent forces. The zero--point motion is always present and does not imply a finite temperature. Because the particles are strongly interacting, a fit to a Fermi distribution will also give wrong answers: even in the ground state at zero temperature there are partially occupied single--particle states. Therefore, the concept of an external thermometer, which is coupled to the nuclear system, is used in the present investigation. The thermometer consists of a quantum system of distinguishable particles moving in a common harmonic oscillator potential different from the container potential. The time evolution of the whole system is described by the FMD equations of motion. For this purpose the many--body trial state is extended and now contains both the nucleonic degrees of freedom and the thermometer degrees of freedom \begin{eqnarray} \ket{Q} = \ket{Q_n} \otimes \ket{Q_{th}} \ . \end{eqnarray} The total Hamilton operator including the thermometer is given by \begin{eqnarray} \Operator{H} = \Operator{H}_n+ \Operator{H}_{th} + \Operator{H}_{n-th}\ ,\quad\Operator{H}_n= \Operator{T} + \Operator{V} + \Operator{U}(\omega) \end{eqnarray} with the nuclear Hamilton operator $\Operator{H}_n$ and the thermometer Hamilton operator \begin{eqnarray} \Operator{H}_{th} = \sum_{i=1}^{N_{th}} \left( \frac{\vec{\Operator{k}}^2(i)}{2\; m_{th}(i)} + \frac{1}{2}\; m_{th}(i)\; \omega_{th}^2\; \vec{\Operator{x}}^2(i) \right) \ . \end{eqnarray} The coupling between nucleons and thermometer particles, $\Operator{H}_{n-th}$, is chosen to be weak, repulsive and of short range. It has to be as weak as possible in order not to influence the nuclear system too much. On the other hand it has to be strong enough to allow for reasonable equilibration times. Our choice is to put more emphasis on small correlation energies, smaller than the excitation energy, and to tolerate long equilibration times. \input{Fig1}
The determination of the caloric curve is done in the following way. The nucleus is excited by displacing all wave packets from their ground--state positions randomly. Both the center of mass momentum and the total angular momentum are kept fixed at zero. To allow a first equilibration between the wave packets of the nucleus and those of the thermometer, the system is evolved over a long time, about 10000~\mbox{fm}/c. (A typical time for a nucleon to cross the hot nucleus is 30 fm/c.) After that, a time--averaging of the energy of the nucleonic system as well as of the thermometer is performed over a time interval of 10000~\mbox{fm}/c.
During this time interval the mean of the nucleonic excitation energy \begin{eqnarray}\label{meanE} E &=& \frac{1}{N_{steps}} \sum_{i=1}^{N_{steps}} \bra{Q_n(t_i)} \Operator{H}_n \ket{Q_n(t_i)} \end{eqnarray} is evaluated. The time--averaged energy of the thermometer $E_{th}$, which is calculated during the same time interval, determines the temperature $T$ through the relation for an ideal gas of distinguishable particles in a common harmonic oscillator potential (Boltzmann statistics) \begin{eqnarray}\label{TversusE} T &=& \hbar\omega_{th} \left[ \ln\left( \frac{E_{th}/N_{th}+\frac{3}{2}\hbar\omega_{th}} {E_{th}/N_{th}-\frac{3}{2}\hbar\omega_{th}} \right) \right]^{-1} \ . \end{eqnarray} The general idea behind this is the assumption of ergodicity: time averaging should be equivalent to ensemble averaging. In an earlier investigation \cite{ScF96} we showed that FMD behaves ergodically. Time averaged occupation numbers of a weakly interacting Fermi gas coincided with a Fermi--Dirac distribution. This, however, does not necessarily mean that the system as a whole is in a grand canonical ensemble, because the one--body occupation numbers represent only a small subset of all degrees of freedom. We believe that our system is closer to the micro canonical situation in the sense that the particle number is fixed and a pure many--body state $\ket{Q_n(t)}$ is evolved in time. This excited state is not an eigenstate of the Hamiltonian but has a certain width in energy. (If it were an eigenstate it would be stationary and there would be no dynamical evolution as seen in Fig.~1.) In principle we could calculate the variance $ \bra{Q_n(t)} \Operator{H}_n^2 \ket{Q_n(t)} -\bra{Q_n(t)} \Operator{H}_n \ket{Q_n(t)}^2 $ of the Hamiltonian as a function of time to check our conjecture. But $\Operator{H}_n^2$ contains a 4--body operator, which would mean a huge numerical effort. The coupling to the thermometer also introduces a certain amount of energy fluctuations, but they remain rather small, as shown in the following section. \section{Introduction} The liquid--gas phase transition of nuclear matter is presently investigated experimentally in several laboratories \cite{theseproc}. The task is very difficult because one can manipulate only finite nuclei and the measured information on the system is rather indirect. The difference to macro--physics is not only the smallness of the system, with only about 200 constituents, but also that one cannot control the thermodynamic quantities volume or pressure. The reason is that one is colliding two nuclei in order to produce excitation energy and compression. But as there is no container, the system begins to expand into the vacuum right after the compression and heating phase. Therefore the system is at all times in a transient state, where equilibrium in its original meaning, namely a time--independent stationary macro state, is not reached. The excitation energy of the nuclear system can be deduced by measuring all energies of the outgoing particles and clusters. Also the number of nucleons which belong to the nuclear system under investigation is fairly well known. In peripheral collisions the so--called spectator matter, which is heated by ablation and by participant nucleons which enter the spectator, moves with a speed close to the beam velocity and can thus be separated from participant matter. The excited spectator pieces are assumed to have little compression. Central collisions lead to higher excitations and more compression.
Events with high transverse energies of the outgoing fragments are considered to be most central and are thus distinguished from more peripheral collisions. But there is always a certain amount of matter emitted in the forward--backward direction which originates from the corona. Due to compression and heating, the participant matter will develop a radial collective flow which obstructs equilibration. It is, however, possible to estimate its magnitude by assuming local equilibrium and a flow velocity profile, for example proportional to the distance from the center of the source. A ``freeze--out'' concept enters all considerations. Usually the time interval in which the collisions between the nucleons and the fragments cease is believed to be short enough so that global thermodynamic properties like temperature and flow velocity are frozen in. This allows one to infer from the mean kinetic energies of the fragments the division into collective and thermal energy. The argument is that the thermal part of the center of mass motion is proportional to the temperature and independent of mass, while the collective part is proportional to the mass. Both measurements and molecular dynamics calculations support this picture. Despite all these difficulties, the hope is that multifragmentation reactions will give information on the coexistence phase, because at freeze--out several fragments coexist with vapor. The gas phase should be related to vaporization events, which consist mainly of nucleons and only a few small clusters, while evaporating compound nuclei should represent the hot liquid. The challenge to measure nuclear equations of state has been accepted not only for astrophysical reasons, like supernova explosions or neutron stars, but also because the subject in itself is of interest, as one is dealing with a small charged Fermi liquid which is self--bound by the strong interaction. \section{Theoretical Approaches} Different from experiment, a theoretical treatment can impose thermodynamic conditions like volume and temperature. Grand canonical mean--field calculations have long been performed, both relativistic and non--relativistic, e.g. \cite{JMZ84,GKM84,BLV84,SeW86,SCG89}. There are, however, two major shortcomings with that. First, a mean--field picture does not treat the coexistence region properly: fluctuations are missing and a Maxwell construction is needed. Second, such calculations cannot describe the experimental non--equilibrium situation, so that a direct comparison with data is not possible. In addition there is a general difficulty with canonical or grand canonical treatments of small systems. In principle all thermostatic information about a system, including the liquid--gas phase transition, is contained in the level density $\rho(E,N)$ of the Hamiltonian, where $E$ is the energy and $N$ the particle number. When a phase transition occurs, $\rho(E,N)$ shows a rapid increase. In a grand canonical (or canonical) ensemble the thermal weight factor is $\rho(E,N) \exp \{ -(E-\mu N)/T \}$ ($T$ temperature, $\mu$ chemical potential), so that a sudden increase in $\rho(E,N)$ is washed out by the exponential Boltzmann factor. This insensitivity is annoying for small systems, because there the level density $\rho(E,N)$ does not rise so steeply with $E$ or $N$ that the product $\rho(E,N) \exp \{ -(E-\mu N)/T \} $ forms a very narrow peak as a function of $E$ or $N$.
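This washing-out can be made quantitative with a toy example. The following sketch (an added illustration with an assumed Bethe-like level density $\rho(E)\propto\exp(2\sqrt{aE})$, which is not the FMD level density) compares the width of the canonical weight for a small and a large level-density parameter $a$:
\begin{verbatim}
# Toy demo: canonical weight rho(E)*exp(-E/T) for an assumed Bethe-like
# level density rho(E) ~ exp(2*sqrt(a*E)).  For a small level-density
# parameter a (small systems) the peak in E stays broad.
import numpy as np

T = 6.0                               # MeV
E = np.linspace(0.1, 3000.0, 300000)  # MeV
for a in (2.0, 25.0):                 # MeV^-1, roughly a ~ A/8
    logw = 2.0 * np.sqrt(a * E) - E / T
    w = np.exp(logw - logw.max())     # weight, normalized to peak = 1
    E_peak = E[np.argmax(w)]
    above = E[w > 0.5]                # region above half maximum
    print(f"a = {a:4.1f}: E_peak = {E_peak:6.1f} MeV, "
          f"FWHM/E_peak = {(above[-1] - above[0]) / E_peak:.2f}")
\end{verbatim}
For $a\approx2~\mbox{MeV}^{-1}$, of the order of a 16-particle system, the relative width of the peak is of order one, so a rapid rise of $\rho$ leaves no sharp canonical signal.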
The micro canonical situation is then preferable, as it is directly sensitive to $\rho(E,N)$ within an interval $\Delta E$ \cite{HuellerGross}. Micro canonical statistical models \cite{BDM85,Gro90} are in this respect well suited, but they are static and rely on the assumption that at freeze--out the system is in global equilibrium, both in chemical and in kinetic degrees of freedom. In the following we investigate the liquid--gas phase transition with a Fermionic Molecular Dynamics simulation. This model can treat nucleus--nucleus collisions as well as equilibrium situations. We will, however, concentrate on a situation which is not feasible experimentally, namely an excited nucleus which is put in an external field. This field plays the role of a container, so that evaporated nucleons cannot escape but equilibrate with the remaining nucleus (hot liquid).
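As a closing numerical aside (an added consistency check, not part of the original paper), the thermometer relation (\ref{TversusE}) is just the inversion of the textbook Boltzmann result $E_{th}/N_{th}=\frac{3}{2}\hbar\omega_{th}\coth(\hbar\omega_{th}/2T)$ for distinguishable particles in a common three--dimensional oscillator, as the following sketch verifies:
\begin{verbatim}
# Consistency check of the ideal-gas thermometer relation (TversusE):
# E/N = (3/2)*hw*coth(hw/(2T)) for Boltzmann particles in a 3D oscillator,
# and the relation in the text recovers T from E/N.
import math

hw = 3.0                          # MeV, thermometer spacing (assumed value)
for T_true in (2.0, 5.0, 8.0):    # MeV
    e = 1.5 * hw / math.tanh(hw / (2.0 * T_true))      # E_th / N_th
    T_rec = hw / math.log((e + 1.5 * hw) / (e - 1.5 * hw))
    print(f"T = {T_true:3.1f} MeV -> E/N = {e:6.3f} MeV "
          f"-> recovered T = {T_rec:3.1f} MeV")
\end{verbatim}
The recovered temperatures agree with the input values to machine precision, which is the sense in which the time-averaged thermometer energy fixes $T$.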
\section*{Introduction} In our paper \cite{CantaK} we classified all maximal open subalgebras of all simple infinite-dimensional linearly compact Lie superalgebras $S$ over an algebraically closed field $\bar{\mathbb{F}}$ of characteristic zero, up to conjugation by the group $G$ of inner automorphisms of the Lie superalgebra $Der S$ of continuous derivations of $S$. An immediate corollary of this result is Theorem 11.1 of \cite{CantaK}, which describes, up to conjugation by $G$, all maximal open subalgebras of $S$ which are invariant with respect to all inner automorphisms of $S$. Using this result and an explicit description of $Der S$ (see \cite[Proposition 6.1]{K} and its corrected version \cite[Proposition 1.8]{CantaK}), we derive the classification of all maximal among the open subalgebras of $S$ which are $Aut S$-invariant, where $Aut S$ is the group of all continuous automorphisms of $S$ (Theorem \ref{autSinv}). Such a subalgebra $S_0$ always exists, and in most of the cases it is unique (also, in most of the cases it is a subalgebra of minimal codimension). Picking a subspace $S_{-1}$ of $S$, which is minimal among the $Aut S$-invariant subspaces properly containing $S_0$, we can construct the Weisfeiler filtration (see e.g.\ \cite{CantaK} or \cite{K}). Then it is easy to see that \begin{equation} Aut S={\cal U}\rtimes Autgr S, \label{0.1} \end{equation} where $\cal{U}$ is a normal prounipotent subgroup consisting of automorphisms of $S$ inducing the identity automorphism of $Gr S$, and $Autgr S$ is a subgroup of a (finite-dimensional) algebraic group of automorphisms of $Gr S$, preserving the grading. We list all the groups $Autgr S$, along with their (faithful) action on $Gr_{-1}S$, in Table 1. This leads to the following description of the group $Aut S$: \begin{equation} Aut S=Inaut S\rtimes A, \label{0.2} \end{equation} where $Inaut S$ is the subgroup of all inner automorphisms of $S$ and $A$ is a closed subgroup of $Autgr S$, listed in Corollary \ref{outer}. Let $\mathbb{F}$ be a subfield of $\bar{\mathbb{F}}$ whose algebraic closure is $\bar{\mathbb{F}}$, and fix an $\mathbb{F}$-form $S^{\mathbb{F}}$ of $S$, i.e., a Lie superalgebra over $\mathbb{F}$ such that $S^\mathbb{F}\otimes_\mathbb{F}\bar{\mathbb{F}}\cong S$. Then all $\mathbb{F}$-forms of $S$, up to isomorphism, are in a bijective correspondence with $H^1(Gal, Aut S)$, where $Gal=Gal(\bar{\mathbb{F}}/\mathbb{F})$ (see e.g.\ \cite{R}). Since the first Galois cohomology of a prounipotent algebraic group is trivial (see e.g.\ \cite{R}), we conclude, using the cohomology long exact sequence, that \begin{equation} H^1(Gal, Aut S)\hookrightarrow H^1(Gal, Autgr S). \label{0.3} \end{equation} The infinite-dimensional linearly compact simple Lie superalgebras have been classified in \cite{K}. The list consists of ten series $(m\geq 1)$: $W(m,n)$, $S(m,n)$ $((m,n)\neq (1,1))$, $H(m,n)$ ($m$ even), $K(m,n)$ ($m$ odd), $HO(m,m)$ $(m\geq 2)$, $SHO(m,m)$ $(m\geq 3)$, $KO(m,m+1)$, $SKO(m,m+1;\beta)$ $(m\geq 2)$, $SHO^\sim(m,m)$ ($m$ even), $SKO^\sim(m,m+1)$ ($m\geq 3$ odd), and five exceptional Lie superalgebras: $E(1,6)$, $E(3,6)$, $E(3,8)$, $E(4,4)$, $E(5,10)$. Since the following isomorphisms hold (see \cite{CantaK}, \cite{K}): $W(1,1)\cong K(1,2)\cong KO(1,2)$, $S(2,1)\cong HO(2,2)\cong SKO(2,3;0)$, $SHO^\sim(2,2)\cong H(2,1)$, when dealing with $W(m,n)$, $KO(n,n+1)$, $HO(n,n)$, $SKO(2,3;\beta)$ and $SHO^\sim(n,n)$, we will assume that $(m,n)\neq (1,1)$, $n\geq 2$, $n\geq 3$, $\beta\neq 0$, and $n>2$, respectively.
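Purely as bookkeeping (an added illustration, not part of the classification), the low-rank coincidences just listed can be encoded as a normalization map on labels, so that each isomorphism class is referred to by the single representative retained above:
\begin{verbatim}
# Normalize the low-rank coincidences among the series, as stated above:
# W(1,1) = K(1,2) = KO(1,2),  S(2,1) = HO(2,2) = SKO(2,3;0),
# SHO~(2,2) = H(2,1).  Labels are (family, m, n) or (family, m, n, beta).
ISO = {
    ("W", 1, 1):      ("K", 1, 2),
    ("KO", 1, 2):     ("K", 1, 2),
    ("HO", 2, 2):     ("S", 2, 1),
    ("SKO", 2, 3, 0): ("S", 2, 1),
    ("SHO~", 2, 2):   ("H", 2, 1),
}

def canonical(label):
    """Return the representative chosen for an isomorphism class."""
    return ISO.get(label, label)

print(canonical(("W", 1, 1)))    # -> ('K', 1, 2)
print(canonical(("H", 4, 2)))    # -> ('H', 4, 2), already canonical
\end{verbatim}
The excluded parameter values in the conventions above are precisely those whose superalgebras already occur elsewhere in the list.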
We will use the construction of all these superalgebras as given in \cite{CantaK} (see also \cite{CCK}, \cite{CK}, \cite{K}, \cite{Gafa}, \cite{S}). Since the first Galois cohomology with coefficients in the groups $GL_n(\bar{\mathbb{F}})$ and $Sp_n(\bar{\mathbb{F}})$ is trivial (see, e.g., \cite{Serre1}, \cite{Serre2}), we conclude from (\ref{0.3}) and Table 1 that $H^1(Gal, Aut S)$ is trivial in all cases except for four: $S=H(m,n)$, $K(m,n)$, $S(1,2)$, and $E(1,6)$. Thus, in all cases except for these four, $S$ has a unique $\mathbb{F}$-form (in the $SKO(n,n+1;\beta)$ case we have to assume that $\beta\in\mathbb{F}$ in order for such a form to exist). Since $H^1(Gal, O_n(\bar{\mathbb{F}}))$ is in canonical bijective correspondence with classes of non-degenerate bilinear forms in $n$ variables over $\mathbb{F}$ (see, e.g., \cite{Serre2}), we find that all $\mathbb{F}$-forms of $H(m,n)$ and $K(m,n)$ are defined by the action on supersymplectic and supercontact forms over $\mathbb{F}$, respectively. In the cases $S=S(1,2)$ and $E(1,6)$, the answer is more interesting. We construct all $\mathbb{F}$-forms of these Lie superalgebras using the theory of Lie conformal superalgebras. The present paper is a continuation of \cite{CantaK}, to which we refer for terminology not explained here. The base field, unlike in \cite{CantaK}, is an arbitrary field $\mathbb{F}$ of characteristic 0, and we denote by $\bar{\mathbb{F}}$ its algebraic closure. In the Lie algebra case the problems considered in the present paper were solved by Rudakov \cite{R}, whose methods we use. \section{$\boldsymbol{\Z}$-Gradings}\label{gradings} In papers \cite{CantaK} and \cite{K} the base field is $\mathbb{C}$. However, it is not difficult to extend all the results there to the case of an arbitrary algebraically closed field $\bar{\mathbb{F}}$ of characteristic zero. In order to do this one has to replace exponentiable derivations of a linearly compact algebra $S$ in the sense of \cite{CantaK}, \cite{K}, by exponentiable derivations in the sense of \cite{G2} (a derivation $d$ of a Lie superalgebra $S$ over a field $\mathbb{F}$ is called {\em exponentiable} in the sense of \cite{G2} if $d(H)\subset H$ for any closed $Aut S$-invariant subspace $H$ of $S$). Also, we define the {\em group of inner automorphisms} of $S$ to be the group generated by all elements $\exp(ad ~a)$, where $\exp(ad ~a)$ converges in the linearly compact topology. Then Theorem 1.7 of \cite{CantaK} on conjugacy of maximal tori in an artinian semisimple linearly compact superalgebra still holds over $\bar{\mathbb{F}}$. Consequently, the classification given in \cite{CantaK} of primitive pairs $(L,L_0)$ up to conjugacy by inner automorphisms of $Der L$ holds over $\bar{\mathbb{F}}$ as well. We first recall from \cite{CantaK} and \cite{K} the necessary information on $\Z$-gradings of the Lie superalgebras in question over the field $\bar{\mathbb{F}}$. For information on finite-dimensional Lie superalgebras we refer to \cite{K2} or \cite{K}. Recall that $W(m,n)$ is the Lie superalgebra of all continuous derivations of the commutative associative superalgebra $\Lambda(m,n)=\Lambda(n) [[x_1,\dots,x_m]]$, where $\Lambda(n)$ is the Grassmann superalgebra in $n$ odd indeterminates $\xi_1,\dots,\xi_n$, and $x_1,\dots,x_m$ are even indeterminates.
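As an added illustration of this definition (restricted, for simplicity, to the purely even part $W(m,0)$, where derivations are ordinary formal vector fields and all signs are trivial), the bracket is the commutator of derivations, which is determined by its action on the coordinates:
\begin{verbatim}
# Bracket in W(2,0): for X = sum_i f_i d/dx_i and Y = sum_i g_i d/dx_i,
# the component [X,Y](x_i) equals X(g_i) - Y(f_i).
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
coords = (x1, x2)

def apply_der(D, f):
    """Apply the derivation D = sum_i D[i] d/dx_i to a function f."""
    return sum(Di * sp.diff(f, v) for Di, v in zip(D, coords))

X = [x1**2, x1 * x2]          # X = x1^2 d/dx1 + x1*x2 d/dx2
Y = [x2, sp.Integer(1)]       # Y = x2 d/dx1 + d/dx2

bracket = [sp.expand(apply_der(X, g) - apply_der(Y, f))
           for f, g in zip(X, Y)]
print(bracket)                # -> [-x1*x2, -x1 - x2**2]
\end{verbatim}
In the full superalgebra $W(m,n)$ the same formula holds, with the sign rule for the odd indeterminates $\xi_i$ taken into account.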
Recall that a $\Z$-grading of the Lie superalgebra $W(m,n)$ is called the grading of type $(a_1,\dots, a_m|b_1,\dots, b_n)$ if $a_i=\deg x_i=-\deg\frac{\partial}{\partial x_i}\in\mathbb{N}$ and $b_i=\deg\xi_i=-\deg\frac{\partial}{\partial \xi_i}\in\Z$ (cf.\ \cite[Example 4.1]{K}). Every such grading induces a grading on the Lie superalgebra $S(m,n)$, and it induces a grading on $S=H(m,n)$, $K(m,n)$, $HO(n,n)$, $SHO(n,n)$, $KO(n,n+1)$, or $SKO(n,n+1;\beta)$ if the defining differential form of $S$ is homogeneous with respect to this grading. The induced grading on $S$ is also called a grading of type $(a_1,\dots, a_m|b_1,\dots, b_n)$. The $\Z$-grading of type $(1,\dots,1|1,\dots, 1)$ is an irreducible grading of $W(m,n)$ called its {\em principal} grading. In this grading $W(m,n)=\prod_{j\geq -1}\mathfrak{g}_j$ has $0$-th graded component isomorphic to the Lie superalgebra $gl(m,n)$ and $-1$-st graded component isomorphic to the standard $gl(m,n)$-module $\bar{\mathbb{F}}^{m|n}$. The even part of $\mathfrak{g}_0$ is isomorphic to the Lie algebra $gl_m\oplus gl_n$, where $gl_m$ (resp.\ $gl_n$) acts trivially on $\bar{\mathbb{F}}^n$ (resp.\ $\bar{\mathbb{F}}^m$) and acts as the standard representation on $\bar{\mathbb{F}}^m$ (resp.\ $\bar{\mathbb{F}}^n$). The principal grading of $W(m,n)$ induces on $S(m,n)$, $H(m,n)$, $HO(n,n)$ and $SHO(n,n)$ irreducible gradings, also called {\em principal}. The $0$-th graded component of $S(m,n)$ in its principal grading is isomorphic to the Lie superalgebra $sl(m,n)$ and its $-1$-st graded component is isomorphic to the standard $sl(m,n)$-module $\bar{\mathbb{F}}^{m|n}$. The even part of $\mathfrak{g}_0$ is isomorphic to the Lie algebra $sl_m\oplus sl_n \oplus \bar{\mathbb{F}} c$, where $sl_m$ (resp.\ $sl_n$) acts trivially on $\bar{\mathbb{F}}^n$ (resp.\ $\bar{\mathbb{F}}^m$) and acts as the standard representation on $\bar{\mathbb{F}}^m$ (resp.\ $\bar{\mathbb{F}}^n$). Here $c$ acts by multiplication by $-n$ (resp.\ $-m$) on $\bar{\mathbb{F}}^m$ (resp.\ $\bar{\mathbb{F}}^n$). Let $S=H(2k,n)=\prod_{j\geq -1}\mathfrak{g}_j$ with its principal grading. Recall that the Lie superalgebra $H(2k,n)$ can be identified with $\Lambda(2k,n)/\bar{\mathbb{F}} 1$, where we have $2k$ even indeterminates $q_1, \dots, q_k$, $p_1, \dots, p_k$, and $n$ odd indeterminates $\xi_1, \dots, \xi_n$, with bracket $[f,g]=\sum_{i=1}^k(\frac{\partial f}{\partial p_i}\frac{\partial g}{\partial q_i}-\frac{\partial f}{\partial q_i}\frac{\partial g}{\partial p_i})-(-1)^{p(f)}\sum_{i=1}^n\frac{\partial f}{\partial \xi_i} \frac{\partial g}{\partial \xi_{n-i+1}}.$ Then $\mathfrak{g}_0\cong spo(2k,n)$, and $\mathfrak{g}_{-1}$ is isomorphic to the standard $spo(2k,n)$-module $\bar{\mathbb{F}}^{2k|n}$. Here $(\mathfrak{g}_0)_{\bar{0}}$ is spanned by the elements $\{p_ip_j, p_iq_j, q_iq_j\}$ for $i,j=1, \dots,k$, and $\{\xi_i\xi_j\}_{i\neq j}$ for $i, j=1, \dots, n$, hence it is isomorphic to $sp_{2k}\oplus so_n$. The odd part of $\mathfrak{g}_0$ is spanned by the vectors $\{p_i\xi_j, q_i\xi_j\}$ for $i=1, \dots, k$ and $j=1, \dots, n$, hence it is isomorphic to the $sp_{2k}\oplus so_n$-module $\bar{\mathbb{F}}^{2k}\otimes\bar{\mathbb{F}}^n$, where $\bar{\mathbb{F}}^{2k}$ and $\bar{\mathbb{F}}^n$ are the standard $sp_{2k}$ and $so_n$-modules, respectively.
Besides, $\mathfrak{g}_{-1}=\langle p_i, q_i, \xi_j ~|~ i=1, \dots, k, j=1, \dots, n \rangle$, hence $sp_{2k}$ acts trivially on $\bar{\mathbb{F}}^n=\langle \xi_j ~|~ j=1, \dots, n\rangle$ and acts as the standard representation on $\bar{\mathbb{F}}^{2k}=\langle p_i, q_i ~|~ i=1, \dots,k\rangle$, and $so_n$ acts trivially on $\bar{\mathbb{F}}^{2k}$ and by the standard action on $\bar{\mathbb{F}}^n$. The grading of type $(2,1,\dots,1|1,\dots,1)$ of $W(2k+1,n)$ induces an irreducible grading $K(2k+1,n)=\prod_{j\geq -2}\mathfrak{g}_j$, called the {\em principal} grading of $K(2k+1,n)$. Its $0$-th graded component $\mathfrak{g}_0$ is isomorphic to the Lie superalgebra $cspo(2k,n)$ and $\mathfrak{g}_{-1}$ is isomorphic to the standard $cspo(2k,n)$-module $\bar{\mathbb{F}}^{2k|n}$. Consider the Lie superalgebra $S=HO(n,n)=\prod_{j\geq -1}\mathfrak{g}_j$ with its principal grading. Then $\mathfrak{g}_0$ is isomorphic to the Lie superalgebra $\tilde{P}(n)=\tilde{P}(n)_{-1}+\tilde{P}(n)_{0}+\tilde{P}(n)_1$, where $\tilde{P}(n)_{0}\cong gl_n$, and, as $gl_n$-modules, $\tilde{P}(n)_{-1}\cong\Lambda^2(\bar{\mathbb{F}}^{n*})$, $\tilde{P}(n)_1\cong S^2(\bar{\mathbb{F}}^n)$, and $\mathfrak{g}_{-1} \cong \bar {\mathbb{F}}^n\oplus \bar{\mathbb{F}}^{n*}$, where $\bar{\mathbb{F}}^n$ is the standard $gl_n$-module, and the $gl_n$-submodules $\bar{\mathbb{F}}^n$ and $\bar{\mathbb{F}}^{n*}$ of $\mathfrak{g}_{-1}$ have different parities. The $0$-th graded component of $SHO(n,n)$ in its principal grading is isomorphic to the graded subalgebra $P(n)=P(n)_{-1}+P(n)_{0}+P(n)_1$ of $\tilde{P}(n)$, where $P(n)_{0}\cong sl_n$, ${P}(n)_{-1}\cong\Lambda^2(\bar{\mathbb{F}}^{n*})$ and ${P}(n)_1\cong S^2(\bar{\mathbb{F}}^n)$, and its $-1$-st graded component is isomorphic to the standard $P(n)$-module $\bar{\mathbb{F}}^n\oplus \bar{\mathbb{F}}^{n*}$. The $\Z$-grading of type $(1, \dots, 1|1, \dots,1,2)$ of $W(n,n+1)$ induces on the Lie superalgebras $KO(n,n+1)$ and $SKO(n,n+1;\beta)$ an irreducible grading called {\em principal}. In these cases the $\mathfrak{g}_0$-module $\mathfrak{g}_{-1}$ is obtained from that of $SHO(n,n)$ by adding some operators which act as scalars on $\bar{\mathbb{F}}^n$ and $\bar{\mathbb{F}}^{n*}$. The $\Z$-grading of type $(1, \dots,1|0,\dots,0)$ of $S=W(m,n)$, $S(m,n)$, $HO(n,n)$, $SHO(n,n)$, the $\Z$-gradings of type $(1,\dots,1|2,\dots,2,0,\dots,0)$ and $(2,1,\dots,1|2,\dots,2,0,\dots,0)$ with $h$ zeros of $S=H(m,2h)$ and $K(m,2h)$, respectively, and the $\Z$-grading of type $(1, \dots,1|0,\dots,0,1)$ of $S=KO(n,n+1)$, $SKO(n,n+1;\beta)$, is called the {\em subprincipal} grading of $S$. The Lie superalgebra $S=SKO(2,3;1)=\prod_{j\geq -1}\mathfrak{g}_j$ in its subprincipal grading has $0$-th graded component $\mathfrak{g}_0$ isomorphic to the semidirect sum of $S(0,2)$ and the subspace of $\Lambda(2)$ spanned by all the monomials except for $\xi_1\xi_2$, and $\mathfrak{g}_{-1}\cong \Lambda(2)$. The even part of $\mathfrak{g}_0$ is isomorphic to $sl_2 \oplus \bar{\mathbb{F}}$ and, as an $sl_2$-module, $\mathfrak{g}_{-1}=\bar{\mathbb{F}}^2\oplus \bar{\mathbb{F}}^2$, where the two copies of $\bar{\mathbb{F}}^2$ have different parities and $sl_2$ acts by the standard action on the even copy and trivially on the odd copy. The algebra of outer derivations of $S$ is isomorphic to $sl_2$ (cf.\ \cite[Remark 4.15]{CantaK}); it acts trivially on the even subspace of $\mathfrak{g}_{-1}$ and by the standard action on the odd one.
Finally, $\bar{\mathbb{F}}$ acts on $\mathfrak{g}_{-1}$ by multiplication by $-2$. The $\Z$-grading of $W(1,2)$ of type $(2|1,1)$ induces a grading on $S(1,2)=\prod_{j\geq -2}\mathfrak{g}_j$, which is not irreducible. Then $\mathfrak{g}_0\cong sl_2 \oplus \bar{\mathbb{F}} c$, where $c$ acts on $S$ as the grading operator, and $\mathfrak{g}_{-1}=\bar{\mathbb{F}}^2\oplus \bar{\mathbb{F}}^2$, where $\bar{\mathbb{F}}^2$ is the standard $sl_2$-module. The two copies of the standard $sl_2$-module in $\mathfrak{g}_{-1}$ are both odd. Likewise, the $\Z$-grading of $W(3,3)$ of type $(2,2,2|1,1,1)$ induces a grading on $SHO(3,3)=\prod_{j\geq -2}\mathfrak{g}_j$, which is not irreducible. Here $\mathfrak{g}_0\cong sl_3$ and $\mathfrak{g}_{-1}=\bar{\mathbb{F}}^3\oplus\bar{\mathbb{F}}^3$, where $\bar{\mathbb{F}}^3$ is the standard $sl_3$-module. The two copies of the standard $sl_3$-module in $\mathfrak{g}_{-1}$ are both odd. Consider the Lie superalgebra $K(1,6)=\prod_{j\geq -2}\mathfrak{g}_j$ with its principal grading. Then $\mathfrak{g}_0=sl_4\oplus\bar{\mathbb{F}} c$ and $\mathfrak{g}_{-1}\cong\Lambda^2\bar{\mathbb{F}}^4$, where $\bar{\mathbb{F}}^4$ denotes the standard $sl_4$-module, and $\mathfrak{g}_1\cong\mathfrak{g}_{-1}^*\oplus\mathfrak{g}_1^+\oplus \mathfrak{g}_1^-$, as $sl_4$-modules, with $\mathfrak{g}_1^+\cong S^2\bar{\mathbb{F}}^4$ and $\mathfrak{g}_1^-\cong S^2(\bar{\mathbb{F}}^{4^*})$. The Lie superalgebra $S=E(1,6)$ is the graded subalgebra of $K(1,6)$ generated by $\mathfrak{g}_{-1}+\mathfrak{g}_0+(\mathfrak{g}_{-1}^*+\mathfrak{g}_1^+)$ (cf.\ \cite[Example 5.2]{K}, \cite[\S 4.2]{CK}, \cite[\S 3]{S}). It follows that the $\Z$-grading of type $(2|1,1,1,1,1,1)$ induces on $E(1,6)$ an irreducible grading, called the {\em principal} grading of $E(1,6)$, where ${\mathfrak{g}}_{-1}=\langle \xi_i, \eta_i \rangle$, $\mathfrak{g}_{-1}^*=\langle t\xi_i, t\eta_i\rangle$ and $\mathfrak{g}_1^+=\langle \xi_1\xi_2\xi_3, \xi_1\eta_2\eta_3, \xi_2\eta_1\eta_3, \xi_3\eta_1\eta_2, \xi_1(\xi_2\eta_2+\xi_3\eta_3), \xi_2(\xi_1\eta_1+\xi_3\eta_3), \eta_3(\xi_1\eta_1-\xi_2\eta_2), \xi_3(\xi_1\eta_1+\xi_2\eta_2), \eta_2(\xi_1\eta_1-\xi_3\eta_3), \eta_1(\xi_2\eta_2-\xi_3\eta_3)\rangle$, and $\mathfrak{g}_1^-$ is obtained from $\mathfrak{g}_1^+$ by exchanging $\xi_i$ with $\eta_i$ for every $i=1,2,3$. Next, the {\em principal} grading of $E(3,6)$ is an irreducible grading of depth two whose $0$-th graded component is isomorphic to $sl_3\oplus sl_2\oplus\bar{\mathbb{F}} c$, and whose $-1$-st graded component is isomorphic, as an $sl_3\oplus sl_2$-module, to $\bar{\mathbb{F}}^3\boxtimes \bar{\mathbb{F}}^2$, where $\bar{\mathbb{F}}^3$ and $\bar{\mathbb{F}}^2$ denote the standard $sl_3$ and $sl_2$-modules, respectively. Here $c$ acts on $E(3,6)$ as the grading operator (with respect to its principal grading). Likewise, the {\em principal} grading of $E(3,8)$ is an irreducible grading of depth three whose $0$-th graded component is isomorphic to $sl_3\oplus sl_2\oplus\bar{\mathbb{F}} c$, and whose $-1$-st graded component is isomorphic, as an $sl_3\oplus sl_2$-module, to $\bar{\mathbb{F}}^3\boxtimes \bar{\mathbb{F}}^2$, where $\bar{\mathbb{F}}^3$ and $\bar{\mathbb{F}}^2$ denote the standard $sl_3$ and $sl_2$-modules, respectively; $c$ acts on $E(3,8)$ as the grading operator. The Lie superalgebra $S=E(4,4)$ has even part isomorphic to $W_4$ and odd part isomorphic to the $W_4$-module $\Omega^1(4)^{-\frac{1}{2}}$.
The bracket between two odd elements $\omega_1$ and $\omega_2$ is defined as $[\omega_1, \omega_2]=d\omega_1\wedge \omega_2+ \omega_1\wedge d\omega_2$. The {\em principal} grading of $S$ is an irreducible $\Z$-grading of depth 1 whose $0$-th graded component $\mathfrak{g}_0=\langle x_i\frac{\partial}{\partial x_j}, x_idx_j\rangle$ is isomorphic to the Lie superalgebra $\hat{P}(4)$ and $\mathfrak{g}_{-1}=\langle \frac{\partial}{\partial x_j}, dx_j\rangle$ is isomorphic to the standard $\hat{P}(4)$-module $\bar{\mathbb{F}}^{4|4}$. We recall that $\hat{P}(4)=P(4)+\bar{\mathbb{F}} z$ is a (non-trivial) central extension of $P(4)$ with center $\bar{\mathbb{F}} z$ (see \cite{CantaK}, \cite{K}, \cite{S}). Finally, the {\em principal} grading of the Lie superalgebra $E(5,10)$ is irreducible of depth 2, with $0$-th graded component isomorphic to $sl_5$ and $-1$-st graded component isomorphic to $\Lambda^2 \bar{\mathbb{F}}^5$, where $\bar{\mathbb{F}}^5$ is the standard $sl_5$-module. \medskip Given a simple infinite-dimensional linearly compact Lie superalgebra $S=\prod_{j\geq -d}\mathfrak{g}_j$ with its principal or subprincipal grading, we will call $S_0=\prod_{j\geq 0} \mathfrak{g}_j$ the principal or subprincipal subalgebra of $S$, respectively. Likewise, if $S=\prod_{j\geq -d}\mathfrak{g}_j$ with a grading of a given type, we will call $S_0=\prod_{j\geq 0} \mathfrak{g}_j$ the subalgebra of $S$ of this type. \begin{remark}\label{newconstruction}\em One can show that every non-graded maximal open subalgebra of any non-exceptional simple infinite-dimensional linearly compact Lie superalgebra $S$, in its defining embedding in $W(m,n)$, can be constructed as the intersection of $S$ with a graded subalgebra of $W(m,n)$. For example, the maximal open subalgebra $L_0(0)$ of $S=H(m,1)$ constructed in \cite[Example 3.3]{CantaK} is the intersection of $S$ with the subprincipal subalgebra of $W(m,1)$. Since the supersymplectic form is not homogeneous with respect to the subprincipal grading of $W(m,1)$, $L_0(0)$ is not graded. We shall call this subalgebra the {\em subprincipal} subalgebra of $H(m,1)$. \end{remark} If $\mathfrak{g}$ is a Lie algebra acting linearly on a vector space $V$ over $\bar{\mathbb{F}}$, we denote by $\exp(\mathfrak{g})$ the linear algebraic subgroup of $GL(V)$ generated by all $\exp a$, where $a$ is a (locally) nilpotent endomorphism of $V$, and by $t^a$, where $a$ is a diagonalizable endomorphism of $V$ with integer eigenvalues and $t\in\bar{\mathbb{F}}^\times$. If a group $G$ is an almost direct product of two subgroups $G_1$ and $G_2$ (i.e., both $G_1$ and $G_2$ are normal subgroups and $G_1\cap G_2$ is a finite central subgroup of $G$), we will denote it by $G=G_1\cdot G_2$. We will often make use of the following simple result: \begin{proposition}\label{maximal} Suppose we have a representation of a Lie superalgebra $\mathfrak{g}$ over $\bar{\mathbb{F}}$ in a vector superspace $V$, and a faithful representation of a group $G$ in $V$, containing $\exp(\mathfrak{g}_{\bar{0}})$, preserving parity and such that conjugation by elements of $G$ induces automorphisms of $\mathfrak{g}$.
Then the maximal possible group $G$ is as follows: \begin{itemize} \item[$(a)$] if $\mathfrak{g}=sl_n$ and $V=\bar{\mathbb{F}}^n\oplus \bar{\mathbb{F}}^n$ with the same parity, then $G$ is an almost direct product of $GL_2$ and $SL_n$; in particular, if $n=2$ then $G=\bar{\mathbb{F}}^\times\cdot SO_4$ and if $n=3$ then $G=\bar{\mathbb{F}}^\times\cdot(SL_2\times SL_3)$; \item[$(b)$] if $\mathfrak{g}=sl_2\oplus sl_3$ and $V=\bar{\mathbb{F}}^3\boxtimes \bar{\mathbb{F}}^2$, then $G=\bar{\mathbb{F}}^\times\cdot(SL_2\times SL_3)$; \item[$(c)$] if $\mathfrak{g}=sl_5$ and $V=\Lambda^2\bar{\mathbb{F}}^5$, then $G=GL_5$; \item[$(d)$] if $\mathfrak{g}=sl(m,n)$ and $V=\bar{\mathbb{F}}^{m|n}$ is the standard $sl(m,n)$-module, then $G=GL_m\times GL_n$; \item[$(e)$] if $\mathfrak{g}=spo(2k,n)$ and $V=\bar{\mathbb{F}}^{2k|n}$ is the standard $spo(2k,n)$-module, then $G=\bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$; \item[$(f)$] if $\mathfrak{g}=P_n$ and $V=\bar{\mathbb{F}}^n+\bar{\mathbb{F}}^{n*}$ is the standard $P_n$-module, then $G=\bar{\mathbb{F}}^\times\cdot GL_n$; \item[$(g)$] if $\mathfrak{g}=\hat{P}_4$ and $V=\bar{\mathbb{F}}^4+\bar{\mathbb{F}}^{4*}$ is the standard $\hat{P}_4$-module, then $G=GL_4$. \end{itemize} In all cases when $G=\bar{\mathbb{F}}^\times\cdot G_1$, the group $\bar{\mathbb{F}}^\times$ acts on $V$ by scalar multiplication. \end{proposition} {\bf Proof.} Consider the map $f: G\longrightarrow Aut(\mathfrak{g})$ that associates to every element of $G$ the automorphism of $\mathfrak{g}$ induced by conjugation. Then the kernel of $f$ consists of the elements of $G$ commuting with $\mathfrak{g}$. Suppose, as in $(a)$, that $\mathfrak{g}=sl_n$ and $V=\bar{\mathbb{F}}^n\oplus\bar{\mathbb{F}}^n$, where the two copies of the standard $sl_n$-module have the same parity. Then $Im f$ consists of inner automorphisms of $sl_n$, i.e., $Im f=PGL_n$, and $\ker f=GL_2$. We therefore have the following exact sequence: $$1\longrightarrow GL_2\longrightarrow G\longrightarrow PGL_n\longrightarrow 1.$$ Since $G$ contains a subgroup complementary to $GL_2$, namely $SL_n$, we conclude that $G$ is an almost direct product of $GL_2$ and $SL_n$. It follows that if $n$ is odd, then $G=\bar{\mathbb{F}}^\times\cdot(SL_2\times SL_n)$, and if $n$ is even then $G=\bar{\mathbb{F}}^\times\cdot((SL_2\times SL_n)/C_2)$, where $C_2$ is the cyclic subgroup of order two of $SL_2\times SL_n$ generated by $(-I_2, -I_n)$, proving $(a)$. The same argument proves $(b)$. By the same argument, in case $(c)$ we get the exact sequence $$1\longrightarrow \bar{\mathbb{F}}^\times \longrightarrow G\longrightarrow PGL_5\longrightarrow 1.$$ Since $G$ contains a subgroup complementary to $\bar{\mathbb{F}}^\times$, namely $SL_5$, we conclude that $G=GL_5$. If $\mathfrak{g}=sl(m,n)$ and $V=\bar{\mathbb{F}}^{m|n}$ is its standard representation, then $gl_m$ acts irreducibly on $\bar{\mathbb{F}}^m$ and $gl_n$ acts irreducibly on $\bar{\mathbb{F}}^n$, hence $G=GL_m\times GL_n$, proving $(d)$. Suppose that $\mathfrak{g}=spo(2k,n)$ and $V=\bar{\mathbb{F}}^{2k|n}$ is the standard $spo(2k,n)$-module. Define on $\mathfrak{g}_{-1}=\langle p_i, q_i, \xi_j~|~i=1, \dots, k, ~j=1, \dots, n\rangle$ the following symmetric bilinear form: $(p_i, q_j)=\delta_{i,j}$, $(\xi_i, \xi_j)=\delta_{i, n-j+1}$, $(p_i, p_j)=0=(q_i, q_j)=(p_i, \xi_j)=(q_i, \xi_j)$.
Then $G$ consists of the automorphisms of $\bar{\mathbb{F}}^{2k}\oplus\bar{\mathbb{F}}^n$ preserving the bilinear form $(\cdot, \cdot)$ up to multiplication by a scalar, hence $G=\bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$, proving $(e)$. Finally, let $\mathfrak{g}=P_n$ and $V=\bar{\mathbb{F}}^n+\bar{\mathbb{F}}^{n*}$ be the standard $P_n$-module. Then $gl_n$ acts irreducibly on $\bar{\mathbb{F}}^n$ and $\bar{\mathbb{F}}^{n*}$, which have different parities. It follows that $G=\bar{\mathbb{F}}^\times\cdot GL_n$. Likewise, if $\mathfrak{g}=\hat{P}(4)$, then $G=\bar{\mathbb{F}}^\times\cdot SL_4$, since the group of automorphisms of $\mathfrak{g}$ is $SL_4$. \hfill$\Box$ \section{On $\boldsymbol{Aut S}$}\label{general} Let $S$ be a linearly compact infinite-dimensional Lie superalgebra over $\bar{\mathbb{F}}$ and let $Aut S$ denote the group of all continuous automorphisms of $S$. Let $S=S_{-d}\supset\dots \supset S_0\supset \dots$ be a filtration of $S$ by open subalgebras such that all $S_j$ are $Aut S$-invariant and $Gr S=\oplus_{j\geq -d}\mathfrak{g}_j$ is a transitive graded Lie superalgebra. Denote by $Aut(Gr S)$ the group of automorphisms of $Gr S$ preserving the grading. Denote by $Autf S$ the subgroup consisting of the $g\in Aut S$ which induce the identity automorphism of $Gr S$, and let $Autgr S$ be the subgroup of $Aut(Gr S)$ consisting of automorphisms induced by $g\in Aut S$. We have an exact sequence: \begin{equation} 1\rightarrow Autf S\rightarrow Aut S\rightarrow Autgr S\rightarrow 1. \label{exact} \end{equation} \begin{proposition}\label{Victor} (a) The restriction map $Autgr S\rightarrow GL(\mathfrak{g}_{-1})$ is injective. \noindent (b) $Autf S$ consists of inner automorphisms of $Der S$. In fact $Autf S=\exp ad$ $(Der S)_1$, where $(Der S)_1$ is the first member of the filtration of $Der S$ induced by that of $S$. \noindent (c) If $S_0$ is a graded subalgebra, i.e., $S_j=\mathfrak{g}_j \oplus S_{j+1}$ for all $j \geq -d$ such that $[\mathfrak{g}_i,\mathfrak{g}_j]\subset\mathfrak{g}_{i+j}$, then $Autgr S = Aut(Gr S)$ and $$Aut S=Autf S\rtimes Aut(Gr S).$$ \end{proposition} {\bf Proof.} By transitivity, $\mathfrak{g}_{-n}=\mathfrak{g}_{-1}^n$ for $n \geq 1$, and, in addition, we have the well-known injective $Autgr S$-equivariant map $\mathfrak{g}_n\rightarrow Hom(\mathfrak{g}_{-1}^{\otimes(n+1)}, \mathfrak{g}_{-1})$ for $n \geq 0$, which implies $(a)$. If $\sigma\in Autf S$, then $\sigma=1+\sigma_1$, where $\sigma_1(S_j)\subset S_{j+1}$. Hence $\log\sigma=\sum_{n\geq 1}(-1)^{n+1}\frac{\sigma_1^n}{n}$ converges, and $e^{t\log\sigma}$ defines a one-parameter subgroup of $Autf S$. Hence $\sigma$ is an inner automorphism of $Der S$, proving $(b)$. If $S_0$ is a graded subalgebra, we have an obvious inclusion $Aut(Gr S) \subset Aut S$ and the exact sequence (\ref{exact}), proving $(c)$. \hfill$\Box$ \begin{remark}\em By Proposition \ref{Victor}(a), $Autgr S$ is a subgroup of $GL(\mathfrak{g}_{-1})$ whose Lie algebra is $Gr_0 Der S$ acting on $\mathfrak{g}_{-1}$. By Proposition \ref{Victor}(b), $Autf S$ is a prounipotent group. \end{remark} \section{Invariant Subalgebras}\label{invsub} Given a linearly compact Lie superalgebra $L$, we call a subalgebra of $L$ {\em invariant} if it is invariant with respect to all inner automorphisms of $L$, or, equivalently, if it contains all elements $a$ of $L$ such that $\exp(ad(a))$ converges in the linearly compact topology.
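As a finite-dimensional toy illustration (added here; $sl_2$ rather than a linearly compact superalgebra), when $a$ is nilpotent the series for $\exp(ad~a)$ terminates, and the resulting map is an automorphism, i.e., it preserves brackets:
\begin{verbatim}
# Inner automorphism exp(ad a) on sl_2: for nilpotent a the exponential
# series terminates, and exp(ad a) x = exp(a) x exp(-a) preserves [.,.].
import numpy as np

a = np.array([[0.0, 1.0], [0.0, 0.0]])     # nilpotent element e
h = np.array([[1.0, 0.0], [0.0, -1.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])

exp_a = np.eye(2) + a        # a @ a = 0, so exp(a) = 1 + a exactly
exp_ma = np.eye(2) - a

def g(z):                    # the automorphism exp(ad a)
    return exp_a @ z @ exp_ma

def br(u, v):                # Lie bracket of matrices
    return u @ v - v @ u

print(np.allclose(g(br(h, f)), br(g(h), g(f))))   # -> True
\end{verbatim}
In the linearly compact setting the role of nilpotency is played by the convergence requirement above.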
It turns out that an open subalgebra of minimal codimension in a linearly compact infinite-dimensional simple Lie superalgebra $S$ over $\bar{\mathbb{F}}$ is always invariant under all inner automorphisms of $S$ (see \cite{CantaK}). \begin{example}\label{SHOtilde}\em We recall that the Lie superalgebra $S=SHO^\sim(n,n)$ is the subalgebra of $HO(n,n)$ defined as follows: $$SHO^\sim(n,n)=\{X\in HO(n,n)~|~ X(F v)=0\}$$ where $v$ is the volume form associated to the usual divergence and $F=1-2\xi_1\dots\xi_n$ (cf.\ \cite[\S 5]{CantaK}). Let $S_0$ be the intersection of $S$ with the principal subalgebra of $W(n,n)$. Then the Weisfeiler filtration associated to $S_0$ has depth one and $\overline{Gr S}\cong SHO^{\prime}(n,n)$ with the $\Z$-grading of type $(1,\dots, 1|1, \dots,1)$ (cf.\ \cite[Example 5.2]{CantaK}). Here and in what follows, by $\overline{Gr S}$ we denote the completion of the graded Lie superalgebra associated to the above filtration. By \cite[Proposition 1.11]{CantaK}, $S_0$ is a maximal open subalgebra of $S$. It is easy to see that it is also an invariant subalgebra. This subalgebra is called the {\em principal} subalgebra of $S$. \end{example} \begin{example}\label{SKOtilde}\em We recall that the Lie superalgebra $S=SKO^\sim(n,n+1)$ is the subalgebra of $KO(n,n+1)$ defined as follows: $$SKO^\sim(n,n+1)=\{X\in KO(n,n+1)~|~X(F v_{\beta})=0\}$$ where $v_{\beta}$ is the volume form attached to the divergence $div_{\beta}$ for $\beta=(n+2)/n$ and $F=1+\xi_1\dots\xi_{n+1}$ (cf.\ \cite[\S 5]{CantaK}). Let $S_0$ be the intersection of $S$ with the subalgebra of $W(n,n+1)$ of type $(1,\dots,1|1,\dots,1,2)$. Then the Weisfeiler filtration associated to $S_0$ has depth 2 and $\overline{Gr S}\cong SKO(n,n+1; (n+2)/n)$ with its principal grading. By \cite[Proposition 1.11]{CantaK}, $S_0$ is a maximal open subalgebra of $S$. It is easy to see that it is also an invariant subalgebra. This subalgebra is called the {\em principal} subalgebra of $S$. \end{example} A complete list of invariant maximal open subalgebras in all simple linearly compact infinite-dimensional Lie superalgebras over $\bar{\mathbb{F}}$ is given in the following theorem (cf.\ \cite[Theorem 11.1]{CantaK}): \begin{theorem}\label{invariant} The following is a complete list of invariant maximal open subalgebras in infinite-dimensional linearly compact simple Lie superalgebras $S$ over $\bar{\mathbb{F}}$: \begin{description} \item[$(a)$] the principal subalgebra of $S$; \item[$(b)$] the subprincipal subalgebra of $S=W(m,1)$, $S(m,1)$, $H(m,1)$, $H(m,2)$, $K(m,2)$, $KO(2,3)$, $SKO(2,3;\beta)$, the subalgebra of type $(2,1,\dots,1|0,2)$ of $K(m,2)$ and the subalgebra of type $(1,\dots,1|0,2)$ of $H(m,2)$; \item[$(c)$] the subalgebra of type $(1,1|-1,-1,0)$ of $SKO(2,3;\beta)$ for $\beta\neq 1$; \item[$(d)$] the subalgebras of $S=S(1,2)$, $SHO(3,3)$, $SKO(2,3;1)$ conjugate to the principal subalgebra by the subgroup of $Aut S$ generated by the automorphisms $\exp(ad(\mathfrak{a}))$, where $\mathfrak{a}$ is the algebra of outer derivations of $S$; \item[$(e)$] the subalgebras of $S=SKO(3,4;1/3)$ conjugate to the subprincipal subalgebra by the automorphisms $\exp(ad(t\xi_1\xi_2\xi_3))$ with $t\in\bar{\mathbb{F}}$.
\end{description} \end{theorem} \begin{theorem}\label{autSinv} The following is a complete list of maximal among the open $Aut S$-invariant subalgebras in infinite-dimensional linearly compact simple Lie superalgebras $S$: \begin{description} \item[$(a)$] the principal subalgebra of $S\neq S(1,2)$, $SHO(3,3)$, and $SKO(2,3;1)$; \item[$(b)$] the subprincipal subalgebra of $S=W(m,1)$, $S(m,1)$, $H(m,1)$, $KO(2,3)$, and $SKO(2,3;\beta)$; \item[$(c)$] the subalgebra of type $(1,1|-1,-1,0)$ in $S=SKO(2,3;\beta)$ with $\beta\neq 1$; \item[$(d)$] the subalgebras of type $(2|1,1)$ and $(2,2,2|1,1,1)$ in $S=S(1,2)$ and $SHO(3,3)$, respectively. \end{description} \end{theorem} {\bf Proof.} We will prove that the subalgebras listed in $(a)-(d)$ are $Aut S$-invariant; in order to show that they exhaust all maximal among the open $Aut S$-invariant subalgebras of $S$, it suffices to show that for every subalgebra $S'_0$ of $S$ listed in Theorem \ref{invariant}, $\cap_{\varphi\in Aut S}\varphi(S'_0)$ is contained in one of them. Indeed, if $S_0$ is maximal among the $Aut S$-invariant open subalgebras of $S$, then $S_0$ is an invariant subalgebra of $S$, hence every maximal open subalgebra $S'_0$ of $S$ containing $S_0$ is invariant, i.e., $S'_0$ is one of the subalgebras of $S$ listed in Theorem \ref{invariant}. Therefore $S_0\subset \cap_{\varphi\in Aut S}\varphi(S'_0)$. If $S\neq W(m,1)$, $S(m,1)$, $H(m,1)$, $H(m,2)$, $K(m,2)$, $KO(2,3)$, $SKO(2,3;\beta)$, $S(1,2)$, $SHO(3,3)$ and $SKO(3,4;1/3)$, then, in view of Theorem \ref{invariant}, the principal subalgebra of $S$ is the unique invariant maximal open subalgebra of $S$, hence it is invariant with respect to all automorphisms of $S$ and it is the unique maximal among the open $Aut S$-invariant subalgebras of $S$. If $S=W(m,1)$, $S(m,1)$, or $KO(2,3)$, then, according to Theorem \ref{invariant}, $S$ has two invariant subalgebras: the principal and subprincipal subalgebras. These two subalgebras have different codimensions, hence each of them is invariant with respect to all automorphisms of $S$. If $S=H(m,1)$, then $S$ has two invariant maximal open subalgebras: the principal and the subprincipal subalgebras. Since the principal subalgebra is graded and the subprincipal subalgebra is not (see Remark \ref{newconstruction}), each of them is invariant with respect to all automorphisms of $S$. If $S=H(m,2)$, then the principal subalgebra is the unique subalgebra of $S$ of minimal codimension, hence it is $Aut S$-invariant. Besides, it is the unique maximal among the $Aut S$-invariant subalgebras of $S$. Indeed, the invariant subalgebras of $S$ of type $(1, \dots,1|2,0)$ and $(1, \dots,1|0,2)$ are conjugate by an outer automorphism of $S$ and their intersection is contained in the principal subalgebra of $S$. By the same arguments, if $S=K(m,2)$, then the principal subalgebra of $S$ is its unique maximal among the open $Aut S$-invariant subalgebras. If $S=SKO(3,4;1/3)$, then, according to Theorem \ref{invariant}, $S$ has infinitely many invariant subalgebras which are conjugate to the subprincipal subalgebra. Besides, the principal subalgebra of $S$ is an invariant subalgebra. Note that the principal grading of $S$ has depth 2 and the subprincipal grading of $S$ has depth 1, therefore the principal and subprincipal subalgebras are not conjugate. It follows that the principal subalgebra is invariant with respect to all automorphisms of $S$.
In fact, it is the unique maximal among the $Aut S$-invariant subalgebras of $S$, since the intersection of all the subalgebras of $S$ listed in Theorem \ref{invariant}$(e)$ is the subalgebra of $S$ of type $(2,2,2|1,1,1,3)$, which is contained in the principal subalgebra. If $S=SKO(2,3;1)$, then $S$ has infinitely many invariant subalgebras which are conjugate to the principal subalgebra; besides, the subprincipal subalgebra is also an invariant subalgebra of $S$. The principal and subprincipal subalgebras have codimension $(2|3)$ and $(2|2)$, respectively, hence they cannot be conjugate. It follows that the subprincipal subalgebra is invariant with respect to all automorphisms of $S$. In fact, it is the unique maximal among the $Aut S$-invariant subalgebras of $S$, since it contains the intersection of all subalgebras which are conjugate to the principal subalgebra (cf.\ \cite[Remark 4.16]{CantaK}). If $S=SKO(2,3;\beta)$ for $\beta\neq 0,1$, then, according to Theorem \ref{invariant}, $S$ has three invariant maximal open subalgebras, i.e., the subalgebras of type $(1,1|1,1,2)$, $(1,1|0,0,1)$ and $(1,1|-1,-1,0)$. The subalgebras of type $(1,1|1,1,2)$ and $(1,1|-1,-1,0)$ have codimension $(2|3)$, and the subalgebra of type $(1,1|0,0,1)$ has codimension $(2|2)$. It follows that the subprincipal subalgebra is invariant with respect to all automorphisms of $S$, since it is the unique subalgebra of minimal codimension. Consider the grading of $S$ of type $(1,1|1,1,2)$: this is an irreducible grading of depth 2, whose $0$-th graded component is isomorphic to the Lie superalgebra $\tilde{P}(2)=P(2)+\bar{\mathbb{F}}(\xi_3+\beta\Phi)$. Its $-2$-nd graded component is $\bar{\mathbb{F}} 1$, on which $\xi_3+\beta\Phi$ acts as the scalar $-2$. Its $-1$-st graded component is spanned by the vectors $\{x_i\}$ and $\{\xi_i\}$, with $i=1,2$, hence it is isomorphic to the standard $P(2)$-module, and $\xi_3+\beta\Phi$ acts on $\sum_{i=1}^2\bar{\mathbb{F}} x_i$ (resp.\ $\sum_{i=1}^2\bar{\mathbb{F}} \xi_i$) as the scalar $-1+\beta$ (resp.\ $-1-\beta$). Now let us consider the grading of $S$ of type $(1,1|-1,-1,0)$: this is an irreducible grading of depth 2, whose $0$-th graded component is isomorphic to the Lie superalgebra $\tilde{P}(2)=P(2)+\bar{\mathbb{F}}(\xi_3+\beta\Phi)$. Its $-2$-nd graded component is $\bar{\mathbb{F}} \xi_1\xi_2$, on which $\xi_3+\beta\Phi$ acts as the scalar $-2\beta$. Its $-1$-st graded component is spanned by the vectors $\{\xi_i(\xi_3+(2\beta-1)\Phi)\}$ and $\{\xi_i\}$, with $i=1,2$, hence it is isomorphic to the standard $P(2)$-module, and $\xi_3+\beta\Phi$ acts on $\sum_{i=1}^2\bar{\mathbb{F}} \xi_i(\xi_3+ (2\beta-1)\Phi)$ (resp.\ $\sum_{i=1}^2\bar{\mathbb{F}} \xi_i$) as the scalar $1-\beta$ (resp.\ $-1-\beta$). Since we assumed $\beta\neq 1$, the two gradings are not isomorphic, hence the subalgebras of type $(1,1|1,1,2)$ and $(1,1|-1,-1,0)$ are not conjugate by any automorphism of $S$. We conclude that they are invariant with respect to all automorphisms of $S$. Finally, if $S=S(1,2)$ or $S=SHO(3,3)$, then each of the subalgebras listed in $(d)$ is the intersection of all invariant maximal open subalgebras of $S$ which lie in an $Aut S$-orbit, by Theorem \ref{invariant}; thus it is the unique maximal among the $Aut S$-invariant subalgebras of $S$.
\hfill$\Box$ \section{The Group $\boldsymbol{Autgr S}$} In this section, for every simple infinite-dimensional linearly compact Lie superalgebra $S$ over $\bar{\mathbb{F}}$, we fix the following maximal among $Aut S$-invariant open subalgebras of $S$, which we shall denote by $S_0$: \begin{enumerate} \item the principal subalgebra of $S\neq S(1,2)$, $SHO(3,3)$, $SKO(2,3;1)$; \item the subalgebra of type $(2|1,1)$ in $S=S(1,2)$; \item the subalgebra of type $(2,2,2|1,1,1)$ in $S=SHO(3,3)$; \item the subprincipal subalgebra of $S=SKO(2,3;1)$. \end{enumerate} \begin{remark}\label{canonical}\em In \cite[\S 11]{CantaK} we introduced the notion of the {\em canonical} subalgebra of $S$, defined as the intersection of all subalgebras of minimal codimension in $S$. It follows from the definition that the canonical subalgebra of $S$ is an $Aut S$-invariant subalgebra. If $S\neq KO(2,3)$, $SKO(2,3;\beta)$ with $\beta\neq 1$, and $S\neq SKO(3,4;1/3)$, then the maximal among $Aut S$-invariant subalgebras $S_0$ of $S$ we have chosen is the canonical subalgebra of $S$. \end{remark} \medskip Let $S_{-1}$ be a minimal subspace of $S$, properly containing the subalgebra $S_0$ and invariant with respect to the group $Aut S$, and let $S=S_{-d} \supsetneq S_{-d+1} \supset \cdots \supset S_{-1} \supset S_0 \supset \cdots$ be the associated Weisfeiler filtration of $S$. All members of the Weisfeiler filtration associated to $S_0$ are invariant with respect to the group $Aut S$. Let $Gr S=\oplus_{j\geq -d}\mathfrak{g}_j$ be the associated $\Z$-graded Lie superalgebra. In this section we will describe the group $Autgr S$ introduced in Section \ref{general}, for every $S$. The results are summarized in Table 1 (where by $\Pi V$ we denote $V$ with reversed parity). \bigskip \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $S$ & $\mathfrak{g}_0$ & $\mathfrak{g}_0$-module $\mathfrak{g}_{-1}$ & $Autgr S$ \\ \hline $W(m,n)$, $(m,n)\neq (1,1)$ & $gl(m,n)$ & $\bar{\mathbb{F}}^{m|n}$ & $GL_m\times GL_n$\\ $S(1,2)$ & $sl_2\oplus\bar{\mathbb{F}}$ & $\Pi\bar{\mathbb{F}}^2\oplus\Pi\bar{\mathbb{F}}^2$ & $\bar{\mathbb{F}}^\times\cdot SO_4$ \\ $S(m,n)$, $(m,n)\neq (1,2)$ & $sl(m,n)$ & $\bar{\mathbb{F}}^{m|n}$ & $GL_m\times GL_n$ \\ $H(2k,n)$ & $spo(2k,n)$ & $\bar{\mathbb{F}}^{2k|n}$ & $\bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$ \\ $K(2k+1,n)$ & $cspo(2k,n)$ & $\bar{\mathbb{F}}^{2k|n}$ & $\bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$ \\ $HO(n,n)$, $n>2$ & $\tilde{P}(n)$ & $\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $\bar{\mathbb{F}}^\times\cdot GL_n$ \\ $SHO(3,3)$ & $sl_3$ & $\Pi\bar{\mathbb{F}}^3\oplus\Pi\bar{\mathbb{F}}^3$ & $\bar{\mathbb{F}}^\times\cdot(SL_3\times SL_2)$\\ $SHO(n,n)$, $n>3$ & $P(n)$ & $\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $\bar{\mathbb{F}}^\times\cdot GL_n$ \\ $KO(n,n+1)$, $n\geq 2$ & $c\tilde{P}(n)$ & $\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $\bar{\mathbb{F}}^\times\cdot GL_n$ \\ $SKO(2,3;1)$ & $\langle 1, \xi_1, \xi_2\rangle\rtimes S(0,2)$ & $\Lambda(2)$ & $\bar{\mathbb{F}}^\times\cdot(SL_2\times SL_2)$\\ $SKO(2,3;\beta)$, $\beta\neq 0,1$ & $\tilde{P}(2)$ & $\bar{\mathbb{F}}^2\oplus\Pi\bar{\mathbb{F}}^{2*}$ & $\bar{\mathbb{F}}^\times\cdot GL_2$\\ $SKO(n,n+1;\beta)$, $n>2$ & $\tilde{P}(n)$ & $\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $\bar{\mathbb{F}}^\times\cdot GL_n$\\ $SHO^\sim(n,n)$, $n>2$ & $P(n)$ & $\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $SL_n$\\ $SKO^\sim(n,n+1)$ & $\tilde{P}(n)$ & 
$\bar{\mathbb{F}}^n\oplus\Pi\bar{\mathbb{F}}^{n*}$ & $SL_n$\\ $E(1,6)$ & $so_6\oplus\bar{\mathbb{F}}$ & $\Pi\bar{\mathbb{F}}^6$ & $\bar{\mathbb{F}}^\times\cdot SO_6$\\ $E(3,6)$ & $sl_3\oplus sl_2\oplus\bar{\mathbb{F}}$ & $\Pi(\bar{\mathbb{F}}^3\boxtimes \bar{\mathbb{F}}^2)$ & $\bar{\mathbb{F}}^\times\cdot(SL_2 \times SL_3)$ \\ $E(5,10)$ & $sl_5$ & $\Pi(\Lambda^2 \bar{\mathbb{F}}^5)$ & $GL_5$ \\ $E(4,4)$ & $\hat{P}(4)$ & $\bar{\mathbb{F}}^{4|4}$ & $GL_4$ \\ $E(3,8)$ & $sl_3\oplus sl_2\oplus\bar{\mathbb{F}}$ & $\Pi(\bar{\mathbb{F}}^3\boxtimes \bar{\mathbb{F}}^2)$ & $\bar{\mathbb{F}}^\times\cdot(SL_2 \times SL_3)$ \\ \hline \end{tabular} \begin{center} \textbf{Table 1.} \end{center} \end{center} \end{table} \begin{theorem}\label{AutgrS} Let $S$ be a simple infinite-dimensional linearly compact Lie superalgebra over $\bar{\mathbb{F}}$. Then $Autgr S$ is the algebraic group listed in the last column of Table 1. In all cases when $Autgr S\cong \bar{\mathbb{F}}^\times\cdot G_1$, the group $\bar{\mathbb{F}}^\times$ acts on $\mathfrak{g}_{-1}$ by scalar multiplication. \end{theorem} {\bf Proof.} Let $S=W(m,n)$ with $(m,n)\neq (1,1)$ or $S=S(m,n)$ with $(m,n)\neq (1,2)$, with the principal grading. By Proposition \ref{maximal}$(d)$, $Autgr S\subset GL_m\times GL_n$. But the group on the right is contained in $Autgr S$ since it acts by automorphisms of $S$ via linear changes of indeterminates. It follows that $Autgr S\cong GL_m\times GL_n$. Let $S=K(2k+1,n)=\prod_{j\geq -2}\mathfrak{g}_j$ with its principal grading. By Proposition \ref{maximal}$(e)$, $Autgr S\subset \bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$. But the group on the right is contained in $Autgr S$ since it acts by automorphisms of $S$ via linear changes of indeterminates, preserving the supercontact differential form $dx_{2k+1} + \sum_{i=1}^k(x_i dx_{k+i}-x_{k+i}dx_i)+\sum_{j=1}^n \xi_j d\xi_{n-j+1}$ up to multiplication by a non-zero number. It follows that $Autgr S\cong \bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$. Likewise, if $S=H(2k,n)=\prod_{j\geq -1}\mathfrak{g}_j$ with the principal grading, the group $\bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$ acts by automorphisms of $S$ via linear changes of indeterminates, preserving the supersymplectic differential form $\sum_{i=1}^k dp_i \wedge dq_i + \sum_{j=1}^n d\xi_j d\xi_{n-j+1}$ up to multiplication by a non-zero number. Hence again, $Autgr S\cong \bar{\mathbb{F}}^\times\cdot(Sp_{2k}\times O_n)$. Consider the Lie superalgebras $S=HO(n,n)$, $SHO(n,n)$ with $n>3$, $KO(n,n+1)$, or $SKO(n,n+1;\beta)$ with $n>2$ or with $n=2$ and $\beta\neq 0,1$, with their principal gradings. By Proposition \ref{maximal}$(f)$, $Autgr S\subset\bar{\mathbb{F}}^\times\cdot GL_n$. On the other hand, the group on the right is contained in $Autgr S$ since it acts by automorphisms of $S$ via linear changes of indeterminates, preserving the odd supersymplectic form $\sum_{i=1}^ndx_id\xi_i$ and the volume form $v$ attached to the usual divergence up to multiplication by a non-zero number, if $S=HO(n,n)$ or $S=SHO(n,n)$ with $n>3$, and preserving the odd supercontact form $d\xi_{n+1}+\sum_{i=1}^n(\xi_idx_i+x_id\xi_i)$ up to multiplication by an invertible function, and the volume form $v_{\beta}$ attached to the $\beta$-divergence up to multiplication by a non-zero number, if $S=KO(n,n+1)$ or $S=SKO(n,n+1;\beta)$ with $\beta\neq 0,1$ if $n=2$. Therefore $Autgr S\cong\bar{\mathbb{F}}^\times\cdot GL_n$. Let $S=SHO^\sim(n,n)$, with $n>2$ even, and let $S_0$ be the principal subalgebra of $S$. 
The group $Aut(Gr S)$ consists of the automorphisms of $SHO'(n,n)$ preserving its principal grading. By the same argument as for $SHO(n,n)$, $Aut(Gr S)\cong \bar{\mathbb{F}}^\times\cdot GL_n$. The subgroup $Autgr S$ consists of the elements in $Aut(Gr S)$ which can be lifted to automorphisms of $S$. Every element in $SL_n$ can be lifted to an automorphism of $S$, since it preserves the form $Fv$ defining the Lie superalgebra $S$. Besides, such automorphisms are inner and act on $S$ via linear changes of variables. On the contrary, the outer automorphisms of $SHO'(n,n)$ do not preserve the form $Fv$ for any $t\in\bar{\mathbb{F}}$, hence they cannot be lifted to any automorphism of $S$. It follows that $Autgr S\cong SL_n$. The argument for $S=SKO^\sim(n,n+1)$ is similar. Consider $S=S(1,2)=\prod_{j\geq -2} \mathfrak{g}_j$ with the grading of type $(2|1,1)$. By Proposition \ref{maximal}$(a)$, $Autgr S\subset\bar{\mathbb{F}}^\times\cdot SO_4$. Notice that $\exp(ad(\mathfrak{g}_0)) \cong \bar{\mathbb{F}}^\times\cdot SL_2$, acting by automorphisms of $S$ via linear changes of indeterminates which preserve the standard volume form $v$ up to multiplication by a non-zero number. Since the algebra of outer derivations of $S$ is isomorphic to $sl_2$, $Autgr S\cong\bar{\mathbb{F}}^\times\cdot SO_4$. Consider the Lie superalgebra $S=SHO(3,3)=\prod_{j\geq -2}\mathfrak{g}_j$ with the grading of type $(2,2,2|1,1,1)$. By Proposition \ref{maximal}$(a)$, $Autgr S\subset\bar{\mathbb{F}}^\times\cdot (SL_3\times SL_2)$. Notice that $\exp(ad(\mathfrak{g}_0)_{\bar{0}})\cong SL_3$, acting by automorphisms of $S$ via linear changes of indeterminates, which preserve the odd supersymplectic form and the volume form $v$ up to multiplication by a non-zero number. Since the algebra of outer derivations of $S$ is isomorphic to $gl_2$, $Autgr S\cong \bar{\mathbb{F}}^\times\cdot (SL_3\times SL_2)$. Consider the Lie superalgebra $S=SKO(2,3;1)=\prod_{j\geq -1}\mathfrak{g}_j$, with its subprincipal grading. In this case $\mathfrak{g}_{-1}=\bar{\mathbb{F}}^{2|2}$, hence, by Proposition \ref{Victor}$(a)$, $Autgr S\subset GL_2\times GL_2$. As we recalled in Section \ref{gradings}, $(\mathfrak{g}_0)_{\bar{0}}\cong sl_2+\bar{\mathbb{F}}$, where $sl_2$ acts trivially on $(\mathfrak{g}_{-1})_{\bar{1}}$ and by the standard action on $(\mathfrak{g}_{-1})_{\bar{0}}$, and $\bar{\mathbb{F}}$ acts as the scalar $-2$ on $\mathfrak{g}_{-2}$. Besides, the algebra of outer derivations of $S$ is isomorphic to $sl_2$; it acts trivially on $(\mathfrak{g}_{-1})_{\bar{0}}$ and by the standard action on $(\mathfrak{g}_{-1})_{\bar{1}}$. It follows that $Autgr S\cong \bar{\mathbb{F}}^\times\cdot (SL_2\times SL_2)$. Consider the Lie superalgebra $S=E(1,6)$ with its principal grading. By Proposition \ref{maximal}$(e)$ with $k=0$ and $n=6$, $Autgr S\subset\bar{\mathbb{F}}^\times\cdot O_6$. Notice that $\exp(ad(\mathfrak{g}_0)_{\bar{0}})\cong \bar{\mathbb{F}}^\times \cdot SO_6$, and the quotient $O_6/SO_6$ is generated by the class of the change of variables $\xi_i\leftrightarrow \eta_i$, which is not an automorphism of $E(1,6)$, since it exchanges the submodules $\mathfrak{g}_1^+$ and $\mathfrak{g}_1^-$ of the $1$-st graded component of $K(1,6)$ in its principal grading (cf.\ \cite[\S 6]{CantaK}). Therefore $Autgr S\cong \bar{\mathbb{F}}^\times\cdot SO_6$. Consider $S=E(3,6)=\prod_{j\geq -2}\mathfrak{g}_j$ with its principal grading. By Proposition \ref{maximal}$(b)$, $Autgr S\subset \bar{\mathbb{F}}^\times\cdot (SL_2\times SL_3)$.
Since $\exp(ad(\mathfrak{g}_0)_{\bar{0}})\cong \bar{\mathbb{F}}^\times\cdot (SL_2\times SL_3)$, the statement follows. The same argument holds for $S=E(3,8)$ in its principal grading. Consider $S=E(4,4)=\prod_{j\geq -1}\mathfrak{g}_j$ with its principal grading. By Proposition \ref{maximal}$(g)$, $Autgr S\subset \bar{\mathbb{F}}^\times\cdot SL_4$. Since $\exp(ad(\mathfrak{g}_0)_{\bar{0}})\cong GL_4$, equality holds. Finally, consider $S=E(5,10)=\prod_{j\geq -2}\mathfrak{g}_j$ with its principal grading. By Proposition \ref{maximal}$(c)$, $Autgr S\subset\bar{\mathbb{F}}^\times\cdot SL_5$. Besides, $\exp(ad(\mathfrak{g}_0))\cong SL_5$. Note that the Lie superalgebra $S$ has an outer derivation acting on $S=\prod_{j\geq -2}\mathfrak{g}_j$ as the grading operator. It follows that $Autgr S\cong GL_5$. \hfill$\Box$ \begin{corollary}\label{outer} Let $S$ be a simple infinite-dimensional linearly compact Lie superalgebra over $\bar{\mathbb{F}}$. Then $Aut S$ is the semidirect product of the group of inner automorphisms of $S$ and the finite-dimensional algebraic group $A$, described below: \begin{itemize} \item[$(a)$] if $S=SHO^\sim(n,n)$ or $SKO^\sim(n,n+1)$, then $A=\{1\}$; \item[$(b)$] if $S$ is a Lie algebra or $S=E(1,6)$, $E(3,6)$, $E(3,8)$, $E(4,4)$, $E(5,10)$, then $A\cong\bar{\mathbb{F}}^\times$; \item[$(c)$] if $S=H(m,n)$ or $K(m,n)$ with $n>0$, then $A\cong\bar{\mathbb{F}}^\times\times\mathbb{Z}_2$; \item[$(d)$] if $S=W(m,n)$ with $(m,n)\neq (1,1)$ and $n>0$, $S(m,n)$ with $m>1$ and $n>0$ or with $m=1$ and $n$ odd, $HO(n,n)$, $SHO(n,n)$ with $n>3$ even, $KO(n,n+1)$, $SKO(n,n+1;\beta)$ with $\beta\neq (n-2)/n,1$, or with $n$ even and $\beta=(n-2)/n$, or with $n$ odd and $\beta=1$, then $A\cong \bar{\mathbb{F}}^{\times 2}$; \item[$(e)$] if $S=S(1,n)$, with $n>2$ even, $SKO(n,n+1;(n-2)/n)$ with $n>2$ odd, $SKO(n,n+1;1)$ with $n>2$ even, or $SHO(n,n)$ with $n>3$ odd, then $A\cong U\rtimes\bar{\mathbb{F}}^{\times 2}$, where $U$ is a one-dimensional unipotent group; \item[$(f)$] if $S=S(1,2)$, then $A\cong \bar{\mathbb{F}}^\times\times SO_3$; \item[$(g)$] if $S=SHO(3,3)$ or $SKO(2,3;1)$, then $A\cong \bar{\mathbb{F}}^{\times}\cdot SL_2$. \end{itemize} \end{corollary} \medskip We shall now investigate the nature of all continuous automorphisms of each simple infinite-dimensional non-exceptional linearly compact Lie superalgebra $S$ over $\bar{\mathbb{F}}$. \begin{lemma}\label{cv} Consider a subalgebra $L$ of $W(m,n)$ and let $D$ be an even element of $L$ lying in the first member of a filtration of $W(m,n)$. Then $D$ lies in the Lie algebra of the group of changes of variables which map $L$ to itself. \end{lemma} {\bf Proof.} Let $D =\sum_i P_i \frac{\partial}{\partial x_i}+\sum_j Q_j \frac{\partial}{\partial \xi_j}$. Then $\exp tD$, when applied to $x_i$ and $\xi_j$, gives convergent series $S_i(t)$ and $R_j(t)$, respectively (in the linearly compact topology), hence the changes of variables $x_i \rightarrow S_i(t)$, $\xi_j\rightarrow R_j(t)$ form a one-parameter group of automorphisms of $W(m,n)$ which preserves $L$. \hfill$\Box$ \begin{theorem}\label{changesof var} Let $S\subset W(m,n)$ be the defining embedding of a non-exceptional simple infinite-dimensional linearly compact Lie superalgebra.
If $S\neq S(1,2)$, $SHO(3,3)$, $SKO(2,3;1)$, and $S$ is defined by an action on a volume form $v$, an even or odd supersymplectic form $\omega_s$, or an even or odd supercontact form $\omega_c$, then all continuous automorphisms of $S$ over $\bar{\mathbb{F}}$ are obtained by invertible changes of variables, multiplying $v$ and $\omega_s$ by a constant and $\omega_c$ by a function. If $S=S(1,2)$, $SHO(3,3)$, or $SKO(2,3;1)$, then all these changes of variables form a subgroup $H$ of $Aut S$ of codimension one. \end{theorem} {\bf Proof.} Let $S=W(m,n)$ with $(m,n)\neq (1,1)$. Then $Der S=S$, hence, by Lemma \ref{cv}, $Autf S$ consists of invertible changes of variables. Besides, $Autgr S$ consists of linear changes of variables, thus, by Proposition \ref{Victor}$(c)$, the statement for $W(m,n)$ follows. Let now $S=S(m,n)$ with $(m,n)\neq (1,2)$. Then $Der S=CS'(m,n) \subset W(m,n)$. By Lemma \ref{cv}, $Autf S$ lies in the group of changes of variables whose Lie algebra kills the volume form $v$ attached to the standard divergence. Hence these changes of variables preserve the form $v$. It is clear that all linear changes of variables multiply the volume form $v$ by a number, hence all of them are automorphisms of $S(m,n)$. Hence $Aut S$ is the group of changes of variables which preserve the volume form up to multiplication by a non-zero number. If $S=S(1,2)$, then, by the same argument as above, the inner automorphisms of $S$ and its automorphisms $t^c$, where $c$ is the grading operator of $S$ with respect to its grading of type $(2|1,1)$ and $t\in\bar{\mathbb{F}}^\times$, are induced by changes of variables which preserve the volume form up to multiplication by a non-zero number. We recall that the algebra $\mathfrak{a}$ of outer derivations of $S$ is isomorphic to $sl_2$, with standard generators $e$, $f$, $h$, where $e=ad(\xi_1\xi_2\frac{\partial} {\partial x})$ and $h=ad(\xi_1\frac{\partial}{\partial\xi_1}+ \xi_2\frac{\partial}{\partial\xi_2})$ (cf.\ \cite[Remark 2.12]{CantaK}). As for $S(m,n)$, $t^h$, where $t\in\bar{\mathbb{F}}^\times$, is obtained by a linear change of variables preserving the volume form up to multiplication by a non-zero number. Besides, the element $\xi_1\xi_2\frac{\partial} {\partial x}$ is contained in the first member of the principal filtration of $S'(1,2)$, thus, by Lemma \ref{cv}, $\exp(e)$ is induced by a change of variables preserving the volume form up to multiplication by a non-zero number. On the other hand, the automorphism $\exp(f)$ cannot be induced by any change of variables, since it does not preserve the principal filtration of $S$. The argument for all other non-exceptional Lie superalgebras is similar. \hfill$\Box$ \begin{remark}\em Let $S=S(1,2)$, $SHO(3,3)$, or $SKO(2,3;1)$. In all these cases the algebra of outer derivations contains $sl_2=\langle e,h,f\rangle$. Denote by $U_{-}$ the one-parameter group of automorphisms $\exp(ad(tf))$, where $f$ is explicitly described in \cite[Remarks 2.12, 2.37, 4.15]{CantaK}. Then $U_{-}$ is a subgroup of $Aut S$ ``complementary'' to $H$, namely, for every $\varphi\in Aut S$, either $\varphi\in U_{-}H$ or $\varphi\in U_{-}sH$, where $s$ is the reflection $s=\exp(e)\exp(-f)\exp(e)$. \end{remark} \section{$\boldsymbol{\mathbb{F}}$-Forms} Let ${\mathbb{F}}$ be a field of characteristic zero and let $\bar{\mathbb{F}}$ be its algebraic closure. \begin{definition} Let $L$ be a Lie superalgebra over $\bar{\mathbb{F}}$.
A Lie superalgebra $L^\mathbb{F}$ over $\mathbb{F}$ is called an $\mathbb{F}$-form of $L$ if $L^\mathbb{F}\otimes_\mathbb{F}\bar{\mathbb{F}}\cong L$. \end{definition} Denote by $Gal$ the Galois group of $\mathbb{F}\subset \bar{\mathbb{F}}$. Then $Gal$ acts on $Aut L$ as follows: $$\alpha.\varphi:=\varphi^{\alpha}=\alpha\varphi\alpha^{-1}, ~~\alpha\in Gal, ~\varphi\in Aut L.$$ To any $\mathbb{F}$-form $L^\mathbb{F}$ of $L$, i.e., to any isomorphism $\phi: {L^\mathbb{F}}\otimes_\mathbb{F}\bar{\mathbb{F}}\rightarrow L$, we can associate the map $\gamma:Gal\rightarrow Aut L$, $\gamma_{\alpha}:=\phi^\alpha\phi^{-1}$. The map $\gamma$ satisfies the cocycle condition, i.e., $\gamma_{\alpha\beta}=\gamma_{\beta}^\alpha\gamma_{\alpha}$. Two cocycles $\gamma$ and $\delta$ are equivalent if and only if there exists an element $\psi\in Aut L$ such that $\gamma_{\alpha}=(\psi^{-1})^\alpha\delta_{\alpha}\psi$. It follows that equivalent cocycles correspond to isomorphic $\mathbb{F}$-forms. \begin{proposition}\label{basic} The map $\phi\longmapsto \{\alpha\mapsto\phi^\alpha\phi^{-1}\}$ induces a bijection between the set of isomorphism classes of $\mathbb{F}$-forms of $L$ and $H^1(Gal, Aut L)$. \end{proposition} {\bf Proof.} For a proof see \cite[\S 4]{R}. \medskip We recall the following standard result (cf.\ \cite[\S VII.2]{Serre1}): \begin{proposition}\label{longexact} If $K$ is a group and $A$, $B$, $C$ are groups with an action of $K$ by automorphisms, related by an exact sequence: $$1\rightarrow A\rightarrow B\rightarrow C\rightarrow 1,$$ then there is a cohomology long exact sequence: $$1\rightarrow H^0(K,A)\rightarrow H^0(K,B)\rightarrow H^0(K,C) \rightarrow H^1(K,A)\rightarrow H^1(K,B)\rightarrow H^1(K,C),$$ where the first three maps are group homomorphisms, and the last three are maps of pointed sets. \end{proposition} \begin{proposition}\label{reduction} Let $S$ be a simple infinite-dimensional linearly compact Lie superalgebra over $\bar{\mathbb{F}}$. Then the map $j: Aut S \rightarrow Autgr S$ induces an embedding $$j_{*}: H^1(Gal, Aut S)\longrightarrow H^1(Gal, Autgr S).$$ \end{proposition} {\bf Proof.} The same arguments as in \cite[Proposition 4.2]{R} show that $H^1(Gal, Autf S)=0$. Then the statement follows from exact sequence (\ref{exact}) in Section \ref{general} and Proposition \ref{longexact}. \hfill$\Box$ \bigskip We recall the following well-known results on Galois cohomology. All details can be found in \cite[\S ~X]{Serre1} and \cite[\S ~III Annexe]{Serre2}. \begin{theorem}\label{results} \begin{enumerate} \item[$(a)$] $H^1(Gal, \bar{\mathbb{F}}^\times)=1$; \item[$(b)$] $H^1(Gal, GL_n(\bar{\mathbb{F}}))=1$; \item[$(c)$] $H^1(Gal, SL_n(\bar{\mathbb{F}}))=1$; \item[$(d)$] $H^1(Gal, Sp_n(\bar{\mathbb{F}}))=1$; \item[$(e)$] if $q$ is a non-degenerate quadratic form over $\mathbb{F}$, then there exists a bijection between $H^1(Gal, O_n(q,\bar{\mathbb{F}}))$ and the set of classes of $\mathbb{F}$-quadratic forms which are $\bar{\mathbb{F}}$-isomorphic to $q$; \item[$(f)$] if $q$ is a non-degenerate quadratic form over $\mathbb{F}$, then there exists a bijection between $H^1(Gal, SO_n(q,\bar{\mathbb{F}}))$ and the set of classes of $\mathbb{F}$-quadratic forms ~$q'$ which are $\bar{\mathbb{F}}$-isomorphic to $q$ and such that $\det(q')/\det(q)\in(\mathbb{F}^\times)^2$.
\end{enumerate} \end{theorem} \begin{lemma}\label{almostdirect} Let $G$ be an almost direct product over $\mathbb{F}$ of $\bar{\mathbb{F}}^\times$ and an algebraic group $G_1$, and let $C=\bar{\mathbb{F}}^\times\cap G_1(\bar{\mathbb{F}})$ be a cyclic group of order $k$. Then we have the following exact sequence: \begin{equation} 1\rightarrow \mathbb{F}^\times/(\mathbb{F}^\times)^k\rightarrow H^1(Gal,G_1)\rightarrow H^1(Gal,G)\rightarrow 1. \label{!} \end{equation} In particular, if $H^1(Gal,G_1)=1$, then $H^1(Gal,G)=1$. \end{lemma} {\bf Proof.} We have the following exact sequence: $$1\rightarrow G_1\rightarrow G\stackrel{\pi}{\rightarrow} \bar{\mathbb{F}}^\times\rightarrow 1,$$ where $\pi: G\rightarrow G/G_1\cong\bar{\mathbb{F}}^\times/C\rightarrow \bar{\mathbb{F}}^\times$ is the composition of the canonical map of $G$ to $G/G_1$ and the map $x\mapsto x^k$ from $\bar{\mathbb{F}}^\times/C$ to $\bar{\mathbb{F}}^\times$. By Proposition \ref{longexact}, we get the following exact sequence: $$1\rightarrow \mathbb{F}^\times/C\rightarrow\mathbb{F}^\times\rightarrow H^1(Gal,G_1)\rightarrow H^1(Gal,G)\rightarrow 1.$$ This implies exact sequence (\ref{!}). \hfill$\Box$ \bigskip We fix the $\mathbb{F}$-form $S^\mathbb{F}$ of each simple infinite-dimensional linearly compact Lie superalgebra $S$ over $\bar{\mathbb{F}}$, defined by the same conditions as in \cite{CantaK}, but over $\mathbb{F}$ (in the case of $SKO(n,n+1;\beta)$ we need to assume that $\beta\in\mathbb{F}$). This is called the {\em split} $\mathbb{F}$-form of $S$. In more invariant terms, this $\mathbb{F}$-form is characterized by the condition that it contains a split maximal torus $T$ (i.e.\ $T$ is $ad$-diagonalizable over $\mathbb{F}$ and $T \otimes_\mathbb{F}\bar{\mathbb{F}}$ is a maximal torus of $S$). \begin{theorem} Let $S$ be a simple infinite-dimensional linearly compact Lie superalgebra over $\bar{\mathbb{F}}$ not isomorphic to $H(m,n)$, $K(m,n)$, $E(1,6)$, or $S(1,2)$. Then any $\mathbb{F}$-form of $S$ is isomorphic to the split $\mathbb{F}$-form. \end{theorem} {\bf Proof.} The statement follows from Propositions \ref{basic}, \ref{reduction} and the description of the group $Autgr S$ given in Theorem \ref{AutgrS}, using Theorem \ref{results} and Lemma \ref{almostdirect}. \hfill$\Box$ \begin{remark}\label{HeK}\em Let $S=H(2k,n)$ or $S=K(2k+1,n)$. Then, according to Table 1 and Lemma \ref{almostdirect}, we have the exact sequence $$1\rightarrow \mathbb{F}^\times/(\mathbb{F}^\times)^2\rightarrow H^1(Gal, Sp_{2k}\times O_n)\rightarrow H^1(Gal, Autgr S)\rightarrow 1.$$ Here and further, $Sp_{2k}=Sp_{2k}(\bar{\mathbb{F}})$ and $O_n\subset GL_n(\bar{\mathbb{F}})$ is the orthogonal group over $\bar{\mathbb{F}}$ which leaves invariant the quadratic form $\sum_{i=1}^nx_ix_{n-i+1}$.
Since $H^1(Gal, G_1\times G_2)\cong H^1(Gal, G_1)\times H^1(Gal, G_2)$, by Theorem \ref{results}$(d)$, $H^1(Gal, Sp_{2k}\times O_n)\cong H^1(Gal, O_n)$, hence we have the exact sequence $$1\rightarrow \mathbb{F}^\times/(\mathbb{F}^\times)^2\rightarrow H^1(Gal, O_n)\rightarrow H^1(Gal, Autgr S)\rightarrow 1.$$ \end{remark} \medskip Given a non-degenerate quadratic form $q$ over $\mathbb{F}$ in $n$ indeterminates, associated to a symmetric matrix $c=(c_{ij})$, introduce the following supersymplectic and supercontact differential forms $\sigma_q$ and $\Sigma_q$: $$\sigma_q = \sum_{i=1}^k dp_i\wedge dq_i+ \sum_{i,j=1}^n c_{ij} d\xi_i d\xi_j,$$ $$\Sigma_q= dt+\sum_{i=1}^k (p_idq_i-q_idp_i)+\sum_{i,j=1}^n c_{ij} \xi_i d\xi_j.$$ \begin{theorem}\label{H(m,n)} $(a)$ Any $\mathbb{F}$-form of the Lie superalgebra $S=H(2k,n)$ is isomorphic to one of the Lie superalgebras $H_q(2k,n):=\{X\in W(2k,n)^{\mathbb{F}}~|~X\sigma_q=0\}$. \noindent $(b)$ Any $\mathbb{F}$-form of the Lie superalgebra $S=K(2k+1,n)$ is isomorphic to one of the Lie superalgebras $K_q(2k+1,n):=\{X\in W(2k+1,n)^{\mathbb{F}}~|~X\Sigma_q=f\Sigma_q ~\mbox{for some function}~ f\}$. Two such $\mathbb{F}$-forms $S_q$ and $S_{q'}$ of $S$ are isomorphic if and only if $q$ and $q'$ are equivalent non-degenerate quadratic forms over $\mathbb{F}$, up to multiplication by a non-zero scalar in $\mathbb{F}$. \end{theorem} {\bf Proof.} It is easy to see that every non-degenerate quadratic form $q$ over $\mathbb{F}$, with matrix $c=(c_{ij})$, gives rise to the $\mathbb{F}$-forms $H_q(2k,n)$ and $K_q(2k+1,n)$ of the Lie superalgebras $S=H(2k,n)$ and $S=K(2k+1,n)$, respectively, attached to the corresponding cocycles. By construction, equivalent quadratic forms give rise to isomorphic $\mathbb{F}$-forms of $S$. Besides, if $\lambda\in\mathbb{F}^\times$ and $q'$ is the quadratic form associated to the matrix $\lambda c$, then $S_q\cong S_{q'}$, and the isomorphism is given by the following change of variables: $$p_i\mapsto \lambda^{-1}p_i, ~~q_i\mapsto q_i, ~~\xi_i\mapsto \xi_i, ~~~\mbox{if}~ S=H(2k,n)$$ $$t\mapsto\lambda^{-1}t, ~~p_i\mapsto \lambda^{-1}p_i, ~~q_i\mapsto q_i, ~~\xi_i\mapsto \xi_i, ~~~\mbox{if}~ S=K(2k+1,n).$$ The $\mathbb{F}$-forms $S_q$ exhaust all $\mathbb{F}$-forms of the Lie superalgebra $S$, due to Proposition \ref{basic}, Theorem \ref{AutgrS}, Remark \ref{HeK} and Theorem \ref{results}$(e)$. \hfill$\Box$ \begin{example}\label{realE(1,6)}\em Consider the $\mathbb{F}$-form $K_q(1,6)$ of $K(1,6)$ corresponding to the supercontact form $\Sigma_q=dt+\sum_{i,j=1}^6c_{ij}\xi_id\xi_j$. Then the principal grading of $K(1,6)$ induces an irreducible grading on $K_q(1,6)$: $K_q(1,6)=\prod_{j\geq -2}\mathfrak{g}_j$, where $\mathfrak{g}_0=\h\oplus\mathbb{F}$, $\h$ is an $\mathbb{F}$-form of $so_6(\bar{\mathbb{F}})$, $\mathfrak{g}_{-1}\cong\mathbb{F}^6$, and $\mathfrak{g}_1=\mathfrak{g}_{-1}^*\oplus \Lambda^3(\mathbb{F}^6)$. Let $d$ be the discriminant of the quadratic form $q$. If $-d\in(\mathbb{F}^\times)^2$, then the $\mathfrak{g}_0$-module $\Lambda^3(\mathbb{F}^6)$ is not irreducible, and decomposes over $\mathbb{F}$ into the direct sum of two $\mathfrak{g}_0$-submodules $\mathfrak{g}_1^+$ and $\mathfrak{g}_1^-$, which are the eigenspaces of the Hodge operator ${}^*$, see Example \ref{CK6} below (they are obtained from one another by an automorphism of $\mathfrak{g}_0$).
It follows that we can define an $\mathbb{F}$-form $E_q(1,6)$ of the Lie superalgebra $E(1,6)$ by repeating the same construction as the one described in Section \ref{gradings}, namely, $E_q(1,6)$ will be the graded subalgebra of $K_q(1,6)$ generated by $\mathfrak{g}_{-1}+\mathfrak{g}_0+(\mathfrak{g}_{-1}^*+\mathfrak{g}_1^+)$. \end{example} \begin{theorem}\label{FformsofE(1,6)} Any $\mathbb{F}$-form of the Lie superalgebra $S=E(1,6)$ is isomorphic to one of the Lie superalgebras $E_q(1,6)$ constructed in Example \ref{realE(1,6)}, where $q$ is a non-degenerate quadratic form over $\mathbb{F}$ in six indeterminates, with discriminant $d\in-(\mathbb{F}^\times)^2$. Two such $\mathbb{F}$-forms $E_q(1,6)$ and $E_{q'}(1,6)$ of $E(1,6)$ are isomorphic if and only if the quadratic forms $q$ and $q'$ are equivalent, up to multiplication by a non-zero scalar in $\mathbb{F}$. \end{theorem} {\bf Proof.} By Lemma \ref{almostdirect} and Theorem \ref{AutgrS}, we have the exact sequence $$1\rightarrow \mathbb{F}^\times/(\mathbb{F}^\times)^2\rightarrow H^1(Gal, SO_{6})\rightarrow H^1(Gal, Autgr S)\rightarrow 1.$$ The statement follows, due to Proposition \ref{basic}, Theorem \ref{results}$(f)$ and the proof of Theorem \ref{H(m,n)}. \hfill$\Box$ \begin{remark}\label{S(1,2)inK(1,4)}\em Consider the Lie superalgebra $K(1,4)=\prod_{j\geq -2}\mathfrak{g}_j$ over $\bar{\mathbb{F}}$ with respect to its principal grading. Then: $\mathfrak{g}_0\cong cso_4=so_4+\bar{\mathbb{F}}t$, $\mathfrak{g}_{-1}\cong\bar{\mathbb{F}}^4$ and $\mathfrak{g}_{-2}=[\mathfrak{g}_{-1}, \mathfrak{g}_{-1}]\cong\bar{\mathbb{F}}$. Besides, $\mathfrak{g}_1=V_1\oplus V_{-1}$, where for every $\lambda=\pm 1$, $V_{\lambda}$ is isomorphic to the standard $so_4$-module, $[V_{\lambda}, V_{\lambda}]$ is isomorphic to the trivial $so_4$-module $\bar{\mathbb{F}}$, and $\mathfrak{g}_{-2}+\mathfrak{g}_{-1}+gl_2+V_{\lambda}+[V_{\lambda},V_{\lambda}]\cong sl(2,2)/\bar{\mathbb{F}}$. Finally, $\mathfrak{g}_2=\bar{\mathbb{F}}\oplus so_4\oplus\bar{\mathbb{F}}$, where $so_4$ and $\bar{\mathbb{F}}$ denote the adjoint and the trivial $so_4$-module, respectively. Here $t$ acts as the grading operator. The Lie superalgebra $S(1,2)$ over $\bar{\mathbb{F}}$ is the subalgebra of $K(1,4)$ generated by $\h_{-1}\oplus \h_0\oplus \h_1\oplus\h_2$, where $\h_{-1}=\mathfrak{g}_{-1}$, $\h_0=sl_2+\bar{\mathbb{F}}t$, $\h_1=V_1$ and $\h_2=\bar{\mathbb{F}}+sl_2$, where $sl_2$ denotes the adjoint $sl_2$-module (see also \cite[Remark 2.33]{CantaK}). \end{remark} \begin{example}\label{S(1,2)}\em Consider a Lie superalgebra $\mathfrak{g}_{-}$ over $\mathbb{F}$ with consistent $\Z$-grading $\mathfrak{g}_{-}=\mathfrak{g}_{-2}+\mathfrak{g}_{-1}+\mathfrak{g}_0$, where $\mathfrak{g}_0=\h\oplus\mathbb{F}$, $\h$ is an $\mathbb{F}$-form of $sl_2$, $\mathfrak{g}_{-2}=\mathbb{F} z$, and $\mathfrak{g}_{-1}$ is a four-dimensional $[\mathfrak{g}_0, \mathfrak{g}_0]$-module such that $\mathfrak{g}_{-1}\otimes_\mathbb{F}\bar{\mathbb{F}}$ is the direct sum of two copies of the standard $sl_2$-module, and where the bracket in $\mathfrak{g}_{-1}$ is defined as follows: $$[a,b]=q(a,b)z,$$ where $q$ is a non-degenerate bilinear form on $\mathfrak{g}_{-1}$ over $\mathbb{F}$, which is symmetric, i.e., $q(a,b)=q(b,a)$. Such a superalgebra $\mathfrak{g}_{-}$ exists if and only if the discriminant $d$ of the quadratic form $q$ lies in $(\mathbb{F}^\times)^2$. Let $S_q(1,2)$ denote the full prolongation of $\mathfrak{g}_{-}$ over $\mathbb{F}$ (see \cite[\S 1.6]{CK}).
Then $S_q(1,2)$ is an $\mathbb{F}$-form of $S(1,2)$. \end{example} \begin{theorem}\label{Fformsof S(1,2)} Any $\mathbb{F}$-form of the Lie superalgebra $S=S(1,2)$ is isomorphic to one of the Lie superalgebras $S_q(1,2)$ constructed in Example \ref{S(1,2)}, where $q$ is a non-degenerate quadratic form over $\mathbb{F}$ in four indeterminates, with discriminant $d\in(\mathbb{F}^\times)^2$. Two such $\mathbb{F}$-forms $S_q(1,2)$ and $S_{q'}(1,2)$ of $S(1,2)$ are isomorphic if and only if the quadratic forms $q$ and $q'$ are equivalent, up to multiplication by a non-zero scalar in $\mathbb{F}$. \end{theorem} {\bf Proof.} By Lemma \ref{almostdirect} and Theorem \ref{AutgrS}, we have the exact sequence $$1\rightarrow \mathbb{F}^\times/(\mathbb{F}^\times)^2\rightarrow H^1(Gal, SO_{4})\rightarrow H^1(Gal, Autgr S)\rightarrow 1.$$ The statement follows, due to Proposition \ref{basic}, Theorem \ref{results}$(f)$ and the proof of Theorem \ref{H(m,n)}. \hfill$\Box$ \bigskip We summarize the results of this section in the following theorem: \begin{theorem}\label{summary} Let $S$ be a simple infinite-dimensional linearly compact Lie superalgebra over $\bar{\mathbb{F}}$. If $S$ is not isomorphic to $H(m,n)$, $K(m,n)$, $E(1,6)$, or $S(1,2)$, then the split $\mathbb{F}$-form $S^\mathbb{F}$ is, up to isomorphism, the unique $\mathbb{F}$-form of $S$. In the remaining four cases, all $\mathbb{F}$-forms of $S$ are, up to isomorphism, as follows: \begin{itemize} \item[$(a)$] the Lie superalgebras $H_q(m,n):=\{X\in W(m,n)^{\mathbb{F}}~|~X\sigma_q=0\}$, where $\sigma_q$ is a supersymplectic differential form over $\mathbb{F}$, if $S=H(m,n)$; \item[$(b)$] the Lie superalgebras $K_q(m,n):=\{X\in W(m,n)^{\mathbb{F}}~|~X\Sigma_q=f\Sigma_q ~\mbox{for some function}~ f\}$, where $\Sigma_q$ is a supercontact differential form over $\mathbb{F}$, if $S=K(m,n)$; \item[$(c)$] the Lie superalgebras $E_q(1,6)$ constructed in Example \ref{realE(1,6)}, where $q$ is a non-degenerate quadratic form over $\mathbb{F}$ in six indeterminates with discriminant $d\in-(\mathbb{F}^\times)^2$, if $S=E(1,6)$; \item[$(d)$] the Lie superalgebras $S_q(1,2)$ constructed in Example \ref{S(1,2)}, where $q$ is a non-degenerate quadratic form over $\mathbb{F}$ in four indeterminates with discriminant $d\in(\mathbb{F}^\times)^2$, if $S=S(1,2)$. \end{itemize} The isomorphisms between these $\mathbb{F}$-forms are described in Theorems \ref{H(m,n)}, \ref{FformsofE(1,6)}, \ref{Fformsof S(1,2)}. \end{theorem} \begin{remark}\label{real}\em It follows immediately from Theorem \ref{summary} that a simple infinite-dimensional linearly compact Lie superalgebra $S$ over $\mathbb{C}$ has, up to isomorphism, one real form if $S$ is not isomorphic to $H(m,n)$, $K(m,n)$, $E(1,6)$, or $S(1,2)$, two real forms if $S$ is isomorphic to $E(1,6)$ or $S(1,2)$, and $[n/2]+1$ real forms if $S$ is isomorphic to $H(m,n)$ or $K(m,n)$. \end{remark} \section{Finite Simple Lie Conformal Superalgebras} In this section we use the theory of Lie conformal superalgebras in order to give an explicit construction of all non-split forms of all simple infinite-dimensional linearly compact Lie superalgebras. We conclude the section with the related classification of all $\mathbb{F}$-forms of all simple finite Lie conformal superalgebras. We briefly recall the definition of a Lie conformal superalgebra and of its annihilation algebra. For notation, definitions and results on Lie conformal superalgebras we refer to \cite{DK}, \cite{FK} and \cite{V}.
A Lie conformal superalgebra $R$ over $\mathbb{F}$ is a $\Z/2\Z$-graded left $\mathbb{F}[\partial]$-module endowed with an $\mathbb{F}$-linear map, called the $\lambda$-bracket, $$R\otimes R\rightarrow \mathbb{F}[\lambda]\otimes R, ~a\otimes b\mapsto[a_\lambda b],$$ satisfying the axioms of sesquilinearity, skew-commutativity, and the Jacobi identity. One writes $[a_\lambda b]=\sum_{n\in\Z_+}\frac{\lambda^n}{n!}(a_{(n)}b)$; the coefficient $(a_{(n)}b)$ is called the $n$-th product of $a$ and $b$. A Lie conformal superalgebra $R$ is called {\em finite} if it is finitely generated as an $\mathbb{F}[\partial]$-module. Given a finite Lie conformal superalgebra $R$, we can associate to it a linearly compact Lie superalgebra $L(R)$ as follows. Consider the Lie conformal superalgebra $R[[t]]$, where $t$ is an even indeterminate, the $\partial$-action is defined by $\partial+\partial_t$, and the $n$-th products are defined by: $$a(t)_{(n)}b(t):=\sum_{j\geq 0}(\partial_t^j a(t))_{(n+j)}b(t)/j!,$$ where $a(t), b(t)\in R[[t]]$, and the $n$-th products on the right are extended from $R$ to $R[[t]]$ by bilinearity. Then $(\partial+\partial_t)R[[t]]$ is a two-sided ideal of $R[[t]]$ with respect to the $0$-th product, and this product induces a Lie superalgebra bracket on $L(R):=R[[t]]/(\partial +\partial_t)R[[t]]$. The linearly compact Lie superalgebra $L(R)$ is called the {\em annihilation algebra} of $R$. For the classification of finite simple Lie conformal superalgebras over an algebraically closed field $\bar{\mathbb{F}}$ of characteristic zero we refer to \cite{FK}. The list consists of four series ($N\in\Z_+$): $W_N$, $S_{N+2,a}$, $\tilde S_{N+2}$, $K_N$ ($N \neq 4$), $K'_4$, the exceptional Lie conformal superalgebra $CK_6$ of rank 32, and $Cur \mathfrak{s}$, where $\mathfrak{s}$ is a simple finite-dimensional Lie superalgebra. \begin{example}\label{K_N}\em Let $V$ be an $N$-dimensional vector space over $\mathbb{F}$ with a non-degenerate symmetric bilinear form $q$. The Lie conformal superalgebra $K_{N,q}$ associated to $V$ is $\mathbb{F}[\partial]\Lambda(V)$ with $\lambda$-bracket: \begin{equation} [A_{\lambda} B]=(\frac{r}{2} -1)\partial(AB) + (-1)^r \frac{1}{2} \sum_{j=1}^N (i_{a_j}A)(i_{b_j}B) +\lambda (\frac{r+s}{2} -2) AB, \label{lambdabracket} \end{equation} where $A,B \in \Lambda (V)$, $r=\deg(A)$, $s=\deg(B)$, $a_i,b_i \in V$, $q(a_i,b_j)=\delta_{i,j}$, and $i_a$, for $a \in V$, denotes the contraction with $a$, i.e., $i_a$ is the odd derivation of $\Lambda(V)$ defined by: $i_a(b)=q(a,b)$ for $b \in V$ (cf.\ \cite[Example 3.8]{FK}). The annihilation algebra of $K_{N,q}$ is isomorphic to the Lie superalgebra $K_q(1,N)$ defined in Theorem \ref{H(m,n)}$(b)$. The Lie conformal superalgebra $K_{N,q}$ is an $\mathbb{F}$-form of the finite simple Lie conformal superalgebra $K_N$. \end{example} \begin{example}\label{CK6}\em Let $V$ be an $N$-dimensional vector space over $\mathbb{F}$ with a non-degenerate symmetric bilinear form $q$, and let $K_{N,q}$ be the Lie conformal superalgebra over $\mathbb{F}$ constructed in Example \ref{K_N}. Choose a basis $\xi_1, \dots, \xi_N$ of $V$, and let ${}^*$ denote the Hodge star operator on $V$ associated to the form $q$, i.e., $$(\xi_{j_1} \wedge \xi_{j_2} \wedge\dots \wedge\xi_{j_k})^*= i_{\xi_{j_1}}i_{\xi_{j_2}}\dots i_{\xi_{j_k}} (\xi_1\wedge\dots \wedge \xi_N).$$ It is easy to check that, for every $a\in\Lambda(V)$, $(a^*)^*=(-1)^{N(N-1)/2}\det(q)a$. Let $N=6$ and choose $\alpha\in \mathbb{F}$ such that $\alpha^2=-1/\det(q)$.
Consider the following elements in $\mathbb{F}[\partial]\Lambda(V)$: $$-1+\alpha\partial^3 1^*, ~\xi_i\xi_j+\alpha\partial(\xi_i\xi_j)^*, ~\xi_{i}-\alpha\partial^2\xi_{i}^*, ~\xi_i\xi_j\xi_k+\alpha(\xi_i\xi_j\xi_k)^*.$$ It is easy to check that the $\mathbb{F}[\partial]$-span of these elements is closed under the $\lambda$-bracket (\ref{lambdabracket}), hence it is an $\mathbb{F}$-form $CK_{6,q}$ of the Lie conformal subalgebra $CK_6$ of $K_6$ (cf.\ \cite[Theorem 3.1]{ChengK}). Likewise, if $N=4$ and $\beta^2=1/\det(q)$, the $\mathbb{F}[\partial]$-span of the elements: $$-1-\beta\partial^2 1^*, ~\xi_i\xi_j-\beta(\xi_i\xi_j)^*, ~\xi_{i}+\beta\partial\xi_{i}^*$$ is closed under the $\lambda$-bracket (\ref{lambdabracket}). It follows that the $\mathbb{F}[\partial]$-span of these elements is a subalgebra $S_{2,q}$ of $K_{4,q}$, which is an $\mathbb{F}$-form of the Lie conformal superalgebra $S_{2,0}$ (cf.\ \cite[Remark p.\ 225]{ChengK}). \end{example} \begin{remark}\label{explicit}\em The annihilation algebras of the Lie conformal superalgebras $CK_{6,q}$ and $S_{2,q}$, constructed in Example \ref{CK6}, are the Lie superalgebras $E_q(1,6)$ and $S_q(1,2)$, constructed in Examples \ref{realE(1,6)} and \ref{S(1,2)}, respectively. Due to Theorems \ref{FformsofE(1,6)} and \ref{Fformsof S(1,2)}, Example \ref{CK6} provides an explicit construction of all $\mathbb{F}$-forms of the Lie superalgebras $E(1,6)$ and $S(1,2)$. \end{remark} We conclude by classifying all $\mathbb{F}$-forms of all simple finite Lie conformal superalgebras over $\bar{\mathbb{F}}$. The following theorem can be derived from \cite[Remark 3.1]{FKR}. \begin{theorem}\label{conformalsummary} Let $R$ be a simple finite Lie conformal superalgebra over $\bar{\mathbb{F}}$. If $R$ is not isomorphic to $S_{2,0}$, $K_N$, $K'_4$, $CK_6$ or $Cur \mathfrak{s}$, then there exists, up to isomorphism, a unique $\mathbb{F}$-form of $R$ (in the case $R=S_{N,a}$, we have to assume that $a\in\mathbb{F}$ for such a form to exist). In the remaining cases, all $\mathbb{F}$-forms of $R$ are as follows: \begin{itemize} \item the Lie conformal superalgebras $K_{N,q}$ if $R$ is isomorphic to $K_N$; \item the Lie conformal superalgebras $CK_{6,q}$ if $R$ is isomorphic to $CK_6$; \item the Lie conformal superalgebras $S_{2,q}$ if $R$ is isomorphic to $S_{2,0}$; \item the derived algebras of the Lie conformal superalgebras $K_{4,q}$ if $R$ is isomorphic to $K'_4$; \item the Lie conformal superalgebras $Cur {\mathfrak s}^\mathbb{F}$, where $\mathfrak{s}^\mathbb{F}$ is an $\mathbb{F}$-form of the Lie superalgebra $\mathfrak{s}$, if $R$ is isomorphic to $Cur \mathfrak{s}$. \end{itemize} \end{theorem} \medskip \section*{Acknowledgments} We would like to acknowledge the help of A.\ Maffei and E.\ B.\ Vinberg. \medskip
\begin{document} \title[Boussinesq Equations]{Global Well-posedness for the 2D Boussinesq System Without Heat Diffusion and With Either Anisotropic Viscosity or Inviscid Voigt-$\alpha$ Regularization} \date{October 24, 2010.} \author{Adam Larios} \address[Adam Larios]{Department of Mathematics\\ University of California, Irvine\\ Irvine CA 92697-3875, USA} \email[Adam Larios]{alarios@math.uci.edu} \author{Evelyn Lunasin} \address[Evelyn Lunasin]{Department of Mathematics\\ University of Michigan\\ Ann Arbor, MI 48104 USA} \email[Evelyn Lunasin]{lunasin@umich.edu} \author{Edriss S. Titi} \address[Edriss S. Titi]{Department of Mathematics, and Department of Mechanical and Aerospace Engineering\\ University of California, Irvine\\ Irvine CA 92697-3875, USA.\\ Also The Department of Computer Science and Applied Mathematics\\ The Weizmann Institute of Science, Rehovot 76100, Israel} \email[Edriss S. Titi]{etiti@math.uci.edu and edriss.titi@weizmann.ac.il} \keywords{Anisotropic Boussinesq equations, Boussinesq equations, Voigt-$\alpha$ regularization} \thanks{MSC 2010 Classification: 35Q35; 76B03; 76D03; 76D09} \begin{abstract} We establish global existence and uniqueness theorems for the two-dimensional non-diffusive Boussinesq system with viscosity only in the horizontal direction, which arises in ocean dynamics. This work improves the global well-posedness results established recently by R. Danchin and M. Paicu for the Boussinesq system with anisotropic viscosity and zero diffusion. Although we follow some of their ideas, in proving the uniqueness result we use an alternative approach, writing the transported temperature (density) as $\theta = \Delta\xi$ and adapting the techniques of V. Yudovich for the 2D incompressible Euler equations. This new idea allows us to establish uniqueness results with fewer assumptions on the initial data for the transported quantity $\theta$. Furthermore, this new technique allows us to establish uniqueness results without having to resort to the paraproduct calculus of J. Bony. We also propose an inviscid $\alpha$-regularization for the two-dimensional inviscid, non-diffusive Boussinesq system of equations, which we call the Boussinesq-Voigt equations. Global regularity of this system is established. Moreover, we establish the convergence of solutions of the Boussinesq-Voigt model to the corresponding solutions of the two-dimensional Boussinesq system of equations for inviscid flow without heat (density) diffusion, on the interval of existence of the latter. Furthermore, we derive a criterion for finite-time blow-up of the solutions to the inviscid, non-diffusive 2D Boussinesq system based on this inviscid Voigt regularization. Finally, we propose a Voigt-$\alpha$ regularization for the inviscid 3D Boussinesq equations with diffusion, and prove its global well-posedness. It is worth mentioning that our results are also valid in the presence of the $\beta$-plane approximation of the Coriolis force.
\end{abstract} \maketitle \thispagestyle{empty} \section{Introduction}\label{sec:Int} The $d$-dimensional Boussinesq system of ocean and atmosphere dynamics (without rotation) in a domain $\Omega\subset\field{R}^d$ over the time interval $[0,T]$ is given by \begin{subequations}\label{bouss} \begin{alignat}{2} \label{bouss_mo} \partial_t\vect{u} + \sum_{j=1}^d\partial_j(u^j \vect{u}) &=-\nabla p + \theta \vect{e}_d + \nu\triangle\vect{u}, \qquad&& \text{in }\Omega\times[0,T],\\ \label{bouss_div} \nabla \cdot \vect{u} &=0, \qquad&& \text{in }\Omega\times[0,T],\\ \label{bouss_den} \partial_t\theta + \nabla\cdot(\vect{u} \theta) &=\kappa\triangle\theta, \qquad&& \text{in }\Omega\times[0,T],\\ \label{bouss_IC} \vect{u}(\vect{x},0)&=\vect{u}_0(\vect{x}),\quad\theta(\vect{x},0)=\theta_0(\vect{x}), \qquad&& \text{in }\Omega, \end{alignat} \end{subequations} with appropriate boundary conditions (discussed below). Here $\nu\geq0$ is the fluid viscosity and $\kappa\geq0$ is the diffusion coefficient. The spatial variable is denoted $\vect{x}=(x^1,\ldots,x^d)\in\Omega$, and the unknowns are the fluid velocity field $\vect{u}\equiv\vect{u}(\vect{x},t)\equiv(u^1(\vect{x},t),\ldots,u^d(\vect{x},t))$, the fluid pressure $p(\vect{x},t)$, and the function $\theta\equiv\theta(\vect{x},t)$, which may be interpreted physically as a thermal variable (e.g., when $\kappa>0$) or a density variable (e.g., when $\kappa=0$). We write $\vect{e}_d=(0,\ldots,0,1)$ for the $d^{\text{th}}$ standard basis vector in $\field{R}^d$. We use the notation $P^0_{\nu,\kappa}$ for the Boussinesq system with viscosity $\nu>0$ and with diffusion $\kappa>0$. We attach a subscript $x$ to the viscosity $\nu$ when we mean that the viscosity occurs in the horizontal direction only, i.e., in the case of anisotropic viscosity (see equations \eqref{bouss_aniso} below). The superscript (here zero) is reserved for a parameter $\alpha$, introduced below. In two dimensions, the global regularity in time of the problem $P^0_{\nu,\kappa}$ is well-known (see, e.g., \cite{Cannon_DiBenedetto_1980, Temam_1997_IDDS}), and follows essentially from the classical methods for the Navier-Stokes equations. However, in the case $\nu=0$, $\kappa=0$ ($P^0_{0,0}$), global existence and uniqueness still remain open problems (see, e.g., \cite{Chae_2005,Chae_Nam_1997} for studies in this direction). The local existence and uniqueness of classical solutions to $P_{0,0}^0$ were established in \cite{Chae_Nam_1997}, assuming the initial data $(\vect{u}_0,\theta_0)\in H^3\times H^3$. In particular, an analogue of the Beale-Kato-Majda criterion for blow-up of smooth solutions is established in \cite{Chae_Nam_1997} for the inviscid, non-diffusive Boussinesq system; namely, that the smooth solution exists on $[0,T]$ if and only if $\int_0^T\|\nabla\theta(t)\|_{L^\infty}\;dt < \infty$. One of our main results in this study, discussed in Section \ref{sec:aniso}, involves the global existence and uniqueness theorems for the two-dimensional non-diffusive Boussinesq system with viscosity only in the horizontal direction, denoted as $P^0_{\nu_x,0}$ (see equations \eqref{bouss_aniso} below). These equations are sometimes called the non-diffusive Boussinesq equations with anisotropic viscosity.
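For later use, we recall the (standard) vorticity formulation of system \eqref{bouss} in two dimensions, which may be checked directly by applying the two-dimensional curl operator to \eqref{bouss_mo}: the scalar vorticity $\omega:=\partial_1u^2-\partial_2u^1$ satisfies
\begin{align*}
\partial_t\omega+\vect{u}\cdot\nabla\omega=\nu\triangle\omega+\partial_1\theta.
\end{align*}
The buoyancy term $\theta\vect{e}_2$ contributes the vorticity source $\partial_1\theta$, which has no analogue in the Navier-Stokes equations; roughly speaking, this is why the quantity $\int_0^T\|\nabla\theta(t)\|_{L^\infty}\,dt$ appears in the blow-up criterion of \cite{Chae_Nam_1997} recalled above.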
In order to set the main ideas of our proof, in Section \ref{s:P_nu_zero_zero} we first establish the global existence of a certain class of weak solutions, as well as the global existence and uniqueness of solutions to the two-dimensional viscous, non-diffusive Boussinesq system of equations (denoted as $P^0_{\nu,0}$) with Yudovich-type initial data. The other main result of this study is presented in Section \ref{s:P_alpha_zero_zero}. We propose an inviscid $\alpha$-regularization for the two-dimensional inviscid, non-diffusive Boussinesq system of equations (denoted as $P^{\alpha}_{0,0}$), which we call the Boussinesq-Voigt equations, and also establish its global regularity. We include in this section a study of the behavior of solutions to $P^{\alpha}_{0,0}$ as the parameter $\alpha\rightarrow 0$, which leads to a new criterion for the finite-time blow-up of solutions to the 2D, or 3D, inviscid, non-diffusive Boussinesq equations. We also give a short discussion of a Voigt-regularization for the three-dimensional Boussinesq equations in the case $P^{\alpha}_{0,\kappa}$. The two-dimensional viscous, non-diffusive Boussinesq system ($P^0_{\nu, 0}$) is given by: \begin{subequations}\label{bouss_nd} \begin{alignat}{2} \label{bouss_nd_mo} \partial_t\vect{u} + \sum_{j=1}^2\partial_j(u^j \vect{u}) &=\nu\Delta \vect{u}-\nabla p + \theta \vect{e}_2, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_nd_div} \nabla \cdot \vect{u} &=0, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_nd_den} \partial_t\theta + \nabla\cdot(\vect{u} \theta) &=0, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_nd_IC} \vect{u}(\vect{x},0)&=\vect{u}_0(\vect{x}),\quad\theta(\vect{x},0)=\theta_0(\vect{x}), \qquad&& \text{in }\mathbb T^2. \end{alignat} \end{subequations} It has been shown in \cite{Hou_Li_2005, Chae_2005} that the system $P^0_{\nu,0}$, in the case of the whole space $\field{R}^2$, admits a unique global solution provided the initial data $(\vect{u}_0, \theta_0) \in H^m(\field{R}^2)\times H^{m}(\field{R}^2)$ with $m\geq 3$, $m$ an integer. In fact, in \cite{Hou_Li_2005}, the authors only required $(\vect{u}_0, \theta_0) \in H^m(\field{R}^2)\times H^{m-1}(\field{R}^2)$ with $m\geq 3$. In \cite{Chae_2005}, it is shown that a Beale-Kato-Majda-type criterion is satisfied for the partially viscous system, and therefore that the system is globally well-posed; specifically, it is shown there that the problems $P^0_{0,\kappa}$ and $P^0_{\nu,0}$ both admit a unique global solution provided the initial data $(\vect{u}_0, \theta_0) \in H^m(\field{R}^2)\times H^{m}(\field{R}^2)$ with $m\geq 3$. Similar results are shown in \cite{Hou_Li_2005} for $P^0_{0,\kappa}$, but with initial data $(\vect{u}_0, \theta_0) \in H^m(\field{R}^2)\times H^{m-1}(\field{R}^2)$. Global well-posedness results for rough initial data (in Besov spaces) are established in \cite{Hmidi_Keraani_2007}. We establish in Section \ref{s:P_nu_zero_zero} the global well-posedness of $P^0_{\nu,0}$ in the periodic domain $\mathbb T^2 = [0,1]^2$ assuming weaker initial data, namely $\vect{u}_0\in H^1(\mathbb T^2)$ (we always assume $\nabla\cdot\vect{u}_0=0$) and $\theta_0\in L^2(\mathbb T^2)$. Our key idea in proving the uniqueness result is to write $\theta = \Delta \xi$ for some $\xi$ with $\int_{\mathbb T^2}\xi\,dx=0$, and then to adapt the techniques of Yudovich in \cite{Yudovich_1963} (see also \cite{Majda_Bertozzi_2002}).
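To illustrate the device $\theta=\Delta\xi$ in its simplest form, note that, since $\theta(t)$ has zero spatial average, the periodic Poisson problem $\Delta\xi=\theta$ admits a unique mean-zero solution; in terms of Fourier coefficients,
\begin{align*}
\theta=\sum_{\vect{k}\in\field{Z}^2\setminus\{0\}}\hat{\theta}_{\vect{k}}e^{2\pi i\vect{k}\cdot\vect{x}}
\quad\Longrightarrow\quad
\xi=-\sum_{\vect{k}\in\field{Z}^2\setminus\{0\}}\frac{\hat{\theta}_{\vect{k}}}{4\pi^2|\vect{k}|^2}e^{2\pi i\vect{k}\cdot\vect{x}},
\end{align*}
so that $\|\xi(t)\|_{H^2}\leq C\|\theta(t)\|_{L^2}$. Roughly speaking, estimating the difference of two solutions through $\xi$, rather than through $\theta$ itself, gains two derivatives, and this is what allows a Yudovich-type argument to close with merely $\theta_0\in L^2(\mathbb T^2)$.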
We note that the authors in \cite{Danchin_Paicu_2008_French} have shown global well-posedness in the whole space under the weaker assumption that $\vect{u}_0,\theta_0 \in L^2(\field{R}^2)$. The proof of their main results is set in the framework of Besov and Lorentz spaces and involves the use of the Littlewood-Paley decomposition and the paradifferential calculus introduced by J. Bony \cite{Bony_1981}. We include in this study global well-posedness results for the problem $P^0_{\nu,0}$ under a stronger assumption on the initial data, namely $\vect{u}_0\in H^1(\mathbb T^2)$ and $\theta_0 \in L^2(\mathbb T^2)$, but using only elementary techniques in PDEs. Although this particular result is not an improvement of that of \cite{Danchin_Paicu_2008_French}, we will see that, by applying our method in the case of anisotropic viscosity, we can establish an improvement of the global well-posedness results established in \cite{Danchin_Paicu_2008}. In Section \ref{sec:aniso}, we then consider the case where the viscosity $\nu$ occurs in the horizontal direction only. More precisely, assuming initial vorticity $\omega_0\in \sqrt{L}$ (defined below in \eqref{root_L_norm}), initial temperature (density) $\theta_0 \in L^\infty(\mathbb T^2)$, and $\int_{\mathbb T^2}\omega_0\,d\vect{x}= \int_{\mathbb T^2}\theta_0\,d\vect{x}=0$, we establish global well-posedness for the following system, which we denote as $P^0_{\nu_x,0}$: \begin{subequations}\label{bouss_aniso} \begin{alignat}{2} \label{bouss_aniso_mo} \partial_t\vect{u} + \sum_{j=1}^2\partial_j(u^j \vect{u}) &=\nu\partial_{1}^2 \vect{u}-\nabla p + \theta \vect{e}_2, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_aniso_div} \nabla \cdot \vect{u} &=0, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_aniso_den} \partial_t\theta + \nabla\cdot(\vect{u} \theta) &=0, \qquad&& \text{in }\mathbb T^2\times[0,T],\\ \label{bouss_aniso_IC} \vect{u}(\vect{x},0)&=\vect{u}_0(\vect{x}),\quad\theta(\vect{x},0)=\theta_0(\vect{x}), \qquad&& \text{in }\mathbb T^2. \end{alignat} \end{subequations} Recently, in \cite{Danchin_Paicu_2008}, a global well-posedness result for the system $P^0_{\nu_x,0}$ (in the whole space $\field{R}^2$), under various regularity conditions on the initial data, was established. More precisely, given that $\theta_0\in H^s(\field{R}^2)\cap L^\infty(\field{R}^2)$, with $s\in (1/2,1]$, $\vect{u}_0\in H^1(\field{R}^2)$ and $\omega_0\in L^p(\field{R}^2)$ for all $2\leq p < \infty$, and such that $\omega_0$ satisfies \begin{align}\label{root_L_norm} \|\omega_0\|_{\sqrt{L}}:=\sup_{p\geq2}\frac{\|\omega_0\|_{L^p(\field{R}^2)}}{\sqrt{p-1}} <\infty, \end{align} the Boussinesq system \eqref{bouss_aniso} in the whole space with anisotropic viscosity admits a unique global regular solution. The condition $\theta_0\in H^s$ with $s\in(\frac{1}{2},1]$ was needed for establishing uniqueness in \cite{Danchin_Paicu_2008}. We relax this condition in our current contribution. We remark again that the main idea is to write $\theta = \triangle\xi$, and then proceed using the techniques of Yudovich \cite{Yudovich_1963} for the 2D incompressible Euler equations to prove uniqueness. Furthermore, our method uses more elementary tools than those used in \cite{Danchin_Paicu_2008}.
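As a simple illustration of condition \eqref{root_L_norm}, observe that, in the periodic setting considered in the present paper, where the domain has unit measure, every bounded vorticity has finite $\sqrt{L}$ norm:
\begin{align*}
\|\omega_0\|_{\sqrt{L}}=\sup_{p\geq2}\frac{\|\omega_0\|_{L^p}}{\sqrt{p-1}}\leq\|\omega_0\|_{L^\infty}\sup_{p\geq2}\frac{1}{\sqrt{p-1}}=\|\omega_0\|_{L^\infty}.
\end{align*}
On the other hand, by inequality \eqref{CZ_est} below, every $H^1$ function also has finite $\sqrt{L}$ norm, while $H^1(\mathbb T^2)$ contains unbounded functions; thus the class $\sqrt{L}$ is genuinely larger than $L^\infty$.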
It is worth mentioning that very recently, in \cite{Adhikari_Cao_Wu_2010}, the global regularity of classical solutions to the two-dimensional Boussinesq system in the case of vertical viscosity and vertical thermal diffusion was established, provided an additional fractional thermal diffusion term of the form $(-\Delta)^\delta$, with $\delta>0$, is added. Let us denote by $P_{\nu,\kappa}^{\alpha}$ the following system: \begin{subequations}\label{bouss_v} \begin{alignat}{2} \label{bouss_v_mo} -\alpha^2\triangle\partial_t\vect{u}+\partial_t\vect{u} + \sum_{j=1}^d\partial_j(u^j \vect{u}) &=-\nabla p + \theta \vect{e}_d + \nu\triangle\vect{u}, \qquad&& \text{in }\mathbb T^d\times[0,T],\\ \label{bouss_v_div} \nabla \cdot \vect{u} &=0, \qquad&& \text{in }\mathbb T^d\times[0,T],\\ \label{bouss_v_den} \partial_t\theta + \nabla\cdot(\vect{u} \theta) &=\kappa\triangle\theta, \qquad&& \text{in }\mathbb T^d\times[0,T],\\ \label{bouss_v_IC} \vect{u}(\vect{x},0)=\vect{u}_0(\vect{x}),\quad\theta(\vect{x},0)&=\theta_0(\vect{x}), \qquad&& \text{in }\mathbb T^d. \end{alignat} \end{subequations} In Section \ref{s:P_alpha_zero_zero}, we study in dimension $d=2$ the inviscid ($\nu = 0$), Voigt-$\alpha$ (with $\alpha>0$) regularized momentum equation, namely the system $P_{0,0}^{\alpha}$, and in dimension $d=3$ the system $P_{0,\kappa}^{\alpha}$ (with $\kappa >0$). In the case $d=2$ we establish global well-posedness results for the problem $P^\alpha_{0,0}$ given initial data $\vect{u}_0\in H^2(\mathbb T^2)$ with $\nabla\cdot\vect{u}_0=0$, and $\theta_0 \in L^2(\mathbb T^2)$. This result also holds in the easier cases $\kappa>0$ or $\nu>0$. In the case $d=3$, we require $\kappa>0$ to establish global well-posedness results. We show that the problem $P^\alpha_{0,\kappa}$ with given initial data $\vect{u}_0\in H^3(\mathbb T^3)$ with $\nabla\cdot\vect{u}_0=0$, and $\theta_0 \in L^\infty(\mathbb T^3)$ is well-posed globally in time. This result also holds in the easier case $\nu>0$. Observe that the system $P^\alpha_{0,0}$ formally coincides with the inviscid, non-diffusive Boussinesq equations when $\alpha = 0$. This type of inviscid $\alpha$-regularization can be traced back to the work of Cao {\it et al.}~\cite{Cao_Lunasin_Titi_2006}, who proposed the inviscid simplified Bardina model (studied in \cite{Layton_Lewandowski_2006}) as a regularization of the 3D Euler equations. The model consists of the Euler equations with the term $-\alpha^2\Delta\partial_t \vect{u}$ added to the momentum equation. We refer to this term as the Voigt term, and we refer to equations with this additional term as Voigt-regularized equations. The reason for this terminology is that if one adds the Voigt term to the Navier-Stokes equations, the resulting equations happen to coincide with the equations governing certain visco-elastic fluids, known as Kelvin-Voigt fluids; these equations were first introduced and studied in the context of the 3D Navier-Stokes equations by A.P. Oskolkov \cite{Oskolkov_1973, Oskolkov_1982}, and were studied later in \cite{Kalantarov_1986}. These equations are known as the Navier-Stokes-Voigt equations. They were first proposed in \cite{Cao_Lunasin_Titi_2006} as a regularization for either the Navier-Stokes (for $\nu>0$) or Euler (for $\nu=0$) equations, for small values of the regularization parameter $\alpha$. We briefly discuss the merits of the Navier-Stokes-Voigt equations, as they are a special case of the Boussinesq-Voigt equations.
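Before discussing these merits, let us record a formal a priori identity which indicates why the Voigt term regularizes. Taking the $L^2(\mathbb T^d)$ inner product of \eqref{bouss_v_mo} with $\vect{u}$, the nonlinear and pressure terms vanish thanks to \eqref{bouss_v_div}, and integration by parts yields
\begin{align*}
\frac{1}{2}\frac{d}{dt}\left(\|\vect{u}\|_{L^2}^2+\alpha^2\|\nabla\vect{u}\|_{L^2}^2\right)+\nu\|\nabla\vect{u}\|_{L^2}^2=\int_{\mathbb T^d}\theta\, u^d\,d\vect{x}.
\end{align*}
In particular, even in the inviscid case $\nu=0$, an $L^2$ bound on $\theta$ formally yields control of $\vect{u}$ in $H^1$, uniformly on bounded time intervals; formal identities of this type underlie the global well-posedness results just described.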
Voigt-regularizations of parabolic equations are a special case of pseudoparabolic equations, that is, equations of the form $Mu_t+Nu=f$, where $M$ and $N$ are (possibly non-linear, or even non-local) operators. For more about pseudoparabolic equations, see, e.g., \cite{DiBenedetto_Showalter_1981,Peszynska_Showalter_Yi_2009,Showalter_1975_nonlin,Showalter_1975_Sobolev2,Showalter_1972_rep,Carroll_Showalter_1976,Showalter_1970_SG,Showalter_1970_odd,Bohm_1992}. In the presence of either periodic boundary conditions or physical boundary conditions (under the assumption of the no-slip boundary condition $u|_{\partial\Omega}=0$), the Navier-Stokes-Voigt equations enjoy global well-posedness, even in three dimensions, as was pointed out in \cite{Cao_Lunasin_Titi_2006}. The Euler-Voigt equations enjoy global well-posedness in the case of periodic boundary conditions (see, e.g., \cite{Cao_Lunasin_Titi_2006,Larios_Titi_2009}). It is worth mentioning that the long-term dynamics and estimates for the global attractor, and the Gevrey regularity of solutions on the global attractor, of the three-dimensional Navier-Stokes-Voigt model were studied in \cite{Kalantarov_Titi_2009} and \cite{Kalantarov_Levant_Titi_2009}, respectively. Moreover, it was shown recently in \cite{Ramos_Titi_2010} that the statistical solutions (i.e., invariant probability measures) of the three-dimensional Navier-Stokes-Voigt equations converge, in a suitable sense, to a corresponding statistical solution (invariant probability measure) of the three-dimensional Navier-Stokes equations. In the context of numerical computations, the Navier-Stokes-Voigt system appears to have less stiffness than the Navier-Stokes system (see, e.g., \cite{Ebrahimi_Holst_Lunasin_2009, Levant_Ramos_Titi_2009}). In \cite{Levant_Ramos_Titi_2009}, the statistical properties of the Navier-Stokes-Voigt model were investigated numerically in the context of the Sabra shell phenomenological model of turbulence and were compared with those of the corresponding Navier-Stokes shell model. Due to its simplicity, the Voigt $\alpha$-regularization is also well-suited to being applied to other hydrodynamic models, such as the two-dimensional surface quasi-geostrophic equations, demonstrated in \cite{Khouider_Titi_2008}, and the three-dimensional magnetohydrodynamic (MHD) equations, demonstrated in \cite{Larios_Titi_2009}. See also \cite{Ebrahimi_Holst_Lunasin_2009} for an application of the Navier-Stokes-Voigt model to image inpainting. It is also worth mentioning that in the case of the inviscid Burgers equation, $u_t+uu_{x}=0$, this type of regularization leads to $-\alpha^2u_{xxt}+u_t+uu_x=0$, which is the well-known Benjamin-Bona-Mahony equation of water waves \cite{Benjamin_Bona_Mahony_1972}. One goal of the present work is to lay some of the mathematical groundwork necessary to extend the Voigt regularization to the two-dimensional Boussinesq equations, for the purpose of simplifying numerical simulations of the solutions to these equations. It is worth mentioning that all the results reported here are equally valid in the presence of the Coriolis rotation term. \section{Preliminaries}\label{sec:Pre} In this section, we introduce some preliminary material and notations which are commonly used in the mathematical study of fluids, in particular in the study of the Navier-Stokes equations (NSE). For a more detailed discussion of these topics, we refer to \cite{Constantin_Foias_1988,Temam_1995_Fun_Anal,Temam_2001_Th_Num,Foias_Manley_Rosa_Temam_2001}.
Let $\mathcal{F}$ be the set of all trigonometric polynomials with periodic domain $\mathbb T^d:=[0,1]^d$. We define the space of smooth functions which incorporates the divergence-free and zero-average conditions to be \[\mathcal{V}:=\set{\varphi\in\mathcal{F}^d:\nabla\cdot\varphi=0 \text{ and } \int_{\mathbb T^d}\varphi\;dx=0}.\] \noindent For the majority of this work, we take $d=2$. We denote by $L^p$, $W^{s,p}$, $H^s\equiv W^{s,2}$, $C^{0,\gamma}$ the usual Lebesgue, Sobolev, and H\"older spaces, and define $H$ and $V$ to be the closures of $\mathcal{V}$ in $L^2$ and $H^1$ respectively. We restrict ourselves to finding solutions whose average over the periodic box $\mathbb T^d$ is zero. Observe from the evolution equation of $\theta$ in the Boussinesq system of equations (as well as the Boussinesq-Voigt system of equations) that, if we assume $\int_{\mathbb T^d}\theta_0(x)\, dx=0$, then $\int_{\mathbb T^d}\theta(x,t)\;dx=0$ for all $t\geq 0$; also, $\int_{\mathbb T^d}\vect{u}(x,t)\;dx = 0$ for all $t\geq0$, provided $\int_{\mathbb T^d}\vect{u}_0(x) \;dx=0$. Therefore, we can work consistently in the spaces defined above. The notation $V^s:=H^s(\mathbb T^d)\cap V$ will be convenient. When necessary, we write the components of a vector $\vect{y}$ as $y^j$, $j=1,2$. We define the inner products on $H$ and $V$ respectively by \[(\vect{u},\vect{v})=\sum_{i=1}^d\int_{\mathbb T^d} u^iv^i\,dx \quad\text{and}\quad ((\vect{u},\vect{v}))=\sum_{i,j=1}^d\int_{\mathbb T^d}\partial_ju^i\partial_jv^i\,dx, \] and the associated norms $|\vect{u}|=(\vect{u},\vect{u})^{1/2}$, $\|\vect{u}\|=((\vect{u},\vect{u}))^{1/2}$. (We use these notations indiscriminately for both scalars and vectors, which should not be a source of confusion.) Note that $\|\cdot\|$ is a norm due to the Poincar\'e inequality, \eqref{poincare}, below. We denote by $V'$ the dual space of $V$. The action of $V'$ on $V$ is denoted by $\ip{\cdot}{\cdot}\equiv \ip{\cdot}{\cdot}_{V'}$. Note that we have the continuous embeddings \begin{equation}\label{embed} V\hookrightarrow H\hookrightarrow V'. \end{equation} Moreover, by the Rellich-Kondrachov Compactness Theorem (see, e.g., \cite{Evans_1998,Adams_Fournier_2003}), these embeddings are compact. Following \cite{Danchin_Paicu_2008}, we define the space \begin{align*} \sqrt{L}:=\set{w\big|\|w\|_{\sqrt{L}}<\infty}, \end{align*} where $\|\cdot\|_{\sqrt{L}}$ is defined by \eqref{root_L_norm}. This space arises naturally, due to the following inequality, proven in \cite{Lieb_Loss_2001} (see also \cite{Danchin_Paicu_2008}), which is valid in two dimensions: \begin{align}\label{CZ_est} \|\vect{w}\|_p\leq C\sqrt{p-1}\|\vect{w}\|_{H^1}, \end{align} for all $\vect{w}\in H^1(\mathbb T^2)$ and any $p\in[2,\infty)$, where we denote by $\|\cdot\|_p$ the usual $L^{p}$ norm. Note that clearly $L^\infty\subset\sqrt{L}\subset L^p$ for every $p\in[2,\infty)$. We also recall the well-known elliptic estimate, which follows from the Biot-Savart law for an incompressible vector field $\vect{u}$ (satisfying $\nabla\cdot \vect{u}=0$ and $\nabla\times\vect{u} = \omega$) by means of the Calder\'on-Zygmund theory for singular integrals: \begin{equation}\label{Calderon} \|\nabla\vect{u}\|_p \leq C p \|\omega\|_p \end{equation} for any $p\in (1,\infty)$ (see, e.g., \cite{Yudovich_1963}). Let $Y$ be a Banach space. We denote by $L^p([0,T],Y)$ (which we also denote as $L^p_TY_x$) the space of (Bochner) measurable functions $t\mapsto w(t)$, where $w(t)\in Y$ for a.e.
$t\in[0,T]$, such that the integral $\int_0^T\|w(t)\|_Y^p\,dt$ is finite (see, e.g., \cite{Adams_Fournier_2003}). A similar convention is used in the notation $C^k([0,T],Y)$ for $k$-times differentiable functions of time on the interval $[0,T]$ with values in $Y$. Abusing notation slightly, we write $w(\cdot)$ for the map $t\mapsto w(t)$. In the same vein, we often write the vector-valued function $w(\cdot,t)$ as $w(t)$ when $w$ is a function of $x$ and $t$. We denote by $\dot{C}^\infty(\mathbb T^2\times [0,T])$ the set of infinitely differentiable functions in the variables $x$ and $t$ which are periodic in $x$ and satisfy $\int_{\mathbb T^2}\varphi(\cdot,t)\;dx=0$. Similarly, we define $\dot{L}^p(\mathbb T^2) := \set{\varphi\in L^p(\mathbb T^2) : \int_{\mathbb T^2}\varphi(x)\;dx=0}$. We denote by $P_\sigma:\dot{L}^2\rightarrow H$ the Leray-Helmholtz projection operator and define the Stokes operator $A:=-P_\sigma\triangle$ with domain $\mathcal{D}(A):=H^2\cap V$. For $\varphi\in \mathcal{D}(A)$, we have the norm equivalence $|A\varphi|\cong\|\varphi\|_{H^2}$ (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}). In particular, the Stokes operator $A$ can be extended as a linear operator from $V$ into $V'$, associated with the bilinear form $((\vect{u},\vect{v}))$, $$\ip{A\vect{u}}{\vect{v}} = ((\vect{u},\vect{v})) \quad \mbox{ for all } \vect{v}\in V.$$ It is known that $A^{-1}:H\rightarrow \mathcal{D}(A) \hookrightarrow H$ is a positive-definite, self-adjoint, compact operator from $H$ into itself, and therefore it has an orthonormal basis of eigenvectors $\set{\vect{w}_k}_{k=1}^\infty$ in $H$, corresponding to a non-increasing sequence of positive eigenvalues (see, e.g., \cite{Constantin_Foias_1988,Temam_1995_Fun_Anal}). The vectors $\set{\vect{w}_k}_{k=1}^\infty$ are also the eigenvectors of $A$. Since the corresponding eigenvalues of $A^{-1}$ can be ordered in decreasing order, we can label the eigenvalues $\lambda_k$ of $A$ so that $0<\lambda_1\leq\lambda_2\leq\lambda_3\leq\cdots$. Let $H_n:=\text{span}\set{\vect{w}_1,\ldots,\vect{w}_n}$, and let $P_n:H\rightarrow H_n$ be the $L^2$ orthogonal projection onto $H_n$. Notice that in the case of periodic boundary conditions on the torus $\mathbb T^2$, the eigenvalues of $A$ are exactly $4\pi^2|{\bf k}|^2$, ${\bf k}\in\field{Z}^2\setminus\set{{\bf 0}}$, so that $\lambda_1=(2\pi)^{2}$. We will abuse notation slightly and also use $P_n$ in the scalar case for the corresponding projection onto eigenfunctions of $-\triangle$, but this should not be a source of confusion. Furthermore, in our case it is known that $A=-\triangle$, due to the periodic boundary conditions (see, e.g., \cite{Constantin_Foias_1988,Temam_1995_Fun_Anal}), and the eigenvectors $\vect{w}_j$ are of the form ${\bf a}_{\bf k}e^{2\pi i{\bf k} \cdot {\bf x}}$, with ${\bf a}_{\bf k}\cdot {\bf k} =0$. It will be convenient to use the standard notation for the Navier-Stokes bilinear term \begin{equation}\label{Bdef} B(\vect{w}_1,\vect{w}_2):=P_\sigma\sum_{j=1}^d\partial_j(w_1^j\vect{w}_2) \end{equation} for $\vect{w}_1,\vect{w}_2\in\mathcal{V}$. We list some important properties of $B$, which can be found, for example, in \cite{Constantin_Foias_1988, Foias_Manley_Rosa_Temam_2001, Temam_1995_Fun_Anal, Temam_2001_Th_Num}.
\begin{lemma}\label{B:prop} The operator $B$ defined in \eqref{Bdef} is a bilinear form which can be extended as a continuous map $B:V\times V\rightarrow V'$ such that \begin{equation} \ip{B(\vect{w}_1, \vect{w}_2)}{\vect{w}_3} = \int _{\mathbb T^d} (\vect{w}_1\cdot\nabla\vect{w}_2)\cdot\vect{w}_3\;dx, \end{equation} for every $\vect{w}_1, \vect{w}_2, \vect{w}_3 \in \mathcal{V}$, and satisfying the following properties: \begin{enumerate}[(i)] \item For $\vect{w}_1$, $\vect{w}_2$, $\vect{w}_3\in V$, \begin{equation}\label{B:Alt} \ip{B(\vect{w}_1,\vect{w}_2)}{\vect{w}_3}_{V'}=-\ip{B(\vect{w}_1,\vect{w}_3)}{\vect{w}_2}_{V'}, \end{equation} and therefore \begin{equation}\label{B:zero} \ip{B(\vect{w}_1,\vect{w}_2)}{\vect{w}_2}_{V'}=0. \end{equation} \item For $\vect{w}_1$, $\vect{w}_2$, $\vect{w}_3\in V$, \begin{align}\label{B:ineq1} |\ip{B(\vect{w}_1,\vect{w}_2)}{\vect{w}_3}_{V'}| &\leq C|\vect{w}_1|^{1/2}\|\vect{w}_1\|^{1/2}\|\vect{w}_2\||\vect{w}_3|^{1/2}\|\vect{w}_3\|^{1/2}\\ |\ip{B(\vect{w}_1,\vect{w}_2)}{\vect{w}_3}_{V'}| &\leq C|\vect{w}_1|^{1/2}\|\vect{w}_1\|^{1/2}|\vect{w}_2|^{1/2}\|\vect{w}_2\|^{1/2}\|\vect{w}_3\|. \end{align} \end{enumerate} \end{lemma} Let us define another, very similar, bilinear operator, motivated by the transport term in the temperature equation: \begin{equation}\label{B_theta_def} \mathcal{B}(\vect{w},\psi):=\sum_{j=1}^d\partial_j(w^j \psi) \end{equation} for $\vect{w}\in\mathcal{V}$ and $\psi \in \mathcal{F}$ with $\int_{\mathbb T^d}\psi\;dx=0$. We have the following analogous properties for $\mathcal{B}$, which can be proven easily, as in the proof of Lemma \ref{B:prop}. \begin{lemma} \label{B_theta:prop} The operator $\mathcal{B}$ defined in \eqref{B_theta_def} is a bilinear form which can be extended as a continuous map $\mathcal{B}:V\times H^1\rightarrow H^{-1}$, such that \begin{equation}\label{B_theta:def} \ip{\mathcal{B}(\vect{w}, \psi)}{\phi}_{H^{-1}} = - \int _{\mathbb T^d} \vect{w}\cdot\nabla\phi\;\psi\;dx, \end{equation} for every $\vect{w} \in \mathcal{V}$ and $\phi,\psi\in \dot{C}^1$. Moreover, \begin{equation}\label{B_theta:Alt} \ip{\mathcal{B}(\vect{w},\psi)}{\phi}_{H^{-1}}=-\ip{\mathcal{B}(\vect{w},\phi)}{\psi}_{H^{-1}}, \end{equation} and therefore \begin{equation}\label{B_theta:zero} \ip{\mathcal{B}(\vect{w},\phi)}{\phi}_{H^{-1}}=0. \end{equation} Furthermore, $\mathcal{B}$ is also a bilinear form which can be extended as a continuous map $\mathcal{B}:\mathcal D(A)\times L^2\rightarrow H^{-1}$. \end{lemma} Here and below, $C, C_j$, etc. denote generic constants which may change from line to line. $C_\alpha,C(\cdots)$, etc. denote generic constants which depend only upon the indicated parameters. $K, K_j$, etc. denote constants which depend on norms of the initial data, and may also vary from line to line. Next, we recall that for an integrable function $f$ such that $\int_{\mathbb T^2} f\;dx=0$, we have, in two dimensions, the Ladyzhenskaya inequality \begin{equation}\label{L4_to_H1} \|f\|_{L^4}\leq C|f|^{1/2}\|f\|^{1/2}. \end{equation} We also recall Agmon's inequality in two dimensions (see, e.g., \cite{Agmon_1965, Constantin_Foias_1988}): for $\vect{w}\in\mathcal{D}(A)$, we have \begin{equation}\label{Agmon1/2} \|\vect{w}\|_{L^\infty} \leq C|\vect{w}|^{1/2}|A\vect{w}|^{1/2}. \end{equation} Furthermore, for all $\varphi\in V$, we have the Poincar\'e inequality \begin{equation}\label{poincare} \|\varphi\|_{L^2}\leq\lambda_1^{-1/2} \|\nabla\varphi\|_{L^2}.
\end{equation} We will also make use of the following inequality, valid in two dimensions, which is based on the Br\'ezis-Gallouet inequality, and which we prove in the appendix. For every sufficiently small $\epsilon>0$ and every $\vect{w}\in H^2(\mathbb T^2)$, \begin{align}\label{brezis} \|\vect{w}\|_{L^\infty}\leq C\pnt{\|\vect{w}\|\epsilon^{-1/4}+ |A\vect{w}|e^{-1/\epsilon^{1/4}}}, \end{align} where $C$ is independent of $\epsilon$. Finally, we note a result of de Rham \cite{Wang_1993, Temam_2001_Th_Num}, which states that if $g$ is a locally integrable function (or, more generally, a distribution), we have \begin{equation}\label{deRham} g =\nabla p \text{ for some distribution $p$ if and only if } \ip{g}{\vect{w}}=0\quad\text{for all } \vect{w}\in\mathcal{V}, \end{equation} which one uses to recover the pressure. \section{Global Well-posedness Results for the Viscous and Non-diffusive Boussinesq Equations. \texorpdfstring{($P^0_{\nu,0}$)}{}} \label{s:P_nu_zero_zero} Let us first define the weak formulation of problem $P_{\nu,\kappa}^0$ in $\mathbb T^2 \times [0,T]$. By choosing a suitable phase space which incorporates the divergence-free condition of the Boussinesq equations, we can eliminate the pressure from the equation, as is standard in the theory of the Navier-Stokes equations. Consider the scalar test functions $\varphi(x,t) \in \dot{C}^\infty(\mathbb T^2 \times [0,T])$, such that $\varphi(x,T) =0$, and the vector test functions $\Phi(x,t)\in [ \dot{C}^\infty(\mathbb T^2 \times [0,T])]^2$ such that $\nabla\cdot\Phi(\cdot,t) =0$ and $\Phi(x,T) =0$. Then the weak formulation of problem $P_{\nu,\kappa}^0$ in $\mathbb T^2 \times [0,T]$ (and similarly of problem $P_{\nu,0}^0$, when $\kappa=0$, in $\mathbb T^2 \times [0,T]$) is written as follows: \begin{subequations}\label{bouss_wk} \begin{align} &\quad\notag -\int_0^T(\vect{u}(s), \Phi'(s))\,ds +\nu\int_0^T((\vect{u}(s), \Phi(s) ))\,ds + \sum_{j=1}^2\int_0^T(u^j\vect{u},\partial_j\Phi) \,ds \\&\label{bouss_wk_mo} = (\vect{u}_0(x),\Phi(x,0) ) +\int_0^T(\theta(s)\vect{e}_2,\Phi(s))\,ds \\ & \notag \\ &\quad\notag -\int_0^T(\theta(s),\varphi'(s))\,ds +\label{bouss_wk_den} \int_0^T(\vect{u}\theta,\nabla\varphi)\,ds +\kappa\int_0^T((\theta(s), \varphi(s) ))\,ds \\&=(\theta_0(x),\varphi(x,0)). \end{align} \end{subequations} \begin{remark}\label{test_fcns_are_trig_polys} Note that it will become clear later that \eqref{bouss_wk} will hold for a larger class of test functions, and consequently it will be sufficient to consider only test functions of the form \begin{subequations}\label{test_fcns} \begin{align} \label{test_vect} \Phi(x,t) &= \Gamma_{\vect{m}}(t) e^{2\pi i\vect{m}\cdot \vect{x}}, \text{ with } \Gamma_{\vect{m}} \in [C^\infty([0,T])]^2\text{ and }\vect{m}\cdot\Gamma_{\vect{m}}(t)=0, \intertext{and} \label{test_scal} \varphi(x,t) &= \chi_{\vect{m}}(t) e^{2\pi i\vect{m}\cdot \vect{x}}, \text{ with }\chi_{\vect{m}} \in C^\infty([0,T]), \end{align} \end{subequations} for $\vect{m}\in\field{Z}^2\setminus\set{{\bf 0}}$, since such functions form a basis for the corresponding larger spaces of test functions. \end{remark} In the two-dimensional case, the global well-posedness of system $P_{\nu,\kappa}^0$ in \eqref{bouss}, that is, in the case $\kappa>0$, $\nu>0$, is well-known, and can be proved in a similar manner following the work of \cite{Foias_Manley_Temam_1987} (see also \cite{Temam_1997_IDDS,Cannon_DiBenedetto_1980}).
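Before stating these results, we record, for orientation, the formal energy identities from which the $\kappa$-independent bounds below originate (this is only a sketch; the rigorous versions are carried out in the proofs that follow). Testing \eqref{bouss} formally with $\vect{u}_\kappa$ and $\theta_\kappa$, and using \eqref{B:zero} and \eqref{B_theta:zero} to eliminate the nonlinear terms, one finds \begin{align*} \frac{1}{2}\frac{d}{dt}|\vect{u}_\kappa|^2+\nu\|\vect{u}_\kappa\|^2 =(\theta_\kappa\vect{e}_2,\vect{u}_\kappa)\leq|\theta_\kappa|\,|\vect{u}_\kappa| \qquad\text{and}\qquad \frac{1}{2}\frac{d}{dt}|\theta_\kappa|^2+\kappa\|\theta_\kappa\|^2 = 0. \end{align*} The second identity immediately yields $|\theta_\kappa(t)|\leq|\theta_0|$ and $\sqrt{\kappa}\,\|\theta_\kappa\|_{L^2([0,T],H^1)}\leq C|\theta_0|$, while the first, combined with Gr\"onwall's inequality, yields the bounds on $\vect{u}_\kappa$ in $L^\infty([0,T],H)\cap L^2([0,T],V)$.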
We have the following existence and uniqueness results for the system $P_{\nu,\kappa}^0$, which will be used to prove the existence of weak solutions for the system $P_{\nu,0}^0$. From here on, we only work in spaces of functions which are periodic and have spatial average zero. Therefore, to simplify notation, we write $\dot{L}^2$ as $L^2$, $\dot{C}^k$ as $C^k$, etc. \begin{theorem}\label{thm:diffusion} Let $T>0$, $\nu>0$ be fixed but arbitrary. Then, the following results hold: \begin{enumerate}[(i)] \item If $\vect{u}_0\in H$, $\theta_0\in L^2$, then for each $\kappa>0$, \eqref{bouss} has a unique solution $(\vect{u}_\kappa,\theta_\kappa)$ in the sense of \eqref{bouss_wk}, such that $\vect{u}_\kappa\in C([0,T],H)\cap L^2([0,T],V)$, $\theta_\kappa\in C_w([0,T],L^2)$. Furthermore, there exists a constant $K_0 > 0$, independent of $\kappa$, such that the following bounds hold: $\|\vect{u}_\kappa\|_{L^2([0,T],V)}\leq K_0$, $\|\vect{u}_\kappa\|_{L^\infty([0,T],H)}\leq K_0$, $\|\frac{d}{dt}\vect{u}_\kappa\|_{L^2([0,T],V')}\leq K_0$, $\|\theta_\kappa\|_{L^\infty([0,T],L^2)} \leq |\theta_0|$, $\|\frac{d}{dt}\theta_\kappa\|_{L^2([0,T],H^{-2})} \leq K_0$, and $\sqrt{\kappa}\|\theta_\kappa\|_{L^2([0,T],H^1)}\leq K_0$. \item If the initial data satisfy $\vect{u}_0\in V$ and $\theta_0\in L^2$, then the solution $\vect{u}_\kappa\in C([0,T], V)\cap L^2([0,T],\mathcal D(A))$, and we also have the following bounds: $\|\vect{u}_\kappa\|_{L^2([0,T],\mathcal{D}(A))}\leq K_0$, $\|\vect{u}_\kappa\|_{L^\infty([0,T],V)}\leq K_0$, $\|\frac{d}{dt}\vect{u}_\kappa\|_{L^2([0,T],H)} \leq K_0$ and $\|\frac{d}{dt}\theta_\kappa\|_{L^2([0,T],H^{-1})} \leq K_0$. \item If $\theta_0\in L^\infty$ and $\vect{u}_0\in H$, then $\|\theta_\kappa\|_{L^\infty([0,T],L^\infty)}\leq \|\theta_0\|_\infty$. \item If $\vect{u}_0 \in H^3$ and $\theta_0 \in H^2$, then for each $\kappa>0$, \eqref{bouss} has a unique solution $\vect{u}_\kappa\in C([0,T],H^3)\cap L^2([0,T],H^4)$ and $\theta_\kappa \in C([0,T],H^2)\cap L^2([0,T],H^3)$. \end{enumerate} \end{theorem} \begin{proof} Parts (i) and (ii) are essentially proven in \cite{Cannon_DiBenedetto_1980,Foias_Manley_Temam_1987, Temam_1997_IDDS}, following the classical theory of the Navier-Stokes equations. The uniform bounds in part (ii) will be established explicitly in the later proofs, when called for. Part (iii) can be proven using the maximum principle, and is proven, for example, in \cite{Cannon_DiBenedetto_1980,Temam_1997_IDDS}; an explicit argument of this type is also given in Step 4 of the proof of Theorem \ref{exist_weak_visc} below. Part (iv) can be proved using basic energy estimates and Gr\"onwall's inequality, again following the classical theory of the Navier-Stokes equations. \end{proof} For the current study, we now define what we mean by weak solutions and strong solutions for the viscous non-diffusive Boussinesq equations ($P^0_{\nu,0}$). We then state and prove our main results. \begin{definition}[Weak solution]\label{def:weak} Let $T>0$. Suppose $\vect{u}_0\in H$ and $\theta_0\in L^2$. We say that $(\vect{u},\theta)$ is a \textit{weak solution} to $P_{\nu,0}^0$ (that is, \eqref{bouss_nd} with $\kappa=0$) on the interval $[0,T]$, if $(\vect{u},\theta)$ satisfies the weak formulation \eqref{bouss_wk} (with $\kappa=0$), and $\vect{u}\in L^2([0,T],V)\cap C([0,T],H)$, $\pd{\vect{u}}{t}\in L^1([0,T],V')$, with $\theta\in C([0,T],L^2)$ and $\pd{\theta}{t}\in L^1([0,T],H^{-2})$. \end{definition} \begin{definition}[Strong solution]\label{def:strong} Let $T>0$. Suppose $\theta_0\in L^2$ and $\vect{u}_0\in V$.
We say that $(\vect{u},\theta)$ is a \textit{strong solution} to $P_{\nu,0}^0$ (that is, \eqref{bouss_nd} with $\kappa=0$) on the interval $[0,T]$, if it is a weak solution in the sense of Definition \ref{def:weak}, and furthermore, $\vect{u}\in L^2([0,T],\mathcal D(A))\cap C([0,T],V)$, $\pd{\vect{u}}{t}\in L^1([0,T],H)$, and $\pd{\theta}{t}\in L^1([0,T],H^{-1})$. \end{definition} We now state and prove our main results regarding the global existence of weak and strong solutions to problem $P_{\nu,0}^0$. \begin{theorem}[Existence of weak solutions]\label{exist_weak_visc} Let $T>0$ be given. Let $\vect{u}_0\in H$ and $\theta_0\in L^2$. Then there exists a weak solution of $P_{\nu,0}^0$ on the interval $[0,T]$. Furthermore, system \eqref{bouss_wk} with $\kappa=0$ is equivalent to the functional form \begin{subequations}\label{e:functional_nu} \begin{align}\label{e:funct_1} \pd{\vect{u}}{t} + \nu A\vect{u} + B(\vect{u},\vect{u}) &= P_\sigma(\theta\vect{e}_2)\quad \mbox{in}\quad L^2([0,T], V')\quad \mbox{and}\\\label{e:funct_2} \pd{\theta}{t} + \mathcal{B}(\vect{u},\theta) &= 0\quad \mbox{in}\quad L^2([0,T], H^{-2}). \end{align} \end{subequations} Moreover, if we assume $\theta_0\in L^\infty$, then $\theta\in L^\infty([0,T], L^\infty)$. \end{theorem} \begin{proof} Our method of proof involves passing to the limit in the weak solutions of \eqref{bouss} as $\kappa\rightarrow 0$; that is, we consider $\kappa>0$ to be a regularization parameter for system \eqref{bouss_nd}. Without loss of generality, we may assume $0<\kappa < 1$. In accordance with Remark \ref{test_fcns_are_trig_polys} and Definition \ref{def:weak}, we only consider test functions of the form \eqref{test_fcns}. We will show that the weak formulation \begin{subequations}\label{bouss_kappa_weak} \begin{align} &\quad\notag -\int_0^T(\vect{u}_\kappa(s), \Gamma_{\vect{m}}'(s) e^{2\pi i\vect{m}\cdot \vect{x}})\,ds +\nu\int_0^T((\vect{u}_\kappa(s), \Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} ))\,ds \\&\quad \notag +\sum_{j=1}^2\int_0^T(u^j_\kappa\vect{u}_\kappa, \Gamma_{\vect{m}}(s) \partial_j e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds \\& \label{bouss_kappa_wk_mo}= (\vect{u}_{0},\Gamma_{\vect{m}}(0) e^{2\pi i\vect{m}\cdot \vect{x}} ) +\int_0^T(\theta_\kappa(s)\vect{e}_2,\Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds \\ & \notag \\ & -\int_0^T(\theta_\kappa(s),e^{2\pi i\vect{m}\cdot \vect{x}} )\chi_{\vect{m}}'(s)\,ds +\notag \int_0^T(\vect{u}_\kappa(s)\theta_\kappa(s),\nabla e^{2\pi i\vect{m}\cdot \vect{x}} \chi_{\vect{m}}(s))\,ds \\&\label{bouss_kappa_wk_den} + \kappa \int_0^T((\theta_\kappa(s), e^{2\pi i \vect{m}\cdot \vect{x}}\chi_{\vect{m}}(s)))\;ds=(\theta_{0},e^{2\pi i\vect{m}\cdot \vect{x}})\chi_{\vect{m}}(0) \end{align} \end{subequations} converges to the weak formulation of $P^0_{\nu,0}$ (see \eqref{bouss_wk} with $\kappa=0$) as $\kappa\rightarrow 0$. After passing to the limit in the system, we then show that the limiting functions satisfy the aforementioned regularity properties. We proceed with the following steps.
\begin{list}{}{\leftmargin=0em} \item {\em {\bf Step 1: } Using compactness arguments to prove convergence of a subsequence.} From Theorem \ref{thm:diffusion}, in particular from the uniform bounds (with respect to $\kappa$) on $\vect{u}_\kappa$, $\pd{\vect{u}_\kappa}{t}$, $\theta_\kappa$ and $\pd{\theta_\kappa}{t}$ in the corresponding norms, one can use the Banach-Alaoglu Theorem and the Aubin Compactness Theorem (see, e.g., \cite[Lemma 8.2]{Constantin_Foias_1988} or \cite{Temam_2001_Th_Num}) to justify that one can extract a subsequence of $(\vect{u}_\kappa,\theta_\kappa)$ (which we still write as $(\vect{u}_\kappa,\theta_\kappa)$), and find elements $\vect{u}$ and $\theta$, such that, as $\kappa\rightarrow 0$, \begin{subequations}\label{wk_conv} \begin{align} \label{st_u_L2H} \vect{u}_\kappa&\rightarrow\vect{u} \quad\text{strongly in }L^2([0,T],H),\\ \label{wk_u_L2V} \vect{u}_\kappa&\rightharpoonup\vect{u} \quad\text{weakly in }L^2([0,T],V)\text{ and weak-\textasteriskcentered \;in }L^\infty([0,T],H),\\ \label{wk_du_dt} \pd{\vect{u}_\kappa}{t}&\rightharpoonup\pd{\vect{u}}{t} \quad\text{weakly in }L^2([0,T],V'),\\ \label{wk_theta_LiH} \theta_\kappa&\rightharpoonup\theta \quad\text{weakly in }L^2([0,T],L^2)\text{ and weak-\textasteriskcentered \;in }L^\infty([0,T],L^2), \\ \label{wk_dtheta_dt} \pd{\theta_\kappa}{t}&\rightharpoonup\pd{\theta}{t} \quad\text{weakly in }L^2([0,T],H^{-2}). \end{align} \end{subequations}\\ \item {\em {\bf Step 2:} Passing to the limit in the system.} The results from Step 1 imply that, for the linear terms in \eqref{bouss_kappa_weak}, we have, by the weak convergence in \eqref{wk_u_L2V} and \eqref{wk_theta_LiH}, as $\kappa\rightarrow 0$, \begin{align*} \int_0^T(\vect{u}_\kappa(s), \Gamma'_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}})\,ds &\rightarrow \int_0^T(\vect{u}(s), \Gamma'_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}})\,ds ,\\ \nu\int_0^T((\vect{u}_\kappa(s), \Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} ))\,ds &\rightarrow \nu\int_0^T((\vect{u}(s), \Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} ))\,ds ,\\ \int_0^T(\theta_\kappa(s)\vect{e}_2,\Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds &\rightarrow \int_0^T(\theta(s)\vect{e}_2,\Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds ,\\ \int_0^T(\theta_\kappa(s),e^{2\pi i\vect{m}\cdot\vect{x}})\chi_{\vect{m}}'(s)\,ds &\rightarrow \int_0^T(\theta(s),e^{2\pi i\vect{m}\cdot\vect{x}})\chi_{\vect{m}}'(s)\,ds ,\\ \kappa\abs{\int_0^T((\theta_\kappa(s), e^{2\pi i\vect{m}\cdot \vect{x}}\chi_{\vect{m}}(s)))\;ds} & \leq C\sqrt{\kappa} (\sqrt{\kappa}\|\theta_\kappa\|_{L^2([0,T], H^1)})\leq CK_0\sqrt{\kappa} \rightarrow 0. \end{align*} \bigskip It remains to show the convergence of the remaining non-linear terms. Let \begin{align*} I(\kappa)&:=\sum_{j=1}^2\int_0^T(u^j_\kappa\vect{u}_\kappa, \Gamma_{\vect{m}}(s)\partial_j e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds- \sum_{j=1}^2\int_0^T(u^j\vect{u}, \Gamma_{\vect{m}}(s) \partial_j e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds, \\ J(\kappa)&:=\int_0^T\adv{\vect{u}_\kappa(s)\theta_\kappa(s),\chi_{\vect{m}}(s)\nabla e^{2\pi i\vect{m}\cdot \vect{x}}}\,ds-\int_0^T\adv{\vect{u}(s)\theta(s),\chi_{\vect{m}}(s)\nabla e^{2\pi i\vect{m}\cdot \vect{x}}}\,ds. \end{align*} The convergence $I(\kappa)\rightarrow 0$ as $\kappa\maps0$ is standard in the theory of the Navier-Stokes equations, thanks to \eqref{st_u_L2H} and \eqref{wk_u_L2V} (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}).
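For completeness, we note the standard splitting behind this claim (a brief sketch; the two terms are handled exactly like $J_1(\kappa)$ and $J_2(\kappa)$ below): \begin{align*} I(\kappa) = \sum_{j=1}^2\int_0^T((u^j_\kappa-u^j)\vect{u}_\kappa, \Gamma_{\vect{m}}(s)\partial_j e^{2\pi i\vect{m}\cdot \vect{x}})\,ds +\sum_{j=1}^2\int_0^T(u^j(\vect{u}_\kappa-\vect{u}), \Gamma_{\vect{m}}(s)\partial_j e^{2\pi i\vect{m}\cdot \vect{x}})\,ds, \end{align*} and each term tends to zero as $\kappa\maps0$, thanks to the strong convergence \eqref{st_u_L2H}, the uniform boundedness of $\vect{u}_\kappa$ and $\vect{u}$ in $L^2([0,T],H)$, and the boundedness of the test functions in $L^\infty([0,T]\times\mathbb T^2)$.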
To show $J(\kappa)\maps0$ as $\kappa\maps0$, we write $J(\kappa)=J_1(\kappa)+J_2(\kappa)$, the definitions of which are given below. We have \begin{align*} J_1(\kappa) &:= \int_0^T(( \vect{u}_\kappa(s)-\vect{u}(s) ) \theta_\kappa(s),\nabla e^{2\pi i\vect{m}\cdot \vect{x}})\chi_{\vect{m}}(s)\,ds \maps0 \end{align*} as $\kappa\maps0$, since $\vect{u}_\kappa\rightarrow\vect{u}$ strongly in $L^2([0,T],H)$ and $\theta_\kappa$ is bounded in $L^2([0,T],L^2)$, uniformly with respect to $\kappa$. For $J_2$, we have \begin{align*} J_2(\kappa)&:= \int_0^T\left(\vect{u}(s)(\theta_\kappa(s)-\theta(s)),\nabla e^{2\pi i\vect{m}\cdot \vect{x}}\right)\chi_{\vect{m}}(s)\,ds \maps0, \end{align*} thanks to the weak convergence in \eqref{wk_theta_LiH} and the fact that $\vect{u}\in L^2([0,T],H)$. Thus, $J(\kappa)=J_1(\kappa)+J_2(\kappa)\maps0$. Hence, sending $\kappa\maps0$, we see that $\vect{u}$ and $\theta$ satisfy \eqref{bouss_wk}. \\ \item {\em {\bf Step 3: } Show that $\vect{u}\in C([0,T],H)$, that $\theta\in C_w([0,T],L^2)$, and that, in fact, $\theta\in C([0,T],L^2)$.} The uniform bound, with respect to $\kappa$, on the time derivative of $\vect{u}_\kappa$ given in Theorem \ref{thm:diffusion} (ii) allows us to pass to an additional subsequence, if necessary, to find that $\frac{d\vect{u}}{dt}\in L^2([0,T],V')$. Since $\vect{u}\in L^2([0,T],V)$ and $\frac{d\vect{u}}{dt}\in L^2([0,T],V')$, following the standard theory of the NSE (see, e.g., Theorem 7.2 of \cite{Robinson_2001}), we obtain that $\vect{u}\in C([0,T],H)$. Next, we would like to show that $\theta\in C_w([0,T], L^2)$. This can be proven without difficulty using standard arguments. For completeness, and for use in a later section, we present the proof here. We follow similar arguments as in \cite{Levermore_Oliver_Titi_1996, Temam_2001_Th_Num}. We start by showing that the sequence of solutions $\{\theta_\kappa\}$ (as $\kappa \rightarrow 0$) is relatively compact in $C_w([0,T], L^2)$. By the Arzela-Ascoli theorem, it suffices to show that (a) $\{\theta_\kappa(t)\}$ is a relatively compact set in the weak topology of $L^2(\mathbb T^2)$ for a.e. $t\geq 0$, and (b) for every $\phi\in L^2(\mathbb T^2)$ the sequence $\{(\theta_\kappa,\phi)\}$ is equicontinuous in $C([0,T])$. Condition (a) follows from the uniform boundedness of $\theta_\kappa(t)$ in $L^2(\mathbb T^2)$ for a.e. $t\geq 0$, as stated in Theorem \ref{thm:diffusion} part (i). Next, we show that condition (b) is satisfied. Following classical arguments, we start by assuming that $\phi$ is smooth; for example, we can assume that $\phi$ is a trigonometric polynomial. We have \begin{equation} \aligned \label{e:continuity_theta_kappa} &\quad |(\theta_\kappa(t_2),\phi) - (\theta_\kappa(t_1), \phi)| \\&\leq \abs{ \kappa\int_{t_1}^{t_2}((\theta_\kappa(t),\phi))\;dt } +\abs{ \int_{t_1}^{t_2} (\nabla\cdot(\vect{u}_\kappa\theta_\kappa)(t),\phi)\;dt } \\&\leq \kappa \int_{t_1}^{t_2} \|\theta_\kappa(t)\||\nabla\phi| \;dt + \int_{t_1}^{t_2} \abs{(\vect{u}_\kappa(t)\theta_\kappa(t),\nabla\phi)}\;dt \\&\leq C\kappa^{1/2}|t_2-t_1|^{1/2}\left(\kappa\int_{t_1}^{t_2}\|\theta_\kappa(t)\|^2\;dt\right)^{1/2} \\&\quad+ \|\nabla\phi\|_\infty |t_2-t_1|^{1/4}\left( \int_{t_1}^{t_2}\|\vect{u}_\kappa(t)\|^4_4 \;dt \right)^{1/4}\left(\int_{t_1}^{t_2}|\theta_\kappa(t)|^2\;dt\right)^{1/2}.
\endaligned \end{equation} Assuming, without loss of generality, that $0<\kappa<1$, and using \eqref{L4_to_H1}, one obtains \begin{equation}\label{t-small} \aligned |(\theta_\kappa(t_2),\phi) - (\theta_\kappa(t_1), \phi)|&\leq C |t_2-t_1|^{1/2} \left(\kappa\int_{t_1}^{t_2}\|\theta_\kappa(t)\|^2\;dt\right)^{1/2} \\ &\quad + C|t_2-t_1|^{1/4} \left(\int_{t_1}^{t_2}|\vect{u}_\kappa(t)|^2\|\vect{u}_\kappa(t)\|^2 \;dt\right)^{1/4}. \endaligned \end{equation} From Theorem \ref{thm:diffusion} part (i), since $\vect{u}_\kappa$ is uniformly bounded with respect to $\kappa$ in $L^\infty([0,T], H) \cap L^2([0,T], V)$ and $\displaystyle\kappa\int_{0}^T \|\theta_\kappa(t)\|^2\;dt \leq K_0^2$, with $K_0$ independent of $\kappa$, we have that the set $\{(\theta_\kappa,\phi) \}$ is equicontinuous in $C([0,T])$. We now extend this result to all test functions $\phi$ in $\dot{L}^2(\mathbb T^2)$, using the density of trigonometric polynomials in $\dot{L}^2(\mathbb T^2)$. Let $\epsilon>0$. We choose a trigonometric polynomial $\phi_\epsilon$ such that $|\phi-\phi_\epsilon | < \frac{\epsilon}{3|\theta_0| + 1}$. Then, we have \begin{equation}\label{eq:above} \aligned |(\theta_\kappa(t_2),\phi) - (\theta_\kappa(t_1), \phi)| &= |( \theta_\kappa(t_2)- \theta_\kappa(t_1), \phi-\phi_\epsilon) + ( \theta_\kappa(t_2)- \theta_\kappa(t_1),\phi_\epsilon) |\\ &\leq |\phi-\phi_\epsilon|\left(|\theta_\kappa(t_2)| + |\theta_\kappa(t_1)|\right) + |( \theta_\kappa(t_2)- \theta_\kappa(t_1),\phi_\epsilon) |. \endaligned \end{equation} From the uniform $L^\infty([0,T],L^2)$ bound of $\theta_\kappa$, with respect to $\kappa$, we conclude that the first term on the right-hand side of \eqref{eq:above} is less than $\frac{2}{3}\epsilon$. Choosing $|t_2-t_1|$ small enough in \eqref{t-small}, we can make the second term on the right-hand side of \eqref{eq:above} less than $\epsilon/3$. Thus, the whole expression can be made less than $\epsilon$. This completes the proof that $\theta\in C_w([0,T], L^2)$. Finally, as pointed out by the authors of \cite{Danchin_Paicu_2008}, since $\theta$ is transported by the divergence-free velocity field $\vect{u}\in L^2([0,T], V)$, we get in addition that $\theta\in C([0,T], L^2)$ (see, e.g., \cite{DiPerna_Lions_1989}). From these results, standard arguments from the theory of the Navier-Stokes equations (see, e.g., \cite{Constantin_Foias_1988,Temam_2001_Th_Num,Robinson_2001}) now show that the initial conditions are satisfied in the sense of Definition \ref{def:weak}. \item {\em {\bf Step 4:} Show that if $\theta_0\in L^\infty$ then $\theta \in L^\infty([0,T], L^\infty)$.} Here we will use techniques of E. Hopf and G. Stampacchia, which are very similar to those used in \cite{Foias_Manley_Temam_1987} (see also \cite{Kinderlehrer_Stampacchia_1980, Temam_1997_IDDS}); we give the details here for the sake of completeness. For any function $f\in H^1$, we use the standard notation $f^+:=\max\{f,0\}.$ It is a standard exercise to show that if $f\in H^1$, then $f^+\in H^1$. Let $(\vect{u}_\kappa,\theta_\kappa)$ be a solution of \eqref{bouss}, as given in Theorem \ref{thm:diffusion}. Let us denote $\Theta_\kappa:=\theta_\kappa-\|\theta_0\|_{L^\infty}$. Notice that $\Theta_\kappa$ satisfies the evolution equation \eqref{bouss_den} with $\theta$ replaced by $\Theta_\kappa$ and $\vect{u}$ replaced by $\vect{u}_\kappa$. Thus we have $(\Theta_\kappa)^+\in L^2([0,T],H^1)$.
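We record the classical identity behind this fact (see, e.g., \cite{Kinderlehrer_Stampacchia_1980}), which is also used in the computation below: for $f\in H^1(\mathbb T^2)$ one has, a.e. in $\mathbb T^2$, \begin{align*} \nabla f^+ = \begin{cases} \nabla f & \text{on } \set{f>0},\\ 0 & \text{on } \set{f\leq 0}, \end{cases} \end{align*} so that $\|f^+\|_{H^1}\leq\|f\|_{H^1}$, and moreover $f\,\nabla f^+ = f^+\,\nabla f^+$ a.e.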
Taking the action of \eqref{bouss_den} with $(\Theta_\kappa)^+$ yields \begin{align*} \frac{1}{2}\frac{d}{dt}\|(\Theta_\kappa)^+\|_{L^2}^2 &= -\kappa\int_{\mathbb T^2}|\nabla(\Theta_\kappa)^+|^2\,d\vect{x} +\int_{\mathbb T^2}\vect{u}_\kappa\Theta_\kappa\cdot \nabla(\Theta_\kappa)^+\,d\vect{x} \\&= -\kappa\int_{\mathbb T^2}|\nabla(\Theta_\kappa)^+|^2\,d\vect{x} +\frac{1}{2}\int_{\mathbb T^2} \vect{u}_\kappa\cdot\nabla((\Theta_\kappa)^+)^2 \;d\vect{x} \\&= -\kappa\int_{\mathbb T^2}|\nabla(\Theta_\kappa)^+|^2\,d\vect{x} \leq0, \end{align*} thanks to \eqref{bouss_div}. Thus $\|(\Theta_\kappa)^+(t)\|_{L^2}\leq \|(\Theta_\kappa)^+(0)\|_{L^2}=0$, which implies that $(\theta_\kappa(\vect{x},t)- \|\theta_0\|_{L^\infty})^+=0$ a.e., that is, $\theta_\kappa\leq\|\theta_0\|_{L^\infty}$ a.e. Similarly, working with $(-\|\theta_0\|_{L^\infty}-\theta_\kappa)^+$, one can show that $\theta_\kappa(\vect{x},t)\geq -\|\theta_0\|_{L^\infty}$ a.e. It now follows that $\|\theta_\kappa\|_{L^\infty([0,T],L^\infty)}\leq \|\theta_0\|_{L^\infty}$ for all $\kappa>0$. Thus, $\theta_\kappa$ is bounded uniformly, with respect to $\kappa$, in $L^\infty([0,T],L^\infty)$. Therefore, it follows from the Banach-Alaoglu Theorem that there exists a subsequence of the previous subsequence (which we also denote by $\theta_\kappa$) converging in the weak-\textasteriskcentered\; topology of $L^\infty([0,T], L^\infty)$ to $\theta$, which satisfies the bound $\displaystyle\|\theta\|_{L^\infty([0,T],L^\infty)}\leq\liminf_{\kappa\rightarrow 0} \|\theta_\kappa\|_{L^\infty([0,T],L^\infty)} \leq\|\theta_0\|_{L^\infty}<\infty$. The equivalence of \eqref{bouss_wk} to the functional form \eqref{e:functional_nu} follows from standard arguments in the theory of the NSE (see, e.g., \cite{Temam_2001_Th_Num}). \end{list} \end{proof} \begin{theorem}[Existence of strong solutions]\label{exist_strong_visc} Let $\vect{u}_0\in V$ and $\theta_0\in L^2$. Then there exists a strong solution of $P_{\nu,0}^0$. Furthermore, the functional equation \eqref{e:funct_1} now holds in $L^2([0,T],H)$, and \eqref{e:funct_2} now holds in $L^2([0,T], H^{-1})$. \end{theorem} \begin{proof} Since $\vect{u}_0\in V$ and $\theta_0\in L^2$, we have by Theorem \ref{thm:diffusion} that there exists a solution $(\vect{u}_\kappa,\theta_\kappa)$ of \eqref{bouss} with $\vect{u}_\kappa\in L^\infty([0,T],V)\cap L^2([0,T],\mathcal{D}(A))$, and furthermore, $\frac{d}{dt}\vect{u}_\kappa\in L^2([0,T],H)$. In order to show that the bounds on these higher-order norms are in fact independent of $\kappa$ (as stated in Theorem \ref{thm:diffusion} part (ii)), let us take the inner product of \eqref{bouss_nd_mo} with $A\vect{u}_\kappa$. Using the Lions-Magenes Lemma (see, e.g., \cite{Temam_2001_Th_Num}) to show that $\ip{\pd{\vect{u}}{t}}{A\vect{u}} = \frac{1}{2}\frac{d}{dt}\|\vect{u}\|^2$, and the fact that $(B(\vect{u}_\kappa,\vect{u}_\kappa),A\vect{u}_\kappa)=0$ due to the periodic boundary conditions, we have \begin{align*} \frac{1}{2}\frac{d}{dt}\|\vect{u}_\kappa\|^2 +\nu|A\vect{u}_\kappa|^2 &= (\theta_\kappa \vect{e}_2,A\vect{u}_\kappa) \leq |\theta_0||A\vect{u}_\kappa| \leq \frac{1}{2\nu}|\theta_0|^2+\frac{\nu}{2}|A\vect{u}_\kappa|^2. \end{align*} Absorbing the term $\frac{\nu}{2}|A\vect{u}_\kappa|^2$ into the left-hand side and integrating in time yields \begin{align}\label{u_H2_bdd} \|\vect{u}_\kappa(t)\|^2 +\nu\int_0^t|A\vect{u}_\kappa|^2 \,ds &\leq \|\vect{u}_0\|^2+\frac{1}{\nu}|\theta_0|^2t \leq \|\vect{u}_0\|^2+\frac{1}{\nu}|\theta_0|^2T:=K_3. \end{align} Thus $\vect{u}_\kappa$ is bounded in $L^{\infty}([0,T],V)\cap L^2([0,T],\mathcal{D}(A))$ independently of $\kappa$.
Furthermore, \begin{align*} \abs{\frac{d\vect{u}_\kappa}{dt}} &\leq\sup_{|\vect{w}|=1}\pair{B(\vect{u}_\kappa,\vect{u}_\kappa)}{\vect{w}} +\nu\sup_{|\vect{w}|=1}(A\vect{u}_\kappa,\vect{w}) +\sup_{|\vect{w}|=1}(\theta_\kappa\vect{e}_2,\vect{w}) \\&\leq C\sup_{|\vect{w}|=1}|A\vect{u}_\kappa|\|\vect{u}_\kappa\||\vect{w}| +\nu\sup_{|\vect{w}|=1}|A\vect{u}_\kappa||\vect{w}| +\sup_{|\vect{w}|=1}|\theta_\kappa||\vect{w}| \\&\leq CK_3^{1/2}|A\vect{u}_\kappa| +\nu|A\vect{u}_\kappa| +|\theta_0|. \end{align*} Thus, $\frac{d}{dt}\vect{u}_\kappa$ is bounded in $L^2([0,T],H)$ independently of $\kappa$, due to \eqref{u_H2_bdd}. We also have \begin{align*} \norm{\frac{d\theta_{\kappa}}{dt}}_{H^{-1}} &\leq \sup_{\|w\|=1}\abs{\ip{\mathcal{B}(\vect{u}_{\kappa},\theta_{\kappa})}{w}} + \kappa\sup_{\|w\|=1}\abs{\ip{\nabla\theta_{\kappa}}{\nabla w}}\\ &\leq \sup_{\|w\|=1}\|\vect{u}_{\kappa}\|_{L^\infty}|\theta_{\kappa}|\|w\| +\kappa \sup_{\|w\|=1}\|\theta_\kappa\| \|w\| \\ &\leq C\|\vect{u}_{\kappa}\|_{H^2}|\theta_0| + \sqrt{\kappa}\|\theta_\kappa\|, \end{align*} where we have used the assumption that $0< \kappa < 1$. Hence, from Theorem \ref{thm:diffusion}, we have that $\frac{d\theta_{\kappa}}{dt}$ is bounded in $L^2([0,T],H^{-1})$ independently of $\kappa$. The above estimates allow us to use the Banach-Alaoglu Theorem and the Aubin Compactness Theorem (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}), as $\kappa\maps0$, to extract a further subsequence (extracted from the sequences in \eqref{st_u_L2H}, \eqref{wk_u_L2V} and \eqref{wk_theta_LiH}, and which we still label with a subscript $\kappa$), such that \begin{subequations}\label{st_conv} \begin{align} \label{st_u_L2V} \vect{u}_\kappa&\rightarrow\vect{u} \quad\text{strongly in }L^2([0,T],V), \\ \label{wk_u_L2DA} \vect{u}_\kappa&\rightharpoonup\vect{u} \quad\text{weakly in }L^2([0,T],\mathcal D(A))\text{ and weak-\textasteriskcentered\; in }L^\infty([0,T],V),\\ \label{wk_d_u_H} \frac{d\vect{u}_\kappa}{dt}&\rightharpoonup\frac{d\vect{u}}{dt} \quad\text{weakly in }L^2([0,T],H),\\ \label{wk_d_theta} \frac{d\theta_\kappa}{dt}&\rightharpoonup\frac{d\theta}{dt} \quad\text{weakly in }L^2([0,T],H^{-1}), \end{align} \end{subequations} where the limits $\vect{u}$ and $\theta$ are the same elements as in \eqref{wk_conv}, by the uniqueness of limits, since the current topologies are stronger than those in \eqref{wk_conv}. Furthermore, since $\vect{u}\in L^2([0,T],\mathcal D(A))$ and $\frac{d\vect{u}}{dt}\in L^2([0,T],H)$, following the standard theory of the NSE (see, e.g., Theorem 7.2 of \cite{Robinson_2001}), we obtain that $\vect{u}\in C([0,T],V)$. Thus we have shown the existence of a strong solution, as defined in Definition \ref{def:strong}. \end{proof} In the next theorem we will show the uniqueness of strong solutions. We note that in the work \cite{Danchin_Paicu_2008_French}, global well-posedness in the case of the whole plane $\field{R}^2$ was established with initial data $\vect{u}_0$ and $\theta_0$ both only in $L^2$. This optimal global well-posedness result was established elegantly, using {\em a priori} estimates in Besov spaces for the heat equation and the transport equation. Here we give an alternative proof, which uses more elementary techniques, but requires stronger initial data for the velocity field. This will allow us to fix some basic ideas that we will use to obtain the optimal global well-posedness results for the anisotropic Boussinesq equations, which we present in the next section.
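Before giving the proof, we point out the elementary observation underlying the argument: the difference of the two temperatures is measured in an $H^{-1}$-type norm. Indeed, for mean-zero $\theta_1,\theta_2$, setting $\xi_\ell:=\triangle^{-1}\theta_\ell$ as below, one has \begin{align*} \|\xi_1-\xi_2\| = \|\nabla\triangle^{-1}(\theta_1-\theta_2)\|_{L^2} \cong \|\theta_1-\theta_2\|_{H^{-1}}. \end{align*} Thus, uniqueness is established in a topology weaker than the one in which the solutions exist, which is what allows the estimates to close with only $L^2$-control on the temperature.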
\begin{theorem}[Uniqueness of Strong Solutions of $P_{\nu,0}^0$]\label{uniqueness_visc} Let $T>0$. Suppose $\theta_0\in L^2$ and $\vect{u}_0\in V$. Then there exists a unique strong solution $(\vect{u},\theta)$ to $P_{\nu,0}^0$. \end{theorem} \begin{proof} The existence of solutions satisfying the hypothesis is already given by Theorem \ref{exist_strong_visc}. It remains to show the uniqueness. Let $(\vect{u}_\ell,\theta_\ell)$ be two strong solutions, $\ell=1,2$, on the interval $[0,T]$, and define $\xi_\ell:=\triangle^{-1}\theta_\ell$, normalized so that $\int_{\mathbb T^2}\xi_\ell\,d\vect{x}=0$. Write $\diff{\bu}:=\vect{u}_1-\vect{u}_2$, and $\diff{\xi}:=\xi_1-\xi_2$. These quantities satisfy the functional equations \begin{subequations}\label{e:functional_nu_diff} \begin{align}\label{e:funct_1_diff} \pd{\diff{\bu}}{t} + \nu A\diff{\bu} + B(\vect{u}_1,\diff{\bu}) + B(\diff{\bu},\vect{u}_2)&= P_\sigma(\triangle\diff{\xi}\vect{e}_2)\quad \mbox{in}\quad L^2([0,T], H)\quad \mbox{and}\\\label{e:funct_2_diff} \pd{\triangle\diff{\xi}}{t} + \mathcal{B}(\diff{\bu},\triangle\xi_1) + \mathcal{B}(\vect{u}_2,\triangle\diff{\xi}) &= 0\quad \mbox{in}\quad L^2([0,T], H^{-1}). \end{align} \end{subequations} Taking the inner product in $H$ of \eqref{e:funct_1_diff} with $\diff{\bu}$, and taking the action in $H^{-1}$ of \eqref{e:funct_2_diff} on $\diff{\xi} \in L^2([0,T], H^{2})$, we obtain, thanks to Lemmas \ref{B:prop} and \ref{B_theta:prop}, \begin{subequations}\label{bouss_diff_en} \begin{align} \label{bouss_diff_en_u} \frac{1}{2}\frac{d}{dt}|\diff{\bu}|^2 &+ \nu\|\diff{\bu}\|^2 = -(B(\diff{\bu},\vect{u}_2), \diff{\bu}) +(\triangle\diff{\xi}\vect{e}_2,\diff{\bu}), \\\label{bouss_diff_en_xi} \frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 &= -(\diff{\bu}\triangle\xi_1,\nabla\diff{\xi}) -(\vect{u}_2\triangle\diff{\xi},\nabla\diff{\xi}). \end{align} \end{subequations} In \eqref{bouss_diff_en_xi} we used the Lions-Magenes Lemma (see, e.g., \cite{Temam_2001_Th_Num}) to obtain $\frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 = \ip{\frac{d\diff{\xi}}{dt}}{\diff{\xi}}$. Let $K=\max_{\ell=1,2}\set{\|\vect{u}_\ell\|_{L^\infty([0,T], V)},\|\theta_\ell\|_{L^\infty([0,T], L^2)}}$. From equation \eqref{bouss_diff_en_u}, inequality \eqref{B:ineq1}, and the fact that $\vect{u}_2\in L^\infty([0,T],V)$, we have \begin{align} \frac{1}{2}\frac{d}{dt}|\diff{\bu}|^2 + \nu\|\diff{\bu}\|^2 &\notag \leq C|\diff{\bu}|\|\diff{\bu}\|\|\vect{u}_2\|+\|\diff{\xi}\|\|\diff{\bu}\| \\&\leq\label{bouss_u_diff} \frac{K}{\nu}|\diff{\bu}|^2+\frac{\nu}{6}\|\diff{\bu}\|^2 + \frac{3}{2\nu}\|\diff{\xi}\|^2+\frac{\nu}{6}\|\diff{\bu}\|^2. \end{align} Next, let $\epsilon>0$ be given, with $\epsilon\ll1$.
For the equation \eqref{bouss_diff_en_xi}, we integrate by parts and use \eqref{B_theta:zero} to find \begin{align} \frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 &=\notag -\pair{\diff{\bu}\triangle\xi_1}{\nabla\diff{\xi}} +\sum_{j=1}^2\pair{\partial_j\vect{u}_2}{\nabla\diff{\xi}\partial_j\diff{\xi}} \\&\leq \notag \|\diff{\bu}\|_{L^\infty}|\triangle\xi_1|\|\diff{\xi}\| +\|\nabla \vect{u}_2\|_{L^{2/\epsilon}}\|\diff{\xi}\|\|\nabla\diff{\xi}\|_{L^{2/(1-\epsilon)}} \\&\leq \notag K\pnt{\|\diff{\bu}\|\epsilon^{-1/4}+|A\diff{\bu}|e^{-1/\epsilon^{1/4}}}\|\diff{\xi}\| +C\|\nabla \vect{u}_2\|_{L^{2/\epsilon}}\|\diff{\xi}\|\|\diff{\xi}\|^{1-\epsilon}\|\diff{\xi}\|_{H^2}^\epsilon \\&\leq \label{bouss_xi_diff} \frac{\nu}{6}\|\diff{\bu}\|^2 +C\pnt{\frac{K^2}{\nu}\epsilon^{-1/2} +1}\|\diff{\xi}\|^2 +|A\diff{\bu}|^2e^{-2/\epsilon^{1/4}} \\&\quad \notag +K^{\epsilon}\epsilon^{-1/2}|A\vect{u}_2|\|\diff{\xi}\|^{2-\epsilon}, \end{align} where we have used \eqref{CZ_est}, \eqref{brezis}, and the interpolation inequality $\|\nabla\diff{\xi}\|_{L^{2/(1-\epsilon)}}\leq C\|\diff{\xi}\|^{1-\epsilon}\|\diff{\xi}\|_{H^2}^\epsilon$, noting that $C$ is independent of $\epsilon$. Next, we will use the fact that $\diff{\xi}(0)=0$ and $\diff{\bu}(0)=0$, and also that $\|\diff{\xi}(t)\|$ and $|\diff{\bu}(t)|$ are continuous in time; thus there exists a $\tau>0$ such that $\|\diff{\xi}(t)\|<1$ and $|\diff{\bu}(t)|<1$ for all $t\in[0,\tau]$. Let $t^* = \sup\{\tau\in (0,T] : |\diff{\bu}(t)|<1 \mbox{ and } \|\diff{\xi}(t)\|<1 \mbox{ for all } t\in[0,\tau)\}$. Adding \eqref{bouss_u_diff} and \eqref{bouss_xi_diff} and rearranging, we have, on $[0,t^*]$, \begin{align} &\quad\notag \frac{1}{2}\frac{d}{dt}\pnt{|\diff{\bu}|^2+\|\diff{\xi}\|^2 }+\frac{\nu}{2}\|\diff{\bu}\|^2 \\&\leq \notag K_\nu\pnt{1+\frac{1}{ \epsilon^{1/2}}}\pnt{|\diff{\bu}|^2 + \|\diff{\xi}\|^2} +|A\diff{\bu}|^2e^{-2/\epsilon^{1/4}} +K^{\epsilon}\epsilon^{-1/2}|A\vect{u}_2|\|\diff{\xi}\|^{2-\epsilon} \\&\leq \label{less_than_one} K_\nu\pnt{1+\frac{1}{ \epsilon^{1/2}}+\frac{K^{\epsilon}}{\epsilon^{1/2}}|A\vect{u}_2|}\pnt{|\diff{\bu}|^2 + \|\diff{\xi}\|^2}^{1-\epsilon} +|A\diff{\bu}|^2e^{-2/\epsilon^{1/4}}. \end{align} Let $\eta>0$ be arbitrary, and let $z:=|\diff{\bu}|^2+\|\diff{\xi}\|^2+\eta$. Dividing \eqref{less_than_one} by $z^{1-\epsilon}$, we find \begin{align*} \frac{1}{\epsilon}\frac{d}{dt}z^{\epsilon} &\leq K_\nu\pnt{1+\frac{1}{ \epsilon^{1/2}}+\frac{K^{\epsilon}}{\epsilon^{1/2}}|A\vect{u}_2|} +z^{\epsilon-1}|A\diff{\bu}|^2e^{-2/\epsilon^{1/4}} \\&\leq K_\nu\pnt{1+\frac{1}{ \epsilon^{1/2}}+\frac{K^{\epsilon}}{\epsilon^{1/2}}|A\vect{u}_2|} +\eta^{\epsilon-1}|A\diff{\bu}|^2e^{-2/\epsilon^{1/4}}, \end{align*} since $z\geq\eta$. Integrating over $[0,t]$, for $t\in(0,t^*]$, we find \begin{align}\label{z_int} z(t) &\leq K_\nu^{1/\epsilon}\pnt{\epsilon T+\epsilon^{1/2}T+K^{\epsilon}\epsilon^{1/2}\int_0^T|A\vect{u}_2(s)|\,ds}^{1/\epsilon} \\&\quad\nonumber +\epsilon^{1/\epsilon}\eta^{1-1/\epsilon}e^{-2\epsilon^{-5/4}}\pnt{\int_0^T|A\diff{\bu}(s)|^2\,ds}^{1/\epsilon} + \eta^{1/\epsilon}. \end{align} Sending $\eta\rightarrow 0$, we obtain \begin{align}\label{e:E4} |\diff{\bu}(t)|^2 + \|\diff{\xi}(t)\|^2 \leq K_\nu^{1/\epsilon} \left(\epsilon T + \epsilon^{1/2}T + cK^\epsilon\epsilon^{1/2} \int_0^T|A\vect{u}_2(s)|^2\;ds\right)^{1/\epsilon} \end{align} for $t\in [0,t^*]$. Taking the limit in \eqref{e:E4}, as $\epsilon\maps0$, we find that $\|\diff{\xi}(t)\|=0$ and $|\diff{\bu}(t)|=0$ on $[0,t^{*}]$. In particular, $|\diff{\bu}(t^*)|^2= \|\diff{\xi}(t^*)\|^2=0<1$.
Therefore, from the continuity of $|\diff{\bu}(t)|^2$ and $\|\diff{\xi}(t)\|^2$ and the definition of $t^*$, we conclude that $t^*= T$, since otherwise we would obtain a contradiction to the definition of $t^*$. Hence, $\diff{\bu}(t)=0$ and $\diff{\xi}(t)=0$ for all $t\in [0,T]$. \end{proof} \section{Global Well-posedness Results for the non-diffusive Boussinesq Equations with Horizontal Viscosity \texorpdfstring{($P_{\nu_x,0}^0$)}{}}\label{sec:aniso} We now consider the Boussinesq equations with anisotropic viscosity, as given in \eqref{bouss_aniso} ($P_{\nu_x,0}^0$). We will establish global well-posedness results under rather mild assumptions on the initial data. In the first part of this section we define what we mean by a weak solution to system \eqref{bouss_aniso} and show its existence. Then, under some additional requirements on the initial data, we show uniqueness. To set additional notation, we denote the vorticity by $\omega := \partial_1u^2-\partial_2 u^1$, which satisfies the equation \begin{align}\label{bouss_aniso_vort} \partial_t\omega + \nabla\cdot(\omega\vect{u}) -\nu\partial_1^2\omega&= \partial_1\theta. \end{align} The best global well-posedness result we are aware of for problem \eqref{bouss_aniso}, in the case of the whole plane $\field{R}^2$, is stated in the following theorem, established in \cite{Danchin_Paicu_2008}. \begin{theorem}[Danchin and Paicu, \cite{Danchin_Paicu_2008}]\label{t:Paicu_exist} Let $\Omega = \field{R}^2$. Suppose $\theta_0\in L^2\cap L^\infty$, and $\vect{u}_0\in V$ with $\omega_0\in \sqrt{L}$. Then system \eqref{bouss_aniso} admits a global solution $(\vect{u},\theta)$ such that $\theta\in C_B([0,\infty);L^2)\cap C_w([0,\infty);L^\infty)\cap L^\infty([0,\infty),L^\infty)$ and $\vect{u} \in C_w([0,\infty);H^1)$, $\vect{u}\cdot\vect{e}_2\in L^2_{\text{loc}}([0,\infty);H^2)$, $\omega\in L^\infty_{\text{loc}}([0,\infty),\sqrt{L})$, $\nabla \vect{u}\in L^2_{\text{loc}}([0,\infty),\sqrt{L})$. If, in addition, $\theta_0\in H^s$ for some $s\in (0, 1]$, then $\theta\in C([0,\infty);H^{s-\epsilon})$ for all $\epsilon > 0$. Finally, if $s > 1/2$, then the solution is unique. \end{theorem} In the present work, we improve the above result by weakening the requirements on the initial data needed for the uniqueness portion of the theorem. To begin with, we weaken the notion of solution by making the following definition. \begin{definition}[Weak Solutions for the Anisotropic Case]\label{def:soln_aniso} Let $T>0$. Let $\theta_0\in L^2$, $\omega_0\equiv\nabla^\perp\cdot\vect{u}_0\in L^2$. We say that $(\vect{u},\theta)$ is a weak solution to \eqref{bouss_aniso} on the interval $[0,T]$ if $\omega\in L^\infty([0,T];L^2)\cap C_w([0,T];L^2)$ and $\theta\in L^\infty([0,T];L^2)\cap C_w([0,T];L^2),$ $u^2\in L^2([0,T],H^2)$, $\pd{\vect{u}}{t}\in L^1([0,T], V')$, $\pd{\theta}{t}\in L^1([0,T],H^{-2})$, and also $(\vect{u},\theta)$ satisfies \eqref{bouss_aniso} in the weak sense; that is, for any $\Phi$, $\varphi$, chosen as in \eqref{test_fcns}, it holds that \begin{subequations}\label{bouss_aniso_weak_form} \begin{align} &\quad\notag -\int_0^T(\vect{u}(s),\Phi'(s))\,ds+\nu\int_0^T(\partial_1\vect{u}(s),\partial_1\Phi(s))\,ds +\sum_{j=1}^2\int_0^T(u^j\vect{u},\partial_j\Phi)\,ds \\& = (\vect{u}_0,\Phi(0))+\int_0^T(\theta(s)\vect{e}_2,\Phi(s))\,ds \\\notag\\& -\int_0^T(\theta(s),\varphi'(s))\,ds + \int_0^T(\theta\vect{u},\nabla\varphi)\,ds = (\theta_0,\varphi(0)), \end{align} \end{subequations} where $'\equiv \pd{}{s}$.
\end{definition} \begin{remark} Again, following standard arguments as in the theory of the NSE \cite{Temam_2001_Th_Num}, one can show that the above system is equivalent to the functional form \begin{subequations}\label{e:functional_nu_horizontal} \begin{align}\label{e:funct_1_horizontal} \pd{\vect{u}}{t} - \nu \partial_{1}^2\vect{u} + B(\vect{u},\vect{u}) &= P_\sigma(\theta\vect{e}_2)\quad \mbox{in}\quad L^2([0,T], V')\quad \mbox{and}\\\label{e:funct_2_horizontal} \pd{\theta}{t} + \mathcal{B}(\vect{u},\theta) &= 0\quad \mbox{in}\quad L^2([0,T], H^{-2}). \end{align} \end{subequations} \end{remark} We now state and prove our main results for the system \eqref{bouss_aniso} ($P_{\nu_x,0}^0$). The global existence and regularity results are stated in the theorem below, and the uniqueness theorem follows. \begin{theorem}[Global Existence and Regularity]\label{EU_aniso} Let $T>0$ be given. Let $\theta_0\in L^2$ and $\omega_0\in L^2$. Then, the following hold: \begin{enumerate} \item There exists a weak solution to \eqref{bouss_aniso} ($P_{\nu_x,0}^0$) in the sense of Definition \ref{def:soln_aniso}. \item If $\omega_0\in L^p$ and $\theta_0\in L^p$, with $p\in [2,\infty)$ fixed, then this weak solution satisfies $\omega\in L^\infty([0,T],L^p)$ and $\theta\in L^\infty([0,T],L^p)$. \item Furthermore, if $\omega_0\in \sqrt{L}$ and $\theta_0\in L^\infty$, then there exists a solution $\omega\in L^\infty([0,T], \sqrt{L})\cap C_w([0,T], L^2)$, $\pd{\vect{u}}{t}\in L^2([0,T], V')$ and $\theta\in L^\infty([0,T],L^\infty)\cap C([0,T],w\mbox{\textasteriskcentered-} L^\infty)$ (where $w\mbox{\textasteriskcentered-} L^\infty$ denotes the weak-$*$ topology on $L^\infty$) with $\pd{\theta}{t}\in L^\infty([0,T], H^{-1})$. \end{enumerate} \end{theorem} \begin{proof} The outline of our proof is as follows. We begin by generating a sequence of approximate solutions $(\vect{u}^{(n)},\theta^{(n)})$ to $P^0_{\nu_x,0}$ by adding an artificial {\em vertical viscosity} $\nu_y^{(n)}>0$ and an artificial diffusion $\kappa^{(n)}>0$, where $\kappa^{(n)}, \nu_y^{(n)}\maps0$ as $n\rightarrow \infty$, and also by smoothing the initial data. Global existence of solutions to the {\em fully viscous} system $P^0_{\nu,\kappa}$, given smoothed initial data, is guaranteed (see Theorem \ref{thm:diffusion}, part (iv)). Next, we establish uniform bounds for the relevant norms of the approximate sequence of solutions, independent of $n$, using basic energy estimates. We then employ the Aubin Compactness Theorem (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}) to show that the sequence of approximate solutions has a subsequence converging in appropriate function spaces. This limit will serve as a candidate weak solution. We then show that one can pass to the limit, and that the candidate functions satisfy the weak formulation \eqref{bouss_aniso_weak_form}. Finally, we establish some regularity results. \begin{list}{}{\leftmargin=0em} \item {\em {\bf Step 1: } Generating solutions to the regularized system given smoothed initial data.} Let $\nu_x > 0$ be fixed, and let $\kappa^{(n)}, {\nu^{(n)}_y}$ be sequences of positive numbers converging to zero. In fact, we can also assume that both $\kappa^{(n)}\leq \nu_x$ and ${\nu^{(n)}_y}\leq\nu_x$.
Let $(\vect{u}^{(n)}_0, \theta^{(n)}_0)$ be a sequence of smooth initial data such that $\vect{u}^{(n)}_0\rightarrow\vect{u}_0$ in $V$ and $\theta^{(n)}_0\rightarrow\theta_0$ in $L^2$, chosen in such a way that for each $n\in\field{N}$, $\|\vect{u}^{(n)}_0\|\leq \|\vect{u}_0\|+ \frac{1}{n}$ and $|\theta^{(n)}_0|\leq |\theta_0|+\frac{1}{n}$. Notice that, since $\vect{u}^{(n)}_0$ is smooth, it follows that $\nabla^\perp\cdot \vect{u}^{(n)}_0 = \omega^{(n)}_0$, and so the $\omega^{(n)}_0$ are smooth functions, bounded in $L^2$. From Theorem \ref{thm:diffusion} part (iv), by slightly modifying the proof of this result to account for values of the viscosity which differ in the horizontal and vertical directions, we have that for each $n$ there exists $(\vect{u}^{(n)}, \theta^{(n)})$ satisfying the following equations: \begin{subequations}\label{bouss_aniso+nu_y_weak_form} \begin{align} \quad -\int_0^T(\vect{u}^{(n)}(s),\Phi'(s))\,ds&+\nu_x\int_0^T(\partial_1\vect{u}^{(n)}(s),\partial_1\Phi(s))\,ds \\\notag + \nu^{(n)}_y&\int_0^T(\partial_2\vect{u}^{(n)}(s),\partial_2\Phi(s))\,ds +\sum_{j=1}^2\int_0^T(u^{j,(n)}\vect{u}^{(n)},\partial_j\Phi)\,ds \\&\notag = (\vect{u}^{(n)}_0,\Phi(0))+\int_0^T(\theta^{(n)}(s)\vect{e}_2,\Phi(s))\,ds \\\notag\\ -\int_0^T(\theta^{(n)}(s),\varphi'(s))\,ds &+ \kappa^{(n)}\int_0^T(\nabla\theta^{(n)}(s),\nabla\varphi(s))\,ds +\int_0^T(\theta^{(n)}\vect{u}^{(n)},\nabla\varphi)\,ds \\&= (\theta^{(n)}_0,\varphi(0)). \end{align} \end{subequations} \item {\em {\bf Step 2: } A priori estimates and compactness arguments to prove convergence of a subsequence.} We next establish {\em a priori} estimates on $(\vect{u}^{(n)},\theta^{(n)})$, uniformly in $n$ (independent of $\nu_y^{(n)}$ and $\kappa^{(n)}$). From the above smoothness properties of $(\vect{u}^{(n)},\theta^{(n)})$, we can now derive {\em a priori} estimates using basic energy estimates, in which the derivatives and integrations are well defined. First, since $\nabla\cdot\vect{u}^{(n)}=0$, one obtains \begin{align}\label{eq:thetan_l2_est} |\theta^{(n)}(t)|\leq|\theta^{(n)}_0|\leq |\theta_0| + \frac{1}{n}, \end{align} and \begin{align*} |\vect{u}^{(n)}(t)|^2+2\nu_x\int_0^t|\partial_1\vect{u}^{(n)}(\tau)|^2\,d\tau &+ 2\nu_y^{(n)}\int_0^t|\partial_2\vect{u}^{(n)}(\tau)|^2\,d\tau\\ &\leq (|\vect{u}_0|+ \frac{1}{n}+t(|\theta_0|+ \frac{1}{n}))^2. \end{align*} The calculations above are justified by replacing the test functions by $\theta^{(n)}$ and $\vect{u}^{(n)}$ in \eqref{bouss_aniso+nu_y_weak_form} and then integrating by parts. Using the evolution equation of the vorticity, namely the equation \begin{align}\label{eq:omega_n} \partial_t \omega^{(n)}+\vect{u}^{(n)}\cdot\nabla\omega^{(n)}-\nu_x\partial_1^2\omega^{(n)}-\nu_y^{(n)}\partial_2^2\omega^{(n)} =\partial_1\theta^{(n)}, \end{align} we also have \begin{align*} \frac{1}{2}\frac{d}{dt}|\omega^{(n)}|^2+\nu_x|\partial_1\omega^{(n)}|^2 + \nu_y^{(n)}|\partial_2\omega^{(n)}|^2 &= -(\theta^{(n)},\partial_1\omega^{(n)}) \\&\quad \leq\frac{\nu_x}{2}|\partial_1\omega^{(n)}|^2+\frac{1}{2\nu_x}|\theta^{(n)}|^2.
\end{align*} Integrating this gives \begin{align}\label{omega_LiL2_omega1_L2H1} |\omega^{(n)}(t)|^2+\nu_x\int_0^t|\partial_1\omega^{(n)}|^2\,d\tau &+ 2\nu_y^{(n)}\int_0^t|\partial_2\omega^{(n)}|^2\,d\tau\\&\leq \left(|\omega_0|+ \frac{1}{n}\right)^2+\frac{t}{\nu_x}\left(|\theta_0|+ \frac{1}{n}\right)^2, \end{align} which implies that $\omega^{(n)}$ is uniformly bounded in $L^\infty([0,T],L^2)$ with respect to $n$, and therefore $\vect{u}^{(n)}$ is uniformly bounded in $L^\infty([0,T],V)$ with respect to $n$. Furthermore, \eqref{omega_LiL2_omega1_L2H1} shows that $\partial_1\omega^{(n)}$ is uniformly bounded in $L^2([0,T],L^2)$ with respect to $n$. We also observe that \begin{align*} \partial_1\omega^{(n)} &= \partial_1^2 u^{2,(n)}- \partial_1\partial_2 u^{1,(n)} =\partial_1^2u^{2,(n)}+\partial_2^2u^{2,(n)} = \triangle u^{2,(n)}, \end{align*} where we have used the divergence-free condition $\partial_1u^{1,(n)}+\partial_2u^{2,(n)}=0$. Therefore, $\triangle u^{2,(n)}$ is uniformly bounded in $L^2([0,T],L^2)$, so that $u^{2,(n)}$ is uniformly bounded in $L^2([0,T],H^2)$ by elliptic regularity, and thus $\nabla u^{2,(n)}$ is uniformly bounded in $ L^2([0,T],H^1)$, all with respect to $n$. Next, we derive uniform bounds on the derivatives $(\pd{\vect{u}^{(n)}}{t})_{n\in \field{N}}$. Note that $$\pd{\omega^{(n)}}{t} = -\mathcal{B}(\vect{u}^{(n)},\omega^{(n)}) + \nu_x\partial_1^2\omega^{(n)} + \nu_y^{(n)} \partial_2^2\omega^{(n)} + \partial_1\theta^{(n)}.$$ Thus, \begin{equation}\label{eq:w_t_H2} \aligned \norm{\pd{\omega^{(n)}}{t}}_{H^{-2}} &\leq \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{\mathcal{B}(\vect{u}^{(n)},\omega^{(n)})}{w}} + \nu_x \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \partial_1^2\omega^{(n)} }{w}} \\ &\quad+ \nu_y^{(n)} \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \partial_2^2\omega^{(n)} }{w}} +\sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \partial_1\theta^{(n)} }{w}} \\ &= \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{\omega^{(n)}\vect{u}^{(n)}}{\nabla w}} + \nu_x \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \omega^{(n)} }{\partial_1^2 w}} \\ &\quad+ \nu_y^{(n)} \sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \omega^{(n)} }{ \partial_2^2 w}} +\sup_{\|w\|_{\dot{H}^2}=1}\abs{\ip{ \theta^{(n)} }{\partial_1 w}} \\ &\leq C|\omega^{(n)}||\vect{u}^{(n)}|^{1/2}\|\vect{u}^{(n)}\|^{1/2} + \nu_x|\omega^{(n)}| + \nu_x |\omega^{(n)}| + |\theta^{(n)}|. \endaligned \end{equation} Since each of the terms on the right-hand side of the inequality above is bounded independently of $n$, we deduce, by the Calder\'on-Zygmund elliptic estimate \eqref{Calderon}, that $\partial_t\vect{u}^{(n)}$ is bounded in $L^\infty([0,T],V')$ independently of $n$. Similarly, one can show easily that \begin{equation} \norm{\pd{\theta^{(n)}}{t}}_{H^{-2}} \leq C|\theta^{(n)}||\vect{u}^{(n)}|^{1/2}\|\vect{u}^{(n)}\|^{1/2}, \end{equation} which implies also that $\pd{\theta^{(n)}}{t}$ is bounded in $L^\infty([0,T],H^{-2})$ independently of $n$. To summarize, we have from the above results that \begin{subequations}\label{aniso_bounds} \begin{align} (\theta^{(n)})_{n\in\field{N}}\quad &\mbox{ is bounded in } \quad L^\infty([0,T], L^2), \\ (\vect{u}^{(n)})_{n\in\field{N}} \quad&\mbox{ is bounded in } \quad L^\infty([0,T], V), \\ (u^{2,(n)})_{n\in\field{N}}\quad &\mbox{ is bounded in }\quad L^2([0,T], H^2),\\ \left(\pd{\vect{u}^{(n)}}{t}\right)_{n\in\field{N}} \quad&\mbox{ is bounded in } \quad L^\infty([0,T], V'), \\ \left(\pd{\theta^{(n)}}{t}\right)_{n\in\field{N}} \quad&\mbox{ is bounded in } \quad L^\infty([0,T], H^{-2}).
\end{align} \end{subequations} Using the Banach-Alaoglu and Aubin Compactness Theorems (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}), the uniform bounds with respect to $n$ stated in \eqref{aniso_bounds} imply that one can extract a further subsequence (which we relabel with the index $n$, if necessary) such that \begin{subequations}\label{wk_conv_aniso} \begin{align} \label{wk_theta_LiH_aniso} \theta^{(n)}\rightharpoonup\theta &\quad\text{weakly in }L^2([0,T],L^2)\text{ and weak-$*$ in }L^\infty([0,T],L^2),\\ \label{st_u_L2H_aniso} \vect{u}^{(n)}\rightarrow\vect{u} &\quad\text{strongly in }L^2([0,T],H),\\ \label{wk_u_L2V_aniso} \vect{u}^{(n)}\rightharpoonup\vect{u} &\quad\text{weakly in }L^2([0,T],V)\text{ and weak-$*$ in }L^\infty([0,T],V),\\ \label{wk_u_squared_aniso} u^{2,(n)}\rightharpoonup u^{2} &\quad\text{weakly in }L^2([0,T],H^2),\\ \label{wk_du_dt_aniso} \pd{\vect{u}^{(n)}}{t}\rightharpoonup\pd{\vect{u}}{t} &\quad\text{weakly in }L^2([0,T],V')\text{ and weak-$*$ in }L^\infty([0,T],V'),\\ \label{wk_dtheta_dt_aniso} \pd{\theta^{(n)}}{t}\rightharpoonup\pd{\theta}{t} &\quad\text{weakly in }L^2([0,T],H^{-2})\text{ and weak-$*$ in }L^\infty([0,T],H^{-2}). \end{align} \end{subequations} \item {\em {\bf Step 3: } Pass to the limit in the system.} It remains to show that \eqref{wk_conv_aniso} is enough to pass to the limit in \eqref{bouss_aniso+nu_y_weak_form} and to show that $(\vect{u},\theta)$ satisfies \eqref{bouss_aniso_weak_form}. To do this, in accordance with Remark~\ref{test_fcns_are_trig_polys} and Definition~\ref{def:soln_aniso}, we only consider test functions of the form \eqref{test_fcns}, which we note is sufficient for showing that $(\vect{u},\theta)$ satisfies \eqref{bouss_aniso_weak_form}. For the linear terms in \eqref{bouss_aniso+nu_y_weak_form}, we have, by the weak convergence in \eqref{wk_u_L2V_aniso} and \eqref{wk_theta_LiH_aniso}, as $n\rightarrow \infty$ (that is, as $\kappa^{(n)} , \nu_y^{(n)}\rightarrow 0$), \begin{align*} \int_0^T(\vect{u}^{(n)}(s), \Gamma'_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}})\,ds &\rightarrow \int_0^T(\vect{u}(s), \Gamma'_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}})\,ds ,\\ \nu_x\int_0^T(\partial_1\vect{u}^{(n)}(s), \Gamma_{\vect{m}}(s) \partial_1e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds &\rightarrow \nu_x\int_0^T(\partial_1\vect{u}(s), \Gamma_{\vect{m}}(s) \partial_1e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds ,\\ \int_0^T(\theta^{(n)}(s)\vect{e}_2,\Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds &\rightarrow \int_0^T(\theta(s)\vect{e}_2,\Gamma_{\vect{m}}(s) e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds ,\\ \int_0^T(\theta^{(n)}(s),e^{2\pi i\vect{m}\cdot\vect{x}})\chi_{\vect{m}}'(s)\,ds &\rightarrow \int_0^T(\theta(s),e^{2\pi i\vect{m}\cdot\vect{x}})\chi_{\vect{m}}'(s)\,ds ,\\ \nu_y^{(n)}\abs{\int_0^T(\partial_2\vect{u}^{(n)}(s), \Gamma_{\vect{m}}(s) \partial_2e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds} & \leq C\sqrt{\nu_y^{(n)}}\pnt{\sqrt{\nu_y^{(n)}}\|\partial_2\vect{u}^{(n)}\|_{L^2_TL^2_x}} \rightarrow 0,\\ \kappa^{(n)}\abs{\int_0^T((\theta^{(n)}(s), e^{2\pi i\vect{m}\cdot \vect{x}}\chi_{\vect{m}}(s)))\;ds} & \leq C\sqrt{\kappa^{(n)} }\pnt{\sqrt{\kappa^{(n)}}\|\theta^{(n)}\|_{L^2_TH^1_x}} \\&\leq CK_0\sqrt{\kappa^{(n)}} \rightarrow 0, \end{align*} where the convergence of the $\nu_y^{(n)}$-term follows from the uniform bound $\nu_y^{(n)}\int_0^T|\partial_2\vect{u}^{(n)}(s)|^2\,ds\leq C$, established in the energy estimate above, together with the Cauchy-Schwarz inequality. \bigskip It remains to show the convergence of the remaining non-linear terms.
Let \begin{align*} I(n)&:=\sum_{j=1}^2\int_0^T(u^{j,(n)}\vect{u}^{(n)}, \Gamma_{\vect{m}}(s)\partial_j e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds- \sum_{j=1}^2\int_0^T(u^j\vect{u}, \Gamma_{\vect{m}}(s) \partial_j e^{2\pi i\vect{m}\cdot \vect{x}} )\,ds, \\ J(n)&:=\int_0^T\adv{\vect{u}^{(n)}(s)\theta^{(n)}(s),\chi_{\vect{m}}(s)\nabla e^{2\pi i\vect{m}\cdot \vect{x}}}\,ds-\int_0^T\adv{\vect{u}(s)\theta(s),\chi_{\vect{m}}(s)\nabla e^{2\pi i\vect{m}\cdot \vect{x}}}\,ds. \end{align*} To show $I(n)\maps0$ as $n\rightarrow\infty$, we write $I(n)=I_1(n)+I_2(n)$, the definitions of which are given below. We have \begin{align*} |I_1(n)| &:= \abs{\sum_{j=1}^2\int_0^T((u^{j,(n)}(s)-u^j(s) ) \vect{u}^{(n)}(s),\partial_j e^{2\pi i\vect{m}\cdot \vect{x}})\chi_{\vect{m}}(s)\,ds }\\ &\leq \int_0^T |\vect{u}^{(n)}(s)-\vect{u}(s) ||\vect{u}^{(n)}(s)|\,\|\nabla e^{2\pi i\vect{m}\cdot \vect{x}}\chi_{\vect{m}}(s)\|_{L^\infty}\,ds\\ &\leq\|\vect{u}^{(n)}-\vect{u}\|_{L^2_TH_x}\|\vect{u}^{(n)}\|_{L^\infty_TH_x}\|\nabla e^{2\pi i\vect{m}\cdot \vect{x}}\chi_{\vect{m}}\|_{L^2_TL^\infty_x}\rightarrow 0, \end{align*} as $n\rightarrow\infty$, since $\vect{u}^{(n)}\rightarrow\vect{u}$ strongly in $L^2([0,T],H)$ and $\vect{u}^{(n)}$ is uniformly bounded in $L^\infty([0,T],V)$, and hence in $L^\infty([0,T],H)$. Similarly, for $I_2$, we have, as $n\rightarrow \infty$, \begin{align*} I_2(n)&:= \sum_{j=1}^2\int_0^T\left(u^j(s)(\vect{u}^{(n)}(s)-\vect{u}(s)),\partial_j e^{2\pi i\vect{m}\cdot \vect{x}}\right)\chi_{\vect{m}}(s)\,ds \maps0. \end{align*} To show $J(n)\maps0$ as $n\rightarrow\infty$, we write $J(n)=J_1(n)+J_2(n)$. We have \begin{align*} J_1(n) &:= \int_0^T((\vect{u}^{(n)}(s)-\vect{u}(s) ) \theta^{(n)}(s),\nabla e^{2\pi i\vect{m}\cdot \vect{x}})\chi_{\vect{m}}(s)\,ds \maps0, \end{align*} as $n\rightarrow\infty$, since $\vect{u}^{(n)}\rightarrow\vect{u}$ strongly in $L^2([0,T],H)$ and $\theta^{(n)}$ is bounded in $L^2([0,T],L^2)$, uniformly with respect to $n$. For $J_2$, we have \begin{align*} J_2(n)&:= \int_0^T\left(\vect{u}(s)(\theta^{(n)}(s)-\theta(s)),\nabla e^{2\pi i\vect{m}\cdot \vect{x}}\right)\chi_{\vect{m}}(s)\,ds \maps0, \end{align*} by the weak convergence in \eqref{wk_theta_LiH_aniso} and the fact that $\vect{u}\in L^2([0,T],H)$. This establishes the existence of a weak solution to the system $P^0_{\nu_x,0}$ when $\vect{u}_0\in H^1 \mbox{ and } \theta_0\in L^2$.\\ \item {\em {\bf Step 4: } Show that $\omega\in C_w([0,T];L^2)$.} By the Arzela-Ascoli theorem, it suffices to show that (a) $\{\omega^{(n)}(t)\}$ is a relatively weakly compact set in $L^2(\mathbb T^2)$ for a.e. $t\geq 0$, and (b) for every $\phi\in L^2(\mathbb T^2)$ the sequence $\{(\omega^{(n)},\phi)\}$ is equicontinuous in $C([0,T])$. Condition (a) follows from the uniform boundedness of $\omega^{(n)}$ in $L^2(\mathbb T^2)$ for a.e. $t\geq 0$, given in \eqref{omega_LiL2_omega1_L2H1}. Next, we show that condition (b) is satisfied.
We follow a similar argument to the one in Step 3 of Section \ref{s:P_nu_zero_zero}, equation \eqref{e:continuity_theta_kappa}, where we start by assuming that $\phi$ is a trigonometric polynomial, to obtain \begin{align*} &\quad |(\omega^{(n)}(t_2),\phi) - (\omega^{(n)}(t_1), \phi)|\\ &\leq |\nu_x\int_{t_1}^{t_2}(\partial_1\omega^{(n)}(t),\partial_1\phi)\;dt |+ |\nu_y\int_{t_1}^{t_2}(\partial_2\omega^{(n)}(t),\partial_2\phi)\;dt |\\ &\quad + |\int_{t_1}^{t_2} (\vect{u}^{(n)}\cdot\nabla\phi,\omega^{(n)})\;dt|+ |\int_{t_1}^{t_2}(\theta^{(n)},\partial_1\phi)\;dt| \\ &\leq\nu_x \int_{t_1}^{t_2} |\partial_1\omega^{(n)}||\partial_1\phi| \;dt + \nu_x \int_{t_1}^{t_2} |\omega^{(n)}| |\partial_2^2\phi | \;dt\\ &\quad + \|\nabla\phi\|_\infty\int_{t_1}^{t_2} |\vect{u}^{(n)}||\omega^{(n)}| \;dt + \int_{t_1}^{t_2} |\theta^{(n)}||\nabla\phi| \;dt\\ &\leq |\partial_1\phi|\, |t_2-t_1|^{1/2}\, \nu_x^{1/2}\pnt{\nu_x\int_{t_1}^{t_2}|\partial_1\omega^{(n)}|^2\;dt}^{1/2} + |\partial_2^2\phi|\, |t_2-t_1|\,\|\omega^{(n)}\|_{L^\infty_TL^2_x}\\ &\quad+ |t_2-t_1|\left(\|\nabla\phi\|_\infty\|\vect{u}^{(n)}\|_{L^\infty_TL^2_x} \|\omega^{(n)}\|_{L^\infty_TL^2_x}+ |\nabla\phi|\,\|\theta^{(n)}\|_{L^\infty_TL^2_x}\right), \end{align*} where we recall that we have assumed, without loss of generality, that $\nu_y^{(n)}<\nu_x$. From the uniform boundedness of $\omega^{(n)}$ \eqref{omega_LiL2_omega1_L2H1} and $\theta^{(n)}$ \eqref{eq:thetan_l2_est}, the right-hand side can be made small when $|t_2-t_1|$ is small enough. Thus the set $\{(\omega^{(n)},\phi)\}$ is equicontinuous in $C([0,T])$. One can then extend this result to all test functions $\phi$ in $L^2(\mathbb T^2)$ using a simple density argument, as before. This completes the proof of part (1) of Theorem \ref{EU_aniso}.\\ \item {\em {\bf Step 5:} Proof of part (2) of Theorem \ref{EU_aniso}.} We choose a sequence of smooth initial data $\omega^{(n)}_0\rightarrow \omega_0$ and similarly $\theta^{(n)}_0\rightarrow \theta_0$ in every $L^p$ with $p\geq 2$, chosen in such a way that for each $n\in\field{N}$, $\|\omega^{(n)}_0\|_p\leq \|\omega_0\|_p+ \frac{1}{n}$ and $\|\theta^{(n)}_0\|_p\leq\|\theta_0\|_p+\frac{1}{n}$. From Theorem \ref{thm:diffusion}, we obtain, for each $n$, a solution $\vect{u}^{(n)}\in H^3$, and hence $\omega^{(n)}\in H^2$; since $H^2$ is a topological algebra, $|\omega^{(n)}|^{p-2}\omega^{(n)} \in H^{2}$. We take the inner product of \eqref{eq:omega_n} with $|\omega^{(n)}|^{p-2}\omega^{(n)}$. Integrating by parts, we have \begin{align*} \frac{1}{p}\frac{d}{dt}\|\omega^{(n)}\|_p^p & +\nu_x(p-1)\int_{\mathbb T^2}|\partial_1\omega^{(n)}|^2|\omega^{(n)}|^{p-2}\,dx + \nu_y^{(n)}(p-1)\int_{\mathbb T^2}|\partial_2\omega^{(n)}|^2|\omega^{(n)}|^{p-2}\,dx \\&\leq (p-1)\int_{\mathbb T^2}|\theta^{(n)}||\partial_1\omega^{(n)}||\omega^{(n)}|^{p-2}\,dx \\&\leq \nu_x(p-1)\int_{\mathbb T^2}|\partial_1\omega^{(n)}|^2|\omega^{(n)}|^{p-2}\,dx +\frac{p-1}{4\nu_x}\int_{\mathbb T^2}|\theta^{(n)}|^2|\omega^{(n)}|^{p-2}\,dx \\&\leq \nu_x(p-1)\int_{\mathbb T^2}|\partial_1\omega^{(n)}|^2|\omega^{(n)}|^{p-2}\,dx +\frac{p-1}{4\nu_x}\|\theta^{(n)}\|_p^2\|\omega^{(n)}\|_p^{p-2}. \end{align*} Therefore, we have \begin{align*} \frac{1}{p}\frac{d}{dt}\|\omega^{(n)}\|_p^p \leq \frac{p-1}{4\nu_x}\|\theta^{(n)}\|_p^2\|\omega^{(n)}\|_p^{p-2} \leq \frac{p-1}{4\nu_x}\left(\|\theta_0\|_p+\frac{1}{n}\right)^2\|\omega^{(n)}\|_p^{p-2}. \end{align*} That is, \begin{align*} \frac{d}{dt}\|\omega^{(n)}\|_p^2 \leq \frac{p-1}{2\nu_x}\|\theta^{(n)}_0\|_p^2 \leq \frac{p-1}{2\nu_x}\left(\|\theta_0\|_p+\frac{1}{n}\right)^2.
\end{align*} Integrating in time, we have \begin{align}\label{omegan_p_bounds} \|\omega^{(n)}(t)\|_p^2 &\leq \|\omega^{(n)}_0\|_p^2+\frac{p-1}{2\nu_x}\left(\|\theta_0\|_p+\frac{1}{n}\right)^2t\\ &\leq\notag \left(\|\omega_0\|_p+\frac{1}{n}\right)^2+\frac{p-1}{2\nu_x}\left(\|\theta_0\|_p+\frac{1}{n}\right)^2t. \end{align} That is, $\omega^{(n)}$ is uniformly bounded in $L^\infty([0,T], L^p)$ for each $p \in [2, \infty)$, independently of $n$. It follows from the Banach-Alaoglu Theorem and a diagonalization process that there exists a further subsequence, which we also denote by $\omega^{(n)}$, converging weak-$*$ in $L^\infty([0,T], L^p)$ to some limit, which we denote by $\omega$; this limit obeys the limit, as $n\rightarrow\infty$, of the upper bound in \eqref{omegan_p_bounds}, that is, \begin{align}\label{omega_p_bound} \|\omega(t)\|_p^2\leq \|\omega_0\|_p^2+\frac{p-1}{2\nu_x}\|\theta_0\|_p^2\,t. \end{align} This implies that $\omega\in L^\infty([0,T];L^p)$ for all $p \in [2, \infty)$. Similarly, we find that \begin{align}\label{thetan_lp_bound} \|\theta^{(n)}(t)\|_p\leq\|\theta^{(n)}_0\|_p\leq\|\theta_0\|_p + \frac{1}{n}, \end{align} which implies that $\theta^{(n)}$ converges weak-$*$ in $L^\infty([0,T];L^p)$ to $\theta \in L^\infty([0,T];L^p)$ for all $p \in [2, \infty)$, and $\|\theta\|_{L^\infty([0,T], L^p)} \leq \|\theta_0\|_p$.\\ \item {\em {\bf Step 6:} Proof of part (3) of Theorem \ref{EU_aniso}.} To prove part (3) of Theorem \ref{EU_aniso}, we divide both sides of \eqref{omega_p_bound} by $p-1$ and take the supremum over all $p>2$; this yields that $\omega\in L^\infty([0,T], \sqrt{L})$, provided that $\omega_0\in \sqrt{L}$ and $\theta_0\in L^\infty$. Next, we want to show that $\theta\in C([0,T];w^*\mbox{-}L^\infty)$. We will use the Arzela-Ascoli theorem as in Step 4. Notice that if $\theta_0\in L^\infty$ then \eqref{thetan_lp_bound} holds uniformly for all $p\in[2,\infty)$, and hence \begin{align}\label{thetan_infty_bound} \|\theta^{(n)}(t)\|_\infty\leq\|\theta_0\|_\infty + \frac{1}{n}. \end{align} This implies that the sequence $\theta^{(n)}(t)$ is a relatively compact set in the weak-$*$ topology of $L^\infty([0,T]\times\mathbb T^2)$. It suffices to show that the sequence $\{\left(\theta^{(n)},\phi\right)\}$ is equicontinuous in $C([0,T])$ for every $\phi\in L^1$. Arguing as in Step 4, this holds for every $\phi\in L^2(\mathbb T^2)$ (which, in particular, shows that $\theta\in C_w([0,T],L^2)$), and the general case then follows from the uniform bound \eqref{thetan_infty_bound} and the density of $L^2(\mathbb T^2)$ in $L^1(\mathbb T^2)$. Finally, we would like to show that $\pd{\theta}{t}\in L^\infty([0,T], H^{-1})$, and hence $\pd{\theta}{t}\in L^2([0,T], H^{-1})$. Since $\omega\in L^\infty([0,T],\sqrt{L})$, we have in particular that $\omega\in L^\infty([0,T],L^3)$, and hence $\vect{u}\in L^\infty([0,T],W^{1,3})\subset L^\infty([0,T],L^\infty)$ by \eqref{Calderon}, \eqref{poincare}, and the Sobolev Embedding Theorem. From equation \eqref{e:funct_2_horizontal}, using \eqref{B_theta:def} and the fact that $\theta\in L^\infty([0,T],L^2)$, we obtain \begin{align} \label{e:strong_aniso_D_theta} \norm{\pd{\theta}{t}}_{H^{-1}} = \sup_{\|w\| = 1} \abs{\ip{\mathcal{B}(\vect{u},\theta)}{w}} \leq\|\vect{u}\|_\infty|\theta| <\infty \quad\text{for a.e.\ } t\in [0,T]. \end{align} This completes the proof of part (3) of Theorem \ref{EU_aniso}. \qedhere \end{list} \end{proof} \begin{theorem}[Uniqueness for the Anisotropic Case] Let $\theta_0\in L^\infty$, $\omega_0\in \sqrt{L}$.
Then, for every $T>0$, there exists a unique solution $\omega\in L^\infty([0,T], \sqrt{L})\cap C_w([0,T];L^2)$ and $\theta\in L^\infty([0,T], L^\infty)\cap C([0,T];w^*\mbox{-}L^\infty)$ to \eqref{bouss_aniso}. \end{theorem} \begin{proof} Let $T>0$ be arbitrarily large. The existence of a solution on the interval $[0,T]$ was established above; therefore, it suffices to show uniqueness. We note that some very important {\em a priori} estimates that we need in the beginning of this proof were first elegantly derived in \cite{Danchin_Paicu_2008}. We have derived them rigorously in the previous theorem, and we derive them here again formally, to make the proof of uniqueness self-contained. First, one may easily show (see the computation recorded at the end of this paragraph) that for any $p\in[2,\infty]$, we have \begin{align}\label{eq:theta_lp_est} \|\theta(t)\|_{p}\leq\|\theta_0\|_{p}, \end{align} so $\theta\in L^\infty([0,T],L^p)$, $p\in[2,\infty]$. \begin{comment} We also recall that \begin{align*} |\vect{u}|^2+2\nu\int_0^t|\partial_1\vect{u}|^2\,d\tau\leq (|\vect{u}_0|+t|\theta_0|)^2. \end{align*} which implies that $\vect{u}\in L^\infty([0,T], H)$ and $\partial_1\vect{u}\in L^2([0,T],H)$, that is, \begin{align}\label{d1u1_d1u2_L2} \partial_1u^1, \partial_1 u^2\in L^2([0,T],L^2). \end{align} Thanks to \eqref{bouss_aniso_div}, this in turn implies \begin{align}\label{d2u2_L2} \partial_2 u^2 =-\partial_1u^1 \in L^2([0,T],L^2). \end{align} \end{comment} Given that $\omega_0\in \sqrt{L}$, and hence $\omega_0\in L^2$, we have \begin{align*} \frac{1}{2}\frac{d}{dt}|\omega|^2+\nu|\partial_1\omega|^2= -(\theta,\partial_1\omega) \leq\frac{\nu}{2}|\partial_1\omega|^2+\frac{1}{2\nu}|\theta|^2. \end{align*} Integrating this gives \begin{align*} |\omega|^2+\nu\int_0^t|\partial_1\omega|^2\,d\tau\leq |\omega_0|^2+\frac{t}{\nu}|\theta_0|^2. \end{align*} This implies that $\omega\in L^\infty([0,T],L^2)$, and therefore $\vect{u}\in L^\infty([0,T],V)$. Furthermore, $\partial_1\omega\in L^2([0,T],L^2)$. Using the divergence-free condition \eqref{bouss_aniso_div}, we observe that \begin{align*} \partial_1\omega &= \partial_1^2 u^2- \partial_1\partial_2 u^1 =\partial_1^2u^2+\partial_2^2u^2 = \triangle u^2. \end{align*} Therefore, $\triangle u^2\in L^2([0,T],L^2)$, so that $u^2\in L^2([0,T],H^2)$ by elliptic regularity, and thus $\nabla u^2\in L^2([0,T],H^1)$. By inequality \eqref{CZ_est}, we have \begin{align}\label{Du2_root_L} \|\nabla u^2\|_{p}\leq C\sqrt{p-1}\|\nabla u^2\|_{H^1}, \end{align} so that $\nabla u^2\in L^2([0,T],\sqrt{L})$. Next, we recall that we have global-in-time control over $\|\omega\|_{\sqrt{L}}$. Taking the inner product of \eqref{bouss_aniso_vort} with $|\omega|^{p-2}\omega$ for some $p>2$, integrating by parts, and then integrating in time, we have \begin{align} \|\omega(t)\|_{p}^2 \leq \|\omega_0\|_{p}^2+\frac{p-1}{2\nu}\|\theta_0\|_{p}^2t. \end{align} This shows that $\omega\in L^\infty([0,T],\sqrt{L})$. Using this, and the facts that $\partial_1u^1=-\partial_2u^2$ (by \eqref{bouss_aniso_div}) and $\partial_2 u^1=\partial_1 u^2-\omega$, we have, thanks to \eqref{Du2_root_L}, that $\nabla u^1\in L^2([0,T],\sqrt{L})$. Combining this with \eqref{Du2_root_L} shows that \begin{align}\label{grad_u_sqrt_L} \nabla\vect{u}\in L^2([0,T],\sqrt{L}). \end{align} We recall again that all the estimates above were first derived in \cite{Danchin_Paicu_2008} for the case where $\Omega = \field{R}^2$.
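For the reader's convenience, we record the formal computation behind \eqref{eq:theta_lp_est}. Since $\theta$ satisfies the transport equation $\pd{\theta}{t}+\vect{u}\cdot\nabla\theta=0$, multiplying by $|\theta|^{p-2}\theta$, integrating over $\mathbb T^2$, and using \eqref{bouss_aniso_div}, we find \begin{align*} \frac{1}{p}\frac{d}{dt}\|\theta\|_p^p = -\int_{\mathbb T^2}(\vect{u}\cdot\nabla\theta)|\theta|^{p-2}\theta\,d\vect{x} = -\frac{1}{p}\int_{\mathbb T^2}\vect{u}\cdot\nabla\pnt{|\theta|^p}\,d\vect{x} = \frac{1}{p}\int_{\mathbb T^2}(\nabla\cdot\vect{u})\,|\theta|^p\,d\vect{x} = 0, \end{align*} so that $\|\theta(t)\|_p=\|\theta_0\|_p$ for every $p\in[2,\infty)$; letting $p\rightarrow\infty$ yields the case $p=\infty$.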
We are now ready to show that if $(\vect{u}_1,\theta_1)$ and $(\vect{u}_2,\theta_2)$ are two solutions to \eqref{bouss_aniso_weak_form} on the interval $[0,T]$, with the same initial data $(\vect{u}_0,\theta_0)$, then they must be equal. Define $\diff{\bu} := \vect{u}_1-\vect{u}_2$, $\diff{\theta}:=\theta_1-\theta_2$, and $\xi_\ell:=\triangle^{-1}\theta_\ell$, $\ell=1,2$, and $\diff{\xi}:=\xi_1-\xi_2$. Based on Remark \ref{e:functional_nu_horizontal}, these quantities satisfy the following functional equations. \begin{subequations}\label{e:functional_nu_horizontal_diff} \begin{align}\label{e:funct_1_horizontal_diff} \pd{\diff{\bu}}{t} - \nu \partial_{11}\diff{\bu} + B(\diff{\bu},\vect{u}_1) + B(\vect{u}_2,\diff{\bu}) &= P_\sigma(\triangle\diff{\xi}\vect{e}_2)\quad \mbox{in}\quad L^2([0,T], V')\quad \mbox{and}\\\label{e:funct_2_horizontal_diff} \pd{\triangle\diff{\xi}}{t} + \mathcal{B}(\diff{\bu},\triangle\xi_1) + \mathcal{B}(\vect{u}_2,\triangle\diff{\xi}) &= 0\quad \mbox{in}\quad L^2([0,T], H^{-1}). \end{align} \end{subequations} Taking the action of \eqref{e:funct_1_horizontal_diff} on $\diff{\bu}\in L^2([0,T], V)$ and the action of \eqref{e:funct_2_horizontal_diff} on $\diff{\xi}\in L^2([0,T], H^2)$, thanks to the properties of the operator $B$ in Lemma \ref{B:prop} and of the operator $\mathcal{B}$ in Lemma \ref{B_theta:prop}, we obtain the following: \begin{align*} \frac{1}{2}\frac{d}{dt}|\diff{\bu}(t)|^2+\nu|\partial_1\diff{\bu}|^2 &= \sum_{j=1}^2(\diff{u}^j\vect{u}_1,\partial_j\diff{\bu}) + (\triangle\diff{\xi}\vect{e}_2,\diff{\bu}),\\ \frac{1}{2}\frac{d}{dt}\|\diff{\xi}(t)\|^2&=-(\diff{\bu}\triangle\xi_1,\nabla\diff{\xi})-(\vect{u}_2\triangle\diff{\xi},\nabla\diff{\xi}), \end{align*} where again we have used the Lions-Magenes Lemma (see, e.g., \cite{Temam_2001_Th_Num}) to get that $ \ip{\pd{\diff{\bu}}{t}}{\diff{\bu}}=\frac{1}{2}\frac{d}{dt}|\diff{\bu}(t)|^2$ and $ \ip{\pd{\triangle\diff{\xi}}{t}}{\diff{\xi}}=\frac{1}{2}\frac{d}{dt}\|\diff{\xi}(t)\|^2$. By Lemma \ref{B:prop}, we obtain \begin{align*} \frac{1}{2}\frac{d}{dt}|\diff{\bu}|^2+\nu|\partial_1\diff{\bu}|^2 &\leq \int_{\mathbb T^2}\abs{\nabla\vect{u}_1}\abs{\diff{\bu}}^2\,d\vect{x} +\abs{(\triangle\diff{\xi}\vect{e}_2,\diff{\bu})} \intertext{and} \frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 &\leq \abs{\int_{\mathbb T^2}\diff{\bu}\cdot\nabla\diff{\xi}\triangle\xi_1\,d\vect{x}}+\abs{\int_{\mathbb T^2}\vect{u}_2\cdot\nabla\diff{\xi}\triangle\diff{\xi}\,d\vect{x}}. \end{align*} Next, integrating by parts and observing that, due to the divergence-free condition, $\vect{e}_2\cdot\partial_2\diff{\bu}=-\vect{e}_1\cdot\partial_1\diff{\bu}$, we have \begin{align*} |(\triangle\diff{\xi}\vect{e}_2,\diff{\bu})| &\leq \int_{\mathbb T^2}\pnt{|\partial_1\diff{\xi}\vect{e}_2\cdot\partial_1\diff{\bu}| +|\partial_2\diff{\xi}\vect{e}_2\cdot\partial_2\diff{\bu}|}\,d\vect{x} \\&= \int_{\mathbb T^2}\pnt{|\partial_1\diff{\xi}\vect{e}_2\cdot\partial_1\diff{\bu}| +|\partial_2\diff{\xi}\vect{e}_1\cdot\partial_1\diff{\bu}|}\,d\vect{x} \\&\leq \frac{1}{\nu}|\partial_1\diff{\xi}|^2+\frac{\nu}{4}|\vect{e}_2\cdot\partial_1\diff{\bu}|^2 +\frac{1}{\nu}|\partial_2\diff{\xi}|^2+\frac{\nu}{4}|\vect{e}_1\cdot\partial_1\diff{\bu}|^2.
\end{align*} Combining the above estimates, we find \begin{align*} \frac{1}{2}\frac{d}{dt}|\diff{\bu}|^2+\nu|\partial_1\diff{\bu}|^2 &\leq \int_{\mathbb T^2}\abs{\nabla\vect{u}_1}\abs{\diff{\bu}}^2\,d\vect{x} +\frac{2}{\nu}\|\diff{\xi}\|^2 +\frac{\nu}{2}|\partial_1\diff{\bu}|^2 \\&\leq \|\diff{\bu}\|_{\infty}^{2/p}\int_{\mathbb T^2}\abs{\nabla\vect{u}_1}\abs{\diff{\bu}}^{2-2/p}\,d\vect{x} +\frac{2}{\nu}\|\diff{\xi}\|^2 +\frac{\nu}{2}|\partial_1\diff{\bu}|^2 \\&\leq \|\nabla\vect{u}_1\|_{p}\|\diff{\bu}\|_{\infty}^{2/p}|\diff{\bu}|^{2-2/p}+\frac{2}{\nu}\|\diff{\xi}\|^2 +\frac{\nu}{2}|\partial_1\diff{\bu}|^2, \intertext{where we have used H\"older's inequality. Similarly, by Lemma \ref{B_theta:prop},} \frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 &\leq \abs{\int_{\mathbb T^2}\diff{\bu}\cdot\nabla\diff{\xi}\triangle\xi_1\,d\vect{x}}+\int_{\mathbb T^2}|\nabla\vect{u}_2||\nabla\diff{\xi}|^2\,d\vect{x} \\&\leq |\diff{\bu}||\nabla\diff{\xi}|\|\triangle\xi_1\|_{\infty} +\|\nabla\vect{u}_2\|_{p}\|\nabla\diff{\xi}\|_{\infty}^{2/p}|\nabla\diff{\xi}|^{2-2/p}. \end{align*} From the estimates above we can now adapt the well-known Yudovich argument for the 2D incompressible Euler equations (see, e.g., \cite{Yudovich_1963}) to complete the uniqueness proof. Let $X^2:=|\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2+\eta^2$ for some arbitrary $\eta>0$. Adding the above two inequalities and using Young's inequality gives \begin{align*} &\qquad \frac{1}{2}\frac{d}{dt}X^2+\frac{\nu}{2}|\partial_1\diff{\bu}|^2 \\&\leq K_{\nu}\pnt{|\diff{\bu}|^2+\|\diff{\xi}\|^2+\eta^2} \\&\quad +\pnt{\|\nabla\vect{u}_2\|_{p}+\|\nabla\vect{u}_1\|_{p}}% \pnt{\|\diff{\bu}\|_{\infty}^{2/p}+\|\nabla\diff{\xi}\|_{\infty}^{2/p}} \pnt{|\diff{\bu}|^{2-2/p}+|\nabla\diff{\xi}|^{2-2/p}} \\&\leq K_{\nu}X^2 +C\pnt{\|\nabla\vect{u}_2\|_{p}+\|\nabla\vect{u}_1\|_{p}}% \pnt{\|\diff{\bu}\|_{\infty}^{2/p}+\|\nabla\diff{\xi}\|_{\infty}^{2/p}} X^{2-2/p}. \end{align*} Neglecting the term $\frac{\nu}{2}|\partial_1\diff{\bu}|^2$, dividing by $X$, and making the change of variables $Y(t)=e^{-K_{\nu}t}X(t)$, we have, after a simple calculation (using $\dot{Y}=e^{-K_{\nu}t}\pnt{\dot{X}-K_{\nu}X}$ and $X^{1-2/p}=e^{(1-2/p)K_{\nu}t}Y^{1-2/p}$), \begin{align*} \dot{Y} \leq Ce^{-2K_{\nu}t/p}\pnt{\|\nabla\vect{u}_2\|_{p}+\|\nabla\vect{u}_1\|_{p}}% \pnt{\|\diff{\bu}\|_{\infty}^{2/p}+\|\nabla\diff{\xi}\|_{\infty}^{2/p}} Y^{1-2/p}. \end{align*} Integrating this equation and using the fact that $e^{-2K_{\nu}t/p} \leq 1$, we get that \begin{align*} Y(t)\leq \left[ \eta^{2/p} +C \int_0^t \frac{1}{p}\pnt{\|\nabla\vect{u}_2(s)\|_{p}+\|\nabla\vect{u}_1(s)\|_{p}}% \pnt{\|\diff{\bu}(s)\|_{\infty}^{2/p}+\|\nabla\diff{\xi}(s)\|_{\infty}^{2/p}}\;ds \right]^{p/2}. \end{align*} Letting $\eta\rightarrow 0$, we discover that for all $t\in[0,T]$, \begin{align}\label{p_eqn} \notag |\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2 &\leq \pnt{\|\diff{\bu}\|_{L_T^\infty L^\infty_x}+\|\nabla\diff{\xi}\|_{L_T^\infty L^\infty_x}} \\&\qquad\cdot \pnt{C\int_0^t\frac{1}{p}\pnt{\|\nabla\vect{u}_2(s)\|_{p}+\|\nabla\vect{u}_1(s)\|_{p}}\,ds}^{p/2}. \end{align} Thanks to the fact that $\triangle\diff{\xi}=\diff{\theta}\in L^\infty([0,T],L^\infty)\subset L^\infty([0,T],L^4)$, we have by elliptic regularity that $\diff{\xi}\in L^\infty([0,T],W^{2,4})$, and therefore $\nabla\diff{\xi}\in L^\infty([0,T],W^{1,4})$. Thus, by the Sobolev Embedding Theorem, we have $\nabla\diff{\xi}\in L^\infty([0,T],W^{1,4}) \subset L^\infty([0,T],C^{0,\gamma})$, for some $\gamma\in(0,1)$.
Furthermore, $\diff{\omega}:=\omega_1-\omega_2 \in L^\infty([0,T],\sqrt{L})$ implies, for instance, that $\diff{\bu}\in L^\infty([0,T],W^{1,4})$ by the Calder\'on-Zygmund elliptic estimate \eqref{Calderon}. Using the Sobolev Embedding Theorem again, we have $\diff{\bu}\in L^\infty([0,T],C^{0,\gamma})$, for some $\gamma\in(0,1)$. Therefore, the first factor on the right-hand side of \eqref{p_eqn} is bounded. Now, since $\nabla\vect{u}_\ell\in L^2([0,T],\sqrt{L})$, $\ell=1,2$ by \eqref{grad_u_sqrt_L}, we have by Cauchy-Schwarz \begin{align*} \int_0^t\frac{\|\nabla\vect{u}_\ell(s)\|_p}{p}\;ds &\leq \left(t\int_0^T\sup_{p\geq 2}\frac{\|\nabla\vect{u}_\ell(s)\|^2_{p}}{p-1}\;ds \right)^{1/2}. \end{align*} Let $\displaystyle M_\ell = \int_0^T\sup_{p\geq 2}\frac{\|\nabla\vect{u}_\ell(s)\|^2_{p}}{p-1}\;ds$, $\ell = 1, 2$ and $M = \max\{M_1, M_2\}$. Thus, from the above, for every fixed $\tau\in (0,T]$ we have \begin{align}\label{p_1} |\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2\leq K \pnt{2C\sqrt{M\tau}}^{p/2}, \text{ for all } t\in [0,\tau], \end{align} where the constant $C$ is the same constant which appears in \eqref{p_eqn} and $K= \pnt{\|\diff{\bu}\|_{L_T^\infty L^\infty_x}+\|\nabla\diff{\xi}\|_{L_T^\infty L^\infty_x}}$. Now choose $\tau = \tau_0 = \min\{T, \frac{1}{16C^2M}\}$, so that $2C\sqrt{M\tau_0}\leq\frac{1}{2}$, and consider \eqref{p_1} on $[0,\tau_0]$. Taking the limit as $p\rightarrow \infty$, we get that $|\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2\leq 0$ for all $t\in [0,\tau_0]$. Restarting the time at $t = \tau_0$ and noting the fact that \begin{align*} \int_{\tau_0}^{t+\tau_0}\frac{\|\nabla\vect{u}_\ell(s)\|_p}{p} \,ds &\leq \left(t\int_0^T\sup_{p\geq 2}\frac{\|\nabla\vect{u}_\ell(s)\|^2_{p}}{p-1}\,ds \right)^{1/2}, \end{align*} we obtain from the analogue of \eqref{p_eqn} on $[\tau_0,T]$ that $|\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2\leq K\pnt{2C\sqrt{M\tau_0}}^{p/2}$ for all $t\in [\tau_0,2\tau_0]$. Since $2C\sqrt{M\tau_0}\leq\frac{1}{2}$, we take the limit $p\rightarrow \infty$ and find that on the interval $[\tau_0, 2\tau_0]$, we also have that $|\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2\leq 0$. We can continue this argument on the intervals $[2\tau_0, 3\tau_0], [3\tau_0, 4\tau_0],\dots,$ and so on. Thus, we have $|\diff{\bu}(t)|^2+\|\diff{\xi}(t)\|^2\leq 0$ for all $t\in [0,T]$. This implies that $|\diff{\bu}(t)| = 0$ and $\|\diff{\xi}(t)\| =0$ for all $t\in [0,T]$. \end{proof} \section{Global Well-posedness Results for the Voigt-regularized Inviscid and Non-diffusive Boussinesq Equations \texorpdfstring{($P_{0,0}^\alpha$)}{}} \label{s:P_alpha_zero_zero} In this section, we investigate the problem $P_{0,0}^\alpha$, $\alpha>0$, given by \eqref{bouss_v} (with $\nu=\kappa=0$) in 2D. We first establish global well-posedness results, and then investigate the behavior of solutions as $\alpha\rightarrow 0$. In particular, we compare the limiting behavior to sufficiently regular solutions of the $P_{0,0}^0$ problem. This leads to a new criterion for the blow-up of solutions to the $P_{0,0}^0$ problem. A similar criterion was given for the blow-up of the Surface Quasi-Geostrophic equations in \cite{Khouider_Titi_2008}, for the Euler equations in \cite{Larios_Titi_2009}, and for the inviscid, resistive MHD equations in \cite{Larios_Titi_2010_MHD}. \begin{definition}\label{def:voigt_sol} Let $T>0$. Suppose $\vect{u}_0\in V$ and $\theta_0\in L^2$.
We say that $(\vect{u},\theta)$ is a \textit{weak solution} to the problem $P_{0,0}^\alpha$ on the interval $[0,T]$ if for all test functions $\Phi$, $\varphi$ chosen as in \eqref{test_fcns}, $(\vect{u},\theta)$ satisfies \begin{subequations}\label{bouss_v_weak} \begin{align} &\quad\notag -\int_0^T(\vect{u}(s),\Phi'(s))\,ds-\alpha^2\int_0^T((\vect{u}(s),\Phi'(s)))\,ds +\sum_{j=1}^2\int_0^T(u^j\vect{u},\partial_j\Phi)\,ds \\&\label{bouss_v_weak_mo} = (\vect{u}_0,\Phi(0))+\alpha^2((\vect{u}_0,\Phi(0)))+\int_0^T(\theta(s)\vect{e}_2,\Phi(s))\,ds, \\&\label{bouss_v_weak_den} -\int_0^T(\theta(s),\varphi'(s))\,ds + \int_0^T(\theta\vect{u},\nabla\varphi)\,ds = (\theta_0,\varphi(0)), \end{align} \end{subequations} and if, furthermore, $\vect{u}\in C([0,T],V)$, $\pd{\vect{u}}{t}\in L^\infty([0,T],V)$, $\theta\in L^\infty([0,T],L^2)$, $\theta\in C_w([0,T],L^2)$ and $\pd{\theta}{t}\in L^\infty([0,T],H^{-2})$. \end{definition} \begin{remark} Following arguments similar to those for the NSE presented in \cite{Temam_2001_Th_Num}, one can show that this definition is equivalent to the functional equation \begin{subequations}\label{e:functional_voigt} \begin{align}\label{e:funct_1_voigt} (I+\alpha^2A)\pd{\vect{u}}{t} + B(\vect{u},\vect{u}) &= P_\sigma(\theta\vect{e}_2)\quad \mbox{in}\quad L^2([0,T], V')\quad \mbox{and}\\\label{e:funct_2_voigt} \pd{\theta}{t} + \mathcal{B}(\vect{u},\theta) &= 0\quad \mbox{in}\quad L^2([0,T], H^{-2}). \end{align} \end{subequations} \end{remark} \begin{theorem}\label{bouss_v_exist} Let $\vect{u}_0\in V$, $\theta_0\in L^2$. Then there exists a solution to $P^\alpha_{0,0}$, in the sense of Definition \ref{def:voigt_sol}. Furthermore, if $\theta_0\in L^\infty$, then $\theta\in L^\infty([0,T], L^\infty)$. \end{theorem} \begin{proof} We use the notation laid out in Section \ref{sec:Pre}. Let us consider the Galerkin approximation to $P^\alpha_{0,0}$ (or equivalently, to \eqref{e:functional_voigt}) given by \begin{subequations}\label{bouss_v_Gal} \begin{alignat}{2} \label{bouss_v_Gal_mo} (I+\alpha^2A)\partial_t\vect{u}_n + P_nB(\vect{u}_n,\vect{u}_n) &= P_nP_\sigma(\theta_n \vect{e}_2),\\ \label{bouss_v_Gal_den} \partial_t\theta_n + P_n(\nabla\cdot(\vect{u}_n \theta_n)) &=0,\\ \label{bouss_v_Gal_in} \vect{u}_n(0)=P_n\vect{u}_0,\quad \theta_n(0)&=P_n\theta_0. \end{alignat} \end{subequations} This is a finite-dimensional system of ODEs in $H_n$ with a quadratic polynomial non-linearity, and therefore it has a unique local solution in $C^1([0,T_n),H_n)$ for some $T_n>0$. Let $[0,T_n^*)$ be the maximal interval of existence and uniqueness of solutions to \eqref{bouss_v_Gal}. We show below that $T_n^*=\infty$ for every $n$. Taking the inner product of \eqref{bouss_v_Gal_den} with $\theta_n$, using Lemma \ref{B_theta:prop}, and integrating in time, we find that for $t\in[0,T_n^*)$, \begin{align}\label{v_theta_L2} |\theta_n(t)| =|\theta_n(0)|\leq |\theta_0|. \end{align} \noindent Next, we take the inner product of \eqref{bouss_v_Gal_mo} with $\vect{u}_n$ and use Lemma \ref{B:prop} to find \begin{align} \frac{1}{2}\frac{d}{dt}(|\vect{u}_n|^2+\alpha^2\|\vect{u}_n\|^2) &=\notag (\theta_n\vect{e}_2,\vect{u}_n) \leq |\theta_n||\vect{u}_n| \\&\leq \label{bouss_L2_est} |\theta_0|\sqrt{|\vect{u}_n|^2+\alpha^2\|\vect{u}_n\|^2}. \end{align} Consequently, we have for $t\in[0,T_n^*)$, \begin{align}\label{v_u_en} |\vect{u}_n(t)|^2+\alpha^2\|\vect{u}_n(t)\|^2 \leq \pnt{\pnt{|\vect{u}_0|^2+\alpha^2\|\vect{u}_0\|^2}^{1/2}+t|\theta_0|}^2.
\end{align} According to \eqref{v_theta_L2} and \eqref{v_u_en}, we see that if $T_n^*<\infty$, then $\|\vect{u}_n\|$ and $|\theta_n|$ are both bounded in time on $[0,T_n^*)$, and thus the solutions can be continued beyond $T_n^*$, contradicting the definition of $T_n^*$ as the maximal time of existence. Thus, $T_n^*=\infty$ for all $n\in\field{N}$. Next, we find bounds on the time derivatives. From now on, we work on the interval $[0,T]$, where $T$ was given arbitrarily in the statement of the theorem. Using Lemma \ref{B:prop}, along with \eqref{poincare} and \eqref{v_theta_L2}, we have \begin{align} \norm{(I+\alpha^2A)\frac{d\vect{u}_n}{dt}}_{V'} &\leq\label{v_Au_t+u_t} \sup_{\|\vect{w}\|=1}\abs{\pair{B(\vect{u}_n,\vect{u}_n)}{P_n\vect{w}}}+ \sup_{\|\vect{w}\|=1}\abs{\pair{\theta_n\vect{e}_2}{P_n\vect{w}}} \\&\leq\notag \sup_{\|\vect{w}\|=1}|\vect{u}_n|\|\vect{u}_n\|\|\vect{w}\|+ \sup_{\|\vect{w}\|=1}|\theta_n||\vect{w}| \\&\leq\notag |\vect{u}_n|\|\vect{u}_n\| + \lambda_1^{-1/2}|\theta_n| \\&\leq\notag |\vect{u}_n|\|\vect{u}_n\|+\lambda_1^{-1/2}|\theta_0|. \end{align} Thanks to \eqref{v_u_en} and \eqref{v_Au_t+u_t}, we obtain that $(I+\alpha^2A)\frac{d\vect{u}_n}{dt}$ is uniformly bounded in $L^\infty([0,T],V')$, which implies that $\frac{d\vect{u}_n}{dt}$ is uniformly bounded in $L^\infty([0,T],V)$, with respect to $n$. Similarly, using Lemma \ref{B_theta:prop}, we obtain \begin{equation} \norm{\pd{\theta_n}{t}}_{H^{-2}} \leq |\theta_n||\vect{u}_n|^{1/2}\|\vect{u}_n\|^{1/2}, \end{equation} which implies that $\pd{\theta_n}{t}$ is bounded in $L^\infty([0,T],H^{-2})$ independently of $n$, by virtue of \eqref{v_theta_L2} and \eqref{v_u_en}. The above bounds allow us to use the Banach-Alaoglu Theorem and the Aubin Compactness Theorem (see, e.g., \cite{Temam_2001_Th_Num, Constantin_Foias_1988}) to extract a subsequence, which we still write as $(\vect{u}_n,\theta_n)$, and elements $\vect{u}$ and $\theta$, such that \begin{subequations}\label{wk_conv_Gal} \begin{align} \label{st_u_L2H_Gal} \vect{u}_n\rightarrow\vect{u} &\quad\text{strongly in }L^2([0,T],H), \\\label{wk_u_L2V_Gal} \vect{u}_n\rightharpoonup\vect{u} &\quad\text{weakly in }L^2([0,T],V)\text{ and weak-$*$ in }L^\infty([0,T],V),\\ \label{wk_du_L2V_Gal} \pd{\vect{u}_n}{t}\rightharpoonup\pd{\vect{u}}{t} &\quad\text{ weak-$*$ in }L^\infty([0,T],V), \\\label{wk_theta_LiH_Gal} \theta_n\rightharpoonup\theta &\quad\text{weakly in }L^2([0,T],L^2)\text{ and weak-$*$ in }L^\infty([0,T],L^2), \\\label{wk_dtheta_LiH_Gal} \pd{\theta_n}{t}\rightharpoonup\pd{\theta}{t} &\quad\text{ weak-$*$ in }L^\infty([0,T],H^{-2}). \end{align} \end{subequations} Next, for arbitrary $\varphi$ and $\Phi$, chosen as in \eqref{test_fcns}, let us take the inner product of \eqref{bouss_v_Gal_mo} with $\Phi$, and of \eqref{bouss_v_Gal_den} with $\varphi$, and integrate in time on $[0,T]$. After integrating by parts several times, we have \begin{subequations}\label{bouss_v_Gal_weak} \begin{align} &\quad -\int_0^T(\vect{u}_n(s),\Phi'(s))\,ds-\alpha^2\int_0^T((\vect{u}_n(s),\Phi'(s)))\,ds +\sum_{j=1}^2\int_0^T(u_n^j\vect{u}_n,P_n\partial_j\Phi)\,ds \\&\notag = (\vect{u}_n(0),\Phi(0))+\alpha^2((\vect{u}_n(0),\Phi(0)))+\int_0^T(\theta_n(s)\vect{e}_2,\Phi(s))\,ds, \\& -\int_0^T(\theta_n(s),\varphi'(s))\,ds + \int_0^T(\theta_n\vect{u}_n,P_n\nabla\varphi)\,ds = (\theta_0,\varphi(0)), \end{align} \end{subequations} where we have again denoted $'\equiv \pd{}{s}$. We would like to pass to the limit as $n\rightarrow\infty$ to obtain \eqref{bouss_v_weak}.
The convergence of the linear terms is straight-forward, thanks to \eqref{wk_conv_Gal}. As for the non-linear terms, notice that the convergence in \eqref{wk_conv_Gal} is stronger than the convergence in \eqref{wk_conv}, and so the convergence of the non-linear terms follows just as in the proof of Theorem \ref{exist_weak_visc} (note that $P_n\Phi= \Phi$ and $P_n\varphi=\varphi$ for sufficiently large $n$, due to our choice of test functions in \eqref{test_fcns}). Thus, $(\vect{u},\theta)$ satisfies \eqref{bouss_v_weak}. In particular, choosing $\varphi$ and $\Phi$ to have compact support in $(0,T)$, we see that the equations \eqref{e:funct_1_voigt} and \eqref{e:funct_2_voigt} are satisfied in the sense of distributions in time with values in $V'$ and $H^{-2}$, respectively. Acting with \eqref{e:funct_1_voigt} on $\Phi$, with \eqref{e:funct_2_voigt} on $\varphi$, and integrating in time on $[t_0,t_1]$, we find \begin{subequations}\label{bouss_v_weak_integrated} \begin{align} &\quad\label{bouss_v_weak_integrated_mo} -\int_{t_0}^{t_1}(\vect{u}(s),\Phi'(s))\,ds-\alpha^2\int_{t_0}^{t_1}((\vect{u}(s),\Phi'(s)))\,ds +\sum_{j=1}^2\int_{t_0}^{t_1}(u^j\vect{u},\partial_j\Phi)\,ds \\&\notag = (\vect{u}(t_0),\Phi(t_0))+\alpha^2((\vect{u}(t_0),\Phi(t_0))) - (\vect{u}(t_1),\Phi(t_1))-\alpha^2((\vect{u}(t_1),\Phi(t_1))) \\&\quad\notag +\int_{t_0}^{t_1}(\theta(s)\vect{e}_2,\Phi(s))\,ds, \\&\label{bouss_v_weak_integrated_den} -\int_{t_0}^{t_1}(\theta(s),\varphi'(s))\,ds + \int_{t_0}^{t_1}(\theta\vect{u},\nabla\varphi)\,ds = (\theta(t_0),\varphi(t_0))- (\theta(t_1),\varphi(t_1)). \end{align} \end{subequations} Temporarily restricting our set of test functions to those which are compactly supported in time in $(0,T)$ and considering the case $t_0=0$ and $t_1=T$, it is easy to see from a simple density argument that $(\vect{u},\theta)$ satisfies the equations of \eqref{e:functional_voigt} in the sense of distributions, thanks to \eqref{bouss_v_weak_integrated}. Next, allowing $\Phi(0)$ and $\varphi(0)$ to be arbitrary, but fixing $\Phi(T)=0$ and $\varphi(T)=0$, we act on $\Phi$ with \eqref{e:funct_1_voigt} and on $\varphi$ with \eqref{e:funct_2_voigt} and integrate on $[0,T]$, which results in the equations in \eqref{bouss_v_weak}. Comparing \eqref{bouss_v_weak} with \eqref{bouss_v_weak_integrated}, we find that $\theta(0)=\theta_0$ in the sense of $H^{-1}$ and that $(I+\alpha^2A)\vect{u}(0)=(I+\alpha^2A)\vect{u}_0$ in the sense of $V'$. Inverting $I+\alpha^2A$ gives $\vect{u}(0)=\vect{u}_0$. Furthermore, we may send $t_1\rightarrow t_0$ in \eqref{bouss_v_weak_integrated_den}, and use the density of $C^{\infty}(\mathbb T^2)$ in $L^2(\mathbb T^2)$, as well as the boundedness of $\theta$ in $L^\infty([0,T],L^2(\mathbb T^2))$, to show that $\theta\in C_w([0,T],L^2(\mathbb T^2))$. Next, since we have $\vect{u},\frac{d}{dt}\vect{u}\in L^\infty([0,T],V)\hookrightarrow L^2([0,T],V)$, it follows that $\vect{u}\in C([0,T],V)$ by the Sobolev Embedding Theorem. Thus, we have shown that a weak solution exists in the sense of Definition \ref{def:voigt_sol}. Finally, one may show that if $\theta_0\in L^\infty(\mathbb T^2)$, then $\theta\in L^\infty([0,T],L^\infty)$, by following Step 4 of the proof of Theorem \ref{exist_weak_visc} line-by-line. \end{proof} \begin{theorem}[Uniqueness for the 2D Voigt model]\label{voigt_unique} Let $T>0$ be arbitrary. Suppose $\vect{u}_0\in \mathcal D(A)$ and $\theta_0\in L^2(\mathbb T^2)$. Then there exists a unique solution to \eqref{bouss_v} in the sense of Definition \ref{def:voigt_sol}.
Furthermore, it holds that $\vect{u}\in L^\infty([0,T],\mathcal D(A))$. \end{theorem} \begin{proof} Here, we only sketch the proof, since the ideas are similar to those given above. The existence of solutions to \eqref{bouss_v} has already been established in Theorem \ref{bouss_v_exist}. Thanks to the hypothesis $\vect{u}_0\in \mathcal D(A)$, it is straight-forward to show that $\vect{u}\in C([0,T],\mathcal D(A))$ using, e.g., the methods of \cite{Larios_Titi_2009}, and similarly that $\frac{d\theta}{dt} \in L^2([0,T], H^{-1})$. One can then prove the uniqueness of solutions by following the proof of Theorem \ref{uniqueness_visc} almost line by line. Only slight modifications are needed in the handling of the terms involving $\|\vect{u}\|$, and in using the parameter $\alpha^2$ rather than $\nu$. \end{proof} \begin{theorem}[Convergence as $\alpha\maps0$]\label{t:Convergence} Given initial data $(\vect{u}_0, \theta_0)\in (H^3(\mathbb T^2)\cap V)\times H^3(\mathbb T^2)$, and $(\vect{u}_0^\alpha,\theta_0^\alpha)\in (H^3(\mathbb T^2)\cap V)\times H^3(\mathbb T^2)$, let $(\vect{u},\theta)$ and $(\vect{u}^\alpha,\theta^\alpha)$ be the corresponding solutions to the problems $P_{0,0}^0$ and $P_{0,0}^\alpha$, respectively. Choose an arbitrary $T\in (0,T_{\text{max}})$, where $T_{\text{max}}$ is the maximal time for which a solution to the problem $P_{0,0}^0$ exists and is unique. Suppose that $\vect{u}_0^\alpha\rightarrow\vect{u}_0$ in $V$ and $\theta_0^\alpha\rightarrow\theta_0$ in $L^2(\mathbb T^2)$. Then $\vect{u}^\alpha\rightarrow\vect{u}$ in $L^2([0,T],V)$ and $\theta^\alpha\rightarrow\theta$ in $L^2([0,T],L^2(\mathbb T^2))$. \end{theorem} \begin{proof} Here, for simplicity, we only work formally, but we note that the results can be made rigorous by using the techniques discussed above. Under the hypotheses on the initial conditions, it was proven in \cite{Chae_Nam_1997} that there exists a time $T>0$ and a unique $(\vect{u},\theta) \in C([0,T],H^3({\mathbb T^2})\cap V)\times C([0,T],H^3({\mathbb T^2}))$ solving the problem $P_{0,0}^0$ (in particular, it holds that $T_{\text{max}}>0$). Thanks to Theorems \ref{bouss_v_exist} and \ref{voigt_unique}, we know that there also exists a unique solution to the problem $P_{0,0}^\alpha$, namely $(\vect{u}^\alpha,\theta^\alpha)\in C([0,T], V)\times C([0,T],L^2({\mathbb T^2}))$. Subtracting the corresponding equations from these problems (that is, \eqref{bouss} with $\nu=\kappa=0$ and \eqref{bouss_v}) yields \begin{subequations}\label{bouss_subtract} \begin{align} \label{bouss_subtract_mo} \alpha^2 \frac{d}{dt}\triangle\vect{u}^\alpha+\frac{d}{dt}(\vect{u}-\vect{u}^\alpha) &=-B (\vect{u}-\vect{u}^\alpha,\vect{u}) -B (\vect{u}^\alpha, \vect{u}-\vect{u}^\alpha) +P_\sigma((\theta-\theta^\alpha) \vect{e}_2), \\ \label{bouss_subtract_den} \frac{d}{dt}(\theta-\theta^\alpha) &=- ((\vect{u}-\vect{u}^\alpha)\cdot\nabla)\theta -(\vect{u}^\alpha\cdot\nabla)(\theta-\theta^\alpha). \end{align} \end{subequations} Let us take the inner product of \eqref{bouss_subtract_mo} with $\vect{u}-\vect{u}^\alpha$ and of \eqref{bouss_subtract_den} with $\theta-\theta^\alpha$, and add the results.
After integrating by parts and rearranging the terms, we find \begin{align} &\quad\label{conv_diff_est} \frac{1}{2}\frac{d}{dt}\pnt{\alpha^2\|\vect{u}-\vect{u}^\alpha\|^2+|\vect{u}-\vect{u}^\alpha|^2 +|\theta-\theta^\alpha|^2} \\&=\notag -(B (\vect{u}-\vect{u}^\alpha,\vect{u}),\vect{u}-\vect{u}^\alpha) +((\theta-\theta^\alpha) \vect{e}_2,\vect{u}-\vect{u}^\alpha) \\&\quad\notag -(((\vect{u}-\vect{u}^\alpha)\cdot\nabla)\theta,\theta-\theta^\alpha) -\alpha^2\pair{\triangle\vect{u}_t}{\vect{u}-\vect{u}^\alpha} \\&\leq \notag \|\nabla\vect{u}\|_{L^\infty}|\vect{u}-\vect{u}^\alpha|^2 +|\theta-\theta^\alpha||\vect{u}-\vect{u}^\alpha| \\&\quad\notag +\|\nabla\theta\|_{L^\infty}|\vect{u}-\vect{u}^\alpha||\theta-\theta^\alpha| -\alpha^2\pair{\triangle\vect{u}_t}{\vect{u}-\vect{u}^\alpha} \\&\leq \notag K(|\vect{u}-\vect{u}^\alpha|^2 +|\theta-\theta^\alpha|^2) -\alpha^2\pair{\triangle\vect{u}_t}{\vect{u}-\vect{u}^\alpha}, \end{align} where we have used Young's inequality and the fact that $\|\nabla\vect{u}\|_{L^\infty},\|\nabla\theta\|_{L^\infty}<\infty$. It remains to estimate the last term on the right-hand side. Using the fact that $\vect{u}$ satisfies the momentum equation in \eqref{bouss} (with $\nu=0$), we have \begin{align} &\quad \label{conv_est_H3} -\alpha^2\pair{ \triangle\vect{u}_t}{\vect{u}-\vect{u}^\alpha} \\&=\notag -\alpha^2\pair{ \triangle[-\vect{u}\cdot\nabla \vect{u} -\nabla p + \theta \vect{e}_2]}{\vect{u}-\vect{u}^\alpha} \\&=\notag \alpha^2\pair{ \triangle\vect{u}\cdot\nabla \vect{u}+2(\nabla\vect{u}\cdot\nabla) \nabla\vect{u}+\vect{u}\cdot\nabla \triangle\vect{u} - \triangle\theta \vect{e}_2}{\vect{u}-\vect{u}^\alpha} \\&\leq\notag C\alpha^2\pnt{|\triangle \vect{u}|\|\nabla\vect{u}\|_{L^\infty}+\|\vect{u}\|_{L^\infty}|\nabla\triangle \vect{u}| + |\triangle\theta|}|\vect{u}-\vect{u}^\alpha| \\&\leq\notag C\alpha^2\pnt{\|\vect{u}\|_{H^2}\|\vect{u}\|_{H^3} + \|\theta\|_{H^2}}|\vect{u}-\vect{u}^\alpha| \\&\leq\notag \alpha^2K|\vect{u}-\vect{u}^\alpha|. \end{align} For the second equality, we used \eqref{deRham}. Combining \eqref{conv_diff_est} with \eqref{conv_est_H3}, using Young's inequality once more in the form $\alpha^2K|\vect{u}-\vect{u}^\alpha|\leq \frac{K}{2}\pnt{\alpha^4+|\vect{u}-\vect{u}^\alpha|^2}$, and applying Gr\"onwall's inequality yields \begin{align} &\quad\notag \alpha^2\|\vect{u}(t)-\vect{u}^\alpha(t)\|^2+|\vect{u}(t)-\vect{u}^\alpha(t)|^2+|\theta(t)-\theta^\alpha(t)|^2 \\&\leq \pnt{\alpha^2\|\vect{u}_0-\vect{u}^\alpha_0\|^2+|\vect{u}_0-\vect{u}^\alpha_0|^2 +|\theta_0-\theta^\alpha_0|^2+K\alpha^4 t}e^{CKt}. \end{align} Thus, if $\vect{u}^\alpha_0\rightarrow \vect{u}_0$ in $V$ and $\theta_0^\alpha\rightarrow \theta_0$ in $L^2(\mathbb T^2)$, as $\alpha\rightarrow 0$ (in particular, if $\vect{u}^\alpha_0= \vect{u}_0$ and $\theta_0^\alpha= \theta_0$ for all $\alpha>0$), then $\vect{u}^\alpha\rightarrow \vect{u}$ in $L^\infty([0,T],H)$, $\alpha\|\vect{u}-\vect{u}^\alpha\|_{L^\infty([0,T],V)}\rightarrow0$, and $\theta^\alpha\rightarrow \theta$ in $L^\infty([0,T],L^2)$, as $\alpha\rightarrow 0$. \end{proof} \begin{theorem}[Blow-up criterion]\label{t:blow_up_criterion} With the same notation and assumptions as in Theorem \ref{t:Convergence}, suppose that for some $T_*<\infty$, we have \begin{align}\label{blow_up_ineq} \sup_{t\in[0,T_*)}\limsup_{\alpha\maps0}\alpha^2\|\vect{u}^\alpha(t)\|^2>0. \end{align} Then the solutions to $P_{0,0}^0$ become singular in the time interval $[0,T_*)$.
\end{theorem} \begin{proof} To get a contradiction, suppose that $(\vect{u},\theta)$ stays bounded in $(H^3({\mathbb T^2})\cap V)\times H^3({\mathbb T^2})$, but that \eqref{blow_up_ineq} holds. Taking the inner product of the momentum equation with $\vect{u}^\alpha$ and integrating, we find \begin{align*} \alpha^2\|\vect{u}^\alpha(t)\|^2+|\vect{u}^\alpha(t)|^2 &=\alpha^2\|\vect{u}^\alpha_0\|^2+|\vect{u}^\alpha_0|^2+2\int_0^t(\theta^\alpha(s)\vect{e}_2,\vect{u}^\alpha(s))\,ds. \end{align*} Taking the $\limsup$ as $\alpha\rightarrow 0^+$, by virtue of Theorem \ref{bouss_v_exist} and Theorem \ref{t:Convergence} we have \begin{align}\label{blow_up_limsup} \limsup_{\alpha\maps0^+}\pnt{\alpha^2\|\vect{u}^\alpha(t)\|^2}+|\vect{u}(t)|^2 &=|\vect{u}_0|^2+2\int_0^t(\theta(s)\vect{e}_2,\vect{u}(s))\,ds. \end{align} However, given the hypotheses on the initial data and the well-posedness results of \cite{Chae_Nam_1997}, it is straight-forward to prove the following energy equality: \begin{align*} |\vect{u}(t)|^2&=|\vect{u}_0|^2+2\int_0^t(\theta(s)\vect{e}_2,\vect{u}(s))\,ds, \end{align*} so that \eqref{blow_up_limsup} yields $\limsup_{\alpha\maps0^+}\alpha^2\|\vect{u}^\alpha(t)\|^2=0$ for every $t\in[0,T_*)$, which contradicts \eqref{blow_up_ineq}. \end{proof} \section{The 3D Boussinesq-Voigt Equations} \label{s:3D_Bouss_Voigt} We now briefly outline an extension of the previous results to the case of the three-dimensional Boussinesq-Voigt equations. The details are very similar to the 2D case, so we only prove formal \textit{a priori} estimates. In order to control the higher-order derivatives, we add a diffusion term to the transport equation. This approach is similar to that used in \cite{Larios_Titi_2009,Larios_Titi_2010_MHD,Catania_Secchi_2009,Catania_2009} to prove global well-posedness for two Voigt-regularizations of the 3D MHD equations. We consider the following system, written in functional form, which we refer to as $P_{0,\kappa}^\alpha$. \begin{subequations}\label{bouss_v_3D} \begin{align} \label{bouss_v_3D_mom} (I+\alpha^2A)\pd{\vect{u}}{t} + B(\vect{u},\vect{u}) &= P_\sigma(\theta\vect{e}_3), \\\label{bouss_v_3D_den} \pd{\theta}{t} +\mathcal{B}(\vect{u},\theta) &= \kappa \triangle\theta, \\\label{bouss_v_3D_IC} \vect{u}(0)=\vect{u}_0,\quad\theta(0)&=\theta_0. \end{align} \end{subequations} \begin{remark} Note that one could also control the higher-order derivatives by adding the Voigt-term $-\beta^2\triangle\frac{d}{dt}\theta$, $\beta>0$, to the left-hand side of \eqref{bouss_v_3D_den}, to allow for the case $\kappa=0$. Although the resulting regularized system is well-posed, for the sake of brevity, we do not pursue this type of additional Voigt-regularization here. However, a similar idea has been investigated in the context of the MHD equations in \cite{Larios_Titi_2009} (cf. \cite{Larios_Titi_2010_MHD}), and also in \cite{Catania_Secchi_2009}. \end{remark} \begin{definition}\label{def:bouss_v_3D} Let $\vect{u}_0\in V\cap H^3(\mathbb T^3)$, $\theta_0\in L^2(\mathbb T^3)$. For a given $T>0$, we say that $(\vect{u},\theta)$ is a \textit{solution} to the problem $P_{0,\kappa}^\alpha$ (in three dimensions) on the interval $[0,T]$ if it satisfies \eqref{bouss_v_3D_mom} in the sense of $L^2([0,T],V)$ and \eqref{bouss_v_3D_den} in the sense of $L^2([0,T],H^{-1})$. Furthermore, $\vect{u}\in C([0,T],V\cap H^3)$, $\pd{\vect{u}}{t}\in L^\infty([0,T],\mathcal{D}(A))\cap L^2([0,T],V\cap H^3)$, $\theta\in L^2([0,T],H^1)\cap C_w([0,T],L^2)$ and $\pd{\theta}{t}\in L^2([0,T],H^{-1})$. \end{definition} \begin{theorem} Let $\vect{u}_0\in V\cap H^3(\mathbb T^3)$, $\theta_0\in L^2(\mathbb T^3)$, and let $T>0$ be arbitrary.
Then there exists a solution to \eqref{bouss_v_3D} in the sense of Definition \ref{def:bouss_v_3D}. Furthermore, if $\theta_0\in L^p(\mathbb T^3)$ for some $p\in[2,\infty]$, then $\theta\in L^\infty([0,T],L^p)$. In the case $\theta_0\in L^\infty(\mathbb T^3)$, the solution is unique. \end{theorem} \begin{proof} As mentioned above, we only establish formal \textit{a priori} estimates here. Suppose for a moment that $\theta_0\in L^p(\mathbb T^3)$. Formally taking the inner product of \eqref{bouss_v_3D_den} with $|\theta|^{p-2}\theta$, $p\in[2,\infty)$, we find as above that \begin{align}\label{bouss_v_3D_theta_Lp} \frac{1}{p}\frac{d}{dt}\|\theta\|_{L^p}^p+\kappa(p-1)\int_{\mathbb T^3}|\nabla\theta|^2|\theta|^{p-2}\,d\vect{x} =0. \end{align} Dropping the term involving $\kappa$, integrating in time, and sending $p\rightarrow\infty$, we find for all $p\in[2,\infty]$, \begin{align*} \|\theta(t)\|_{L^p} \leq \|\theta_0\|_{L^p}. \end{align*} On the other hand, setting $p=2$ in \eqref{bouss_v_3D_theta_Lp} and integrating in time, we find \begin{align*} |\theta(t)|^2+2\kappa\int_0^t\|\theta(s)\|^2\,ds \leq |\theta_0|^2. \end{align*} Next, following similar steps as in the derivation of \eqref{bouss_L2_est} and \eqref{v_u_en}, we find for $t\in[0,T]$, \begin{align*} |\vect{u}(t)|^2+\alpha^2\|\vect{u}(t)\|^2 \leq \pnt{\pnt{|\vect{u}_0|^2+\alpha^2\|\vect{u}_0\|^2}^{1/2} + T|\theta_0|}^2 =:(K_{\alpha,1})^2. \end{align*} By formally taking the inner product of \eqref{bouss_v_3D_mom} with $A\vect{u}$, we find \begin{align*} \frac{1}{2}\frac{d}{dt}(\|\vect{u}\|^2+\alpha^2|A\vect{u}|^2) &= -(B(\vect{u},\vect{u}),A\vect{u})+(\theta \vect{e}_3,A\vect{u}) \\&\leq C\|\vect{u}\||A\vect{u}|^2+|\theta| |A\vect{u}| \leq C K_{\alpha,1}\alpha^{-1}|A\vect{u}|^2+|\theta_0| |A\vect{u}| \\&\leq C(1+K_{\alpha,1}\alpha^{-1}) |A\vect{u}|^2+|\theta_0|^2. \end{align*} Using Gr\"onwall's inequality (see the computation recorded below), we obtain a constant $K_{\alpha,2}=(\|\vect{u}_0\|^2+\alpha^2|A\vect{u}_0|^2+\frac{2K_{\alpha,1}^2}{T})e^{c_0T}$, where $c_0:=C\alpha^{-2}+K_{\alpha,1}\alpha^{-3}$, such that $\|\vect{u}(t)\|^2+\alpha^2|A\vect{u}(t)|^2\leq K_{\alpha,2}$ for a.e.\ $t\in[0,T]$. Next, we formally take the inner product of \eqref{bouss_v_3D_mom} with $A^2\vect{u}$ (recalling that, in the periodic case, $-\triangle = A$), to find \begin{align} &\quad\notag \frac{1}{2}\frac{d}{dt}(|A\vect{u}|^2+\alpha^2\|A\vect{u}\|^2) = -\sum_{j=1}^3(u^j\partial_j\vect{u},\triangle^2\vect{u})+(\theta \vect{e}_3,\triangle^2\vect{u}) \\&=\notag -\sum_{j=1}^3(\triangle u^j\cdot\partial_j\vect{u},\triangle \vect{u}) -2 \sum_{i,j=1}^3(\partial_i u^j\cdot\partial_i\partial_j\vect{u},\triangle \vect{u}) -\sum_{j=1}^3(u^j\partial_j\triangle \vect{u},\triangle \vect{u}) \\&\quad\notag -((\theta \vect{e}_3,\triangle\vect{u})) \\&\leq\label{bouss_v_3D_u_H3} C|A\vect{u}|^2\|A\vect{u}\|+\|\theta\| \|A\vect{u}\| \leq C\pnt{(K_{\alpha,2})^4+\|A\vect{u}\|^2+\|\theta\|^2}. \end{align} Integrating \eqref{bouss_v_3D_u_H3} in time, using that $\theta\in L^2([0,T],H^1)$, and applying Gr\"onwall's inequality once more, we obtain a constant $K_{\alpha,3}$, depending on $\alpha$, $T$, and the initial data, such that $\|\vect{u}(t)\|_{H^3}\leq K_{\alpha,3}$ for a.e.\ $t\in[0,T]$. These \textit{a priori} estimates can be used to form a rigorous argument as follows. In the case $p=2$, existence can be proven by using, e.g., the Galerkin method as in the proof of Theorem \ref{bouss_v_exist}, substituting the \textit{a priori} estimates established in this section as necessary. Passage to the limit, estimates on the time derivatives, and the continuity properties of Definition \ref{def:bouss_v_3D} can be established using similar ideas to those used in the proofs of some of the previous theorems.
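Before turning to the case $p\in(2,\infty)$, we record, for completeness, the Gr\"onwall computation behind the constant $K_{\alpha,2}$ above (absorbing harmless numerical constants into $C$ and $c_0$). Setting $y(t):=\|\vect{u}(t)\|^2+\alpha^2|A\vect{u}(t)|^2$ and using $|A\vect{u}|^2\leq \alpha^{-2}y$, the $H^1$ estimate above implies \begin{align*} \frac{dy}{dt} \leq c_0\, y + 2|\theta_0|^2, \end{align*} with $c_0$ as above, so that \begin{align*} y(t) \leq e^{c_0 t}y(0)+2|\theta_0|^2\int_0^t e^{c_0(t-s)}\,ds \leq \pnt{\|\vect{u}_0\|^2+\alpha^2|A\vect{u}_0|^2+2|\theta_0|^2T}e^{c_0T}; \end{align*} it then remains to observe that $2|\theta_0|^2T\leq \frac{2K_{\alpha,1}^2}{T}$, by the definition of $K_{\alpha,1}$.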
For the case $\theta_0\in L^p(\mathbb T^3)$, with $p\in(2,\infty)$, we begin by smoothing $\theta_0$ (e.g., by convolving it with a mollifier) to get smooth functions $\theta_0^\epsilon$ for each $\epsilon>0$, which converge to $\theta_0$ as $\epsilon\maps0$ in several relevant norms. Clearly $\theta_0^\epsilon\in L^2(\mathbb T^3)$. Thus, thanks to the existence of solutions for the $p=2$ case, there exists a solution $(\vect{u}^\epsilon,\theta^\epsilon)$ to \eqref{bouss_v_3D} with initial data $(\vect{u}_0,\theta_0^\epsilon)$ such that $\theta^\epsilon \in L^\infty([0,T],L^2)$ and $\vect{u}^\epsilon\in C([0,T],V\cap H^3)$. One may then show that $\theta^\epsilon \in L^\infty([0,T],H^2)$, e.g., by using the Galerkin method and deriving straight-forward \textit{a priori} estimates on higher derivatives. Since in three dimensions $L^\infty([0,T],H^2)$ is an algebra, it follows that $|\theta^\epsilon|^{p-2}\theta^\epsilon\in L^\infty([0,T],H^2)$, so that the above \textit{a priori} estimates can be established rigorously for $(\vect{u}^\epsilon,\theta^\epsilon)$. Furthermore, the bounds can be made to be independent of $\epsilon$. Standard arguments show that one can extract subsequences of $(\vect{u}^\epsilon,\theta^\epsilon)$ which converge in several relevant norms as $\epsilon\maps0$ to a solution $(\vect{u},\theta)$ of \eqref{bouss_v_3D} corresponding to initial data $(\vect{u}_0,\theta_0)$. Taking the limit as $\epsilon\maps0$, one may show that $\theta\in L^\infty([0,T],L^p)$. Finally, in the case $p=\infty$, one may employ, e.g., the Hopf-Stampacchia technique used in Step 4 of the proof of Theorem \ref{exist_weak_visc}. With the above \textit{a priori} estimates formally established, we now show the uniqueness of solutions to \eqref{bouss_v_3D} under the additional hypothesis that $\theta_0\in L^\infty(\mathbb T^3)$. Let $(\vect{u}_1,\theta_1)$ and $(\vect{u}_2,\theta_2)$ be two solutions to \eqref{bouss_v_3D} with the same initial data $(\vect{u}_0,\theta_0)$. Let us write $\diff{\bu}:=\vect{u}_1-\vect{u}_2$ and $\diff{\theta}:=\theta_1-\theta_2$. As in the proof of Theorem \ref{uniqueness_visc}, we also write $\diff{\xi}=\triangle^{-1}\diff{\theta}$ and $\xi_i=\triangle^{-1}\theta_i$, $i=1,2$, subject to the side condition $\int_{\mathbb T^3} \xi_i\,d\vect{x}=0$ for $i=1,2$. Following nearly identical steps to the derivation of \eqref{bouss_u_diff}, we find \begin{align} \frac{1}{2}\frac{d}{dt}(|\diff{\bu}|^2 + \alpha^2\|\diff{\bu}\|^2) &\leq\label{bouss_v_u_diff} C_{\alpha,\kappa} K_{\alpha,1}(|\diff{\bu}|^2+\alpha^2\|\diff{\bu}\|^2+\|\diff{\xi}\|^2). \end{align} Similarly, following nearly identical steps to the derivation of \eqref{bouss_diff_en_xi}, we find \begin{align} \frac{1}{2}\frac{d}{dt}\|\diff{\xi}\|^2 +\kappa|\triangle\diff{\xi}|^2 &=\notag \pair{\diff{\bu}\triangle\xi_1}{\nabla\diff{\xi}} +\sum_{j=1}^3\pair{\partial_j\vect{u}_2}{\nabla\diff{\xi}\partial_j\diff{\xi}} \\&\leq |\diff{\bu}|\|\theta_1\|_{L^\infty}\|\diff{\xi}\| +\|\vect{u}_2\|_{H^3}\|\diff{\xi}\|^2 \leq\label{bouss_v_xi_diff} K_{\alpha,4}(|\diff{\bu}|^2+\|\diff{\xi}\|^2), \end{align} where $K_{\alpha,4} := \max\set{2\|\theta_0\|_{L^\infty},K_{\alpha,3}}<\infty$. Uniqueness now follows by adding \eqref{bouss_v_u_diff} and \eqref{bouss_v_xi_diff} and using Gr\"onwall's inequality. \end{proof} \section{Appendix} \label{s:app} We prove the inequality \eqref{brezis}.
The proof is based on the proof of the Brezis-Gallouet inequality \cite{Brezis_Gallouet_1980} and follows almost line-by-line the proof given in \cite{Cao_Titi_2009}. For $\vect{w}\in\mathcal{D}(A)$, let us write \begin{align*} \vect{w} = \sum_{\vect{k}\in\field{Z}^2\setminus\{(0,0)\}}a_\vect{k}\vect{w}_\vect{k}, \end{align*} where $\vect{w}_\vect{k}$ are the (normalized) eigenfunctions of $A$ (see Section \ref{sec:Pre}) and $a_\vect{k}:=(\vect{w},\vect{w}_\vect{k})$. Choose $M=(e^{2/\epsilon^{1/4}}-1)^{1/2}$ for a given $\epsilon\in(0,1]$, sufficiently small so that $M>1$. We have {\allowdisplaybreaks \begin{align*} \|\vect{w}\|_{L^\infty} &\leq \sum_{\vect{k}\in\field{Z}^2\setminus\{(0,0)\}}|a_\vect{k}| = \sum_{0<|\vect{k}|\leq M}|a_\vect{k}| +\sum_{|\vect{k}|> M}|a_\vect{k}| \\&= \sum_{0<|\vect{k}|\leq M}\frac{(1+|\vect{k}|^2)^{1/2}}{(1+|\vect{k}|^2)^{1/2}}|a_\vect{k}| +\sum_{|\vect{k}|> M}\frac{(1+|\vect{k}|^2)}{(1+|\vect{k}|^2)}|a_\vect{k}| \\&\leq \pnt{\sum_{0<|\vect{k}|\leq M}(1+|\vect{k}|^2)|a_\vect{k}|^2}^{1/2} \pnt{\sum_{0<|\vect{k}|\leq M}\frac{1}{(1+|\vect{k}|^2)}}^{1/2} \\&\quad +\pnt{\sum_{|\vect{k}|> M}(1+|\vect{k}|^2)^2|a_\vect{k}|^2}^{1/2} \pnt{\sum_{|\vect{k}|> M}\frac{1}{(1+|\vect{k}|^2)^2}}^{1/2} \\&\leq C\|\vect{w}\| \pnt{\int_{|\vect{x}|\leq M}\frac{d\vect{x}}{(1+|\vect{x}|^2)}}^{1/2} +C|A\vect{w}| \pnt{\int_{|\vect{x}|> M}\frac{d\vect{x}}{(1+|\vect{x}|^2)^2}}^{1/2} \\&\leq C\|\vect{w}\| \pnt{\pi\log(1+M^2)}^{1/2} +C|A\vect{w}|\pnt{\frac{\pi}{1+M^2}}^{1/2} \\&\leq C\pnt{\|\vect{w}\|\epsilon^{-1/8}+ |A\vect{w}|e^{-1/\epsilon^{1/4}}} \leq C\pnt{\|\vect{w}\|\epsilon^{-1/4}+ |A\vect{w}|e^{-1/\epsilon^{1/4}}}, \end{align*} } where in the last step we used that $\epsilon^{-1/8}\leq\epsilon^{-1/4}$, since $\epsilon\leq 1$. \section*{Acknowledgements} The authors are thankful for the warm hospitality of the Institute for Mathematics and its Applications (IMA), University of Minnesota, where part of this work was completed. This work was supported in part by the NSF grants no.~DMS-0708832, DMS-1009950. E.S.T. also acknowledges the kind hospitality of the Freie Universit\"at - Berlin, and the support of the Alexander von Humboldt Stiftung/Foundation and the Minerva Stiftung/Foundation. \begin{scriptsize} \bibliographystyle{amsplain
1,314,259,994,046
arxiv
\section{Introduction} The properties of polyatomic systems we encounter in physics and chemistry can be extremely challenging to understand. First of all, many of these systems are strongly correlated, in the sense that their complex behavior cannot be easily deduced from the properties of their individual constituents -- isolated atoms and molecules. Second, in realistic experiments these systems are usually found far from their thermal equilibrium, as they are perturbed by the surrounding environment, be it a solution, a gas, or lattice vibrations in a crystal. Quite often, however, insight into the behavior of such complex many-body systems can be obtained from studying the simplified problem of a single quantum particle coupled to an environment. These so-called `impurity problems' represent an important part of modern condensed matter physics~\cite{WeissBook, Breuer2002}. Interest in quantum impurities goes back to the classic works of Landau, Pekar, Fr\"ohlich, and Feynman, who studied the motion of electrons in crystals~\cite{LandauPolaron, LandauPekarJETP48, FrohlichAdvPhys54, FeynmanPR55, AppelPolarons, Devreese13}. In its most general formulation, such a problem involves the coordinates and momenta of all the electrons and nuclei in the crystal -- some $10^{23}$ degrees of freedom -- and is therefore intractable by any existing numerical technique. The problem, however, can be drastically simplified by using a trick very common among condensed matter physicists -- that of introducing `quasiparticles.' A quasiparticle is a collective object whose properties are qualitatively similar to those of free particles; quantitatively, however, they depend on the coupling between the particle and the environment. Fig.~\ref{quasi} shows a few examples of quasiparticles. For example, the behavior of an electron interacting with a crystalline lattice can be understood in terms of a so-called polaron quasiparticle, composed of an electron dressed by a coat of lattice excitations~\cite{EminPolarons, PolaronsExcitons}. A polaron effectively behaves as a free electron with a larger effective mass, whose exact magnitude depends on the value of the electron-lattice coupling. Casting the many-body problem in terms of polarons has made it possible to gain insight into the physics of semiconductors and polymers~\cite{AppelPolarons,EminPolarons, PolaronsExcitons}, high-temperature superconductors~\cite{PolaronsHighTc}, a variety of other strongly correlated electron materials~\cite{Nagaev1975,Trugman1988,Aleksandrov_polaron_book}, $^3$He atoms immersed in superfluid $^4$He~\cite{Bardeen1967}, and nuclear matter~\cite{Forbes2014}. Furthermore, using `dressed impurities' as a building block of a many-particle system is instrumental in several computational techniques, such as dynamical mean-field theory~\cite{Metzner1989,Georges1996,GullRMP11} and the dual boson approach~\cite{Rubtsov12}. Most of the impurity problems arose in the context of condensed-matter physics and were originally developed to treat point-like particles, such as electrons or single spins. However, many of the impurity models can be successfully applied to more complex, composite quantum objects, such as atoms. For instance, various polaron models have been realized in the laboratory by immersing a single atom or ion in an ultracold Bose or Fermi gas~\cite{ChikkaturPRL00, SchirotzekPRL09, PalzerPRL09, KohstallNature12, KoschorreckNature12, SpethmannPRL12, FukuharaNatPhys13, ScellePRL13, Cetina15, MassignanRPP14,Jorgensen2016, Hu16, Cetina2016}.
There, none of the electronically excited states of the atom can be populated, due both to the weak interactions with the surrounding bath and to the small collisional energies involved. Therefore, the atom resides in its (usually spherically symmetric) ground state, and can be considered a point-like particle for all practical purposes. Due attention has also been paid to the spin degrees of freedom, which play a crucial role in the properties of crystalline and amorphous solids~\cite{ChaikinLubensky}. For instance, extensive research has been done on single localized spins coupled to a bath of bosons~\cite{LeggettRMP87}, fermions~\cite{Anderson1961,LutchynPRB08,KnapPRX12}, and other spins~\cite{ProkofievSpinBath00}. However, although nonzero spin provides an impurity with an additional degree of freedom, in most cases discussed in the context of condensed matter physics such impurities can still be described as entities with no extended spatial structure. For instance, in ultracold gases, the spin variable can be efficiently mapped onto the hyperfine states of the atoms or ions~\cite{BlochRMP08}, preserving the point-like nature of the latter. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{quasi.pdf} \caption{\label{quasi} Examples of quasiparticles. (a) Polaron: an electron in a solid interacts with the crystal lattice. As a result, it is `dressed' by the resulting polarization cloud of nuclear displacements, and forms the so-called polaron. (b) Exciton: a bound state between an excited electron and a hole, which propagates in a semiconductor. (c) Angulon: a quantum rotor dressed by a field of phonons. } \end{figure} Things change quite drastically, however, if one considers polyatomic impurities, such as molecules. Contrary to electrons or closed-shell atoms, molecules are extended objects which possess a number of fundamentally different types of internal motion. The latter correspond to the rotational and vibrational degrees of freedom, which can couple to each other, as well as to the orbital and spin angular momentum of electrons, resulting in an involved energy level structure~\cite{KreStwFrieColdMol, LemKreDoyKais13, LevebvreBrionField2}. These `extra' degrees of freedom have been key to numerous applications of molecular physics, from the classic example of the ammonia maser~\cite{GordonPR55}, to controlling the stereodynamics of chemical reactions~\cite{deMirandaNatPhys11}, to the recent measurements of the electron's electric dipole moment~\cite{ACMEedm}. Finally, these degrees of freedom occupy the low-energy part of the energy spectrum. As a result, they can be easily altered by the interactions with the surrounding medium, which makes them a rich resource for studying novel aspects of many-body physics. In several experimental realizations, isolated molecules are coupled to a bath of some kind. Even in the most inert environments, the lowest-energy degrees of freedom, such as molecular rotation, are perturbed by the reservoir. As an example, spectroscopists actively study molecules trapped inside inert matrices~\cite{MatrixIsolationBook}. However, even a weakly polarizable crystalline matrix -- such as molecular parahydrogen~\cite{MomoseVibSpec04} -- splits the molecular states by its crystal field and causes rotational line broadening due to the molecule-phonon coupling. On the other hand, molecules are routinely trapped inside small droplets of superfluid helium~\cite{ToenniesAngChem04, StienkemeierJPB06, SzalewiczIRPC08}.
This allows one to isolate a molecule inside a cryogenic environment and thereby perform accurate spectroscopic measurements, free from Doppler and collisional shifts. Furthermore, trapping single molecules in helium droplets makes it easier to study species that are reactive in the gas phase, such as free radicals~\cite{Kupper02}. While superfluid helium is the softest available matrix, it still induces changes in the rotational spectrum of the trapped species, such as the renormalization of the molecular rotational constant~\cite{ToenniesAngChem04}. In the context of cold and ultracold gases, recent experimental progress makes it possible to tame both translational and internal degrees of freedom by making use of electric, magnetic, and optical fields~\cite{LemKreDoyKais13, KreStwFrieColdMol, JinYeCRev12}. In this way, experiments with cold controllable molecules open up the prospect of studying their interaction with an environment in great detail, as is currently being done with atomic impurities~\cite{ChikkaturPRL00, SchirotzekPRL09, PalzerPRL09, KohstallNature12, KoschorreckNature12, SpethmannPRL12, FukuharaNatPhys13, ScellePRL13, Cetina15, MassignanRPP14,Jorgensen2016,Cetina2016}. The goal of this tutorial is two-fold. On the one hand, we would like to introduce the reader to the impurity problem involving molecules, and put it into the context of the other, previously studied impurity problems~\cite{WeissBook}. On the other hand, we aim to address, at the same time, physical chemists working with molecules in helium nanodroplets, atomic physicists interested in ultracold molecules, and condensed matter physicists studying the polaron problem. The tutorial is organized as follows. In Sec.~\ref{sec:superfluid} we introduce the concepts of superfluidity and Bose-Einstein condensation, in the context of liquid helium as well as ultracold atomic gases. Next, in Sec.~\ref{sec:ImpInSuperfluids} we describe recent advances in trapping molecules in superfluid helium droplets, as well as the theoretical understanding of molecule-helium interactions. Over the years there have been several extensive reviews describing the experimental techniques used to trap molecules inside helium droplets~\cite{ToenniesAngChem04}, the spectroscopy and dynamics of molecules~\cite{StienkemeierJPB06}, ionisation experiments~\cite{MudrichIRPC14}, as well as the theoretical approaches to molecules in helium droplets~\cite{SzalewiczIRPC08}. Therefore, we provide only a general survey of the topic. In addition, in this tutorial we describe the experimental settings which can be employed to study molecular impurity physics with ultracold gases. Sec.~\ref{sec:impurity} focuses on the main topic of this tutorial -- a molecule coupled to a many-particle bath. We start from a general microscopic Hamiltonian describing a rotating impurity coupled to a reservoir of bosons. We provide details on the rotational structure of rigid molecules and the angular momentum algebra involved. In order to describe the bosons, we introduce the Bogoliubov approximation and transformation, and discuss the origin of the roton minimum in superfluid helium. Finally, we derive the interaction between a molecule and a many-body bath from first principles. The outcome of this section is the angulon Hamiltonian, Eq.~\eqref{Hamil1}, which is the central object of this tutorial. Next, in Sec.~\ref{sec:angulon} we study this Hamiltonian, first using perturbative and diagrammatic techniques.
We show that even a simple model can describe the renormalization of the rotational constant for molecules in helium droplets, and provide it with a transparent physical interpretation. Furthermore, we demonstrate that the coupling of a molecule to a many-particle bath leads to the emergence of a novel fine structure in the absorption spectrum, induced by many-body interactions. Finally, we describe a canonical transformation that drastically simplifies the solution in the limit of a slowly-rotating molecule. The main content of this tutorial closes with conclusions and an outlook, given in Sec.~\ref{sec:conclusions}. A short note on the relation between the angular momentum operators in the laboratory and molecular frames of reference is given in Appendix~\ref{sec:appendixAngular}. \section{Superfluidity and Bose-Einstein Condensation} \label{sec:superfluid} Already in 1908, Kamerlingh Onnes from the University of Leiden observed that below 4.2 Kelvin helium gas turns into a low-density, colourless liquid. It took around a decade, however, to discover another, far more exotic phase transition, which occurs in the vicinity of 2.17 Kelvin. In 1924, Kamerlingh Onnes and coworkers noticed a density change around the transition point~\cite{GriffinJPC09}. Within the next eight years, Keesom introduced the labels He I and He II for the two phases in his paper with Wolfke~\cite{KeesomPRAA28}, and measured the celebrated $\lambda$-shaped peak in the specific heat in collaboration with Clusius~\cite{KeesomPRAA32}. A schematic phase diagram of helium-4 is shown in Fig.~\ref{Hediag}. It is interesting to note that around that time several people observed a strange behavior of helium below 2.17 K~\cite{GriffinJPC09}; however, none of them considered it important enough to report in print. Apparently, around 1930, Keesom himself pointed out the experimental complications arising from the fact that He II easily leaks out through tiny holes in the apparatus~\cite{GriffinJPC09}. Only in the late 1930s were the hydrodynamic properties of He II properly measured by Allen and Misener~\cite{AllenNat38} in Cambridge, UK and, independently, by Kapitza~\cite{KapitzaNat38} in Moscow. They revealed that the low-temperature phase of helium features `superfluidity' (a term coined by Kapitza by analogy with superconductivity), i.e., it is capable of moving through thin capillaries without any friction. The results were published in two back-to-back papers in Nature in 1938~\cite{AllenNat38, KapitzaNat38}, and thereby pioneered the entire field of quantum liquids and solids~\cite{GriffinJPC09, BalibarLowTemp07, SchmittSuperfluidity, LeggettRMP99}. \begin{figure}[b] \centering \includegraphics[width=0.4\linewidth]{He_diag.pdf} \caption{\label{Hediag} Schematic phase diagram of $^4$He, as a function of pressure, $P$, and temperature, $T$.} \end{figure} Very soon after the experimental discovery, Fritz London suggested~\cite{London38} that superfluidity is closely related to Bose-Einstein condensation (BEC), whose theoretical fundamentals were known by that time \cite{Bose24, Einstein24}. After several discussions with London, Tisza developed the idea further and introduced the `two-fluid model,' postulating that two independent components are present in He II. The first, superfluid component represents a Bose condensate of the atoms, in which a single-particle quantum state is occupied by a macroscopic number of particles. Tisza assumed that the resulting macroscopic coherence allows for flow without friction or viscosity.
The second, normal component, whose fraction depends on temperature, behaves as a regular viscous fluid~\cite{Tisza38}. Three years later, Landau introduced his own version of the two-fluid model by quantizing the hydrodynamic theory of a classical liquid. His theory was phenomenological and did not make direct use of the Bose statistics of the $^4$He particles. Moreover, in the introduction to his seminal paper~\cite{LandauPR41} Landau bluntly disagreed with Tisza and London, writing: ``Tisza's well-known attempt to consider helium II as a degenerate Bose gas cannot be accepted as satisfactory -- even putting aside the fact that liquid helium is not an ideal gas, nothing could prevent the atoms in the normal state from colliding with the excited atoms; i.e., when moving through the liquid they would experience friction and there would be no superfluidity at all.'' It took several decades to unify the theory of Landau with the BEC ideas of London and Tisza. After the first developments by Bogoliubov~\cite{Bogoliubov47} and Beliaev~\cite{Beliaev58}, several many-body calculations performed between 1957 and 1964 revealed that superfluidity is indeed accompanied by Bose condensation of the helium atoms~\cite{Griffin99}. Nevertheless, measuring Bose condensation in experiments with helium has proven challenging, and only indirect techniques such as neutron scattering have been successfully used~\cite{GriffinJPC09, BalibarLowTemp07, SchmittSuperfluidity, LeggettRMP99}. A direct proof of the phase transition to a Bose-Einstein condensate was achieved only in 1995, in experiments with dilute alkali gases~\cite{Pitaevskii2016, PethickSmith}. As opposed to experiments with helium, in ultracold gases BEC can be detected more easily than superfluidity, although the latter was eventually observed as well~\cite{OnofrioPRL00}. A flow without friction is not the only peculiar property of superfluids. For instance, if rotated, superfluids develop vortices -- tiny strings carrying quantized angular momentum, whose number grows with the speed of rotation. Vortices, as fingerprints of the superfluid phase, have been detected in numerous experiments on helium, as well as on ultracold alkali gases~\cite{Pitaevskii2016, PethickSmith, GomezPRL12, Gomez906}. Since the phenomenological model of Landau described all the experimentally observed properties of superfluid helium, the ideas of London and Tisza remained in the shade for several decades. One can speculate, however, that had BEC been observed in ultracold gases before superfluidity in helium, the validity of the London-Tisza theory would never have been in question. Since this tutorial does not provide a detailed theoretical treatment of superfluidity or BEC, the interested reader is referred to several excellent books that have been published on the subject~\cite{Pitaevskii2016, LeggettQuantLiquids, PinesNozieres}. Let us now discuss the conditions necessary to observe superfluidity and BEC. Since these are inherently quantum effects, they manifest themselves at large values of the de Broglie wavelength, \begin{equation} \lambda = \frac{h}{p}. \end{equation} Here $h$ is Planck's constant and $p$ is the particle's momentum. For particles of mass $m$ at temperature $T$, the momentum is $p \sim (m k_B T)^{1/2}$, where $k_B$ is Boltzmann's constant. For matter at `normal' conditions, the resulting de~Broglie wavelength is much smaller than the interparticle distance.
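To get a feeling for the scales involved, the estimate can be carried out in a few lines. The following minimal sketch (with rough, purely illustrative number densities assumed) compares $\lambda$ with the mean interparticle spacing, $n^{-1/3}$, for a room-temperature gas and for liquid helium near the transition point:
\begin{verbatim}
# Minimal sketch (illustrative values only): thermal de Broglie
# wavelength lambda = h / sqrt(m kB T) vs. the mean interparticle
# spacing <r> = n**(-1/3).
import math

h, kB, u = 6.626e-34, 1.381e-23, 1.661e-27    # SI units

def de_broglie(m, T):
    return h / math.sqrt(m * kB * T)

def spacing(n):
    return n ** (-1.0 / 3.0)

# N2 gas at ambient conditions: lambda << <r>  (classical regime)
print(de_broglie(28 * u, 300.0), spacing(2.5e25))   # ~5e-11 m vs ~3e-9 m

# liquid 4He near 2.2 K: lambda > <r>  (quantum regime)
print(de_broglie(4 * u, 2.2), spacing(2.2e28))      # ~1.5e-9 m vs ~3.6e-10 m
\end{verbatim}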
As a result, the properties of gaseous or liquid matter can be successfully described by the laws of classical statistical physics, treating individual atoms and molecules as rigid spheres or ellipsoids. However, this is not the case in the regime of low temperatures or high densities, where the average distance between the particles, $\langle r \rangle$, becomes shorter than the de~Broglie wavelength, $\langle r \rangle \lesssim \lambda$. Given that $\langle r \rangle$ can be expressed through the particle number density, $n$, as $\langle r \rangle = n^{-1/3}$, we obtain the following relation: \begin{equation} \label{qm_effects} k_B T \lesssim n^{2/3} \hbar^2/m \end{equation} From Eq.~\eqref{qm_effects} one can see that the role played by quantum effects in the properties of a gas or liquid is determined by the interplay between its density and temperature. In dilute alkali gases, particle densities are usually on the order of $10^{14} - 10^{15}$ particles per cm$^{3}$. Therefore, in order to achieve BEC, one needs to cool the gas down to microkelvin temperatures. In superfluid helium, on the other hand, the densities are much larger, on the order of $10^{22}$~cm$^{-3}$~\cite{Donnelly1998}. As a result, the BEC transition occurs at a relatively high temperature of $\sim 2$ Kelvin. One should keep in mind that Eq.~\eqref{qm_effects} does not take interatomic interactions into account and therefore provides only a rough estimate for the transition temperature. In dilute alkali gases the interactions between the atoms are sufficiently weak that almost all of them can reach the BEC state. Due to the strong He--He interactions, on the other hand, only a small fraction of superfluid helium ($\sim 6-8$\%) reaches the BEC state at $T=0$. Out of all the bosonic atoms in the periodic table, $^4$He is the only one that becomes superfluid `naturally,' i.e.\ upon a transition from a normal liquid phase. What is so special about helium? The reason lies in the interplay between the kinetic energy of the atoms and their mutual interactions. Helium is extremely light and only weakly polarizable, and therefore the kinetic energy (`zero-point motion') of the helium atoms is always larger than the interactions between them. This precludes the atoms from freezing into a crystalline lattice, unless an external pressure is applied. Other elements, on the other hand, solidify at temperatures much higher than those given by Eq.~\eqref{qm_effects}, and the superfluid transition never takes place. While Eq.~\eqref{qm_effects} provides a criterion for the importance of quantum mechanical effects, additional phenomena arise from quantum statistics. Quantum particles are divided into bosons, whose many-body wavefunction stays intact under particle exchange, and fermions, whose many-body wavefunction changes sign. In order for quantum statistics to play a role, however, the particles are required to be able to physically change places. This is the case in gases and liquids, but not in solids, where tunnelling of the nuclei forming the lattice is greatly suppressed. Therefore, quantum statistics plays no role for the lattice sites in solids. The situation is different in molecular physics, where indistinguishable nuclei can exchange positions and, hence, quantum statistics becomes relevant. For instance, the wavefunction of a molecule composed of two identical bosonic nuclei, e.g.\ $^{12}$C$_2$, has to be symmetric under exchange of the nuclei.
As a result, the odd rotational states are missing: exchanging the two nuclei is equivalent to inverting the molecular axis, under which the rotational wavefunctions transform as $Y_{jm} \to (-1)^j Y_{jm}$, so that the odd-$j$ states would break the required symmetry. If one replaces one of the nuclei by its isotope, say $^{14}$C, so that the nuclei become distinguishable, all rotational states show up in the spectrum~\cite{FurtenbacherAstrJ16}. \section{Molecular impurities trapped inside superfluids} \label{sec:ImpInSuperfluids} \begin{figure}[b] \centering \includegraphics[width=0.5\linewidth]{OCSHe.pdf} \caption{\label{OCS} Rovibrational spectra of the OCS molecule. (a) In the gas phase. (b) Inside superfluid $^4$He: the lines are sharp, but the rotational level splittings are renormalized. (c) Inside non-superfluid $^3$He: the rotational lines are completely broadened by the helium environment. Adapted with permission from Ref.~\cite{ToenniesAngChem04}. } \end{figure} For several decades after the discovery of superfluidity, only the macroscopic, hydrodynamic properties of helium were studied. Employing \textit{microscopic} probes, on the other hand, seemed extremely challenging, since it turned out that superfluid helium is averse to mixing with impurities. Only in the 1990s was it demonstrated that atoms and molecules can be trapped in helium if the latter forms little droplets~\cite{ToenniesAngChem04, SzalewiczIRPC08}. Over the following years, trapping atoms, molecules, and ions inside superfluid helium droplets -- sometimes called `nanocryostats' -- emerged as an important tool in spectroscopy~\cite{ToenniesAngChem04}. Superfluid helium droplets cool the molecules to $\sim 0.4$ Kelvin~\cite{StienkemeierJPB06} and isolate them from external perturbations. This makes it possible to record spectra free of collisional and Doppler broadening, as well as to trap and study species that are reactive in the gas phase. By now, matrix isolation spectroscopy based on helium droplets has evolved into a large field, which has been the subject of several review articles~\cite{ToenniesARPC98, ToenniesAngChem04, SzalewiczIRPC08, StienkemeierJPB06, MudrichIRPC14}. Therefore, here we describe only the main effects arising due to the interactions between the molecule and the helium droplet, without providing too many technical details. \begin{table}[b] \begin{tabular}{| c | c | c | c | c| } \hline Molecule & $B$ (cm$^{-1}$) & $B^\ast$ (cm$^{-1}$) & $B^\ast/B$ & Ref. \\ \hline HF & 19.787 & 19.47 & 0.98 & \cite{NautaJCP00} \\ HCN & 1.478 & 1.204 & 0.81 & \cite{NautaPRL99, ConjusteauJCP00} \\ CO$_2$ & 0.39 & 0.154 & 0.39 & \cite{NautaJCP01} \\ OCS & 0.2029 & 0.0732 & 0.36 & \cite{GrebenevJCP00} \\ N$_2$O & 0.4187 & 0.0717 & 0.171 & \cite{NautaJCP01, XuPRL03} \\ \hline \end{tabular} \caption{\label{tab:mol} The rotational spectrum of molecules in superfluid helium droplets can be approximated as $B^\ast J(J+1)$, where $B^\ast$ is the effective rotational constant, and $J=0,1,2 \dots$ labels the rotational levels. The table gives examples of $B^\ast$ in comparison with the gas-phase value of the rotational constant, $B$, for linear rotor molecules. Energies are given in cm$^{-1}$. } \end{table} Among all the inert gases conventionally used for matrix isolation~\cite{MatrixIsolationBook}, helium represents the `softest,' i.e.\ the least polarizable, environment. Furthermore, since at atmospheric pressure helium does not form a crystal, it does not break the rotational symmetry of the molecules, unlike another weakly-interacting matrix -- molecular \textit{para}-hydrogen~\cite{MomoseVibSpec04}. As a result, the impurity molecules interact with the surrounding matrix only weakly.
Figure~\ref{OCS} shows the celebrated experimental spectrum of the OCS molecule -- a prototypical linear rotor -- in the gas phase, in superfluid $^4$He droplets, as well as in droplets of $^3$He, which is not superfluid at this temperature. Comparing panels (a) and (b), one can see that the only change induced by the $^4$He environment is a renormalized spacing between the transition lines. The overall structure of the spectrum remains the same, and the helium environment does not lead to any substantial broadening of the lines~\cite{ToenniesAngChem04}. Panel (c), on the other hand, shows that interactions with a fermionic $^3$He environment broaden the spectral lines of the same molecule to the point that they are no longer resolved. In Ref.~\cite{GrebenevScience98}, where these spectra were first reported, such a behavior was attributed to the hydrodynamic properties of helium, i.e.\ the soft, non-disturbing nature of the superfluid phase. However, as was described later in Ref.~\cite{Babichenko99}, the reason lies rather in quantum statistics, which leads to a drastically different phase space of available scattering processes in bosonic and fermionic environments, as schematically illustrated in Fig.~\ref{BosonsFermions}. One can understand this effect as follows. When an impurity is immersed in a bath of atoms, the atoms scatter off the impurity, which leads to broadening as well as shifts of the spectral lines. In each scattering event with the impurity, the atom's momentum is changed from $\vec{k}_\text{in}$ to $\vec{k}_\text{out}$, which is accompanied by a transfer of momentum $\vec{q}=\vec{k}_\text{in}-\vec{k}_\text{out}$ to the impurity. If the bath consists of a Bose-Einstein condensate, most atoms are in a state of low kinetic energy, so that the incoming momentum is small, $\vec{k}_\text{in} \approx 0$. When colliding with the impurity, atoms can be transferred into finite-momentum states, $\vec{k}_\text{out}$. Since the bosons are initially in their kinetic ground state, their excitation always costs a finite amount of energy. Consequently, such processes are suppressed. In contrast, in the case of $^3\text{He}$ one deals with a degenerate Fermi gas where, due to Pauli blocking, all scattering states up to the Fermi momentum, $k_F$, are occupied, see the right panel of Fig.~\ref{BosonsFermions}. Hence the fermions which scatter off the impurity can be transferred from a plethora of initial states just below the Fermi surface, $|\vec k_\text{in}| \lesssim k_F$, to unoccupied states just above the Fermi surface, $|\vec k_\text{out}| \gtrsim k_F$. Due to the vector addition of momenta, the resulting momentum transfer, $q=|\vec{k}_\text{in}-\vec{k}_\text{out}|$, can range from $q=0$ to $q=2k_F$, without changing the energy of the fermions in a substantial way. Consequently, there is a very large number of such scattering processes available, leading to much stronger effects on the observed spectral features as compared to the case of a BEC environment. Note that in this simplified argument we assumed that the mass of the impurity, $m_\text{imp}$, is large compared to that of the fermions in the environment. In this case, the recoil energy $E_\text{rec} = q^2/(2m_\text{imp})$ gained by the impurity is small and can be neglected. As a result, the number of pathways leading to decoherence and renormalization of the rotational states is substantially larger in a fermionic bath compared to a bosonic one.
\begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{F_vs_B.pdf} \caption{\label{BosonsFermions} When particles in the medium scatter off the impurity, their momentum changes from $\vec k_\text{in}$ to $\vec k_\text{out}$, while the impurity acquires a recoil energy $E_\text{rec}=q^2/(2m_\text{imp})$, where $q=|\vec{k}_\text{in} - \vec{k}_\text{out}|$ is the momentum transfer. In the case of a BEC (left), all bosons initially occupy the $k=0$ state, so that any scattering event also carries a kinetic-energy cost for them. This is drastically different for fermions (right), where the same recoil energy of the impurity can create all kinds of excitations of the fermionic bath, with momentum transfers ranging from $q \sim 0$ to $q \sim 2k_F$ at no additional energy cost.} \end{figure} Table~\ref{tab:mol} compares the rotational constants of several linear-rotor molecules in the gas phase with those inside superfluid helium. One can see that the rotational constants of heavier molecules are subject to a stronger renormalization than those of lighter ones. This comes from the fact that for slowly rotating molecules the anisotropic part of the molecule-helium interaction is, in general, more substantial compared to the kinetic energy. We will discover later, however, that this effect is not entirely due to the hindering of rotation by the presence of He atoms; one of the purposes of the present tutorial is to demonstrate that such a classical picture is by far incomplete. It is often said that the helium matrix perturbs the rotational states only weakly. While this is true as far as the widths of the rotational lines are concerned, the coupling to the environment is, strictly speaking, strong for all molecules with $B^\ast /B \ll 1$, since in this case the interactions are much larger than the kinetic energy. Theoretically, calculating the properties of superfluid helium (even without impurities involved) represents a formidable challenge. While, compared to bulk helium, the droplets consist of a relatively `small' number of atoms (several hundreds to millions), their properties largely coincide with those of the bulk. On the other hand, numerical calculations (e.g.\ those based on Monte-Carlo algorithms) become infeasible already at the level of hundreds of atoms. Nevertheless, even calculations for small systems allow one to understand the properties of molecules inside helium nanodroplets. This is in line with several studies showing that tens of helium atoms suffice to observe superfluidity~\cite{GrebenevScience98, SurinPRL08}. Over the years, the groups of Whaley, Zillich, Krotscheck, Blume, Roy, Moroni, and others performed sophisticated numerical calculations for several different molecules trapped inside small and large He$_n$ clusters (with $n \lesssim 100$). A detailed review of their work is provided in Refs.~\cite{ToenniesAngChem04, SzalewiczIRPC08}. Such calculations revealed the superfluid fraction of the helium droplet and the fraction of He atoms forming a BEC, as well as the spectrum of the molecular impurity, from which the change of its rotational constant can be extracted. Due to the relatively high density of superfluid helium, the helium atoms are densely packed in the vicinity of the molecule, and the details of the two-body molecule-helium potentials are of particular importance. As a result, the accuracy of the potential energy surfaces plays a crucial role in numerical simulations.
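Returning to the renormalization listed in Table~\ref{tab:mol}, its spectroscopic meaning can be made concrete with a few lines of arithmetic: the rotational transition frequencies of a linear rotor are $E_{j+1}-E_j = 2B(j+1)$, evaluated here as a minimal sketch with the gas-phase and in-droplet constants of OCS taken from the table:
\begin{verbatim}
# Sketch: rotational line positions 2B(j+1) of OCS (a linear rotor),
# with the gas-phase and in-helium constants of the table (in cm^-1).
B_gas, B_he = 0.2029, 0.0732

for j in range(4):
    print(f"j={j}->{j+1}: gas {2*B_gas*(j+1):.4f}, in He {2*B_he*(j+1):.4f}")
# The 2B(j+1) structure survives inside the droplet; only the spacing
# is rescaled by B*/B ~ 0.36, cf. panels (a) and (b) of Fig. (OCS).
\end{verbatim}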
An excellent review of theoretical studies of molecules in helium nanodroplets is provided in Ref.~\cite{SzalewiczIRPC08}; therefore, we do not describe the computational machinery of these methods in this tutorial. It is important to note, however, that while \textit{ab initio} quantum mechanical calculations are required to quantitatively reproduce experiments on rovibrational spectroscopy, simpler -- preferably analytic -- models are necessary in order to understand the interactions of molecules with the surrounding environment. In the following sections we introduce such models, based on the angulon quasiparticle, and show that they provide insights into molecular rotation inside superfluid helium. While studies of molecules in helium droplets traditionally belong to the domain of chemical physics and physical chemistry, during the last decades physicists working with ultracold quantum gases have achieved an unprecedented control over internal and external atomic degrees of freedom~\cite{Pitaevskii2016, PethickSmith}. There, the physics of structureless impurities -- polarons -- has been extensively studied. As one example, an ion or atom immersed in a sea of bosons or fermions dresses itself with a `coat' of many-particle excitations, thereby turning into a Bose- or Fermi-polaron (depending on the bath statistics). Both kinds of polarons have been realized in cold atomic gases by a number of groups~\cite{ChikkaturPRL00, SchirotzekPRL09, PalzerPRL09, KohstallNature12, KoschorreckNature12, SpethmannPRL12, FukuharaNatPhys13, ScellePRL13, Cetina15, MassignanRPP14, Jorgensen2016,Cetina2016}. In recent years it has become possible to create samples of cold molecules using magneto- and photo-association in ultracold gases, laser and Sisyphus cooling, as well as Stark and Zeeman deceleration~\cite{LemKreDoyKais13, BasChemRev12, JinYeCRev12, KreStwFrieColdMol}. It is within reach of state-of-the-art experiments to selectively prepare such molecules in a single hyperfine state and immerse them into a Bose or Fermi gas with controlled density and interactions. In addition, one can apply external electric, magnetic, or laser fields to steer the molecular rotation. These developments pave the way to studying the coupling of molecular rotation to a many-particle environment in a fully controlled manner, which we describe in the following section. We note that during the last decade there have been numerous proposals on many-particle physics with ultracold molecules~\cite{LewensteinBook12, LemKreDoyKais13, JinYeCRev12, KreStwFrieColdMol}, some of which have already been realized in the laboratory~\cite{YanNature13}. Most such proposals rely on dipole-dipole interactions, which manifest themselves when their magnitude is on the order of the molecular kinetic energy. One way to reach such a regime is to prepare a high-density molecular sample, in order to decrease the average intermolecular distance and thereby increase the dipole-dipole interactions. This is quite challenging to achieve, partly because of reactive collisions between the particles. Another approach relies on lowering the sample's temperature in order to decrease the average kinetic energy of the molecules. This is, in turn, challenging as well, since molecular laser-cooling techniques are still quite limited, while sympathetic and evaporative cooling are precluded by reactive collisions~\cite{LemKreDoyKais13}.
In the following sections we show that a single molecule coupled to an ultracold gas represents an elementary building block of a many-particle system, and thereby makes it possible to study novel physics, inaccessible with atomic impurities. As a bonus, one can avoid the high molecular densities that lead to chemical reactions and atom losses, while still probing many intriguing aspects of many-body impurity physics. \section{Theoretical description of the molecular impurity problem} \label{sec:impurity} In this section, we derive the Hamiltonian for a molecular impurity coupled to a bath of bosons, preparing the reader for the next section, where the problem is cast in terms of quasiparticles. We start from a first-principles Hamiltonian which describes the system in terms of bosonic atoms interacting with a single molecule. In its most general form it is given by: \begin{equation} \label{Hh} \hat H = \hat H_\text{mol} + \hat H_\text{bos} + \hat H_\text{mol-bos} \end{equation} where the three terms are, respectively, the Hamiltonians of the isolated molecule, of the unperturbed bath of bosons, and of the interaction between the molecule and the bosonic bath. In what follows we describe each term in detail. \subsection{Molecular Hamiltonian} In addition to the electronic as well as fine and hyperfine structure found in atoms, molecules possess additional types of internal motion, such as rotation and vibration. The characteristic energies of electronic transitions correspond to visible and ultraviolet light (frequencies of $10^{14} - 10^{15}$~Hz, wavelengths in the range of 300--3000~nm), while vibrational excitations mostly lie in the infrared region ($10^{13} - 10^{14}$~Hz, wavelengths in the range of 3--30~$\mu$m), and rotational transitions correspond to microwave frequencies ($10^{9} - 10^{11}$~Hz, wavelengths in the range of 3--300~mm). Usually, the natural lifetimes of the excited electronic states are very short (at most on the order of microseconds), and molecules reside in their ground electronic state. Furthermore, at temperatures of $T\sim 1$~K, as present inside superfluid helium, which correspond to frequencies of $\nu = k_B T/h \sim 10^{10}$~Hz, molecules are cooled to the ground vibrational state, with very few rotational states populated in the case of small molecules. Moreover, interactions with the environment are rarely strong enough to disturb the vibrational and electronic spectrum of the molecules. Therefore, within the Born-Oppenheimer approximation, we can focus exclusively on the rotational degrees of freedom and disregard the other internal degrees of freedom of both the molecule and the atoms in the environment. Furthermore, here we consider molecules whose translational degrees of freedom are frozen. While in some situations the translational motion of the molecules might play a role, neglecting it is a good approximation both for molecules at ultracold temperatures~\cite{JinYeCRev12, KreStwFrieColdMol, LemKreDoyKais13} and for molecules inside helium droplets~\cite{ToenniesAngChem04}. Within these approximations, the low-energy sector of the molecular Hamiltonian can be expressed as: \begin{equation} \label{Hmol} \hat H_\text{mol} = \hat H_\text{rot} + \hat H_\text{pert} \end{equation} where $\hat H_\text{rot}$ describes the rotation of the molecule as a rigid body, while $\hat H_\text{pert}$ describes various perturbations to the rotational spectrum.
The latter mostly arise due to the spin-orbit, spin-rotation, and spin-spin interactions, and depend on the particular electronic state the molecule is in~\cite{LevebvreBrionField2}. For simplicity, we will consider closed-shell ($^1\Sigma$) molecules, for which such perturbations are negligible. The rotational Hamiltonian of a rigid rotor is obtained by quantizing its classical counterpart. In the most general case it is given by: \begin{equation} \label{Hrot1} \hat H_\text{rot} = A \hat J_{x}'^2 + B \hat J_{y}'^2 + C \hat J_{z}'^2 \end{equation} Here, the operators $\hat J_{i}'$ define the projections of the angular momentum on the molecular-frame coordinate axes, $(x, y, z)$, chosen to coincide with the symmetry axes of the molecule. The so-called rotational constants, $A, B$, and $C$, are expressed through the corresponding moments of inertia, $I_i$, as \begin{equation} \label{ABC} A = \frac{1}{2I_{x}}, \hspace{0.5cm} B = \frac{1}{2I_{y}}, \hspace{0.5cm} C = \frac{1}{2I_{z}}. \end{equation} As already done in this equation, we set $\hbar \equiv 1$ from now on in order to simplify the formulae. In this convention, momentum and energy have dimensions of inverse length and inverse time, respectively. The special cases of $\hat H_\text{rot}$ cover species of various geometries: $A=B$, with vanishing angular momentum about the molecular axis, corresponds to linear molecules (KRb, CO$_2$); $A=B=C$ corresponds to spherical tops (CCl$_4$, C$_{60}$); $A=B\neq C$ are called symmetric tops, where $A=B<C$ defines prolate symmetric tops (CH$_3$Cl, CH$_3$C$\equiv$CH) and $A=B>C$ oblate symmetric tops (NH$_3$, C$_6$H$_6$, CHCl$_3$); and $A \neq B \neq C$ corresponds to asymmetric tops (H$_2$O, CH$_2$Cl$_2$). In the following sections, we provide details on molecules with these different rotational structures. Detailed descriptions of these cases can be found in Refs.~\cite{TownesSchawlow, BernathBook, LevebvreBrionField2}. First, however, let us discuss the properties of the rotational angular momentum operators. \subsubsection{Angular momentum operators in the molecular and laboratory frames} The molecular (or `body-fixed') coordinate system of Eq.~\eqref{Hrot1} is introduced in addition to the laboratory (or `space-fixed') one, whose axes we will label as $(X,Y,Z)$. Furthermore, in both coordinate systems one can introduce the spherical components of the angular momentum operators as: \begin{align} \label{J0x} \hat{J}_0' &= \hat{J}_z' \\ \label{Jplusx} \hat{J}_{+1}' &= -\frac{1}{\sqrt{2}} \left(\hat{J}_x' + i\hat{J}_y' \right)\\ \label{Jminusx} \hat{J}_{-1}' &= \frac{1}{\sqrt{2}} \left(\hat{J}_x' - i\hat{J}_y' \right) \end{align} (see Appendix~\ref{sec:appendixAngular} for details). Working in this representation, the molecular-frame components, $\hat J'_i$, of the angular momentum operator, $\hat{\mathbf{J}}$, can be expressed via the laboratory-frame components, $\hat J_k$, as: \begin{equation} \label{JiPrimeviaJi} \hat{J}'_{i} = \sum_k \hat D^{1}_{k, i} (\hat \phi, \hat \theta, \hat \gamma) \hat{J}_{k}, \end{equation} where $i,k = \{-1, 0, +1\}$, $\hat D^{l}_{k, i} (\hat \phi, \hat \theta, \hat \gamma)$ are the so-called Wigner rotation matrices~\cite{VarshalovichAngMom}, and $(\hat \phi, \hat \theta, \hat \gamma)$ are the Euler angles giving the orientation of the molecular coordinate system with respect to the laboratory frame. Note that the angles determining the instantaneous orientation of the molecular frame with respect to the laboratory frame are given by operators that measure the orientation of the molecular impurity.
In order to clarify this point, let us consider the properties of these operators in more detail. Upon acting on an eigenstate of the angles, $\ket{\phi, \theta, \gamma}$, the angle operators $(\hat \phi, \hat \theta, \hat \gamma)$ are replaced by their eigenvalues: \begin{align} \label{AngleOp1} \hat \phi \ket{\phi, \theta, \gamma} &= \phi \ket{\phi, \theta, \gamma}\\ \hat \theta \ket{\phi, \theta, \gamma} &= \theta \ket{\phi, \theta, \gamma} \\ \hat \gamma \ket{\phi, \theta, \gamma} &= \gamma \ket{\phi, \theta, \gamma} \end{align} Since the Wigner rotation matrices $\hat D^{l}_{k, i} (\hat \phi, \hat \theta, \hat \gamma)$ are analytic functions of the angles, the following relation is satisfied: \begin{equation} \label{DlkiAngle} \hat D^{l}_{k, i} (\hat \phi, \hat \theta, \hat \gamma) \ket{\phi, \theta, \gamma} = D^{l}_{k, i} (\phi, \theta, \gamma) \ket{\phi, \theta, \gamma} \end{equation} Thus, by acting on a given molecular state, the $\hat D$-operator `measures' the instantaneous molecular orientation in the laboratory frame, and the corresponding projections $\hat J'_i$ can be evaluated using Eq.~\eqref{JiPrimeviaJi}. One should keep in mind that the notions of `space-fixed' and `body-fixed' apply only to the projections of the angular momentum operator. That is, the $\hat J_i'$ are not evaluated using the coordinate and momentum operators, $\hat{\mathbf{r}}$ and $\hat{\mathbf{p}}$, in the molecular frame; rather, they give the projections of the laboratory-frame angular momentum $\hat{\mathbf{J}}$ onto the rotating axes of the molecule. The amount of angular momentum possessed by the molecule does not depend on the coordinate system, and it can easily be verified that $\hat{\mathbf{J}}^2 \equiv \hat{\mathbf{J}}'^2$. There is one more important consequence of $(\hat \phi, \hat \theta, \hat \gamma)$ being operators: the Wigner $\hat D$-matrix of Eq.~\eqref{JiPrimeviaJi} does not commute with the angular momentum operators $\hat{J}_{k}$ and $\hat{J}'_{i}$; for the respective commutation relations see Eqs.~\eqref{JiComm}--\eqref{JiPrimeCommAst} of Appendix~\ref{sec:appendixAngular}. Using these commutation relations together with Eq.~\eqref{JiPrimeviaJi}, one uncovers a surprising property: the molecular-frame operators $\hat J'_i$ obey anomalous commutation relations~\cite{NautsAmJPhys10}. For example, while for the laboratory-frame components $[\hat J_X, \hat J_Y] = i \hat J_Z$, for the molecular-frame components $[\hat J'_x, \hat J'_y] = - i \hat J'_z$. This unusual result can be understood in terms of time-reversal symmetry~\cite{ZareAngMom, JuddDiatomic}. To an observer rotating along with the molecular frame, the laboratory frame appears to spin in the direction opposite to the molecular rotation. Rotation in the opposite direction corresponds to an inversion of time, $t \to -t$, which leaves the coordinates intact, $\mathbf{r} \to \mathbf{r}$, but changes the signs of the momenta, $\mathbf{p} \to -\mathbf{p}$. Since $\mathbf{J} = \mathbf{r} \times \mathbf{p}$, the signs of the angular momenta are also changed, $\mathbf{J} \to -\mathbf{J}$, which leads to the minus sign in the commutation relations. Note that this is analogous to performing complex conjugation, i.e.\ the replacement $i \to -i$, which corresponds to the time-reversal operation in quantum mechanics~\cite{SakuraiQM}.
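The sign flip can also be made tangible with a short numerical check. The sketch below (a toy illustration of ours, not part of the formal derivation) exploits the fact that transposing a set of standard angular momentum matrices reverses the sign of their commutator; since the matrices are Hermitian, the transpose coincides with complex conjugation, mirroring the $i \to -i$ time-reversal argument given above:
\begin{verbatim}
# Toy check of the anomalous commutation relations using j=1
# angular momentum matrices (hbar = 1).
import numpy as np

s = 1 / np.sqrt(2)
Jx = np.array([[0, s, 0], [s, 0, s], [0, s, 0]], dtype=complex)
Jy = np.array([[0, -1j*s, 0], [1j*s, 0, -1j*s], [0, 1j*s, 0]])
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

print(np.allclose(Jx @ Jy - Jy @ Jx, 1j * Jz))    # True: [Jx, Jy] = +i Jz

# The transposed matrices realize the same algebra with opposite sign,
# mimicking the molecular-frame components:
print(np.allclose(Jx.T @ Jy.T - Jy.T @ Jx.T, -1j * Jz.T))   # True
\end{verbatim}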
\subsubsection{Linear molecules} In the case of linear molecules, Eq.~\eqref{Hrot1} reduces to \begin{equation} \label{Hrot2} \hat H_\text{rot} = B \hat J_{x}'^2 + B \hat J_{y}'^2 \equiv B \mathbf{\hat{J}^2} \end{equation} where we used that the angular momentum of a linear rigid rotor has no projection on the molecular axis, $\hat J_{z}' \equiv 0$. Thus the quantum state of a linear-rotor molecule is defined by the eigenvalues of $\mathbf{\hat{J}^2}$ and of one of its projections with respect to the laboratory coordinate axes, usually chosen to be $\hat{J}_Z$: \begin{align} \label{Jeigen} \mathbf{\hat{J}^2} \vert j, m \rangle &= j(j+1) \vert j, m \rangle\\ \notag \hat{J}_Z \vert j, m \rangle &= m \vert j, m \rangle \end{align} In the absence of external fields, the eigenstates of a rigid linear rotor form $(2j+1)$-fold degenerate multiplets with energies $E_j = B j(j+1)$. Often it is convenient to work in the angular representation, where the linear-rotor wavefunctions are given by spherical harmonics~\cite{VarshalovichAngMom, ZareAngMom, BiedenharnAngMom}: \begin{equation} \label{JYlm} \langle \theta, \phi \vert j, m \rangle = Y_{j m} (\theta, \phi) \end{equation} \subsubsection{Symmetric-top molecules} \label{sec:symtops} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{molecules.pdf} \caption{\label{molecules} (a) In a linear-rotor molecule, the rotational angular momentum $\mathbf{J}$ is always perpendicular to the molecular-frame $z$-axis. (b) This is not the case for nonlinear molecules (symmetric and asymmetric tops), which results in a third quantum number, $n$, used to classify the rotational states.} \end{figure} For symmetric-top molecules, the Hamiltonian \eqref{Hrot1} reduces to: \begin{equation} \label{Hrot3} \hat H_\text{rot} = B \hat J_{x}'^2 + B \hat J_{y}'^2 + C \hat J_{z}'^2 = B \hat{\mathbf{J}}^2 + G \hat J_{z}'^2, \end{equation} where $G=C-B$. The values of $G>0$ correspond to prolate (cigar-shaped) symmetric tops, while $G<0$ corresponds to oblate (disk-shaped) ones. In order to describe the rotation of a symmetric-top molecule, one needs to involve both the laboratory and the molecular coordinate systems, and to work with both of the corresponding angular momentum operators, $\hat{J}_i$ and $\hat{J}_i'$. The reason is that, as opposed to linear rotors, the angular momentum vector can have a finite projection on the $z$-axis of the molecular coordinate system, which needs to be taken into account; see Fig.~\ref{molecules} for a schematic illustration. Thus, in addition to the quantum numbers we used for the linear rotor, Eq.~\eqref{Jeigen}, we have to introduce an additional quantum number, $n$, which gives the projection of $\mathbf{J}$ on the molecular axis: \begin{align} \label{JeigenSym} \mathbf{\hat{J}^2} \vert j, m, n \rangle &= j(j+1) \vert j, m, n \rangle\\ \hat{J}_Z \vert j, m, n \rangle &= m \vert j, m, n \rangle \\ \hat{J}'_z \vert j, m, n \rangle &= n \vert j, m, n \rangle \end{align} More details on the properties of the $\hat{J}_i$ and $\hat{J}_i'$ operators are provided in Appendix~\ref{sec:appendixAngular}.
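For completeness, we also quote the eigenenergies, which follow directly from Eq.~\eqref{Hrot3} and the eigenvalue relations~\eqref{JeigenSym}: \begin{equation} E_{jn} = B j(j+1) + (C-B) n^2, \end{equation} so that, in the absence of external fields, every level with $n \neq 0$ is twofold degenerate in $\pm n$, on top of the $(2j+1)$-fold degeneracy in $m$.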
In the angular representation, the symmetric-top wavefunctions are given by Wigner $D$-matrices: \begin{equation} \label{JDlm} \langle \phi, \theta, \gamma \vert j, m,n \rangle = \sqrt{\frac{2j+1}{8 \pi^2} }D^{j \ast}_{m n} (\phi, \theta, \gamma) \end{equation} Note that Eq.~\eqref{JDlm} describes the molecular wavefunction (i.e.\ it is an ordinary function of the angles), and hence there are no operator hats on the angles. The Wigner $D$-matrices correspond to the rotation operator connecting the space-fixed and body-fixed systems, which are rotated with respect to each other by the Euler angles $(\phi, \theta, \gamma)$. A linear rotor corresponds to the limit of a prolate symmetric top whose moment of inertia with respect to the symmetry axis vanishes, $I_z \to 0$, so that $C \to \infty$ and all states with $n \neq 0$ are pushed to infinitely high energies. This corresponds to setting $n=0$ in Eqs.~\eqref{JeigenSym} and \eqref{JDlm}, which then reduce to Eqs.~\eqref{Jeigen} and \eqref{JYlm} for the linear rotor. \subsubsection{Asymmetric-top molecules} For asymmetric-top molecules, the Hamiltonian \eqref{Hrot1} can be rewritten as~\cite{BernathBook, TownesSchawlow}: \begin{multline} \label{HrotAsym} \hat H_\text{rot} = \left( \frac{A+B}{2} \right) (\hat J_{x}'^2 + \hat J_{y}'^2) + C \hat J_{z}'^2 + \left( \frac{A-B}{2} \right) (\hat J_{x}'^2 - \hat J_{y}'^2) \\ = \left( \frac{A+B}{2} \right) (\hat J_{x}'^2 + \hat J_{y}'^2) + C \hat J_{z}'^2 + \left( \frac{A-B}{2} \right) (\hat J_{-1}'^2 + \hat J_{+1}'^2), \end{multline} where the operators $\hat J_{\pm 1}'$ are defined by Eqs.~\eqref{Jplusx} and~\eqref{Jminusx}. The eigenstates of asymmetric-top molecules can be written as linear combinations of the symmetric-top states: \begin{equation} \label{AsymState} \vert j, m, i \rangle = \sum_n a_n^i \vert j, m, n \rangle \end{equation} The coefficients $a_n^i$ can be obtained by diagonalizing the matrix of $\hat H_\text{rot}$, Eq.~\eqref{HrotAsym}, in the basis of the symmetric-top states, with the matrix elements evaluated using Eq.~\eqref{JiPrimeKet} of Appendix~\ref{sec:appendixAngular}. \subsection{Boson Hamiltonian} \subsubsection{Introduction to second quantization} The most convenient way to work with a many-particle system, such as a BEC, is to use the language of second quantization. Here one introduces operators, $\hat a^\dagger_l$ and $\a_l$, which correspond, respectively, to the creation and annihilation of a boson in a state $l$. Here $l$ is a placeholder for any possible state relevant for a particular problem, and hence can represent an entire set of quantum numbers. For instance, $l$ might refer to the lattice site $i$ occupied by an atom, its momentum $\mathbf p$, its spin state $({\uparrow,\downarrow})$, or its angular momentum state $(l,m)$. Let us consider a many-particle state where each single-particle state $l$ is occupied by $n_l$ bosons.
Then the action of the creation and annihilation operators is defined as follows: \begin{align} \a_l \ket{n_1, n_2, \dots, n_l, \dots} &= \sqrt{n_l} \ket{n_1, n_2, \dots, n_l-1, \dots} \\ \hat a^\dagger_l \ket{n_1, n_2, \dots, n_l, \dots} &= \sqrt{n_l+1} \ket{n_1, n_2, \dots, n_l+1, \dots} \end{align} In particular, the action of the annihilation operator on the vacuum state $\ket{0}$ gives zero: \begin{equation} \a_l \ket{0} = 0 \end{equation} One can further define the number operator, $\hat n_l = \hat a^\dagger_l \a_l$, with the property \begin{equation} \hat n_l \ket{n_1, n_2, \dots, n_l, \dots} = n_l \ket{n_1, n_2, \dots, n_l, \dots} \end{equation} If $l$ is a discrete variable, such as the position of an atom on a lattice, the commutation relation for the operators is given by: \begin{equation} [\a_l, \hat a^\dagger_{l'}] = \delta_{ll'} \end{equation} For free particles, $l$ can be a continuous variable describing the spatial degrees of freedom, such as the coordinate, $\mathbf{r}$, or the momentum, $\mathbf{k}$. The operators in coordinate and momentum space are related to each other through the Fourier transformation, which corresponds to a change of the single-particle basis, \begin{equation} \label{adFourier} \hat a^\dagger_\mathbf r=\int\frac{d^3 k}{(2\pi)^3}\hat a^\dagger_\mathbf k e^{i\mathbf k\mathbf r} \end{equation} \begin{equation} \label{aFourier} \a_\mathbf r=\int\frac{d^3k}{(2\pi)^3}\a_\mathbf k e^{-i\mathbf k\mathbf r} \end{equation} With $\mathbf{r}$ and $\mathbf{k}$ being continuous variables, the commutation relations are modified according to: \begin{equation} [\a_\mathbf{r}, \hat a^\dagger_\mathbf{r'}] = \delta^{(3)}(\mathbf{r-r'}) \end{equation} \begin{equation} [\a_\mathbf{k}, \hat a^\dagger_\mathbf{k'}] = (2\pi)^3\delta^{(3)}(\mathbf{k-k'}) \end{equation} We note that the prefactor of $(2\pi)^3$ is a matter of the convention we adopt here, which varies across the literature. The convention used here allows us to avoid unnecessary prefactors in the commutators of the operators in the angular momentum representation, see Eq.~\eqref{AklmComm} below. \subsubsection{A system of interacting bosons} Let us now consider a system of interacting bosons without a molecule being present. Its Hamiltonian can be written in the momentum representation as: \begin{equation} \label{Hbos} \hat H_\text{bos}= \sum_\mathbf{k} \epsilon (\mathbf k) \hat a^\dagger_\mathbf{k} \a_\mathbf{k} +\frac{1}{2} \sum_\mathbf{k, k', q} V_\text{bb} (\mathbf{q})\hat a^\dagger_\mathbf{k'-q} \hat a^\dagger_\mathbf{k+q} \a_\mathbf{k'} \a_\mathbf{k} \end{equation} Here we introduced the shorthand $\sum_\mathbf{k} \equiv \int d^3k/(2\pi)^3$ in order to keep the notation compact. Since we work in units where $\hbar\equiv1$, momentum carries units of a wavenumber, [Length]$^{-1}$, and each summation carries a dimensionality of [Length]$^{-3}$. The first term of Eq.~\eqref{Hbos} describes the kinetic energy of the bosons, $\epsilon (\mathbf k) = k^2/(2 m)$, with $m$ the boson mass. The second term describes the boson-boson interactions, whose strength in momentum space is given by $V_\text{bb} (\mathbf{q})$. Note that in our convention the operators $\hat a^\dagger_\mathbf{k}$ and $\a_\mathbf{k}$ are not dimensionless, but carry a dimension of [Length]$^{3/2}$. Furthermore, since the Fourier transform involves a three-dimensional integration in real space, the momentum-dependent interaction potential $V_\text{bb} (\mathbf{q})$ carries a unit of [Energy]$\times$[Length]$^{3}$.
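Before moving on, the operator algebra introduced above is easy to verify numerically in a truncated Fock space. The following minimal sketch (a single mode with a cutoff at $N$ quanta; the cutoff is an artifact of the truncation) checks the number operator and the canonical commutator:
\begin{verbatim}
# Sketch: single-mode bosonic operators on the truncated Fock space
# |0>, ..., |N-1>; the truncation is an assumption of this toy check.
import numpy as np

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # a|n> = sqrt(n) |n-1>
ad = a.conj().T                              # creation operator

print(np.allclose(np.diag(ad @ a), np.arange(N)))    # number operator: True
comm = a @ ad - ad @ a                               # [a, a^dagger]
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))    # = 1 below the cutoff
\end{verbatim}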
The Hamiltonian~\eqref{Hbos} represents the most general formulation of the problem. Due to its complexity, it cannot be solved analytically, and one has to resort to approximate methods. In what follows we discuss approximations which allow one to obtain insight into the behavior of such a complex many-particle system. \subsubsection{Bogoliubov approximation and transformation} \label{sec:bogoliubov} Let us consider a weakly-interacting BEC at zero temperature. In such a case, most of the atoms reside in the energetically lowest single-particle state~\cite{Pitaevskii2016}, which for an infinite system is the zero-momentum state. In order to make use of this separation of occupation numbers, it is convenient to split the creation and annihilation operators into zero-momentum and finite-momentum parts: \begin{equation} \label{BogApp} \a_\mathbf{k} = (2\pi)^3 \hat \Phi_0 \delta(\mathbf{k}) + \hat\Phi_\mathbf{k \neq 0} \end{equation} where the factor of $(2\pi)^3$ appears due to our definition of the Fourier transform, Eqs.~\eqref{adFourier} and \eqref{aFourier}. This formally exact decomposition (which relies on the description of the BEC as a coherent state) allows one to conveniently treat the excitations on top of the condensate as small perturbations. Within the Bogoliubov approximation~\cite{Pitaevskii2016}, one assumes that the occupation of the condensate mode is macroscopic, so that $\hat \Phi_0$ can be treated as a classical quantity set by the particle number density $n$.\footnote{Strictly speaking, $\hat \Phi_0$ corresponds to the BEC density $n_0$ which, however, for most practical purposes can be replaced by $n$ \cite{Pitaevskii2016}.} As a consequence, the field operator corresponding to the $\mathbf{k}=0$ state is replaced as $\hat \Phi_0 \to \sqrt{n}$. The second part of the approximation makes use of the fact that the population of the excited states is small, $\sum_\mathbf{k} \langle |\hat\Phi_\mathbf{k}|^2 \rangle \ll n$. As a result, upon substitution of Eq.~\eqref{BogApp} into Eq.~\eqref{Hbos}, one can neglect the terms cubic and quartic in $\hat\Phi_\mathbf{k}$. Then, assuming $V_\text{bb}(\mathbf{q}) = V_\text{bb}(-\mathbf{q})$ and dropping the constant terms, we obtain: \begin{equation} \label{HbigB} \hat H_\text{bos}= \sum_\mathbf{k} \left[ \epsilon (\mathbf k) + V_\text{bb} (\mathbf{k}) n \right] \hat\Phi^\dagger_\mathbf{k} \hat\Phi_\mathbf{k} + \frac{n}{2} \sum_\mathbf{k} V_\text{bb}(\mathbf{k}) \left [ \hat\Phi^\dagger_\mathbf{k} \hat\Phi^\dagger_\mathbf{-k} + \hat\Phi_\mathbf{k} \hat\Phi_\mathbf{-k} \right] \end{equation} Thus, we have arrived at a quadratic Hamiltonian, Eq.~\eqref{HbigB}. Its second term, however, contains anomalous contributions in which two creation or two annihilation operators are paired together. To eliminate these terms, one can perform a diagonalization in the space of creation and annihilation operators, which at the same time diagonalizes the Hamiltonian in momentum space.
This is achieved by means of the so-called Bogoliubov transformation -- also called a `Bogoliubov rotation' -- of the field operators~\cite{Pitaevskii2016}: \begin{align} \label{BogolRot} \hat\Phi_\mathbf{k} &= u_\mathbf k \hat b_\mathbf{k} + v^\ast_{-\mathbf k} \hat b^\dagger_\mathbf{-k}\\ \hat\Phi^\dagger_\mathbf{k} &= u^\ast_\mathbf k \hat b^\dagger_\mathbf{k} + v_{-\mathbf k} \hat b_\mathbf{-k} \end{align} where the coefficients $u_\mathbf k$ and $v_\mathbf k$ obey the normalization condition: \begin{equation} \label{BogNorm} |u_\mathbf k|^2 - |v_\mathbf k|^2 = 1 \end{equation} For simplicity, we assume $u_\mathbf k$ and $v_\mathbf k$ to be real, symmetric with respect to $\mathbf k \to -\mathbf k$, and dependent solely on $k=|\mathbf k|$ (note that this assumption does not hold, e.g., if the Bose gas is unstable, confined in a harmonic trap, or in the presence of vortices). This leads to the following expressions that diagonalize the Hamiltonian: \begin{align} \label{ukvkomega} u_k &= \left( \frac{\epsilon (\mathbf k) + V_\text{bb}(\mathbf{k}) n}{2 \omega (\mathbf k)} + \frac{1}{2} \right)^{1/2} \\ v_k &= - \left( \frac{\epsilon (\mathbf k) + V_\text{bb}(\mathbf{k}) n}{2 \omega (\mathbf k)} - \frac{1}{2} \right)^{1/2} \end{align} where $\omega (\mathbf k) \equiv \omega (k)$ is given by: \begin{equation} \label{Bogwk} \omega (k) = \sqrt{\epsilon (k) \left[ \epsilon (k) + 2 V_\text{bb}(k) n \right]} \end{equation} After the transformation, the boson Hamiltonian, Eq.~\eqref{HbigB}, takes the diagonal form: \begin{equation} \label{Hwk} \hat H_\text{bos}= \sum_{\mathbf k} \omega (k) \hat b^\dagger_\mathbf{k} \hat b_\mathbf{k} \end{equation} Thus, the Bogoliubov transformation makes it possible to describe the bosonic system in terms of non-interacting Bogoliubov quasiparticles with the dispersion relation $\omega (k)$. For a dilute atomic BEC, the boson-boson interaction is well approximated by a contact potential, so that, in the first-order Born approximation, one can replace $V_\text{bb} (k)$ by a constant, $g_\text{bb}=4\pi a_{\text{bb}}/m$, where $a_{\text{bb}}$ is the boson-boson scattering length~\cite{Pitaevskii2016}. We assume weak repulsive interactions, characterized by $a_{\text{bb}}>0$, which do not significantly disturb the BEC state, yet prevent it from collapsing. The dispersion relation $\omega (k)$ for such a situation is shown schematically in Fig.~\ref{dispersion}(a) by the blue line. Depending on the quasiparticle momentum $k$, one can clearly distinguish two types of behavior. At small momenta, the dispersion is linear, $\omega (k) \approx c k$, where $c = \sqrt{g_\text{bb} n /m}$ is the speed of sound. This regime corresponds to collective long-wavelength excitations of the condensate, which are called `phonons.' At large momenta, i.e.\ momenta exceeding the inverse healing length of the BEC, $\xi^{-1} = \sqrt{2} m c$, the dispersion coincides with that of free particles, $\omega (k) \approx \epsilon (k)$, and one speaks of `particle-like' excitations. In what follows, however, we will refer to both the particle-like and the wave-like excitations described by Eq.~\eqref{Bogwk} as `phonons.'
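As a quick numerical check of Eqs.~\eqref{ukvkomega} and \eqref{Bogwk} for contact interactions, the sketch below (our own illustration, in arbitrary units with $\hbar = m = n = g_\text{bb} = 1$) verifies the normalization condition~\eqref{BogNorm} and the two limiting regimes of the dispersion:
\begin{verbatim}
# Sketch: Bogoliubov coefficients and dispersion for V_bb(k) = g_bb,
# in units hbar = m = n = g_bb = 1 (so that the speed of sound c = 1).
import numpy as np

g, n, m = 1.0, 1.0, 1.0
c = np.sqrt(g * n / m)

k = np.array([1e-2, 1e2])                  # deep phonon / particle regimes
eps = k**2 / (2 * m)
omega = np.sqrt(eps * (eps + 2 * g * n))   # Eq. (Bogwk)

u = np.sqrt((eps + g * n) / (2 * omega) + 0.5)
v = -np.sqrt((eps + g * n) / (2 * omega) - 0.5)

print(np.allclose(u**2 - v**2, 1))         # Eq. (BogNorm): True
print(omega[0] / (c * k[0]))               # ~1: omega ~ c k for small k
print(omega[1] / eps[1])                   # ~1: omega ~ eps(k) for large k
\end{verbatim}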
Compared to ultracold gases, superfluid helium is extremely dense. As a consequence, the average distance between the atoms in helium, $\langle r \rangle \sim 4$~\AA, lies within the short-range part of the He--He potential energy surface, which possesses a minimum at $r_m \approx 3$~\AA. Therefore, the He--He interactions cannot be well described as point-like in real space when compared to the average interatomic distance. As a result, the dispersion relation changes qualitatively, as one can see by comparing the blue and red lines in Fig.~\ref{dispersion}(a). The most drastic difference is the appearance of a `roton minimum,' which emerges for helium around $k=2.0$~\AA$^{-1}$. It is important to note that, contrary to what was first conjectured by Landau and Feynman, rotons represent neither elementary quanta of rotational excitations nor vortices. Instead, rotons can be thought of as density-wave excitations with a finite wave vector; in this sense, they represent precursors or `ghosts' of a crystalline structure. Furthermore, rotons are not specific to superfluid helium; in fact, they arise for any interaction $V_\text{bb}(k)$ which changes sign as a function of $k$. Such interactions appear, for instance, between Rydberg-dressed atoms~\cite{Henkel2010, OtterLemPRL14}, as well as in ultracold dipolar~\cite{LahayePfauRPP2009} and quadrupolar~\cite{LarzNJP15} gases confined in quasi-1D and quasi-2D geometries. In order to illustrate how the roton minimum emerges from Eq.~\eqref{Bogwk}, let us consider the simplest possible toy model featuring such sign-changing interactions, with the boson-boson interaction described by the Bessel function of the first kind, $V_\text{bb}(k) = J_0 (k)$ (alternatively, one could choose any other sign-changing function whose magnitude decays at $k \to \infty$): \begin{equation} \label{wkModel} \omega_\text{model} (k) = \sqrt{k^2 \left[k^2 + 2 J_0 (k) n \right]} \end{equation} Since here we are interested solely in the qualitative properties of the dispersion relation, we omit all constant factors, such as the boson mass $m$. As a result, the momentum $k$ and the density $n$ of Eq.~\eqref{wkModel} are expressed in arbitrary units. Such a sign-changing model interaction $V_\text{bb}(k)$ is shown in Fig.~\ref{dispersion}(b) by the red dashed line. The other lines correspond to $\omega_\text{model} (k)$ calculated at different values of the density $n$. At lower densities, such as $n=1$ (in the arbitrary units used here), the sign-changing term is negligible, $2 |J_0 (k)| n \ll k^2$, and the dispersion relation therefore looks similar to that of a weakly-interacting BEC. When $n$ increases to $10$ and $15$, however, the sign-changing dependence of the potential starts to play a crucial role, and the roton minimum arises in the dispersion relation. As the density is increased further, the minimum of the dispersion relation eventually touches zero. From this point on, the dispersion relation becomes imaginary for a finite range of momenta. This signals an instability of the system, at which the rotons `materialize' and form a crystalline structure. Note that here $n$ plays the role of a mathematical parameter (expressed in arbitrary units) which regulates the effective strength of the interatomic interactions. Therefore, the values considered in Fig.~\ref{dispersion}(b) do not directly correspond to any particular experimental scenario. However, in a qualitatively similar way, such a transition leads to the crystallization of liquid helium when the density $n$ is increased by applying external pressure~\cite{AtkinsHelium}.
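The toy model of Eq.~\eqref{wkModel} is also straightforward to explore numerically. A minimal sketch (densities in the same arbitrary units as above; SciPy is assumed for the Bessel function) reproduces both the appearance of the roton minimum and the onset of the instability:
\begin{verbatim}
# Sketch: toy dispersion of Eq. (wkModel) for the sign-changing
# interaction V_bb(k) = J0(k), in arbitrary units.
import numpy as np
from scipy.special import j0

k = np.linspace(1e-3, 10, 2000)
for n in (1, 10, 15, 25):
    arg = k**2 * (k**2 + 2 * j0(k) * n)          # omega^2
    if np.any(arg < 0):
        print(f"n = {n:2d}: omega imaginary -> roton instability")
        continue
    w = np.sqrt(arg)
    # a local minimum of omega at finite k signals a roton
    dips = np.where((w[1:-1] < w[:-2]) & (w[1:-1] < w[2:]))[0] + 1
    if dips.size:
        print(f"n = {n:2d}: roton minimum at k ~ {k[dips[0]]:.2f}")
    else:
        print(f"n = {n:2d}: no roton minimum")
\end{verbatim}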
We note that for finite systems, such as helium droplets, other types of excitations can appear. One example is ripplons -- quantized surface waves or `wrinkles' on the droplet surface~\cite{HansenPRB07, StienkemeierJPB06}. However, whether one works with molecules in small droplets, bulk helium, or a dilute BEC, many of the properties of the collective excitations are contained in the dispersion relation $\omega (k)$. Therefore, quite often, the theory can be constructed assuming a general form of the dispersion relation, which allows one to understand the similarities and distinctions between different systems. \begin{figure}[t] \centering \includegraphics[width=0.8\linewidth]{He_disp.pdf} \caption{\label{dispersion} (a) Experimental dispersion relation $\omega(k)$ of superfluid $^4$He, from Ref.~\cite{DonnellyHe98} (red line), compared to a schematic dispersion relation $\omega(k)$ for a weakly-interacting BEC with contact interactions, Eq.~\eqref{Bogwk} with $ V_\text{bb}(k) = g_\text{bb}$ (blue line). The different types of excitations are indicated: phonons, maxons, rotons, and free particles. (b) Emergence of the roton minimum (solid blue to solid red) and instability (dashed black) for the model dispersion relation, Eq.~\eqref{wkModel}, for different values of the density parameter $n$. The red dashed line shows the sign-changing model potential $ V_\text{bb}(k) = J_0 (k)$.} \end{figure} \subsubsection{Angular-momentum representation of the boson operators} In the previous sections, we expressed the boson creation and annihilation operators, $\hat b^\dagger_\mathbf k$ and $\hat b_\mathbf k$, in Cartesian coordinates, $\mathbf k = \{k_x, k_y, k_z\}$, which may be rewritten in the spherical basis as $\mathbf k = \{k, \Theta_k,\Phi_k\}$. However, we are interested in rotating impurities and the angular momentum properties of the condensate. For that reason, it is much more convenient to work in the angular momentum representation for the single-particle basis instead of the Cartesian one. Hence, we perform a single-particle basis change, which yields the following transformation of the creation operators: \begin{equation} \label{AklmAk} \hat b^\dagger_{k\lambda \mu} =\frac{k}{(2\pi)^{3/2}} \int d\Phi_k d\Theta_k~\sin\Theta_k~\hat b^\dagger_\mathbf{k} ~ i^{-\lambda}~ Y_{\lambda \mu} (\Theta_k, \Phi_k) \end{equation} \begin{equation} \label{AkAklm} \hat b^\dagger_\mathbf{k} = \frac{(2\pi)^{3/2}}{k} \sum_{\lambda \mu} \hat b^\dagger_{k\lambda \mu}~ i^{\lambda}~Y^\ast_{\lambda \mu} (\Theta_k, \Phi_k) \end{equation} Here, the quantum numbers $\lambda$ and $\mu$ label the angular momentum of the bosonic excitation and its projection onto the laboratory-frame $Z$-axis, respectively. Note that the convention of Eqs.~\eqref{AklmAk}, \eqref{AkAklm} differs from the one used in Refs.~\cite{SchmidtLem15, SchmidtLem16} by the conjugates of the spherical harmonics and the phase factor. The convention we use in this tutorial is thereby made consistent with other sources, such as Ref.~\cite{GreinerBook}.
Within this convention, the creation and annihilation operators fulfill the following commutation relations: \begin{equation} \label{AkComm} [\hat b_\mathbf{k}, \hat b^\dagger_\mathbf{k'}] = (2\pi)^3\delta^{(3)}(\mathbf{k-k'}) \end{equation} \begin{equation} \label{AklmComm} [\hat b_{k\lambda \mu}, \hat b^\dagger_{k'\lambda' \mu'}] = \delta(k-k') \delta_{\lambda \lambda'} \delta_{\mu \mu'} \end{equation} In coordinate space, within the same convention, the Cartesian and angular momentum representations are related by: \begin{equation} \label{ArlmAr} \hat b^\dagger_{r\lambda \mu} = r \int d\Phi_r d\Theta_r~\sin\Theta_r ~\hat b^\dagger_\mathbf{r}~Y_{\lambda \mu} (\Theta_r,\Phi_r) \end{equation} \begin{equation} \label{ArArlm} \hat b^\dagger_\mathbf{r} = \frac{1}{r} \sum_{\lambda \mu} \hat b^\dagger_{r\lambda \mu} ~Y_{\lambda \mu}^\ast (\Theta_r,\Phi_r) \end{equation} with the corresponding commutation relations: \begin{equation} \label{ArComm} [\hat b_\mathbf{r}, \hat b^\dagger_\mathbf{r'}] = \delta^{(3)}(\mathbf{r-r'}) \end{equation} \begin{equation} \label{ArlmComm} [\hat b_{r \lambda \mu}, \hat b^\dagger_{r' \lambda' \mu'}] = \delta(r-r') \delta_{\lambda \lambda'} \delta_{\mu \mu'} \end{equation} Since the Cartesian operators in coordinate and momentum space are related through the Fourier transformation, \begin{equation} \label{brViabk} \hat b^\dagger_\mathbf{r} = \int \frac{d^3 k}{(2\pi)^3} \hat b^\dagger_\mathbf{k} e^{-i \mathbf{k \cdot r}}, \end{equation} one can obtain the corresponding relation for their angular momentum components: \begin{equation} \label{brlmViabklm} \hat b^\dagger_{r \lambda \mu} = \sqrt{\frac{2}{\pi}} r \int k dk~j_\lambda (kr)~\hat b^\dagger_{k\lambda\mu}, \end{equation} where $j_\lambda (kr)$ is the spherical Bessel function~\cite{AbramowitzStegun}. \subsection{Molecule-boson interaction} In its most general form, the interaction between an impurity and the bosonic atoms is given by: \begin{equation} \label{Himpbos} \hat H_\text{mol-bos} =\sum_\mathbf{k, q} \hat V_\text{mol-bos} (\mathbf{q}, \hat{\phi}, \hat{\theta}, \hat{\gamma}) \hat \rho(\mathbf{q}) \hat a^\dagger_\mathbf{k+q} \hat a_\mathbf{k}, \end{equation} where $\hat \rho(\mathbf{q}) = e^{- i \mathbf{q} \cdot \hat{\mathbf{r}}}$ is the Fourier-transformed density of an impurity situated at the position $\hat{\mathbf{r}}$ (the corresponding density in real space is given by a Dirac $\delta$-function). Here we consider an impurity whose translational motion is frozen and which we position at the coordinate $\mathbf r=0$ (for an impurity moving translationally in space, the factor $\hat\rho(\mathbf{q}) = e^{- i \mathbf{q}\cdot\hat{\mathbf{r}}}$ results in a hybrid problem between the angulon and the Fr\"ohlich polaron~\cite{MidyaInPrep}). This choice results in $\hat\rho(\mathbf{q}) \equiv 1$. The molecule has, however, additional degrees of freedom corresponding to the orientation of its molecular axis, and, depending on the molecular orientation, the surrounding bosons see a different interaction potential. The fact that the anisotropic molecular geometry gives rise to anisotropic molecule-boson interactions is represented in Eq.~\eqref{Himpbos} by the operator $\hat V_\text{mol-bos} (\mathbf{q}, \hat{\phi}, \hat{\theta}, \hat{\gamma})$, which explicitly depends on the orientation of the molecule in space, as given by the Euler angle operators $(\hat{\phi}, \hat{\theta}, \hat{\gamma})$.
It is important to note that, as opposed to the polaron problem~\cite{Devreese13}, the interactions considered here are not spherically symmetric; therefore, the replacement $\mathbf{q} \to \mathbf{-q}$ in Eq.~\eqref{Himpbos} results in an additional phase factor in the final expression of Eq.~\eqref{HintFinal}. Let us now derive an explicit expression for $ \hat V_\text{mol-bos}(\mathbf{q}, \hat{\theta}, \hat{\phi}) $, which is the Fourier transform of the molecule-boson potential in real space, starting from first principles. For simplicity, here we assume a linear rotor molecule, whose orientation is given by only two angles, $(\hat{\theta}, \hat{\phi})$. For symmetric and asymmetric tops the derivations have a similar form, albeit with a dependence on the third Euler angle, $\hat \gamma$. As before, we define two coordinate frames: the laboratory frame, $(X,Y,Z)$, in which the bosonic atoms are at rest when the molecule is absent, and the molecular one, $(x, y, z)$, used to define the microscopic molecule-boson interaction potential, see Fig.~\ref{rotor}(a). The interaction potential between a molecule and an atom is a function of the spherical coordinates in the molecular frame, $(\theta_r, \phi_r)$, as schematically shown in Fig.~\ref{rotor}(b). Such a potential can be expanded over the spherical harmonics as: \begin{equation} \label{Vr} V_\text{mol-bos} (\mathbf{r}) = \sum_\lambda V_\lambda (r) Y_{\lambda 0} (\theta_r, \phi_r) \end{equation} Here, the interaction in each angular momentum channel $\lambda$ is defined by $V_\lambda (r)$. Usually, the interaction potentials~\eqref{Vr} are obtained using quantum chemistry computer codes~\cite{SzalewiczIRPC08}, or approximated by some analytic model functions. In order to transform Eq.~(\ref{Vr}) to the laboratory frame, where the bosonic part of the Hamiltonian is defined, we use Wigner rotation matrices~\cite{VarshalovichAngMom, ZareAngMom}: \begin{equation} \label{WignerD} Y_{\lambda 0} (\theta_r, \phi_r) = \sum_\mu \hat D^{\lambda}_{\mu 0} (\hat{\phi}, \hat{\theta}, \hat{\gamma}) Y_{\lambda \mu} (\Theta_R, \Phi_R) \end{equation} For a linear molecule the third angle, $\hat{\gamma}$, can be set to zero. In this case, given that $\hat D^{\lambda}_{\mu 0} (\hat{\phi}, \hat{\theta}, 0) = \sqrt{\frac{ 4 \pi}{2 \lambda +1}} \hat Y_{\lambda \mu}^\ast (\hat{\theta}, \hat{\phi})$~\cite{VarshalovichAngMom, ZareAngMom}, we obtain: \begin{equation} \label{Vr2} \hat V_\text{mol-bos}(\mathbf{R}, \hat \theta,\hat \phi) = \sum_{\lambda \mu} \sqrt{\frac{ 4 \pi}{2 \lambda +1}} V_\lambda (R) Y_{\lambda \mu} (\Theta_R, \Phi_R) \hat Y_{\lambda \mu}^\ast (\hat \theta,\hat \phi) \end{equation} Here $\mathbf{R} \equiv (R, \Theta_R, \Phi_R)$ gives the laboratory-frame coordinates of a boson interacting with the molecule, the orientation of whose axis is given by the operators $(\hat \theta,\hat \phi)$; note that the radial coordinate is unaffected by the rotation, $r = R$. It is important to note that the procedure described above is quite general and is often used to describe molecular rotation in the presence of an electromagnetic field acting in the laboratory frame. Here, however, we deal with a many-particle field of bosonic atoms, which makes the problem more involved. \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{rotor.pdf} \caption{\label{rotor} (a) A linear-rotor molecule immersed into a boson bath. The molecule-boson interaction explicitly depends on the rotor angular coordinates, $(\hat{\theta}, \hat{\phi})$, in the laboratory frame.
(b) The anisotropic molecule-boson interaction is defined in the molecular coordinate frame. Adapted with permission from Ref.~\cite{SchmidtLem15}.} \end{figure} The interaction in momentum space is given by the Fourier transform of Eq.~(\ref{Vr2}): \begin{equation} \label{Vk} \hat V_\text{mol-bos}(\mathbf{k}, \hat \theta,\hat \phi) \equiv \int d^3R~\hat V_\text{mol-bos}(\mathbf{R}, \hat \theta,\hat \phi) e^{-i \mathbf{k} \cdot \mathbf{R}} = \sum_{\lambda \mu} (2\pi)^{3/2} i^{-\lambda} \tilde{V}_\lambda (k) Y_{\lambda \mu} (\Theta_k, \Phi_k) \hat Y_{\lambda \mu}^\ast (\hat{\theta}, \hat{\phi}), \end{equation} where $\mathbf{k} \equiv (k, \Theta_k, \Phi_k)$ is the momentum vector in the laboratory frame and \begin{equation} \label{Vtilde} \tilde{V}_\lambda (k) = \frac{2^{3/2}}{\sqrt{2\lambda + 1}} \int_0^\infty dr\, r^2 V_\lambda(r) j_\lambda (kr) \end{equation} In order to derive Eq.~\eqref{Vk}, we made use of the plane wave expansion in spherical harmonics: \begin{equation} \label{PlaneWave} e^{-i\mathbf k \cdot \mathbf{R}} =\sum_{lm}4 \pi~i^{-l}~j_l(kR)Y^\ast_{lm}(\Theta_R, \Phi_R)Y_{lm}(\Theta_k, \Phi_k) \end{equation} As a next step, we apply the Bogoliubov approximation and transformation (see Sec.~\ref{sec:bogoliubov}) to Eq.~\eqref{Himpbos} and obtain: \begin{multline} \label{BogImp} \hat H_\text{mol-bos} = n \hat V_\text{mol-bos} (\mathbf{k}= 0, \hat \theta,\hat \phi) + \sqrt{n} \sum_\mathbf k \hat V_\text{mol-bos}(\mathbf{k}, \hat \theta,\hat \phi) (\hat\Phi^\dagger_\mathbf{k}+\hat\Phi_\mathbf{-k}) \\ = n \hat V_\text{mol-bos} (\mathbf{k}= 0, \hat \theta,\hat \phi) + \sqrt{n} \sum_\mathbf k \hat V_\text{mol-bos}(\mathbf{k}, \hat \theta,\hat \phi) \sqrt{\frac{\epsilon (k)}{\omega (k) }} (\hat b^\dagger_\mathbf{k} + \hat b_\mathbf{-k}). \end{multline} Here $\hat V_\text{mol-bos} (\mathbf{k}= 0, \hat \theta,\hat \phi)$ represents a mean-field shift, whose magnitude does not depend on the molecular orientation in space. Such a constant, spherically symmetric contribution is the same as for linearly moving polarons and provides an equal shift to all angular momentum levels of the molecule. Therefore, it will be omitted hereafter. It is important to note that in Eq.~\eqref{BogImp} we neglected terms quadratic in the bosonic excitations at finite momentum. While in some situations these terms are important~\cite{Rath2013, Shchadilova2016, Jorgensen2016,Ashida2017}, in the limit of a large number of particles, $n \to \infty$, and in the absence of resonant impurity-bath interactions, the linear term dominates~\cite{GirardeauPF61, llp}. As a final step, we substitute into Eq.~\eqref{BogImp} the spherical representation of the boson operators, Eq.~\eqref{AklmAk}, as well as the explicit expression for the interaction potential in momentum space, Eq.~\eqref{Vk}.
After integrating over the angles (and using the orthogonality relations for the spherical harmonics~\cite{VarshalovichAngMom}), we obtain: \begin{equation} \label{HintFinal} \hat H_\text{mol-bos} = \sum_{k \lambda \mu} U_\lambda(k) \left[ \hat Y^\ast_{\lambda \mu} (\hat \theta,\hat \phi) \hat b^\dagger_{k \lambda \mu}+ \hat Y_{\lambda \mu} (\hat \theta,\hat \phi) \hat b_{k \lambda \mu} \right], \end{equation} where we defined $\sum_k\equiv\int dk$ (note the difference with the three-dimensional sums of Eq.~\eqref{Hbos}) and \begin{equation} \label{Ulamk} U_\lambda(k) = \left[\frac{8 n k^2\epsilon (k)}{\omega (k)(2\lambda+1)}\right]^{1/2} \int dr r^2 V_\lambda(r) j_\lambda (kr) \end{equation} The interaction Hamiltonian~\eqref{HintFinal} plays a key role in this tutorial. It is worth noting that Hamiltonians featuring a linear coupling between an impurity's internal degree of freedom and a bath of harmonic oscillators have been actively studied, see e.g.\ the spin-boson~\cite{LeggettRMP87} or Jaynes-Cummings~\cite{JanyensCummings} models. In contrast to these models, however, $H_\text{mol-bos}$ explicitly depends on the molecular angle operators, $(\hat \theta,\hat \phi)$, which is essential for the microscopic description of a rotating anisotropic impurity and for the emergence of angulon physics. On the other hand, unlike in the Fr\"ohlich Hamiltonian for a linearly moving impurity~\cite{Devreese13}, the spherical harmonic operators $\hat Y_{\lambda \mu} (\hat \theta,\hat \phi)$ of Eq.~\eqref{HintFinal} lead to additional complexity due to the angular momentum algebra involved. By analogy with Eqs.~\eqref{AngleOp1} and \eqref{DlkiAngle}, we obtain: \begin{equation} \label{YlmAngle} \hat Y_{\lambda \mu} (\hat \theta,\hat \phi) \ket{\theta, \phi} = Y_{\lambda \mu}(\theta, \phi) \ket{\theta, \phi} \end{equation} Thus, we can rewrite the $\hat Y_{\lambda \mu} (\hat \theta,\hat \phi)$ operators in the angular momentum basis by inserting complete sets in the angles, $\hat 1 \equiv \int \sin \theta d\theta d\phi \ket{\theta \phi}\bra{\theta \phi}$, as well as a complete set in angular momenta, $\hat 1 \equiv \sum_{jm} \ket{jm}\bra{jm}$. After integration over the angles one finds~\cite{VarshalovichAngMom}: \begin{equation} \label{YlmOper} \hat Y_{\lambda \mu} (\hat \theta,\hat \phi) = \sum_{jmj'm'} a_{jm, \lambda \mu}^{j'm'} \ket{j' m'}\bra{jm} \end{equation} where \begin{equation} \label{YlmOperCoef} a_{jm, \lambda \mu}^{j'm'} = \sqrt{\frac{(2j+1)(2\lambda+1)}{(2j'+1) 4\pi}} C_{jm, \lambda \mu}^{j'm'} C_{j0, \lambda 0}^{j' 0} \end{equation} Here $C_{jm, \lambda \mu}^{j'm'}$ are the Clebsch-Gordan coefficients~\cite{VarshalovichAngMom}. From Eq.~\eqref{YlmOper} one can see that the $\hat Y_{\lambda \mu} (\hat \theta,\hat \phi)$ operators, in principle, couple all rotational states with one another, with the selection rules given by the Clebsch-Gordan coefficients of Eq.~\eqref{YlmOperCoef}. However, the situation often becomes manageable due to the fact that for realistic potentials only a few terms in the expansion of the interaction potential, Eq.~\eqref{Vr2}, give significant contributions. For example, interactions of a strongly dipolar molecule with an atom will be dominated by the term with $\lambda=1$. This term has the same angular dependence as the coupling to a static electric field and results in the selection rule $j' = j \pm 1$. For a homonuclear molecule, the quadrupole contribution, $\lambda=2$, will dominate, and consequently $j' = j \pm 2$.
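These selection rules are easy to verify explicitly. Below is a minimal sketch evaluating the coefficients of Eq.~\eqref{YlmOperCoef} with \texttt{sympy}; the quantum numbers are chosen purely for illustration:
\begin{verbatim}
# Evaluate a^{j'm'}_{jm, lambda mu} of Eq. (YlmOperCoef) and check the
# selection rules for a dipole-like (lambda = 1) interaction term.
from sympy import sqrt, pi
from sympy.physics.wigner import clebsch_gordan

def a_coef(j, m, lam, mu, jp, mp):
    """Matrix element coefficient of Y_{lambda mu} between |jm> and |j'm'>."""
    return (sqrt((2*j + 1) * (2*lam + 1) / ((2*jp + 1) * 4 * pi))
            * clebsch_gordan(j, lam, jp, m, mu, mp)  # C^{j'm'}_{jm, lambda mu}
            * clebsch_gordan(j, lam, jp, 0, 0, 0))   # C^{j'0}_{j0, lambda 0}

# Starting from j = 1, only the couplings to j' = j +- 1 = 0, 2 survive:
for jp in (0, 1, 2, 3):
    print(jp, a_coef(1, 0, 1, 0, jp, 0))
\end{verbatim}
The printout shows nonzero coefficients only for $j' = 0$ and $j' = 2$, in agreement with the selection rule quoted above.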
\section{The angulon quasiparticle} \label{sec:angulon} In this section we combine the ideas outlined above to finally study the physics of the full Hamiltonian describing a molecular rotor impurity immersed in a BEC. For simplicity, we consider the case of a linear-rotor molecule. The resulting equations can, however, be generalized to more complex molecular species. Combining Eqs.~\eqref{Hrot2}, \eqref{Hwk}, and~\eqref{HintFinal} we obtain the following Hamiltonian: \begin{equation} \label{Hamil1} \hat H= B \mathbf{\hat{J}^2} + \sum_{k \lambda \mu} \omega (k) \hat b^\dagger_{k\lambda \mu} \hat b_{k\lambda \mu} + \sum_{k \lambda \mu} U_\lambda(k) \left[ \hat Y^\ast_{\lambda \mu} (\hat \theta,\hat \phi) \hat b^\dagger_{k \lambda \mu}+ \hat Y_{\lambda \mu} (\hat \theta,\hat \phi) \hat b_{k \lambda \mu} \right] \end{equation} It is important to note that although we have initially derived the specific form of $\omega (k)$ and $U_\lambda(k)$ in Eq.~\eqref{Hamil1} for the case of an ultracold molecule coupled to a weakly-interacting BEC, this Hamiltonian can be used to study the transfer of angular momentum between a localized impurity and a bath of harmonic oscillators in the context of other experiments. For instance, effective Hamiltonians of a similar structure can be constructed for molecules in helium droplets~\cite{LemeshkoDroplets16, YuliaPhysics17, Shepperson16}, Rydberg atoms in a BEC~\cite{BalewskiNature13, SchmidtDem2016}, electronic excitations coupled to phonons in solids~\cite{StammPRB10, TsatsoulisPRB16, FahnleJSNM17}, and several other systems. Thus, in what follows, we approach Eq.~\eqref{Hamil1} from a completely general perspective and reveal its properties in various parameter regimes. \subsection{Second-order perturbation theory} \label{sec:2dOrder} As a first step, we account for the effect of the bath on molecular rotation within second-order perturbation theory, which applies when the interactions between the impurity and the bath are weak, $|U_\lambda(k)| \ll B$. There, one accounts only for a single virtual phonon excitation from the vacuum $\ket{0}$ to the states $\vert k \lambda \mu \rangle \equiv \hat b^\dagger_{k \lambda \mu} \ket{0}$, accompanied by the change of the molecular rotational state from $\vert j, m\rangle$ to $\vert j', m'\rangle$, and back. Accordingly, the second-order energy shift acquired by the state $\vert j, m\rangle$ is given by: \begin{figure}[b] \centering \includegraphics[width=0.3\linewidth]{perturbativefeynman}\caption{Feynman diagram representing the second-order perturbation corrections to the angulon energy.} \label{sunset} \end{figure} \begin{equation} \label{DEJM} \Delta E_{jm} = \sum_{\substack{j'm' \\ k\lambda \mu}} \frac{\Bigl | \bra{k \lambda \mu} \bra{j'm'} H_\text{mol-bos} \ket{jm} \ket{0} \Bigr |^2}{B j(j+1) - Bj'(j'+1) - \omega (k) } = \sum_{k \lambda j' } \frac{2\lambda +1}{4\pi} \frac{U_\lambda(k)^2 \left[ C_{j0, \lambda 0}^{j'0} \right]^2}{B j(j+1) - Bj'(j'+1) - \omega (k) } \end{equation} The perturbative result admits a diagrammatic interpretation in terms of the `sunset' Feynman diagram shown in Fig.~\ref{sunset}, where the dashed line corresponds to a phonon excitation and the solid line represents the molecular state. Furthermore, the loop extends over the angular momentum variables, the momentum $k$, and the frequency $\omega$. The integration over the latter leads to Eq.~\eqref{DEJM}. Since the molecular translational motion is frozen, the molecular line in $k$ space is contracted to a single point.
At the vertices, angular momentum is conserved, and within perturbation theory the incoming energy of the molecule is given by its bare energy $B j (j+1)$. There are two types of processes that contribute to Eq.~\eqref{DEJM}: the ones leaving the rotational state intact, $j'=j$, and the ones changing the rotational state virtually, $j' \neq j$. Both types of processes can be accompanied by a change in the quantum number $m$. \begin{figure}[b] \centering \includegraphics[width=0.5\textwidth]{BBstar.pdf} \caption{Renormalization of the molecular rotational constants in superfluid helium, $B^\ast/B$, as a function of the dimensionless coupling parameter, $\gamma$. Most of the heavy molecules belong to the strong-coupling regime, $\gamma<1$, while light molecules belong to the weak-coupling regime, $\gamma>1$. Experimental data (empty squares) is compared with the angulon theory in the strong-coupling regime (red circles) and in the weak-coupling regime (blue triangles). The intermediate-coupling interpolation is shown by green crosses. Adapted with permission from Ref.~\cite{LemeshkoDroplets16}.} \label{fig:BBstar} \end{figure} In the regime where the molecule-phonon interactions are substantially smaller than the rotational constant, the processes with $j' \neq j$ are gapped, i.e.\ the virtual transitions between the levels are suppressed by the large value of the denominator. In such a case, the energy shift is dominated by the $j$-preserving processes: \begin{equation} \label{DEJM2} \Delta E_{jm} \approx - \sum_{k \lambda} \frac{2\lambda +1}{4\pi} \frac{U_\lambda(k)^2 \left[ C_{j0, \lambda 0}^{j0} \right]^2}{ \omega (k) } \end{equation} Furthermore, due to the selection rule imposed by the Clebsch-Gordan coefficient $C_{j0, \lambda 0}^{j0}$, which vanishes unless $\lambda$ is even~\cite{VarshalovichAngMom}, only even values of $\lambda$ contribute to Eq.~\eqref{DEJM2}. In most molecules, the interaction will be dominated by the lowest-order anisotropic term $U_2(k)$, thus giving: \begin{equation} \label{DEJM3} \Delta E_{jm} \approx - \frac{5}{4\pi} \left[ C_{j0, 2 0}^{j0} \right]^2 \sum_{k} \frac{U_2(k)^2}{ \omega (k) } \end{equation} We see that all the $j$-dependence in the equation above is encapsulated in the Clebsch-Gordan coefficient $C_{j0, 2 0}^{j0}$. Due to this dependence, the levels with different $j$ acquire different shifts, resulting in the rotational constant renormalization observed in several experiments on molecules in helium droplets~\cite{ToenniesAngChem04}. For example, within this approximation, for the transition $j=0 \to j=1$, which is most frequently addressed in experiments, the change in the transition energy is: \begin{equation} \label{DEJM4} h \Delta \nu_{0\to1} \approx - \frac{1}{2\pi} \sum_{k} \frac{U_2(k)^2}{ \omega (k) } \end{equation} It is worth mentioning that the notion of rotational constant renormalization is an approximation used to describe the lowest-energy part of the molecular rotational spectrum, such as the splitting between the $j=0$ and $j=1$ levels. In general, the energies of higher rotational states deviate from the rigid-rotor spectrum~\cite{ToenniesAngChem04}.
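To give a feeling for the magnitudes involved, Eq.~\eqref{DEJM4} can be evaluated numerically once a model for $V_2(r)$ is chosen. Below is a minimal sketch assuming a Gaussian $\lambda=2$ channel, $V_2(r) = u_2 (2\pi)^{-3/2} e^{-r^2/(2 r_2^2)}$; the values of $u_2$, $r_2$, the density, and the scattering length are hypothetical, chosen for illustration only (units with $\hbar = m = B = 1$):
\begin{verbatim}
# Perturbative 0 -> 1 line shift, Eq. (DEJM4), with sum_k -> int dk.
import numpy as np
from scipy.special import spherical_jn

n, abb = 10.0, 3.3                    # illustrative density, scattering length
gbb = 4 * np.pi * abb                 # contact coupling g_bb
u2, r2 = 50.0, 1.5                    # hypothetical magnitude and range of V_2

k = np.linspace(1e-4, 12, 2000)       # phonon momentum grid
eps = k**2 / 2                        # free-particle dispersion epsilon(k)
omega = np.sqrt(eps * (eps + 2 * gbb * n))   # Bogoliubov dispersion, Eq. (Bogwk)

r = np.linspace(0, 12, 2000)
V2 = u2 * (2 * np.pi)**-1.5 * np.exp(-r**2 / (2 * r2**2))
radial = np.trapz(r**2 * V2 * spherical_jn(2, np.outer(k, r)), r, axis=1)
U2 = np.sqrt(8 * n * k**2 * eps / (omega * 5)) * radial  # Eq. (Ulamk), lambda = 2

shift = -np.trapz(U2**2 / omega, k) / (2 * np.pi)        # Eq. (DEJM4)
print(f"shift of the 0 -> 1 line: {shift:.3f} B")
\end{verbatim}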
Recently, it has been shown that such a simple perturbative approach suffices to describe the rotational constant renormalization of light molecules in $^4$He~\cite{LemeshkoDroplets16, YuliaPhysics17}. Fig.~\ref{fig:BBstar}(c) compares the results of the weak-coupling theory (blue triangles) with experiment (empty squares) -- one can see that an agreement within 2\% is achieved. For heavy and medium-mass species, shown in panels (a) and (b), respectively, a different, strong-coupling approach is required, see Sec.~\ref{sec:SlowRot}. Remarkably, the rotational constant renormalization for light molecules in helium represents an analogue of the Lamb shift in quantum electrodynamics~\cite{ScullyZubairy}. There, atomic states with different angular momenta acquire different shifts due to the virtual excitations of photons from the vacuum state. Similarly, in the case of a rotating molecule, the virtual excitations of phonons in the superfluid lead to a state-dependent renormalization. Furthermore, second-order perturbation theory is a valid approximation to describe experiments on ultracold molecules coupled to a weakly-interacting BEC. In the following Sec.~\ref{sec:Angulon1}, we discuss this scenario in more detail and show that such shifts can be detected in modern experiments on ultracold quantum gases. \subsection{Nonperturbative analysis in the weak-coupling regime} \label{sec:Angulon1} The second-order perturbation theory described in the previous section takes into account only one single-phonon excitation, cf.\ Fig.~\ref{sunset}. However, even if we assume that the molecule-bath interactions are weak enough such that simultaneous two- and three-phonon excitations can be neglected, multiple consecutive single-phonon excitations can still take place. In order to take such processes into account, in this section we approach the angulon Hamiltonian~\eqref{Hamil1} from a variational perspective. As we show below, there exists a correspondence between the variational and diagrammatic approaches which allows one to get a deeper insight into the angulon properties. For instance, in addition to the ground-state properties accessible by standard variational approaches, it will be possible to recover the entire absorption spectrum of the system by making use of this correspondence. For now, however, let us start with the simplest possible variational ansatz for the many-body quantum state, which is based on an expansion in single bath excitations: \begin{equation} \label{VarFunc} \vert \psi \rangle = Z_{LM}^{1/2} \ket{0} \ket{L M}+ \sum_{\substack{k \lambda \mu \\ j m}} \beta_{k \lambda j} C_{jm, \lambda \mu}^{L M} \hat b^\dagger_{k \lambda \mu} \ket{0} \ket{jm}, \end{equation} with $Z_{LM}^{1/2}$ and $\beta_{k \lambda j}$ the variational parameters, where $L$ labels the total angular momentum of the system and $M$ its projection on the laboratory-frame $Z$-axis. Here the first ket in each term, $\ket{0}$, represents the vacuum of phonons, i.e.\ the BEC state, while the second ket refers to the molecular state. In Eq.~\eqref{VarFunc} the first term corresponds to the non-interacting state of the system, where no phonons are excited and hence the total angular momentum of the system, as given by the quantum numbers $L$ and $M$, belongs to the molecule. Due to interactions between the rotating molecule and the bosons, this state is perturbed and phonons are excited from the vacuum. This is taken into account by the second term, where the excitation of a phonon with momentum $k$, angular momentum $\lambda$, and projection $\mu$ is accompanied by the change of the molecular state to $\ket{jm}$. This process has to conserve the total angular momentum of the system, which is incorporated explicitly by the Clebsch-Gordan coefficient $C_{jm, \lambda \mu}^{L M}$.
Furthermore, unless an external field breaking the spherical symmetry is applied, all the relevant physical properties are independent of the quantum number $M$, which we will omit hereafter. The variational state defined in Eq.~\eqref{VarFunc} represents the `dressing' of the bare molecular state by fluctuations in its environment. Although its state is modified by the many-body field, the molecule retains some of its bare character, and it becomes a `quasiparticle' with molecule-like properties. This quasiparticle has been termed the `angulon'~\cite{SchmidtLem15}. A question which arises is how similar the angulon is to its bare counterpart -- the unperturbed molecule. One measure of this `particle-likeness' of the angulon is given by the so-called quasiparticle weight, $Z_{LM}$. It can be determined from the coefficients of Eq.~\eqref{VarFunc}, which obey the following normalization condition: \begin{equation} \label{VarFuncNorm} Z_{LM}= 1- \sum_{k \lambda j} |\beta_{k \lambda j}|^2. \end{equation} The quasiparticle weight $Z_{LM}$ gives the overlap of the many-body state (molecule plus bath), $\ket{\psi}$, with the eigenstate of the system in the absence of molecule-environment interactions, $\ket{0}\ket{LM}$, where $\ket{LM}$ reflects the state of an isolated molecule. In the regime of $Z_{LM} \to 1$, the angulon turns into a free molecule, while in the limit of $Z_{LM} \to 0$ the molecule becomes so perturbed by fluctuations in the environment that even the quasiparticle picture (and therefore the notion of the angulon) breaks down. In what follows, we explore both of these regimes. Already at the level of the variational state \eqref{VarFunc} one can clearly see the difference in the treatment of angulons compared to polarons. Unlike the perturbative variational states used for polarons~\cite{ChevyPRA06,Rath2013,Li2014,Shchadilova2016,Ashida2017}, where translational degrees of freedom are coupled, constructing states of the form~\eqref{VarFunc} inevitably involves angular momentum algebra. For instance, including two-phonon excitations would require a $6j$-symbol, three-phonon excitations a $9j$-symbol, and so on. This problem becomes extremely challenging in the limit of strong interactions, where multiple phonon excitations contribute significantly. In Sec.~\ref{sec:SLtransfo}, we introduce a canonical transformation which allows one to drastically simplify this aspect of the angulon problem. For now, let us use Eq.~\eqref{VarFunc} to define a variational state and minimize the energy, $E = \bra{\psi}H\ket{\psi}/\langle \psi \vert \psi \rangle$, with respect to the parameters $Z_{LM}^{1/2}$ and $\beta_{k \lambda j}$. This is equivalent to minimizing the functional: \begin{equation} \label{MinE} F = \bra{\psi}H - E \ket{\psi} \end{equation} Note that during the variational procedure, the variables $Z_{LM}^{1/2}$ and $\beta_{k \lambda j}$ and their conjugates $(Z_{LM}^{1/2})^\ast$ and $\beta_{k \lambda j}^\ast$ have to be treated as independent; moreover, it has to be emphasized that the normalization condition \eqref{VarFuncNorm} is to be applied only after the variation has been performed.
After minimization, we arrive at the following system of equations: \begin{align} \label{VarEqs1} \frac{\partial F}{\partial (Z_{LM}^{1/2})^\ast} &= \left[B L(L+1) - E \right] Z_{LM}^{1/2} + \sum_{k \lambda j} (-1)^\lambda \sqrt{\frac{2\lambda+1}{4 \pi}} U_\lambda(k) C_{L0, \lambda 0}^{j,0} \beta_{k \lambda j} \,\overset{!}{=}\,0 \\ \label{VarEqs2} \frac{\partial F}{\partial \beta_{k \lambda j}^\ast} &= \left[B j(j+1) - E + \omega (k) \right] \beta_{k \lambda j} + (-1)^\lambda \sqrt{\frac{2\lambda+1}{4 \pi}} U_\lambda(k) C_{L0, \lambda 0}^{j,0} Z_{LM}^{1/2} \,\overset{!}{=}\, 0 \end{align} where we used the symmetry properties of the Clebsch-Gordan coefficients~\cite{VarshalovichAngMom}. As one can notice, if one substitutes $\beta_{k \lambda j}$ from Eq.~\eqref{VarEqs2} into Eq.~\eqref{VarEqs1}, $Z_{LM}^{1/2}$ cancels, resulting in an equation where both sides depend on the energy: \begin{equation} \label{Dyson1} E = B L(L+1) - \Sigma_L(E), \end{equation} where \begin{equation} \label{Selfenergy} \Sigma_L (E) = \sum_{k \lambda j} \frac{2\lambda+1}{4\pi} \frac{U_\lambda (k)^2 \left[C_{L0, \lambda0}^{j0} \right]^2 }{B j(j+1) - E+ \omega (k)}. \end{equation} One can see that Eqs.~\eqref{Dyson1} and \eqref{Selfenergy} almost exactly coincide with the second-order perturbation theory expansion, Eq.~\eqref{DEJM}, up to one subtle difference: in the denominator, the energy of the free molecule, $B L(L+1)$, is replaced by $E$ from the left-hand side. In technical terms, the appearance of the energy in the denominator of Eq.~\eqref{Selfenergy} originates from the self-consistent normalization condition for the angulon wavefunction, Eq.~\eqref{VarFunc}. In contrast, second-order perturbation theory does not account for the normalization of the perturbative wavefunction, which renders it less accurate compared to Eqs.~\eqref{Dyson1}--\eqref{Selfenergy}. In the present case, the energy has to be found self-consistently, as a solution of the equation: \begin{equation} \label{GFct} [G^0_L(E)]^{-1} - \Sigma_L(E) = 0 \end{equation} where \begin{equation} \label{GFct0} G^0_L(E) = \frac{1}{BL (L+1)-E} \end{equation} is the so-called `free Green's function' of the molecule, and $\Sigma_L(E)$ is the so-called `self-energy' arising due to the dressing by the phonon field. The specific form of Eq.~\eqref{GFct} allows one to connect to a diagrammatic approach to the problem. This is reflected in the fact that Eq.~\eqref{GFct} is equivalent to solving for the poles in energy of the so-called Dyson equation~\cite{Mahan90} for the total angulon Green's function $G_L^\text{ang} (E)$: \begin{equation} \label{Dyson2} G_L^\text{ang} (E)= G_L^0(E) +G_L^0(E)~\Sigma_L(E)~G_L^\text{ang}(E) \end{equation} where \begin{equation} \label{GFctInt} G^\text{ang}_L(E) = \frac{1}{BL (L+1)-E - \Sigma_L(E) } \end{equation} Thus, Eq.~\eqref{GFct} gives the poles of the interacting Green's function, which correspond to the angulon's eigenenergies. The equivalence between the variational equations~\eqref{VarEqs1}--\eqref{VarEqs2} and the Green's function formalism of Eqs.~\eqref{Dyson1}--\eqref{GFctInt} paves the way to the diagrammatic approach to the angulon problem. The Feynman diagrams contributing to the single-phonon expansion \eqref{VarFunc} are shown in Fig.~\ref{feynman}. These diagrams give a graphical representation of our earlier statement that the angulon is a quantum rotor dressed by a quantum many-body field.
One can see that the difference between the diagrams of Fig.~\ref{feynman} and the ones of the second-order perturbation theory, Fig.~\ref{sunset}, is the bold line on the right-hand side of the self-energy. This corresponds to summing over any number of sequential single-phonon excitations, which renders the resulting solution non-perturbative. Algebraically, the resulting resummation of infinitely many diagrams originates from the presence of the energy, $E$, on both sides of Eq.~\eqref{Dyson1}. \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{feynman.pdf} \caption{\label{feynman} Representation of the Dyson equation~\eqref{Dyson2} in terms of Feynman diagrams. } \end{figure} Using Eqs.~\eqref{GFctInt} and~\eqref{Selfenergy} we are now able to calculate the angulon's Green's function, and thereby gain access not only to the ground-state properties but also to the entire excitation spectrum of the system. The latter is encompassed by the spectral function~\cite{AltlandSimons}, \begin{equation} \label{SpecFunc} \mathcal A_L (E)= \text{Im}[G^\text{ang}_L(E+i \varepsilon)], \end{equation} which in addition provides insight into the quasiparticle properties of the angulon. Here $\varepsilon$ is an infinitesimally small positive number, $\varepsilon \to 0^+$. The calculation of Eq.~\eqref{SpecFunc} requires knowledge of the self-energy \eqref{Selfenergy} as a function of energy $E$, which can be calculated either numerically or using the analytic expression for its imaginary part~\cite{SchmidtLem15}: \begin{equation} \label{ImSigma} \text{Im}\left[ \Sigma_L(E) \right]= \sum_{\lambda j k_0}\theta(E-Bj(j+1))\left[C_{L0, \lambda0}^{j0} \right]^2 \frac{2\lambda+1}{4} U_\lambda (k_0)^2 \vert(\partial \omega (k)/\partial k)_{k=k_0}\vert^{-1}, \end{equation} where $k_0$ gives the roots of $E-\omega (k)-Bj(j+1) =0$. While the imaginary part, $\text{Im}\left[ \Sigma_L(E) \right]$, determines the inverse lifetime of the angulon, the real part, $\text{Re}\left[ \Sigma_L(E) \right]$, gives the angulon's energy. The theta-function in Eq.~\eqref{ImSigma} represents the onset of low-energy phonon bands, shown in Fig.~\ref{Stark}(d) (see below). Similar bands have been previously observed in experiments with molecules in helium nanodroplets~\cite{Hartmann2002} as well as for translationally moving impurities in a Bose-Einstein condensate of ultracold atoms~\cite{Shchadilova2016,Jorgensen2016}. The properties of the angulon, such as its spectrum given by Eq.~\eqref{SpecFunc}, depend on the specific realization of the system, parametrized, for instance, by the dispersion relation $\omega (k)$ and the particular choice of interaction potentials. In the following we focus on one specific example of the angulon spectral function, which has been studied in Ref.~\cite{SchmidtLem15}. There we considered a superfluid with contact interactions, $ V_\text{bb} (\mathbf{q}) \equiv g_\text{bb} =4\pi a_{\text{bb}}/m$, parametrized by the boson-boson scattering length $a_{\text{bb}} >0$~\cite{Pitaevskii2016}. This results in the Bogoliubov dispersion relation, Eq.~\eqref{Bogwk}, taking the form $\omega (k)=\sqrt{\epsilon (k)(\epsilon (k)+2 g_{\text{bb}} n)}$. Real atom-molecule potentials comprise multiple terms in the spherical harmonic expansion, Eq.~\eqref{Vr}, and can be obtained using numerical quantum chemistry calculations~\cite{SzalewiczIRPC08, StoneBook13}.
In the angulon Hamiltonian, Eq.~\eqref{Hamil1}, each of these terms results in a phonon-mediated coupling between the bare rotational states, and will impose its own selection rules on the angular momentum exchange between the impurity and the bath. In order to understand the general behavior of the system, however, it is convenient to start from considering only the contributions of the leading terms, $\lambda=0,1$. In Ref.~\cite{SchmidtLem15} we assumed that the shapes of the latter interaction potentials, Eq.~\eqref{Vr}, were given by $V_\lambda (r) = u_\lambda f_\lambda(r)$, characterized by the magnitudes $u_0 = 1.75 \, u_1 = 218 B$ and Gaussian form-factors $f_\lambda(r) = (2\pi)^{-3/2} e^{-r^2/(2r_\lambda^2)}$, with $r_0 = r_1 = 1.5\, (m B)^{-1/2}$. This is a model potential which does not quantitatively reproduce any particular atom-molecule system; however, it does describe a typical magnitude and range of such two-body interactions. For example, for a molecule with $B = 2 \pi \hbar \times 1$~GHz~$\approx 0.03$~cm$^{-1}$ immersed in superfluid $^4$He, the parameters correspond to the anisotropic part of the interaction $u_1 \approx 4$~cm$^{-1}$ and the range $r_1 \approx 16$~\AA. For convenience, here and below we adopt dimensionless units, with energy measured in units of $B$, distances in $(m B)^{-1/2}$, and, as before, $\hbar \equiv 1$. \begin{figure*}[t] \centering \includegraphics[width=\linewidth]{StarkEffect.pdf} \caption{\label{Stark} (a) The angulon spectral function, $A_L (\tilde{E})$, as a function of the dimensionless superfluid density, $\tilde{n}=n (m B)^{-3/2}$, and the dimensionless energy, $\tilde E= E/B$. As indicated by the pictograms at the bottom, the left side of the plot corresponds to the low-density regime, which is realized, for instance, with ultracold ground-state and weakly-bound molecules interacting with a BEC of atoms. The high-density regime (right-hand side of the plot) can in turn be realized with rotating molecules immersed in superfluid helium. The states are labeled as $L_{j \Lambda}$, where $L$ is the total angular momentum of the system, and $j$ and $\Lambda$ are the angular momenta of the molecule and the bath, respectively; $+$ and $-$ are used to distinguish the states with the same $L_{j \Lambda}$. The arrows indicate the densities highlighted in panel (d). (b)~Differential rotational Lamb shift for the lowest nonzero-$L$ states, as given by $\tilde{\Delta}^\text{RLS}_L = (E_L - E_0)/B - L(L+1)$. (c)~Zoom-in illustrating the many-body-induced fine structure (MBIFS) of the first kind, $L_{L,0} \to \{L_{L, 0}^-, L_{L, 0}^+\}$, and of the second kind, $L_{L,0}^- \to L_{L-1,1}^-$. (d) Spectroscopic signatures of the MBIFS for the $L=1$ state. The numbers indicate the corresponding values of $\text{Log}[\tilde{n}]$. Sharp peaks reflect long-lived angulon excitations, while broad spectral features correspond to the incoherent excitation of phonons. Adapted with permission from Ref.~\cite{SchmidtLem15}.} \end{figure*} Fig.~\ref{Stark}(a) shows the resulting angulon spectral function, $A_L(\tilde{E}) \equiv \mathcal{A}_L(\tilde{E}) B$, with $\tilde{E} = E/B$. While the vertical axis denotes the energy $\tilde{E}$, the horizontal axis gives the dimensionless superfluid density, $\tilde{n}=n (m B)^{-3/2}$. Thus, the left-hand side of the plot corresponds to a weakly-interacting system (such as an ultracold molecule in a BEC), while the right-hand side corresponds to the strongly-interacting regime (e.g.\ a molecule in superfluid helium).
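A figure of this kind can be generated with a rather compact numerical routine. Below is a minimal sketch evaluating Eqs.~\eqref{Selfenergy}, \eqref{GFctInt}, and \eqref{SpecFunc} for the Gaussian model couplings introduced above; the density, the scattering length, the grids, and the artificial broadening $\eta$ are illustrative choices rather than the exact parameters of Ref.~\cite{SchmidtLem15}:
\begin{verbatim}
# Angulon spectral function A_L(E) for the Gaussian model potential.
# Dimensionless units: B = m = hbar = 1; sum_k -> int dk.
import numpy as np
from scipy.special import spherical_jn
from sympy.physics.wigner import clebsch_gordan

n, abb = 10.0, 3.3                   # illustrative density and scattering length
gbb = 4 * np.pi * abb
u = {0: 218.0, 1: 218.0 / 1.75}      # u_0 = 1.75 u_1 = 218 B
r0 = 1.5                             # Gaussian range r_0 = r_1

k = np.linspace(1e-4, 12, 1500)
eps = k**2 / 2
omega = np.sqrt(eps * (eps + 2 * gbb * n))        # Eq. (Bogwk)

r = np.linspace(0, 12, 1500)
Usq = {}                             # U_lambda(k)^2 from Eq. (Ulamk)
for lam in (0, 1):
    f = u[lam] * (2 * np.pi)**-1.5 * np.exp(-r**2 / (2 * r0**2))
    rad = np.trapz(r**2 * f * spherical_jn(lam, np.outer(k, r)), r, axis=1)
    Usq[lam] = 8 * n * k**2 * eps / (omega * (2 * lam + 1)) * rad**2

L = 1                                # total angular momentum of the angulon
chan = [(lam, j, float(clebsch_gordan(L, lam, j, 0, 0, 0)))
        for lam in (0, 1) for j in range(9)]      # C^{j0}_{L0, lambda 0}

def sigma(E):                        # self-energy Sigma_L(E), Eq. (Selfenergy)
    return sum(np.trapz((2*lam + 1) / (4*np.pi) * Usq[lam] * c**2
                        / (j*(j + 1) - E + omega), k)
               for lam, j, c in chan if c != 0.0)

eta = 0.05                           # small positive broadening, Eq. (SpecFunc)
E = np.linspace(-80, 20, 500)
A = [np.imag(1 / (L*(L + 1) - (e + 1j*eta) - sigma(e + 1j*eta))) for e in E]
\end{verbatim}
Plotting $A$ against $E$, and repeating the calculation for a range of densities $\tilde n$, should reproduce the qualitative structure of Fig.~\ref{Stark}: sharp quasiparticle peaks accompanied by broad phonon wings.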
The angulon is an eigenstate of the total angular momentum $L$ of the system, which is, besides $M$, the only good quantum number. However, as often done in spectroscopy~\cite{LevebvreBrionField2}, we can introduce approximate quantum numbers: $j$, the angular momentum of the molecule, and $\Lambda$, the angular momentum of the bosons. As a result, the angulon states in Fig.~\ref{Stark}(a) are labeled as $L_{j,\Lambda}$. Sharp peaks in the angulon spectrum (dark shade) correspond to long-lived angulon states, which reproduce the spectrum of a free rotor at small densities $\tilde n$. Furthermore, the magnitude of the peaks, see Fig.~\ref{Stark}(d), determines the quasiparticle weight $Z_{LM}$ of the corresponding angulon state. That the angulon closely resembles a free rotor at small densities is reflected in its quasiparticle weight, which approaches $Z_{LM} \to 1$ in this regime. However, once the medium becomes denser, a few peculiar effects occur (for more details see Ref.~\cite{SchmidtLem15}). \begin{enumerate} \item Effects due to isotropic interactions: \begin{itemize} \item \textit{Polaron shift.} As previously observed for structureless impurities~\cite{LandauPolaron,LandauPekarJETP48,AppelPolarons,Devreese13,Rath2013,Shashi2014,Shchadilova2014,Grusdt2014}, the spherically-symmetric part of the molecule-atom potential leads to a uniform lowering of all the energy levels with increasing density. \item \textit{Many-body-induced fine structure (MBIFS) of the first kind}, see also Fig.~\ref{Stark}(c). One can see that at intermediate densities, every $L$-level is split into a doublet. One can understand this feature as a splitting between the states $\ket{j=L , \text{no phonons}}$ and $\ket{j=L , \text{one phonon with } \lambda=0}$, coupled by the interaction potential $U_0 (k)$, which increases with the density $n$. \end{itemize} \item Effects due to anisotropic interactions: \begin{itemize} \item \textit{Rotational Lamb shift (RLS)}. Due to the $\lambda=1$ term in the interaction potential, there appears a non-uniform shift whose magnitude depends on the angular momentum $L$. Such a shift leads to the renormalization of rotational constants for molecules in superfluid helium droplets~\cite{ToenniesAngChem04}, and has also been discussed in Sec.~\ref{sec:2dOrder}. Fig.~\ref{Stark}(b) shows the rotational Lamb shift, defined as $\tilde{\Delta}^\text{RLS}_L = (E_L - E_0)/B - L(L+1)$, for a few of the lowest angular momentum states. \item \textit{MBIFS of the second kind}, see also Fig.~\ref{Stark}(c). There occur splittings of the angulon lines which correspond to a resonant transfer of one quantum of angular momentum from the rotor to the many-body bath, $L^-_{L,0} \to L_{L-1, 1}^-$. This happens when the energy of the $L^-_{L,0}$ state is lowered by the interactions such that it merges with the phonon continuum of the angulon state with $L-1$. Under this condition, the density of states for angular-momentum-changing collisions is enhanced, leading to the splitting of the state. Due to the enhanced phonon-molecule scattering, the quasiparticle weight of the angulon is strongly suppressed close to the splitting, and the quasiparticle picture breaks down.
\end{itemize} \end{enumerate} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{PA} \caption{\label{PA} Possible schemes to detect the angulon self-energy $\Sigma_L$ using (a)~photoassociation spectroscopy~\cite{UlmanisCR12} and (b)~the shift of $p$- and $d$-wave Feshbach resonances~\cite{KohlerRMP06}. Adapted from Ref.~\cite{SchmidtLem16}.} \end{figure} The properties of the angulons, as given by the spectrum of Fig.~\ref{Stark}, could be observed experimentally both with molecules trapped in weakly-interacting BECs~\cite{Bikash16, Pitaevskii2016} and with strongly-interacting superfluids, such as helium droplets~\cite{LemeshkoDroplets16, YuliaPhysics17, ToenniesAngChem04}. For instance, the RLS can be measured through the relative shift of the rotational states of a diatomic molecule, while the MBIFS of the first and second kind can be revealed as broadenings and splittings of the rotational transition lines. The effects are expected to be most pronounced for molecules which possess a small rotational constant $B$, such as in experiments involving molecules in highly-excited vibrational states. In the context of ultracold gases, the latter can be studied using weakly-bound molecules~\cite{LemFriPRArapid09, LemFriPRL09, LemFriJPCA10} created by photoassociation spectroscopy~\cite{UlmanisCR12}, or by measuring nonzero angular momentum Feshbach resonances~\cite{KohlerRMP06}, as schematically illustrated in Fig.~\ref{PA}. Depending on the bath density, the energies of the bound molecular states will shift, and so will the positions of the continuum-to-bound transitions. An alternative possibility is measuring the angulon self-energy as a shift of the microwave lines in the spectra of weakly bound molecules~\cite{MarkPRA07, LemFriPRArapid09, LemFriJPCA10, LemFriPRL09} prepared using one of these techniques. For molecules in superfluid helium, the interactions and the bath properties cannot be tuned as easily as in ultracold gases. However, the range of chemical species amenable to trapping is essentially unlimited~\cite{ToenniesAngChem04}, which paves the way to studying angulon physics in a broad range of parameters. We note that an effect very similar to the MBIFS of the second kind has recently been observed in the spectrum of CH$_3$ trapped in helium droplets~\cite{MorrisonJPCA13}. There, the helium environment seems to induce an (otherwise forbidden) $|\Delta K| = 3$ transition, which may be interpreted as an angular momentum transfer to rotons in the superfluid. However, a detailed calculation is required to confirm this interpretation of the data. \subsection{The canonical transformation} \label{sec:SLtransfo} In the previous section we have seen that even the simplest many-particle processes, such as single-phonon excitations, can drastically modify the rotational spectrum of an impurity. The next step is to consider the case where the impurity-bath interactions are strong enough to excite multiple phonons at the same time. Here we are, however, facing a major challenge: if we were to construct a variational state analogous to Eq.~\eqref{VarFunc}, but involving two-, three-, or four-phonon excitations, we would need to make use of the $6j$, $9j$, and $12j$ symbols~\cite{VarshalovichAngMom} in order to ensure the conservation of total angular momentum. In such a case, how would one approach the problem in the limit of strong molecule-bath interactions, where \textit{infinitely} many phonons are excited and the angular momentum algebra becomes intractable?
It can be demonstrated~\cite{SchmidtLem16} that there is a way around this problem if one makes use of the following canonical transformation: \begin{equation} \label{Transformation} \hat{S} = e^{- i \hat\phi \otimes \hat \Lambda_z} e^{- i \hat\theta \otimes \hat\Lambda_y} e^{- i \hat\gamma \otimes\hat \Lambda_z} \end{equation} Here $(\hat\phi, \hat\theta, \hat\gamma)$ are the angle operators which act in the Hilbert space of the rotor, and \begin{equation} \label{Lambda} \hat {\vec\Lambda}=\sum_{k\lambda\mu\nu}\hat b^\dagger_{k\lambda\mu}\boldsymbol\sigma^{\lambda}_{\mu\nu}\hat b_{k\lambda \nu} \end{equation} is the total angular momentum operator of the phonons, acting in their Hilbert space. The vector $\boldsymbol\sigma^{\lambda}$ is composed of matrices fulfilling the angular momentum algebra in the representation of angular momentum $\lambda$. Thus, the transformation operator of Eq.~\eqref{Transformation} uses the total angular momentum of the bath as a generator of rotation, which transfers the environment degrees of freedom into the frame co-rotating along with the quantum rotor, as schematically illustrated in Fig.~\ref{transf}. \begin{figure}[b] \centering \includegraphics[width=0.5\linewidth]{transformation.pdf} \caption{\label{transf} Action of the canonical transformation, Eq.~(\ref{Transformation}), on the angulon Hamiltonian, Eq.~\eqref{Hamil1}. Left: in the laboratory frame, $(X, Y, Z)$, the combination of the molecular angular momentum, $\mathbf{J}$, and the bath angular momentum, $\boldsymbol{\Lambda}$, gives the total angular momentum of the system, $\mathbf{L}$. Right: after the transformation, the bath degrees of freedom are transferred to the rotating frame of the molecule, $(x, y, z)$. As a consequence, the molecular angular momentum in the transformed space coincides with the total angular momentum of the system in the laboratory frame. Adapted from Ref.~\cite{SchmidtLem16}.} \end{figure} The transformation~\eqref{Transformation} brings the Hamiltonian of Eq.~(\ref{Hamil1}) to the following form: \begin{equation} \label{transH} \hat{\mathcal{H}} \equiv \hat S^{-1} \hat H \hat S= B (\hat{\mathbf{J}}' - \hat{\mathbf{\Lambda}})^2 + \sum_{k\lambda\mu}\omega (k) \hat b^\dagger_{k\lambda\mu}\hat b_{k\lambda\mu} + \sum_{k\lambda} V_\lambda(k) \left[\hat b^\dagger_{k\lambda0}+\hat b_{k\lambda0}\right] \end{equation} Here $V_\lambda(k)=U_\lambda(k) \sqrt{(2\lambda+1)/(4\pi)}$ and $\hat{\mathbf{J}}'$ is the `anomalous' angular momentum operator acting in the rotating frame of the impurity, discussed in Sec.~\ref{sec:symtops} and Appendix~\ref{sec:appendixAngular}. The details of the derivation can be found in Ref.~\cite{SchmidtLem16}. There are a few key features of the transformed Hamiltonian, Eq.~\eqref{transH}: \begin{enumerate} \item \label{2lab} \textit{$\hat{\mathcal{H}}$ does not contain the impurity coordinates $(\hat \theta,\hat \phi)$, which removes the intractable angular momentum algebra from the problem.} In particular, the spherical harmonic operators, $\hat Y_{\lambda \mu} (\hat \theta,\hat \phi)$, which couple three-dimensional angular momenta in Eq.~(\ref{Hamil1}), are now replaced by the term $\hat{\mathbf{J}}' \cdot \hat{\mathbf{\Lambda}}$, which couples only the angular momentum \textit{projections}. Such terms occur in various problems of physics, e.g.\ the ones involving spin-orbit and spin-spin interactions, and do not lead to $3nj$-symbols in the matrix elements of the Hamiltonian.
\item \textit{$\hat{\mathcal{H}}$ is explicitly expressed through the constant of motion -- the total angular momentum of the system.} It can be shown~\cite{SchmidtLem16} that the set of eigenvalues of the $\hat{\mathbf{J}}'^2$ operator in the transformed frame exactly coincides with the spectrum of the total angular momentum operator, $\hat{\mathbf{L}}^2$. \item \textit{$\hat{\mathcal{H}}$ can be solved exactly in the limit of a slowly rotating impurity, $B\to 0$}. As we show in Sec.~\ref{sec:SlowRot} below, this allows one to construct variational states involving an \textit{infinite} number of phonon excitations and to use them to describe effective rotational constants of heavy molecules in $^4$He. Moreover, it allows one to construct variational solutions based on an expansion around an intrinsically non-perturbative state. \end{enumerate} \subsection{The limit of a slowly rotating impurity} \label{sec:SlowRot} \begin{figure}[b] \centering \includegraphics[width=0.4\linewidth]{SpecFunc} \caption{\label{energies} Change of the angulon spectral function, $A_L (\omega)$, where $\omega=E-B L(L+1)$, with the rotational constant $B$, for the three lowest total angular momentum states. In contrast to Fig.~\ref{Stark}, we work here in units of the isotropic interaction parameter $u_0$. As in our previous results, the $L>0$ states show an instability in the spectrum. The red dashed line shows the deformation energy given by the second term in Eq.~(\ref{Uhbos}), which is independent of $L$. Adapted from Ref.~\cite{SchmidtLem16}.} \end{figure} In the limit of a slowly-rotating impurity, $B \to 0$, the Hamiltonian (\ref{transH}) can be diagonalized exactly by an additional canonical transformation: \begin{equation} \label{Htilde} \hat{\mathscr{H}} = \hat{U}^{-1} \hat{\mathcal{H}} \hat{U} \end{equation} where \begin{equation} \label{Utransf} \hat{U} = \exp \left[ \sum_{k \lambda} \frac{V_\lambda (k)}{\omega (k)} \left( \hat b_{k \lambda 0} - \hat b^\dagger_{k \lambda 0} \right) \right] \end{equation} This transformation represents a coherent-state shift of the boson operators, \begin{align} \label{Ushift} \hat{U}^{-1} \hat b^\dagger_{k \lambda 0} \hat{U} &= \hat b^\dagger_{k \lambda 0} - \frac{V_\lambda (k)}{\omega (k)}\\ \hat{U}^{-1} \hat b_{k \lambda 0} \hat{U} &= \hat b_{k \lambda 0} - \frac{V_\lambda (k)}{\omega (k)} \end{align} which replaces the boson part of Eq.~\eqref{transH} by: \begin{equation} \label{Uhbos} \hat H_\text{bos} = \sum_{k\lambda\mu}\omega (k) \hat b^\dagger_{k\lambda\mu}\hat b_{k\lambda\mu} - \sum_{k \lambda} \frac{V^2_\lambda (k)}{\omega (k)} \end{equation} Since the dispersion relation $\omega (k)$ is non-negative, the ground state of the bosons is given by the phonon vacuum, $\ket{0}$. The second term of Eq.~\eqref{Uhbos} corresponds to the deformation energy of the condensate, sometimes referred to as the `chemical potential' of the impurity~\cite{ToenniesAngChem04}. In other words, in the limit of $B \to 0$ the ground state of Eq.~(\ref{transH}) represents a coherent state of bosons, $\hat U \ket{0}$, which already involves a macroscopic deformation of the condensate, i.e.\ an \textit{infinite} number of phonon excitations. Note that constructing such a state starting from the original Hamiltonian \eqref{Hamil1} would be an extremely challenging task due to the angular momentum algebra involved.
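As an illustration, the deformation energy of Eq.~\eqref{Uhbos} can be evaluated for the Gaussian model couplings of Sec.~\ref{sec:Angulon1}. Below is a minimal sketch; the density and scattering length are again illustrative values in units with $\hbar = m = B = 1$:
\begin{verbatim}
# Condensate deformation energy: the second term of Eq. (Uhbos),
# with V_lambda(k) = U_lambda(k) sqrt((2 lambda + 1)/(4 pi)).
import numpy as np
from scipy.special import spherical_jn

n, abb = 10.0, 3.3                       # illustrative model parameters
gbb = 4 * np.pi * abb
u = {0: 218.0, 1: 218.0 / 1.75}          # Gaussian magnitudes u_0, u_1
k = np.linspace(1e-4, 12, 2000)
eps = k**2 / 2
omega = np.sqrt(eps * (eps + 2 * gbb * n))
r = np.linspace(0, 12, 2000)

E_def = 0.0
for lam in (0, 1):
    f = u[lam] * (2 * np.pi)**-1.5 * np.exp(-r**2 / (2 * 1.5**2))
    rad = np.trapz(r**2 * f * spherical_jn(lam, np.outer(k, r)), r, axis=1)
    U = np.sqrt(8 * n * k**2 * eps / (omega * (2*lam + 1))) * rad  # Eq. (Ulamk)
    V = U * np.sqrt((2*lam + 1) / (4 * np.pi))                     # V_lambda(k)
    E_def += np.trapz(V**2 / omega, k)   # sum_k -> int dk

print(f"deformation energy: -{E_def:.1f} B")
\end{verbatim}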
The transformation~\eqref{Transformation} thus allows one to study the regime of a slowly rotating molecule. For instance, in Ref.~\cite{SchmidtLem16} we considered a variational state based on single-phonon excitations on top of the doubly transformed Hamiltonian~\eqref{Htilde} to reveal the instabilities in the angulon spectrum. As an example, Fig.~\ref{energies} shows the angulon spectral function as a function of the rotational constant $B$, for a few of the lowest rotational states. The emerging instability is qualitatively similar to the MBIFS of the second kind, shown in Fig.~\ref{Stark} and discussed in Sec.~\ref{sec:Angulon1} above. Finally, the strong-coupling angulon theory allows one to explain the rotational constant renormalization of heavy and medium-mass molecules in helium droplets. For a slowly-rotating molecule, one can assume that the bosonic state, derived in the limit of $B\to0$ above, does not change upon molecular rotation. In a way, it can be thought of as a microscopic formulation of the `nonsuperfluid helium shell' rotating along with the molecule~\cite{ToenniesAngChem04, SzalewiczIRPC08}. On the other hand, the effective molecular angular momentum, given by the first term of Eq.~\eqref{transH}, is determined by the difference between the total angular momentum of the system, $ \hat{\mathbf{L}} \equiv \hat{\mathbf{J}}'$, and the angular momentum of the superfluid excitations, $\hat {\vec\Lambda}$. In such a way, the energy of a state with a given \textit{total} angular momentum $L$ is lower in the presence of a superfluid ($\hat {\vec\Lambda} \neq 0$) than for a free molecule ($\hat {\vec\Lambda} = 0$), which leads to an effective renormalization of the rotational constant. In the strong-coupling regime, $\hat{\mathbf{\Lambda}}$ in Eq.~\eqref{transH} can be replaced by its expectation value, $\langle \hat{\mathbf{\Lambda}}^2 \rangle^{1/2}$, where \begin{equation} \label{ExpLambda} \langle \hat{\mathbf{\Lambda}}^2 \rangle \equiv \bra{\psi_{LM}} \hat{\mathbf{\Lambda}}^2 \ket{\psi_{LM}} = \sum_{k \lambda} \lambda(\lambda+1) \frac{V^2_\lambda (k)}{\omega (k)^2} \end{equation} The results of the strong-coupling angulon theory are shown in Fig.~\ref{fig:BBstar} by red circles. From panels (a) and (b) one can see that a good agreement with experiment is achieved for most heavy and medium-mass molecules. We refer the reader to Ref.~\cite{LemeshkoDroplets16} for a detailed description of the strong-coupling theory, as well as of the intermediate-coupling interpolation, shown in Fig.~\ref{fig:BBstar} by green crosses. \section{Conclusions and outlook} \label{sec:conclusions} The aim of this tutorial was to familiarize the reader with a novel perspective on the interactions of molecules with a many-particle environment -- namely the one based on the notion of quasiparticles. A variety of quasiparticles has been introduced in the past in order to describe point-like impurities, such as electrons, ultracold atoms, or single spins interacting with their respective environments. The rotational motion of molecules gives rise to a novel mechanism of impurity interactions with a bath due to the coupling of internal and external degrees of freedom. This stems from the peculiar properties of quantum rotations, such as the non-Abelian algebra involved and the discrete spectrum of eigenvalues. From quantum mechanics textbooks we know that the addition of even a few angular momenta (such as in photon absorption/emission by a single molecule) can already call for involved approaches.
The situation clearly does not become easier if the molecular rotation occurs in the presence of a many-particle environment, where the number of angular momenta to add is in principle infinite. We demonstrated that a natural way to treat a rotating molecule interacting with an environment is by introducing a new quasiparticle -- the angulon -- and investigating its properties. Along with purely theoretical derivations, we provided an outlook for the application of the angulon concept to molecules in BECs as well as in superfluid helium droplets. For instance, we have shown that the angulon theory -- even in its most basic formulation -- allows one to explain the rotational constant renormalization observed for molecules in helium droplets. On the other hand, it predicts various phenomena which occur neither in isolated atoms and molecules nor in any other impurity problem. A particular example of the latter is the angulon instability, accompanied by the resonant transfer of one quantum of angular momentum between the molecule and the many-body bath. Such instabilities are fundamentally different from vortex instabilities (see Ref.~\cite{SchmidtLem16} for a detailed comparison). Furthermore, it seems that their signatures have been recently observed in spectra of CH$_3$ molecules inside superfluid helium droplets~\cite{MorrisonJPCA13}, which, however, calls for a detailed theoretical calculation. Among other novel phenomena featured by angulons is the angular localization of molecular impurities in the presence of a bosonic bath~\cite{Li16}. On the other hand, nowadays rotating impurities can be prepared experimentally in perfectly controllable settings, based on ultracold molecules immersed in a Bose or Fermi gas~\cite{JinYeCRev12, KreStwFrieColdMol, LemKreDoyKais13}, which opens up the prospect of a detailed study of angulon physics. Most importantly, the angulon theory provides a common language to describe molecular impurity problems arising in different contexts. For example, although the physics of superfluid helium and dilute BECs is substantially different, the quasiparticle picture makes the similarities and distinctions of the two settings apparent. Moreover, the present theory has already been extended to account for \textit{ab initio} potential energy surfaces~\cite{Bikash16}, which opens up the prospect of employing the techniques of quantum chemistry in the context of quantum impurity problems. As an example, in Ref.~\cite{Bikash16} it has been shown that for a CN$^-$ ion interacting with a BEC of Sr and Rb, the rotational Lamb shifts predicted in Ref.~\cite{SchmidtLem15} can be observed under realistic experimental conditions. In the future, the presented theory can be extended to account for the rotational, vibrational, and electronic structure of more complex molecules~\cite{StoneBook13}, and for external electromagnetic and crystalline fields~\cite{Redchenko16, Yakaboylu16, Shepperson16, LemKreDoyKais13}, which is expected to further enrich the observed phenomena. The question of engineering intermolecular interactions and bound states mediated by a field of ultracold atoms is also of substantial interest~\cite{BissbortPRL13, ZhouPRA11, HerreraPRA11, HerreraPRL13, BennettPRL13, Lemeshko2013, Otterbach2014, LemFrontPhys13, LemeshkoPRA11Optical, LemFri11OpticalLong}. While this tutorial focused entirely on molecules, the quasiparticle approach to the redistribution of angular momentum in many-particle systems extends far beyond molecular physics.
For example, the notions of rotation and orbital angular momentum can be used to calculate the properties of nuclei~\cite{RoweWoodNuclearModels}, high-resolution atomic spectra~\cite{DereviankoRMP11}, or the electronic structure of defect centers in solids~\cite{MazeNJP11}. In condensed matter physics, the orbital angular momentum of excited electronic states is often coupled to the lattice phonons, and such a redistribution of angular momentum is involved, e.g., in the ultrafast demagnetization of ferromagnetic thin films~\cite{StammNatMat07, StammPRB10, TowsPRL15, TsatsoulisPRB16, FahnleJSNM17}. In ultracold gases, single Rydberg excitations (which can carry angular momentum) are perturbed by phonons in the surrounding BEC~\cite{BalewskiNature13}. Finally, the angulon impurity problem can be used as a building block of a general theory describing the redistribution of orbital angular momentum in quantum many-body systems. This paves the way for applying the techniques described here to other outstanding problems of chemical and condensed matter physics. \section{Acknowledgements} We are grateful to Giacomo Bighin, Igor Cherepanov, Eugene Demler, Gary Douberly, Bretislav Friedrich, Rytis Jursenas, Johan Mentink, Elena Redchenko, Stephan Schlemmer, Henrik Stapelfeldt, Sandro Stringari, and Andrey Vilesov for insightful discussions. The work was supported by the NSF through a grant for the Institute for Theoretical Atomic, Molecular, and Optical Physics at Harvard University and Smithsonian Astrophysical Observatory.
\section{Introduction} For a long time, the inflationary scenario has been regarded as an academic theory which provides an elegant solution to conceptual problems in cosmology such as the flatness problem, the horizon problem, and the origin of the structure of the universe. However, the fact that all recent cosmological observations strongly support the inflationary scenario with high precision encourages us to take inflation more seriously. Then, taking into account that inflation magnifies microscopic scales to macroscopic ones, it is reasonable to regard inflation as a probe to investigate physics at the Planck scale, namely, quantum gravity. It is widely believed that string theory is a promising candidate for quantum gravity. However, it is premature to discuss a Planckian regime of the universe using string theory. Therefore, so far, the study of trans-Planckian effects on inflationary predictions has been phenomenological~\cite{Martin:2000xs,Brandenberger:2000wr}. In fact, there are various phenomenological models which can mimic trans-Planckian physics and lead to a modification of the power spectrum of curvature perturbations~\cite{Brandenberger:2002sr}. It is known that these quantitative trans-Planckian corrections are subject to severe constraints due to the backreaction problem~\cite{Tanaka:2000jw,Starobinsky:2001kn}. However, there may be more qualitative effects due to trans-Planckian physics. For example, polarization of primordial gravitational waves could be an important smoking gun of trans-Planckian physics~\cite{Lue:1998mq,Cai:2007xr,Contaldi:2008yz}. In fact, a parity violating gravitational Chern-Simons term which is ubiquitous in string theory can generate circular polarization in primordial gravitational waves~\cite{Choi:1999zy,Alexander:2004us,Lyth:2005jf}. However, it has been shown that the effect of parity violation is negligibly small for slow-roll inflation~\cite{Alexander:2004wk}. Recently, it has been argued that sizable circular polarization could be generated~\cite{Satoh:2007gn,Satoh:2008ck} by resorting to a peculiar feature due to the Gauss-Bonnet term~\cite{Kawai:1998ab}. One defect in these models is the appearance of a divergence in one of the circular polarization modes. This divergence suggests the necessity of a consistent quantum theory of gravity. Recently, quantum gravity at a Lifshitz point, which is power-counting renormalizable, has been proposed by Ho${\rm\check{r}}$ava~\cite{Horava:2008ih,Horava:2009uw}. In contrast to string theory, the theory is not intended to be a unified theory but simply quantum gravity in 4 dimensions. In this ``small'' framework, one can discuss trans-Planckian effects on cosmology in a self-consistent manner. In Ho${\rm {\check r}}$ava's formulation of quantum gravity, the action necessarily contains a Cotton tensor, which violates parity invariance. Hence, we can expect circular polarization of primordial gravitational waves. Moreover, there exists no divergence in this model. The purpose of this paper is thus to calculate the degree of circular polarization generated during inflation and to show the observability of chiral primordial gravitational waves, which are a robust prediction of quantum gravity at a Lifshitz point. \section{QG at a Lifshitz point} The quantum gravity proposed by Ho${\rm {\check r}}$ava can be characterized by anisotropic scaling at an ultraviolet fixed point $ {\bf x}\rightarrow b{\bf x},\ t\rightarrow b^3t , $ where $b$, ${\bf x}$ and $t$ are a scaling factor, spatial coordinates and a time coordinate, respectively.
This scaling guarantees the renormalizability of the theory~\cite{Horava:2009uw}. Because of the anisotropic scaling, the time direction plays a privileged role. In other words, the spacetime has a codimension-one foliation structure in which leaves of the foliation are hypersurfaces of constant time. Since the spacetime has the anisotropic scaling and the foliation structure, the theory is not diffeomorphism invariant but invariant under foliation-preserving diffeomorphisms, defined by $ {\tilde x}^{i}={\tilde x}^{i}(x^j,t),\ {\tilde t}={\tilde t}(t). $ Here, indices $i,j,k,\cdots$ represent spatial coordinates. To describe the foliation, it is convenient to use the ADM decomposition of the metric $ds^2 = -N^2 dt^2 + g_{ij} (dx^i +N^i dt)(dx^j + N^j dt)$, where $N$, $N_i$, and $g_{ij}$ are the lapse function, the shift function and the 3-dimensional induced metric, respectively. In order for the theory to be unitary, the number of time derivatives in the action should be at most two. The renormalizable kinetic part is then given by \begin{eqnarray} S_K = \frac{2}{\kappa^2} \int dt d^3x \sqrt{g} N \left( K^{ij}K_{ij}-\lambda K^2 \right) \ , \label{kinetic} \end{eqnarray} where $K_{ij}$ is the extrinsic curvature of the constant-time hypersurface, defined by $ K_{ij}= \left(\dot{g}_{ij}-N_{i|j}-N_{j|i}\right) /2N , $ and $K$ is the trace of $K_{ij}$. Note that $\kappa$ and $\lambda$ are dimensionless coupling constants which run according to the renormalization group flow. Hereafter, we assume $\lambda$ has already settled down to the infrared fixed point $\lambda =1$ at the beginning of inflation, although we keep $\lambda$ in subsequent formulas. The most crucial assumption of Ho${\rm {\check r}}$ava's theory is the detailed balance condition \begin{eqnarray} S_V = \frac{\kappa^2}{8} \int dt d^3 {\bf x} \sqrt{g} N E^{ij} {\cal G}_{ijkl} E^{kl} \ , \end{eqnarray} where we have defined $ \sqrt{g} E^{ij} = \delta W[g_{ij}] / \delta g_{ij} $ with some functional $W$. Here, we introduced the inverse of the De Witt metric $ {\cal G}^{ijkl} = ( g^{ik} g^{jl} + g^{il} g^{jk} )/2 - \lambda g^{ij} g^{kl} . $ The renormalizability of the theory requires that $E^{ij}$ be third order in spatial derivatives. This requirement uniquely selects the Cotton tensor \begin{eqnarray} C^{ij}=\varepsilon^{ikl}\nabla_k\left(R^{j}_{l}-\frac{1}{4}R\delta^{j}_{l}\right) \ , \label{cotton} \end{eqnarray} where $\varepsilon^{ijk}$ denotes the totally antisymmetric tensor and $R_{ij}$ and $R$ are the 3-dimensional Ricci tensor and Ricci scalar. Including relevant deformations, we have \begin{eqnarray} W &=& \frac{1}{w^2} \int d^3 {\bf x} \sqrt{g} \epsilon^{ijk} \left( \Gamma^m_{il} \partial_j \Gamma^l_{km} + \frac{2}{3} \Gamma^n_{il}\Gamma^l_{jm}\Gamma^m_{kn} \right) \nonumber \\ && + \mu\int d^3 {\bf x} \sqrt{g} \left( R -2\Lambda_w \right) \ , \end{eqnarray} where $\Gamma^i_{jk}$ and $\Lambda_w$ are Christoffel symbols and the 3-dimensional ``cosmological constant'', respectively. Here, we have introduced new coupling constants $w$ and $\mu$. Note that the first two terms lead to the Cotton tensor.
Thus, we obtain the potential part of the 4-dimensional action~\cite{Horava:2009uw} \begin{eqnarray} S_V &=&\int dt d^3x \sqrt{g} N \nonumber\\ && \times \Bigl[ -\frac{\kappa^2}{2 w^4}C^{ij}C_{ij} +\frac{\kappa^2\mu}{2 w^2}\varepsilon^{ijk}R_{il}R^{l}_{k|j} -\frac{\kappa^2\mu^2}{8}R_{ij}R^{ij}\nonumber\\ &\ &\hspace{0.3cm}+\frac{\kappa^2\mu^2}{8(1-3\lambda)}\left(\frac{1-4\lambda}{4}R^2+\Lambda_wR-3\Lambda_w^2\right) \Bigr] \ , \label{action} \end{eqnarray} where a stroke $|$ denotes a covariant derivative with respect to spatial coordinates. In the above action (\ref{action}), the coefficient of the scalar curvature $R$ is $\kappa^2\mu^2\Lambda_w/8(1-3\lambda)$; hence, the gravitational constant becomes negative in the low-energy limit unless $\Lambda_w/(1-3\lambda)>0$. In addition to this gravity sector, we consider the action for an inflaton $\phi$ \begin{eqnarray} S_M = \int dt d^3 {\bf x} \sqrt{g} N \left[- \frac{1}{2} \partial^\mu \phi \partial_\mu \phi - V(\phi) \right] \ . \label{inflaton} \end{eqnarray} Let us assume slow-roll inflation and take the slow-roll limit. Then, we can replace the action (\ref{inflaton}) with the effective cosmological constant $\bar{\Lambda}$, namely, we have $S_M = - \int dt d^3 {\bf x} \sqrt{g} N \bar{\Lambda}$. Thus, the total action is given by $S= S_K +S_V +S_M$. The total action $S$ breaks the detailed balance condition softly~\cite{Horava:2009uw}. It should be emphasized that the total action $S$ reduces to the conventional Einstein theory at low energy. \section{Primordial gravitational waves} Let us consider the background spacetime with spatial isotropy and homogeneity $ ds^2= -dt^2+a(t)^2\delta_{i j}dx^idx^j , $ where $a$ is the scale factor. Using this metric ansatz, we obtain the Friedmann equation with $\lambda=1$ \begin{eqnarray} \frac{\dot{a}^2}{a^2}=\frac{\kappa^2}{12}\left(\bar{\Lambda}-\frac{3\kappa^2\mu^2\Lambda_w^2}{16}\right)\equiv H^2 \ . \end{eqnarray} The above equation leads to de Sitter spacetime, $ a(t) \propto e^{Ht} \label{solution} $. Here, we assumed $\bar{\Lambda}> 3\kappa^2\mu^2\Lambda_w^2 /16 $. Note that if we choose $\bar{\Lambda}=0$, there is no Minkowski solution. Hence, there must ultimately exist residual vacuum energy in the matter sector. This is related to the issue of the cosmological constant, which is beyond the scope of this paper. Now, we consider tensor perturbations $ ds^2=-dt^2+a(t)^2(\delta_{ij}+h_{ij}(t,{\bf x}))dx^idx^j , \label{perturbation} $ where $h_{ij}$ satisfies the transverse-traceless conditions. Substituting this metric into the total action, we obtain the quadratic action \begin{eqnarray} \delta^2 S&=&\int dtd^3x a^3\Bigl[\frac{1}{2\kappa^2}\dot{h}^{i}_{j}\dot{h}^{j}_{i}+\frac{\kappa^2}{8 w^4a^6}\Delta^2 h^{i}_{j}\Delta h^{j}_{i}\nonumber\\ &\ &\ +\frac{\kappa^2\mu}{8 w^2a^5}\epsilon^{ijk} \Delta h_{il}\Delta h^{l}_{k|j}-\frac{\kappa^2\mu^2}{32a^4} \Delta h^{i}_{j}\Delta h^{j}_{i}\nonumber \\ &\ &\hspace{2.5cm} +\frac{\kappa^2\mu^2\Lambda_w}{32(1-3\lambda)a^2}h^{i}_{j}\Delta h^{j}_{i} \Bigr], \label{qaction} \end{eqnarray} where $\Delta$ represents the Laplace operator.
The transverse-traceless tensor $h_{ij}$ can be expanded in terms of plane waves with wavenumber ${\bf k}$ as \begin{eqnarray} h_{ij}(t,{\bf x})=\sum_{A=R,L} \int\frac{d^3{\bf k}}{(2\pi)^3}\psi^{A}_{{\bf k}}(t) e^{i{\bf k}\cdot{\bf x}} p_{ij}^{A} \ , \label{expansion} \end{eqnarray} where $p_{ij}^{A}$ are circular polarization tensors which are defined by $ ik_{s}\epsilon^{rsj}p_{ij}^{A} = k\rho^{A}p^{r}_{\ i}{}^{A} $ \cite{Satoh:2007gn}. Here, the $\rho^{R} =1$ and $\rho^{L} =-1$ modes are called the right-handed and the left-handed mode, respectively. We also impose the normalization conditions $ p^{*}{}^{i\ A}_{\ j}p^{j\ B}_{\ i} = \delta^{AB} , $ where $p^{*}{}^{i\ A}_{\ j}$ is the complex conjugate of $p{}^{i\ A}_{\ j}$. Substituting the expansion (\ref{expansion}) into the gravitational action (\ref{qaction}), we obtain \begin{eqnarray} \delta^2 S&=&\sum_{A=R,L}\int dt\frac{d^3{\bf k}}{(2\pi)^3} a^3 \Bigl[\frac{1}{2\kappa^2}|\dot{\psi}^{A}_{\bf k}|^2\nonumber\\ &\ &\ -\Bigl\{\frac{\kappa^2 k^6}{8 w^4 a^6} -\rho^{A}\frac{\kappa^2\mu k^5}{8 w^2 a^5} +\frac{\kappa^2 \mu^2 k^4}{32a^4}\nonumber \\ &\ &\hspace{2.5cm} +\frac{\kappa^2\mu^2\Lambda_w k^2}{32(1-3\lambda)a^2} \Bigr\}|\psi^{A}_{\bf{k}}|^2\Bigr] \ . \end{eqnarray} Using the variable $v^{A}_{\bf k}\equiv a\psi^{A}_{\bf k}$ and conformal time $\eta$ defined by $d\eta/dt=1/a$, we obtain the equations of motion \begin{eqnarray} \frac{\partial^2}{\partial \eta^2}v^{A}_{\bf{k}} +\left( k^{A\ 2}_{eff} -\frac{2}{\eta^2}\right)v^{A}_{\bf{k}}=0 \ , \label{vEOM} \end{eqnarray} where we used $a=- 1/H\eta$ and defined $ k^{A\ 2}_{eff} = \alpha^2 k^2 \left\{1+ \beta (\alpha k \eta )^2 (1+\rho^{A}\gamma \alpha k \eta )^2\right\} . $ We have also defined \begin{eqnarray} \alpha^2=\frac{\kappa^4\mu^2\Lambda_w}{16(1-3\lambda)} \ , \ \beta=H^2\frac{1-3\lambda}{\Lambda_w \alpha^2} \ , \ \gamma=H\frac{2}{ w^2 \mu \alpha} \ . \end{eqnarray} Here, $\alpha$ is ``the emergent speed of light''~\cite{Horava:2009uw} and $\beta$ and $\gamma$ are dimensionless parameters. Since $\rho^{A}$ appears in Eq.~(\ref{vEOM}), the evolution of the right-handed mode differs from that of the left-handed mode. Hence, the dimensionless parameter $\gamma$ characterizes ``the parity violation''. If $\beta=0$ and $\alpha$ is exactly the speed of light, Eq.~(\ref{vEOM}) becomes the equation for gravitational waves in a pure de Sitter background in Einstein theory. Thus, $\beta$ measures the ``deviation from Einstein theory''. \section{Circular polarization} Now, we calculate the power spectrum $|\psi_{{\bf k}}^{A}|^2$ numerically and evaluate the degree of circular polarization of primordial gravitational waves. For the numerical analysis, it is convenient to introduce dimensionless variables $k^{'}\equiv \alpha k / H$ and $y\equiv k^{'}H\eta$. Using these variables and the transformation $\zeta^{A}\equiv \sqrt{k^{'}H}v_{\bf k}^{A}/\kappa$, we can write down the basic equation \begin{eqnarray} \frac{d^2}{dy^2}\zeta^{A}+ \omega^2(y)\zeta^{A}=0 \ , \label{ODE} \end{eqnarray} where $ \omega^2(y)=1+\beta y^2(1+\rho^A\gamma y)^2- 2/y^2 . \label{omega} $ Since the WKB approximation is very accurate in the asymptotic past $y\rightarrow -\infty$, we can choose the adiabatic vacuum as the initial condition. More precisely, we set the positive-frequency modes as \begin{eqnarray} \zeta^{A}=\frac{1}{\sqrt{2 \omega (y)}} \exp\left\{-i\int_{y_i}^{y} \omega (y^{'})dy^{'}\right\} \ .
\label{barIC} \end{eqnarray} On superhorizon scales $y\rightarrow 0$, Eq.~(\ref{ODE}) has the asymptotic solution $\zeta^{A}=C^{A}/y+D^{A}y^2$ with integration constants $C^A$ and $D^A$. Hence, the power spectrum defined by $ k^3 |\psi_{\bf k}^{A}|^2= k^3 \left| v_{\bf k}^{A} / a \right|^2 = \kappa^2 H^2 \left|y\zeta^{A}\right|^2 / \alpha^3 $ reduces to \begin{eqnarray} k^3 |\psi_{\bf k}^{A}|^2 =\frac{\kappa^2 H^2}{\alpha^3} \left|C^{A}\right|^2\label{spec} \end{eqnarray} on superhorizon scales $y\rightarrow 0$. Thus, we only need to calculate $C^{A}$ using Eq.~(\ref{ODE}) with the initial condition (\ref{barIC}). Notice that the power spectrum $P(k) = k^3 |\psi_{\bf k}^{A}|^2$ is scale-free. From the mode function (\ref{barIC}), we see that the vacuum depends on the chirality. In the WKB regime, the amplitude of the right-handed mode grows, while that of the left-handed mode decays. These two effects produce the difference between the modes. In Fig.~\ref{fig:1}, we plot the time evolution of the power for a right-handed and a left-handed mode, together with the case of Einstein theory for comparison. \begin{figure}[htbp] \begin{center} \includegraphics[width=80mm]{evolution.eps} \end{center} \caption{The time evolution of the power is depicted. The thick solid line represents the evolution for the conventional Einstein gravity. The thin solid line and the dotted line show the time evolution of the right-handed mode and the left-handed mode, respectively. } \label{fig:1} \end{figure} Now, we are in a position to discuss the observability of circular polarization. To this end, we quantify polarization by defining the degree of circular polarization \begin{eqnarray} \Pi = \frac{|\psi_{\bf k}^{R}|^2-|\psi_{\bf k}^{L}|^2}{|\psi_{\bf k}^{R}|^2+|\psi_{\bf k}^{L}|^2} = \frac{|C^{R}|^2-|C^{L}|^2}{|C^{R}|^2+|C^{L}|^2} \ . \label{pi} \end{eqnarray} Numerical results are plotted in Fig.~\ref{fig:2}. There are two possible channels to observe circular polarization of primordial gravitational waves. One is the indirect detection of circular polarization through the cosmic microwave background radiation, for which the required degree of circular polarization has been obtained as $|\Pi| \gtrsim 0.35 (r/0.05)^{-0.6}$ in Ref.~\cite{Saito:2007kt}, where $r$ is the tensor-to-scalar ratio. The relevant frequency of gravitational waves in this case is around $f \sim 10^{-17}$ Hz. From Fig.~\ref{fig:2}, supposing $r=0.05$ and $\gamma=1$, we see that the circular polarization can be detected through the temperature--B-mode polarization correlation if $\beta > 0.2$. \begin{figure}[htbp] \begin{center} \includegraphics[width=80mm]{circular.eps} \end{center} \caption{The degree of circular polarization $\Pi$ for various $\gamma$ is shown as a function of $\beta$. As can be seen from the figure, $\Pi$ grows as $\beta$ becomes large. The dependence on $\gamma$ is not monotonic; rather, there is a value that gives the maximum polarization for fixed $\beta$.} \label{fig:2} \end{figure} The other is the direct detection of circular polarization, for which the required degree has been estimated as $\Pi \sim 0.08 (\Omega_{\rm GW} /10^{-15})^{-1} ({\rm SNR}/5)$ around the frequency $f \sim 1$ Hz~\cite{Seto:2006hf}, where $\Omega_{\rm GW}$ is the density parameter of the stochastic gravitational waves and SNR is the signal-to-noise ratio~\cite{Seto:2006hf,Seto:2006dz,Seto:2007tn}. Here, a 10-year observation time is assumed.
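The numerical procedure used here is straightforward to reproduce: for each helicity one integrates Eq.~(\ref{ODE}) starting from the WKB initial data (\ref{barIC}) at some large negative $y_i$ and reads off $|C^A|^2$ from $\zeta^A \to C^A/y$ on superhorizon scales. The minimal sketch below illustrates the scheme for one representative parameter point; the values of $\beta$ and $\gamma$, the finite $y_i$, and the leading-order WKB initial data are assumptions made for the sketch, and it is not the production code behind Fig.~\ref{fig:2}.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.5, 1.0     # representative values, for illustration only

def omega2(y, rho):
    # omega^2(y) of Eq. (ODE); rho = +1 (right handed), -1 (left handed)
    return 1.0 + beta * y**2 * (1.0 + rho * gamma * y)**2 - 2.0 / y**2

def rhs(y, u, rho):
    # u = [Re zeta, Im zeta, Re zeta', Im zeta']
    w2 = omega2(y, rho)
    return [u[2], u[3], -w2 * u[0], -w2 * u[1]]

def C2(rho, y_i=-10.0, y_f=-1e-2):
    w_i = np.sqrt(omega2(y_i, rho))
    z0 = 1.0 / np.sqrt(2.0 * w_i)       # Eq. (barIC), phase set to zero at y_i
    u0 = [z0, 0.0, 0.0, -w_i * z0]      # leading WKB: zeta' = -i omega zeta
    sol = solve_ivp(rhs, (y_i, y_f), u0, args=(rho,), rtol=1e-9, atol=1e-12)
    zr, zi = sol.y[0, -1], sol.y[1, -1]
    return y_f**2 * (zr**2 + zi**2)     # |C|^2, since zeta -> C/y as y -> 0

CR2, CL2 = C2(+1.0), C2(-1.0)
print("Pi =", (CR2 - CL2) / (CR2 + CL2))   # Eq. (pi)
\end{verbatim}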
Looking at Fig.~\ref{fig:2}, one can see that it is easy to obtain circular polarization of the order of $0.08$ in the present model. Hence, these observations can prove or disprove quantum gravity at a Lifshitz point. \section{Conclusion} We have considered the inflationary scenario in the context of quantum gravity at a Lifshitz point, which is believed to be a power-counting renormalizable theory. Because of the detailed balance condition, the action necessarily contains the Cotton tensor, which violates parity invariance. We have calculated the degree of circular polarization of primordial gravitational waves. As a consequence, we have found that chiral primordial gravitational waves are produced for generic parameters. It should be emphasized that the existence of circular polarization is a robust prediction of the theory. In the usual discussions of trans-Planckian effects, phenomenological approaches have been adopted and mostly a modification of the spectrum has been discussed. Here, in contrast, we have used a candidate theory of quantum gravity and discussed the chirality of primordial gravitational waves. The point is that we have found a modification of the nature of gravitational waves rather than of the shape of their spectrum. It is also apparent that the power spectrum of curvature perturbations is scale-free. Thus, quantum gravity at a Lifshitz point is consistent with all current observations. Moreover, we have a testable smoking gun of quantum gravity. There are many issues to be pursued. One of them is to find exact solutions which represent black holes. Once black hole solutions are found, it will be very interesting to examine their spacetime structures, thermodynamics and Hawking radiation. It would also be intriguing to apply the idea of anisotropic scaling in gravity to braneworld cosmology~\cite{Soda:2006wr}. Furthermore, the possibility of generalizing the anisotropic scaling in gravity to cases where spatial isotropy is broken would be interesting from the point of view of anisotropic inflationary scenarios~\cite{Kanno:2008gn,Watanabe:2009ct}. After submitting our paper, we found a related work on the arXiv in which inflation caused by a Lifshitz scalar is considered~\cite{Calcagni:2009ar}. \begin{acknowledgements} JS is supported by the Japan-U.K. Research Cooperative Program and Grant-in-Aid for Scientific Research Fund of the Ministry of Education, Science and Culture of Japan No.18540262. \end{acknowledgements}
\section{Introduction} \label{sec:intro} In the standard problems of statistical mechanics, we begin with the definition of the Hamiltonian and proceed to calculate the expectation values or correlation functions of various observable quantities. In the inverse problem, we are given the expectation values and try to infer the underlying Hamiltonian. The history of inverse problems goes back (at least) to 1959, when Keller and Zumino \cite{keller+zumino_59} showed that, for a classical gas, the temperature dependence of the second virial coefficient determines the interaction potential between molecules uniquely, provided that this potential is monotonic. Subsequent work on classical gases and fluids considered the connection between pair correlation functions and interaction potentials in various approximations \cite{kunkin+frisch_69}, and more rigorous constructions of Boltzmann distributions consistent with given spatial variations in density \cite{chayes+al_84} or higher order correlation functions \cite{caglioti+al_06}. In fact, the inverse problem of statistical mechanics arises in many different contexts, with several largely independent literatures. In computer science, there are a number of problems where we try to learn the probability distribution that describes the observed correlations among a large set of variables in terms of some (hopefully) simpler set of interactions. Many algorithms for solving these learning problems rest on simplifications or approximations that correspond quite closely to established approximation methods in statistical mechanics \cite{yedidia+al_05}. More explicitly, in the context of neural network models \cite{hopfield_82,amit_89}, a family of models referred to as `Boltzmann machines' leans directly on the mapping of probabilistic models into statistical physics problems, identifying the parameters of the probabilistic model with the coupling constants in an Ising--like Hamiltonian \cite{hinton+sejnowski_86}. Inverse problems in statistical mechanics have received new attention because of attempts to construct explicit network models of biological systems. Physicists have long hoped that the collective behavior which emerges from statistical mechanics could provide a model for the emergence of function in biological systems, and this general class of ideas has been explored most fully for networks of neurons \cite{hopfield_82,amit_89}. Recent work shows how these ideas can be linked much more directly to experiment \cite{schneidman+al_06,tkacik+al_06} by searching for maximum entropy models that capture some limited set of measured correlations. At a practical level, implementing this program requires us to solve a class of inverse problems for Ising models with pairwise interactions among the spins, and this is the problem that we consider here. To be concrete, we consider a network of neurons. Throughout the brain, neurons communicate by generating discrete, identical electrical pulses termed action potentials or spikes. If we look in a small window of time, each neuron either generates a spike or it does not, so that there is a natural description of the instantaneous state of the network by a collection of binary or Ising variables; $\sigma_{\rm i} = +1$ indicates that neuron $\rm i$ generates a spike, and $\sigma_{\rm i} = -1$ indicates that neuron $\rm i$ is silent. Knowing the average rate at which spikes are generated by each cell is equivalent to knowing the expectation values $\langle \sigma_{\rm i}\rangle$ for all $\rm i$.
Similarly, knowing the probabilities of coincident spiking (correlations) among all pairs of neurons is equivalent to knowing the expectation values $\langle \sigma_{\rm i}\sigma_{\rm j}\rangle$. Of course there are an infinite number of probability distributions $P({\bf\sigma} )$ over the states of the whole system (${\bf\sigma} \equiv \{ \sigma_{\rm i}\}$) that are consistent with these expectation values, but if we ask for the distribution that is as random as possible while still reproducing the data---the maximum entropy distribution---then this has the form of an Ising model with pairwise interactions: \begin{equation} P({\bf\sigma} ) = {1\over Z} \exp\left[ - \sum_{\rm i} h_{\rm i} \sigma_{\rm i} - {1\over 2} \sum_{\rm i\neq j} J_{\rm ij} \sigma_{\rm i}\sigma_{\rm j}\right] . \label{ising1} \end{equation} The inverse problem is to find the ``magnetic fields'' $\{ h_{\rm i}\}$ and ``exchange interactions'' $\{ J_{\rm ij}\}$ that reproduce the observed values of $\langle \sigma_{\rm i}\rangle$ and $\langle \sigma_{\rm i}\sigma_{\rm j}\rangle$. The surprising result of Ref \cite{schneidman+al_06} was that this Ising model provides an accurate quantitative description of the combinatorial patterns of spiking and silence observed in groups of order $N=10$ neurons in the retina as it responds to natural sensory inputs, despite taking account of only pairwise interactions. The Ising model allows us to understand how, in this system, weak correlations among pairs of neurons can coexist with strong collective effects at the network level, and this is even clearer as one extends the analysis to larger groups (using real data for $N=40$, and extrapolating to $N=120$), where there is a hint that the system is poised near a critical point in its dynamics \cite{tkacik+al_06}. Since the initial results, a number of groups have found that maximum entropy models provide surprisingly accurate descriptions of other neural systems \cite{shlens+al_06,other_neurons,other_neurons_2}, and similar approaches have been used to look at biochemical and genetic networks \cite{lezon+al_06,tkacik_07}. The promise of the maximum entropy approach to biological networks is that it builds a bridge from easily observable correlations among pairs of elements to a global view of the collective behavior that can emerge from the network as a whole. Clearly this potential is greatest in the context of large networks. Indeed, even for the retina, methods are emerging that make it possible to record simultaneously from hundreds of neurons \cite{litke+al_04,segev+al_04}, so just keeping up with the data will require methods to deal with much larger instances of the inverse problem. The essential difficulty, of course, is that once we have a large network, even checking that a given set of parameters $\{h_{\rm i} , J_{\rm ij}\}$ reproduces the observed expectation values requires a difficult calculation. In Ref \cite{tkacik+al_06} we took an essentially brute-force Monte Carlo approach to this part of the problem, and then adjusted the parameters to improve the match between observed and predicted expectation values using a relatively naive algorithm. In this work we combine several ideas---taken both from statistical physics and from machine learning \cite{ML}---which seem likely to help arrive at more efficient solutions of the inverse problem for the pairwise Ising model.
First, we adapt the histogram Monte Carlo method \cite{ferrenberg+swendsen_88} to `recycle' the Monte Carlo samples that we generate as we make small changes in the parameters of the Hamiltonian. Second, we use a coordinate descent method to adjust the parameters \cite{dudik+al_04}. Finally, we exploit the fact that neurons use their binary states in a very asymmetric fashion, so that silence is much more common than spiking. Combining these techniques, we are able to solve the inverse problem for $N=40$ neurons in tens of minutes, rather than many days for the naive approach, holding out hope for generalization to yet larger problems. \section{Ingredients of our algorithm} \label{sec:ising} \subsection{Basic formulation} Our overall goal is to build a model for the distribution $P({\bf\sigma} )$ over the states of a system with $N$ elements, ${\bf\sigma} \equiv \{\sigma_1 , \sigma_2, \cdots , \sigma_N\}$. As ingredients for determining this model, we use low-order statistics computed from a set of $m$ samples $\{ {\bf\sigma}^1 , {\bf\sigma}^2 , \cdots , {\bf\sigma}^m \}$, which we can think of as samples drawn from the distribution $P({\bf\sigma} )$. The classical idea of maximum entropy models is that we should construct $P({\bf\sigma} )$ to generate the correct values of certain average quantities (e.g., the energy in the case of the Boltzmann distribution), but otherwise the distribution should be `as random' as possible \cite{jaynes_57}. Formally this means that we find $P({\bf\sigma} )$ as the solution of a constrained optimization problem, maximizing the entropy of the distribution subject to conditions that enforce the correct expectation values. We will refer to the quantities whose averages are constrained as ``features'' of the system, ${\bf f} \equiv \{ f_1 , f_2 , \cdots , f_K\}$, where each $f_\mu$ is a function of the state ${\bf\sigma}$, $f_\mu ({\bf\sigma} )$. One special set of average features is just the set of marginal distributions for subsets of the variables. Thus we can construct the one--body marginals \begin{equation} P_{\rm i} (\sigma_{\rm i} ) = \sum_{\{\sigma_{\rm j \neq i}\}} P(\sigma _1, \sigma_2, \cdots , \sigma _N) , \end{equation} the two--body marginals, \begin{equation} P_{\rm ij} (\sigma_{\rm i}, \sigma_{\rm j} ) = \sum_{\{\sigma_{\rm k \neq i,j}\}} P(\sigma _1, \sigma_2, \cdots , \sigma _N) , \end{equation} and so on for larger subsets. The maximum entropy distributions consistent with marginal distributions up to $K$--body terms generate a hierarchy of models that capture increasingly higher--order correlations, monotonically reducing the entropy of the model as $K$ increases, toward the true value \cite{schneidman+al_03}. Let $\tilde P$ denote the empirical distribution \begin{equation} \label{eq:empirical_distr} \tilde P ({\bf\sigma})=\frac1m\sum_{n=1}^m\delta({\bf\sigma},{\bf\sigma}^n) , \end{equation} where $\delta({\bf\sigma},{\bf\sigma}')$ is the Kronecker delta, equal to one when ${\bf\sigma}={\bf\sigma}'$ and equal to zero otherwise. The maximum-entropy problem is then \begin{equation} \textstyle \label{eq:maxent} \max_{P} S [P] \text{ such that } \langle \mathbf{f}({\bf\sigma})\rangle_P =\langle \mathbf{f}({\bf\sigma})\rangle_{\tilde P} \enspace, \end{equation} where $S[P]$ denotes the entropy of the distribution $P$, and $\langle \cdots\rangle_P$ denotes an expectation value with respect to that distribution.
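For the pairwise Ising model of Eq.~(\ref{ising1}), the constrained averages are simply the one-- and two--body moments of the data, which can be evaluated directly from the sample matrix. A minimal sketch is given below; the random placeholder data stand in for recorded spike trains and are an assumption of the sketch.
\begin{verbatim}
import numpy as np

# Placeholder data: m samples of N binary variables coded as -1 (silence)
# and +1 (spike), with a low spike probability to mimic sparse neural data.
rng = np.random.default_rng(0)
m, N = 10000, 8
sigma = np.where(rng.random((m, N)) < 0.1, 1, -1)

mean_si = sigma.mean(axis=0)                 # <sigma_i>
mean_sisj = (sigma.T @ sigma) / m            # <sigma_i sigma_j>
C = mean_sisj - np.outer(mean_si, mean_si)   # connected correlations
\end{verbatim}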
Using the method of Lagrange multipliers, the solution to the maximum entropy problem has the form of a Boltzmann or Gibbs distribution \cite{jaynes_57}, \begin{equation} Q_{{\mathbf\theta}}({\mathbf \sigma})= {1\over {{\mathbf Z}({{\mathbf\theta}}) }} \exp\left[ -\sum_{\mu =1}^K \theta_\mu f_\mu ({\bf\sigma} )\right] , \label{qtheta} \end{equation} where as usual the partition function ${\mathbf Z}({{\mathbf\theta}})$ is the normalization constant ensuring that the distribution $Q_{{\mathbf\theta}}$ sums to one, and the parameters $\{\theta_1 , \theta_2 , \cdots , \theta_K\}\equiv {\mathbf\theta}\in\mathbb{R}^{K}$ correspond to the Lagrange multipliers. Note that the expression for $Q_{{\mathbf\theta}}$ above describes the pairwise Ising model, Eq (\ref{ising1}), with ${{\mathbf\theta}} = \{h_{\rm i} , J_{\rm ij}\}$, and the features $\mathbf f$ are the one--spin and two--spin combinations $\{\sigma_{\rm i}, \sigma_{\rm i}\sigma_{\rm j}\}$. Rather than thinking of our problem as that of maximizing the entropy subject to constraints on the expectation values, we can now think of our task as searching the space of Gibbs distributions, parameterized as in Eq (\ref{qtheta}), to find the values of the parameters ${\mathbf\theta}$ that generate the correct expectation values. Importantly, because of basic thermodynamic relationships, this search can also be formulated as an optimization problem. Specifically, we recall that expectation values in statistical mechanics can be written as derivatives of the free energy, which in turn is the logarithm of the partition function (up to a factor of the temperature, which is not relevant here). Thus, for a distribution of the form of Eq (\ref{qtheta}), we have \begin{equation} \langle f_\mu ({{\bf\sigma} })\rangle_{Q_{{\mathbf\theta}}} = -{{\partial\ln {\mathbf Z}({{\mathbf\theta} })}\over{\partial\theta_\mu}} . \end{equation} Enforcing that these expectation values are equal to the expectation values computed from our empirical samples means solving the equation \begin{equation} \langle f_\mu ({{\bf\sigma} })\rangle_{\tilde P} \equiv {1\over m}\sum_{n=1}^m f_\mu ({{\bf\sigma}}^n )= -{{\partial\ln {\mathbf Z}({{\mathbf\theta} })}\over{\partial\theta_\mu}} . \end{equation} But this can be written as \begin{widetext} \begin{eqnarray} {1\over m}\sum_{n=1}^m f_\mu ({{\bf\sigma}}^n ) &=& {1\over m}\sum_{n=1}^m {\partial\over{\partial\theta_\mu}}\sum_{\nu =1}^K \theta_\nu f_\nu({{\bf\sigma}}^n) = -{{\partial\ln {\mathbf Z}({{\mathbf\theta} })}\over{\partial\theta_\mu}} \\ 0&=& {\partial\over{\partial\theta_\mu}} {1\over m}\sum_{n=1}^m\left[ -\ln {\mathbf Z}({{\mathbf\theta}}) - \sum_{\nu=1}^K \theta_\nu f_\nu({{\bf\sigma}}^n)\right]\\ &=& {\partial\over{\partial\theta_\mu}} {1\over m}\sum_{n=1}^m \ln Q_{{\mathbf\theta}} ({{\bf\sigma}}^n) . \end{eqnarray} \end{widetext} Thus we see that matching the empirical expectation values is equivalent to looking for a local extremum (which turns out to be a maximum) of the quantity \begin{equation} {1\over m} \sum_{n=1}^m \ln Q_{{\mathbf\theta}} ({{\bf\sigma}}^n ). \label{logQ} \end{equation} But if all the samples $\{{{\bf\sigma}}^1 , {{\bf\sigma}}^2 , \cdots , {{\bf\sigma}}^m\}$ are drawn independently, then the total probability that the Gibbs distribution with parameters ${\mathbf\theta}$ generates the data is $P_{\rm total} = \prod_n Q_{{\mathbf\theta}}({{\bf\sigma}}^n )$, and so the quantity in Eq (\ref{logQ}) is, up to a factor of $m$, just $\ln P_{\rm total}$.
Finding the maximum entropy distribution is thus equivalent to maximizing the probability or likelihood that our model generates the observed samples, within the class of models defined by the Gibbs distribution Eq (\ref{qtheta}). We recall that, in information theory \cite{cover+thomas_91}, probability distributions implicitly define strategies for encoding data, and the shortest codes are achieved when our model of the distribution actually matches the distribution from which the data are drawn. Since code lengths are related to the negative logarithm of probabilities, it is convenient to define the cost of coding or log loss $\mathrm{L}_{\tilde P}({{\mathbf\theta} })$ that arises when we use the model with parameters ${\mathbf\theta}$ to describe data drawn from the empirical distribution $\tilde P$: \begin{equation} \label{eq:logloss:1} \mathrm{L}_{\tilde P}({\mathbf\theta}) = -\frac1m\sum_{n=1}^m \ln Q_{{\mathbf\theta}}({\bf\sigma}^n) = \langle -\ln Q_{{\mathbf\theta}}({\bf\sigma})\rangle_{\tilde P} \enspace. \end{equation} Comparing with Eq (\ref{logQ}), we obtain the \emph{dual} formulation of the maximum entropy problem, \begin{equation} \textstyle \min_{{\mathbf\theta}} \mathrm{L}_{\tilde P}({\mathbf\theta}) . \label{dual} \end{equation} Why is the optimization problem in Eq (\ref{dual}) difficult? In principle, the convexity properties of free energies should make the problem well-behaved and tractable. But it remains possible for $\mathrm{L}_{\tilde P}({{\mathbf\theta}})$ to have a very sensitive dependence on the parameters ${\mathbf\theta}$, and this can cause practical problems, especially if, as suggested in Ref \cite{tkacik+al_06}, the systems we want to describe are poised near a critical point. Even before encountering this problem, however, we face the difficulty that computing $\mathrm{L}_{\tilde P}({{\mathbf\theta}})$ or even its gradient in parameter space involves computing expectation values with respect to the distribution $Q_{{\mathbf\theta}}({{\bf\sigma}})$. Once the space of states becomes large, it is no longer possible to do this by exact enumeration. We can try to use approximate analytical methods, or we can use Monte Carlo methods. \subsection{Monte Carlo methods} Our general strategy for solving the optimization problem in Eq (\ref{dual}) will be to use standard Monte Carlo simulations \cite{MC1,MC2, GemanGe84} to generate samples from the distribution $Q_{{\mathbf\theta}} ({{\bf\sigma}})$, approximate the relevant expectation values as averages over these samples, and then use the results to propose changes in the parameters ${\mathbf\theta}$ so as to proceed toward a minimum of $\mathrm{L}_{\tilde P}({{\mathbf\theta}})$. Implemented naively, as in Ref \cite{tkacik+al_06}, this procedure is hugely expensive, because at each new setting of the parameters we have to generate a new set of Monte Carlo samples. Some of this cost can be avoided using the ideas of histogram Monte Carlo \cite{ferrenberg+swendsen_88}.
We recall that if we want to compute the expectation value of some function $\Phi ({{\bf\sigma}})$ in the distribution $Q_{{\mathbf\theta} '}({{\bf\sigma}})$, this can be written as \begin{eqnarray} \langle \Phi ({{\bf\sigma} })\rangle_{{{\mathbf\theta} '}} &\equiv& \sum_{{\bf\sigma}} Q_{{\mathbf\theta} '} ({{\bf\sigma}}) \Phi({{\bf\sigma}})\\ &=&\sum_{{\bf\sigma}} Q_{{\mathbf\theta}} ({{\bf\sigma}})\left[ {{Q_{{\mathbf\theta} '}({{\bf\sigma}})}\over{Q_{{\mathbf\theta}} ({{\bf\sigma}})}} \Phi({{\bf\sigma}})\right] \\ &=&{\Bigg\langle} {{Q_{{\mathbf\theta} '}({{\bf\sigma}})}\over{Q_{{\mathbf\theta}} ({{\bf\sigma}})}} \Phi({{\bf\sigma}}) {\Bigg\rangle}_{{\mathbf\theta}}\\ &=& { {\langle \Phi({{\bf\sigma}}) \exp[ - ({{\mathbf\theta} '} - {{\mathbf\theta}}){\bf\cdot} {\mathbf f}({{\bf\sigma}})]\rangle_{{\mathbf\theta}}} \over {\langle \exp[ - ({{\mathbf\theta} '} - {{\mathbf\theta}}){\bf\cdot} {\mathbf f}({{\bf\sigma}})]\rangle_{{\mathbf\theta}}} } , \label{difftheta} \end{eqnarray} where we denote expectation values in the distribution $Q_{{\mathbf\theta}}({{\bf\sigma}})$ by $\langle \cdots \rangle_{{\mathbf\theta}}$, and similarly for ${\mathbf\theta} '$. We note that Eq (\ref{difftheta}) is exact. The essential step of histogram Monte Carlo is to use an approximation to this equation, replacing the expectation value in the distribution $Q_{{\mathbf\theta}}$ by an average over a set of samples drawn from Monte Carlo simulation of this distribution. Consider an algorithm $\cal A$ which searches for a minimum of $\mathrm{L}_{\tilde P}({{\mathbf\theta}})$. As this algorithm progresses, the values of the parameters ${\mathbf\theta}$ change slowly. We will divide these changes into stages $s=1, 2, \cdots$, and within each stage we will perform $t=1, 2, \cdots , T$ iterations. At the first iteration, we will generate, via Monte Carlo, $M$ samples of the state ${\bf\sigma}$ drawn out of the distribution appropriate to the current value of ${\mathbf\theta} = {{\mathbf\theta}}(s,t=1)$. Let us refer to averages over these samples as $\langle \cdots \rangle_{MC{{\mathbf\theta}}}$, which should approximate $\langle \cdots \rangle_{{\mathbf\theta}}$. At subsequent iterations, the parameters ${\mathbf\theta}$ will be adjusted (see below for details), but we keep the same Monte Carlo samples and approximate the expectation values over the distribution with parameters ${{\mathbf\theta} '}={{\mathbf\theta}}(s,t)$ as \begin{equation} \langle \Phi({{\bf\sigma}})\rangle_{{\mathbf\theta} '} \approx { {\langle \Phi({{\bf\sigma}}) \exp[ - ({{\mathbf\theta} '} - {{\mathbf\theta}}){\bf\cdot} {\mathbf f}({{\bf\sigma}})]\rangle_{MC{\mathbf\theta}}} \over {\langle \exp[ - ({{\mathbf\theta} '} - {{\mathbf\theta}}){\bf\cdot} {\mathbf f}({{\bf\sigma}})]\rangle_{MC{\mathbf\theta}}} } . \end{equation} We denote this approximation as $\langle \cdots \rangle_{{\mathbf\theta} '} \approx \langle \cdots \rangle_{{{\mathbf\theta} '} | {{\mathbf\theta}}}$. Once we reach $t=T$, we run a new Monte Carlo simulation appropriate to the current value of ${\mathbf\theta}$, and the cycle begins again at stage $s+1$. This strategy is summarized as pseudocode in Fig \ref{fig:alg_shell}; a minimal numerical sketch of the same scheme is given below.
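The sketch follows the stage/iteration structure of Fig.~\ref{fig:alg_shell}, but it is an illustration of the reweighting idea rather than our production implementation: for brevity it replaces the coordinate descent update, discussed in the next subsection, by a plain gradient step on $\mathrm{L}_{\tilde P}({\mathbf\theta})$, uses a crude single--spin--flip Metropolis sampler, and runs on a toy problem with made-up parameters.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def features(s):
    # s: (M, N) spins in {-1,+1}; features are sigma_i and sigma_i sigma_j, i<j
    i, j = np.triu_indices(s.shape[1], k=1)
    return np.concatenate([s, s[:, i] * s[:, j]], axis=1)

def sample(theta, N, M, sweeps=10):
    # crude single-spin-flip Metropolis for Q_theta ~ exp(-theta . f), Eq. (qtheta)
    s, out = rng.choice([-1, 1], size=N), []
    for _ in range(M):
        for _ in range(sweeps * N):
            i = rng.integers(N)
            s_new = s.copy(); s_new[i] = -s_new[i]
            dE = theta @ (features(s_new[None])[0] - features(s[None])[0])
            if rng.random() < np.exp(-dE):
                s = s_new
        out.append(s.copy())
    return np.array(out)

def reweighted_mean(theta_new, theta_old, F):
    # Eq. (difftheta), with <...>_theta replaced by the sample average <...>_MCtheta
    logw = -F @ (theta_new - theta_old)
    w = np.exp(logw - logw.max()); w /= w.sum()
    return w @ F

# toy problem: "data" generated from known, made-up parameters
N, m = 5, 4000
theta_true = rng.normal(0.0, 0.3, size=N + N * (N - 1) // 2)
f_data = features(sample(theta_true, N, m)).mean(axis=0)

theta, M, T, eta = np.zeros_like(theta_true), 1000, 20, 0.25
for stage in range(8):
    F = features(sample(theta, N, M))      # fresh Monte Carlo samples
    theta_stage = theta.copy()
    for t in range(T):                     # recycle the samples for T iterations
        f_model = reweighted_mean(theta, theta_stage, F)
        theta -= eta * (f_data - f_model)  # gradient step on the log loss
print("max moment error:", np.abs(f_model - f_data).max())
\end{verbatim}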
\begin{figure*}[t] \begin{tabbing} {\bf Input}:~~~~\= empirical observations ${\bf\sigma}^1,\cdots,{\bf\sigma}^m$ \\ \> parameters $M$ and $T$ \\ \> access to maxent inference algorithm $\mathcal{A}$\\ {\bf Algorithm}: \\ ~~~~~~~~\=initialize $\mathcal{A}$ \\ \>${\mathbf\theta}(1,1)\leftarrow $ initial parameter vector computed by $\mathcal{A}$ \\ \>for $s=1,2,\ldots$\\ \>~~~~~~~\=generate $M$ Monte Carlo samples using ${\mathbf\theta}(s,1)$\\ \> \>for $t=1,\ldots,T$\\ \> \>~~~~~~~\=run one iteration of $\mathcal{A}$, approximating $\langle \cdots \rangle_{{\mathbf\theta}(s,t)} = \langle\cdots\rangle_{{{\mathbf\theta}}(s,t)|{{\mathbf\theta}}(s,1)}$\\ \> \> \>${\mathbf\theta}(s,t+1)\leftarrow $ parameter vector computed by $\mathcal{A}$ on this iteration\\ \> \>${\mathbf\theta}(s+1,1)\leftarrow{\mathbf\theta}(s,T+1)$ \end{tabbing} \caption[Algorithm shell]{Pseudocode for our scheme, which reuses a Monte Carlo sample set across $T$ iterations of a maxent algorithm $\mathcal{A}$.} \label{fig:alg_shell} \end{figure*} Once we choose the optimization algorithm $\mathcal{A}$, there are still two free parameters in our scheme: the number of samples $M$ provided by the Monte Carlo simulation at each stage, and the number of iterations per stage $T$. In our experiments, we explore how choices of these parameters influence the convergence of the resulting algorithms. \subsection{Parameter adjustment and the use of sparsity} At the core of our problem is an algorithm which tries to adjust the parameters $\theta$ so as to minimize $\mathrm{L}_{\tilde P}(\theta )$. In Ref \cite{tkacik+al_06} we used a simple gradient descent method. Here we use a coordinate descent method \cite{dudik+al_04}, adapted from Ref \cite{CollinsScSi02} and tuned specifically for the maximum entropy problem. Where gradient descent methods attempt to find a vector in parameter space along which one can achieve the maximum reduction in $\mathrm{L}$, coordinate descent methods explore in sequence the individual parameters $\theta_\mu$. Beyond the general considerations in Refs \cite{dudik+al_04,CollinsScSi02}, coordinate descent may be especially well suited to our problem because of the sparsity of the feature vectors. In implementing any parameter adjustment algorithm, it is useful to take account of the fact that in networks of real neurons, spikes and silences are used very asymmetrically. It is thus useful to write the basic variables as $n_{\rm i} = (\sigma_{\rm i} +1)/2 \in \{0,1\}$. This involves a slight redefinition of the parameters $\{h_{\rm i} , J_{\rm ij}\}$, but once this is done one finds that individual terms $\propto n_{\rm i}$ or $\propto n_{\rm i} n_{\rm j}$ are often zero, because spikes (corresponding to $n_{\rm i} =1$) are rare. This is true not just in the experimental data, but of course also in the Monte Carlo simulations once we have parameters that approximately reproduce the overall probability of spiking vs.\ silence. This sparsity can lead to substantial time savings, since all expectation values can be evaluated in time proportional to the number of non--zero elements \cite{reweighting}. As an alternative to the coordinate descent method, we also used a general-purpose convex optimization algorithm known as the limited memory variable metric (LMVM)~\cite{tao-user-ref}, which is known to outperform other such general-purpose algorithms on a wide variety of problems \cite{Malouf02}.
While LMVM has the advantage of changing multiple parameters at a time, the cost of updating may outweigh this advantage, especially for very sparse data such as ours. Both the coordinate descent and the LMVM schemes are initialized with parameters [in the notation of Eq (\ref{ising1})] $J_{\rm ij} =0$ and the $h_{\rm i}$ chosen to exactly reproduce the expectation values $\langle \sigma_{\rm i}\rangle$. Our Monte Carlo method follows the discussion of the Gibbs sampler in Ref \cite{GemanGe84}. All computations were performed on the same computing cluster \cite{fafner} used in Ref \cite{tkacik+al_06}. \section{Experiments} \label{sec:experiments} In this section, we evaluate the speed of convergence of our algorithms as a function of $M$ (the number of Monte Carlo samples drawn in a sampling round) and $T$ (the number of learning steps per stage) under a fixed time budget. We begin with synthetic data, for which we know the correct answer, and then consider the problem of constructing maximum entropy models to describe real data. \subsection{Synthetic data} The maximum entropy construction is a strategy for simplifying our description of a system with many interacting elements. Separate from the algorithmic question of finding the maximum entropy model is the natural science question of whether this model provides a good description of the system we are studying, making successful predictions beyond the set of low-order correlations that are used to construct the model. To sharpen our focus on the algorithmic problem, we use as ``empirical data'' a set of samples generated by Monte Carlo simulation of an Ising model as in Eq (\ref{ising1}). To get as close as possible to the problems we face in analyzing real data, we use the parameters of the Ising model found in Ref \cite{tkacik+al_06} in the analysis of activity in a network of $N=40$ neurons in the retina as it responds to naturalistic movies. We note that this model has competing interactions, as in a spin glass, with multiple locally stable states; extrapolation to larger systems suggests that the parameters are close to a critical point. \begin{figure}[b] \begin{center} \includegraphics[width=\linewidth]{loss_compare.pdf} \end{center} \caption{\label{fig:loss_compare} Approximate difference in the test log loss between the current solution of an algorithm and the exact model that generates the test data, as a function of the algorithm running time. The number of Monte Carlo samples at each stage is set to 320,000. The two choices of optimization algorithm exhibit similar performance, but much greater efficiency is achieved in both cases by including multiple iterations of the learning algorithm $\mathcal{A}$ per stage.} \end{figure} Starting with the parameters ${{\mathbf\theta}}_0\equiv \{h_{\rm i} , J_{\rm ij}\}$ determined in Ref \cite{tkacik+al_06}, we generate $m=2\times 10^5$ samples by Monte Carlo simulation. Let us call the empirical distribution over these samples $\hat P$. Then we proceed to minimize $\mathrm{L}_{\hat P} ({{\mathbf\theta}})$. To monitor the progress of the algorithm, we compute \begin{equation} \Delta\mathrm{L} ({{\mathbf\theta}}) = \mathrm{L}_{\hat P}({{\mathbf\theta}}) - \mathrm{L}_{\hat P}({{\mathbf\theta}}_0 ).
\end{equation} Note that this computation requires us to estimate the log ratios of partition functions, for which we again use the histogram Monte Carlo approach, \begin{eqnarray} {{{\mathbf Z}({{\mathbf\theta}})}\over{{{\mathbf Z}({{\mathbf\theta}}_0)}}} &=& \langle \exp\left[ - ({{\mathbf\theta}}-{{\mathbf\theta}}_0){\bf \cdot} {\bf f}({{\bf\sigma}})\right] \rangle_{{{\mathbf\theta}}_0}\\ &\approx& {\bigg\langle} \exp\left[ - ({{\mathbf\theta}}-{{\mathbf\theta}}_0){\bf \cdot} {\bf f}({{\bf\sigma}})\right] {\bigg\rangle}_{MC{{\mathbf\theta}}_0} . \end{eqnarray} As a check on this approximation, we can exchange the roles of ${{\mathbf\theta}}$ and ${{\mathbf\theta}}_0$, \begin{equation} {{{\mathbf Z}({{\mathbf\theta}}_0)}\over{{{\mathbf Z}({{\mathbf\theta}})}}} \approx {\bigg\langle} \exp\left[ + ({{\mathbf\theta}}-{{\mathbf\theta}}_0){\bf \cdot} {\bf f}({{\bf\sigma}})\right] {\bigg\rangle}_{MC{{\mathbf\theta}}} , \end{equation} and test for consistency between the two results. Finally, to compare the performance of the two optimization algorithms, we withhold some fraction of the data, here chosen to be 10\%, for testing. \begin{figure*}[bt] \begin{center} \includegraphics[width=0.45\linewidth]{c2_compare_highH_no8.pdf} \hfill \includegraphics[width=0.45\linewidth]{c2_compare_s.pdf} \end{center} \caption{\label{fig:c2_compare} (Left) Performance of our algorithm on the empirical data according to the absolute correlation difference metric, as a function of the number of iterations of the optimization algorithm and the number of Monte Carlo samples. The algorithm was terminated after the running time exceeded 300 seconds. Choosing too many or too few iterations per stage decreases efficiency (due to overlearning or inefficient use of samples, respectively). (Right) Performance of our algorithm on the empirical data according to the absolute correlation difference metric, as a function of the number of examples generated by the Monte Carlo sampler. The algorithm was terminated after the running time exceeded 300 seconds. Also displayed is the total number of iterations completed by the time cutoff. Since generating samples is computationally expensive, drawing more samples per stage leaves time for fewer optimization iterations by the cutoff. Choosing too many or too few Monte Carlo samples per stage decreases efficiency, either because the algorithm samples for too long and terminates before enough learning stages can be completed, or because the sampling is too sparse and expectations are poorly estimated.} \end{figure*} \fig{fig:loss_compare} illustrates the performance of the two algorithms on the synthetic dataset, measured by $\Delta\mathrm{L}$. For simplicity, we have held the number of Monte Carlo samples at each stage, $M =3.2\times 10^5$, fixed. Choosing $T=1$ iteration per stage corresponds to a naive use of Monte Carlo, with a new simulation for each new setting of the parameters, and we see that this approach converges very slowly, as found in Ref \cite{tkacik+al_06}. Simply increasing this number to $T=20$ produces at least an order-of-magnitude speedup in convergence. \subsection{Real data} Here we consider the problem of constructing the maximum entropy distribution consistent with the correlations observed in real data. Our example is from Refs \cite{schneidman+al_06,tkacik+al_06}, the responses of $N=40$ retinal neurons.
As noted at the outset, if we divide time into small slices of duration $\Delta\tau$ then each cell either does or does not generate an action potential, corresponding to a binary or Ising variable. The experiments of Ref \cite{schneidman+al_06}, analyzed with $\Delta\tau=0.02\,{\rm s}$, provide $m=189,950$ samples of the state ${\bf\sigma}$. We note that spikes are rare, and pairwise correlations are weak, as discussed at length in Ref \cite{schneidman+al_06}. As we try to find the parameters ${\mathbf\theta}$ that describe these data, we need a metric to monitor the progress of our algorithm; of course we don't have access to the true distribution. Since we are searching in the space of Gibbs distributions, which automatically have the correct functional form to maximize the entropy, we check the quality of agreement between the predicted and observed expectation values. It is straightforward to ensure that the model reproduces the one--body averages $\langle \sigma_{\rm i}\rangle$ almost exactly. Thus we focus on the (connected) two--body averages $C_{\rm ij} = \langle \sigma_{\rm i}\sigma_{\rm j}\rangle - \langle \sigma_{\rm i}\rangle\langle \sigma_{\rm j}\rangle$. We compute these averages via Monte Carlo in the distribution $Q_{{\mathbf\theta}}({{\bf\sigma}})$, then form the absolute difference between computed and observed values of $C_{\rm ij}$, and finally average over all pairs $\rm ij$. The resulting average error $\Delta C$ measures how close we come to solving the maximum entropy problem. Note that since the observed values of $C_{\rm ij}$ are based on a finite set of samples, we don't expect that they are exact, and hence we shouldn't identify convergence with $\Delta C \rightarrow 0$. Instead we divide the real data in half and measure $\Delta C$ between these halves, and use this as the `finish line' for our algorithms. In Fig \ref{fig:c2_compare} we see the behavior of $\Delta C$ as a function of the number of iterations per stage ($T$), where we terminate the algorithm after 300 seconds of running time on our local cluster \cite{fafner}. As expected intuitively, both small $T$ and large $T$ perform badly, but there is a wide range $T\sim 30-200$ in which the algorithm reaches the finish line within the allotted time. This performance also depends on $M$, the number of Monte Carlo samples per stage, so that again there is a range of $M\sim 1-3\times 10^5$ that seems to work well. In effect, when we constrain the total run time of the algorithm there is a best way of apportioning this time between stages (in which we run new Monte Carlo simulations) and iterations (in which we adjust parameters using estimates based on a fixed set of Monte Carlo samples). By working within a factor of two of this optimum, we achieve a substantial speedup of the algorithm, or an improvement in convergence at fixed run time. \section{Conclusion} Our central conclusion is that recycling of Monte Carlo samples, as in the histogram Monte Carlo method \cite{ferrenberg+swendsen_88}, provides a substantial speedup in the solution of the inverse Ising problem. In detail, some degree of recycling speeds up the computations, while of course too much recycling renders the underlying approximations invalid, so that there is some optimal amount of recycling; fortunately it seems from Fig \ref{fig:c2_compare} that this optimum is quite broad.
\section{Conclusion} Our central conclusion is that recycling of Monte Carlo samples, as in the histogram Monte Carlo method \cite{ferrenberg+swendsen_88}, provides a substantial speedup in the solution of the inverse Ising problem. In detail, some degree of recycling speeds up the computations, while of course too much recycling renders the underlying approximations invalid, so that there is some optimal amount of recycling; fortunately it seems from Fig \ref{fig:c2_compare} that this optimum is quite broad. We expect that these basic results will be true more generally for problems in which we have to learn the parameters of probabilistic models to provide the best match to a large body of data. In the specific context of Ising models for networks of real neurons, the experimental state of the art is now providing data on populations of neurons which are sufficiently large that these issues of algorithmic efficiency become crucial. \begin{acknowledgments} We thank MJ Berry II, E Schneidman and R Segev for helpful discussions and for their contributions to the work which led to the formulation of the problems addressed here. This work was supported in part by NIH Grant P50 GM071508, and by NSF Grants IIS--0613435 and PHY--0650617. \end{acknowledgments}
\section{Introduction} \label{intro} The electron--phonon\,(EP) coupling is well known to play a key role in several physical phenomena. For example it affects the renormalization of the electronic bands\cite{allen1983}, the carrier mobility in organic devices\cite{GosarChoi1966} or the position and intensity of Raman peaks\cite{attaccalite2010}. The EP coupling\, is also the driving force that causes exciton dissociation at the donor/acceptor interface in organic photovoltaics\cite{Tamura2008} and the transition to a superconducting phase in solids\cite{supercond}. Despite the development of more powerful and efficient computational resources, the calculation of the effects induced by the EP coupling in realistic materials remains a challenging task. In addition to the numerical difficulties, it has been assumed, for a long time, that this interaction can yield only minor corrections (of the order of meV) to the electronic levels. As a consequence the majority of the {\it ab-initio}\,\,simulations of the electronic and optical properties of a wide class of materials are generally performed by keeping the atoms frozen in their crystallographic positions. It is actually well--known that phonons are atomic vibrations and, as such, can be easily populated by increasing the temperature. This naive observation is {\em de-facto} used to identify the effect of the EP coupling with a temperature effect that vanishes as the temperature goes to zero. However this is not correct, as the atoms possess an intrinsic spatial indetermination due to their quantum nature that is independent of the temperature. These quantum oscillations are taken into account by the EP coupling when $T\rar 0$ in the form of a zero--point--motion effect. Many years ago\cite{Cardona2006} Heine, Allen and Cardona (HAC) pointed out that the EP coupling can induce corrections of the electronic levels as large as those induced by the electronic correlation. As a consequence the generally accepted statement that the EP coupling always yields minor corrections was doomed to fail. Nowadays the advent of more refined numerical techniques has made it possible to ground the HAC approach in a fully {\it ab-initio}\,\,framework. This has been used to compute the gap renormalization in carbon--nanotubes\cite{capaz2005}, the finite temperature optical properties of semiconductors and insulators \cite{marini2008}, and to confirm a large zero--point renormalization ($615$ meV) of the band--gap of bulk diamond \cite{Giustino2010}, previously calculated by Zollner using semi--empirical methods\cite{Zollner1992}. These works are calling into question decades of results, by raising the suspicion that a solely electronic theory may be inadequate. In this work we show that in nano--structures one of the approximations most commonly used in the electronic theories, the quasi--particle (QP) approximation\cite{landauqp}, is seriously questioned by the effect of the EP coupling. Indeed in most electronic systems characterized by a moderate internal correlation, the electrons are believed to occupy well defined energy levels characterized by a precise energy, width and wave--function. The QP picture pictorially represents the effect of the correlation on these states as a cloud of electron--hole (in the case of electron--electron coupling) or electron--phonon (in the case of the EP coupling) pairs which renormalizes the energy and the width of the electronic level, also reducing its effective electronic charge.
The breakdown of the QP picture caused by the EP coupling\, has already been predicted in the case of superconductors by Scalapino et al. \cite{scalapino} and in complex metallic surfaces by Eiguren et al.\cite{claudia_prl}. More recently we have shown\cite{cannuccia} a strong renormalization of the electronic properties of diamond and {\it trans}-polyacetylene caused by the EP coupling in the zero temperature limit. In this paper we will extend our previous work\cite{cannuccia}, by providing more methodological and technical details of the dynamical theory we have previously used. We will also apply the same method to another polymer, polyethylene, finding a severe breakdown of the QP picture. The analysis of the polyethylene\, results will confirm and strengthen the general conclusions that we drew regarding the enormous impact of the electron--phonon coupling in carbon based nano--structures. In sections \ref{sec:MBPT} and \ref{sec:DW} we will review the derivation of the fully frequency dependent self-energy by using many-body perturbation theory. The HAC theory will then be recovered as the static and adiabatic limit of the dynamical theory. In sections \ref{sec:DynamicalSEeffects} and \ref{sec:brakdownQPapprox} we will discuss how the structures appearing in the spectral functions of {\it trans}--polyacetylene\,\,and polyethylene\,\,rule out the basic assumptions of the HAC approach, imposing the use of a fully dynamical theory. In section\,\ref{sec:frohlich} we will show how the problem can be mapped into the solution of a fictitious Hamiltonian that makes it possible to define the polaronic states as complex electron--phonon packets. Finally, in the conclusions, we will point out that these results represent an important step forward in the simulation of nanostructures, with a wealth of possible implications in the development of more refined theories for the electronic and atomic dynamics. \section{A dynamical approach to the electron--phonon problem} \label{sec:MBPT} We start from the generic form of the total Hamiltonian of the system, which we divide into an electronic ($\widehat {H}_{el}$), an atomic ($\widehat{H}_{at}$) and an electron--atom part ($\widehat{H}_{el-at}$): \begin{align} \widehat{H}=\widehat{H}_{el}+\widehat{H}_{at}+\widehat{H}_{el-at}. \label{eq:sec_MBPT_1} \end{align} The Hamiltonian $\widehat H$ admits both electronic and vibrational states that are coupled by $\widehat H_{el-at}$. In this work Density Functional Theory\,(DFT)\cite{R.M.Dreizler1990} is used to calculate the eigenstates of $\overline{\widehat{H}}$, where we use the notation $\overline{O}$ to indicate a quantity or an operator that is evaluated with the atoms frozen in their equilibrium crystallographic positions. Similarly the vibrational states of the Hamiltonian $\widehat H$ are described, fully {\it ab-initio}, by using the well--known extension of DFT, Density Functional Perturbation Theory\,(DFPT)\cite{baroni2001,Gonze1995}. In DFPT the electronic correlations are embodied in a self--consistent mean potential $\widehat V_{scf}$ representing the {\it total} electronic potential, which depends on the atomic positions ${\bf R}_{Is}\equiv {\bf R}_I+\tau_s$: \begin{align} \widehat{H}_{el-at}=\int_{crystal}\,d{\bf r}\, \hat{\rho}\({\bf r}\) \widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\). \label{eq:sec_MBPT_2} \end{align} In the definition of ${\bf R}_{Is}$, $I$ and $s$ label the lattice cell (at position ${\bf R}_I$) and the atoms in the cell (at position $\tau_s$), respectively.
In Eq.(\ref{eq:sec_MBPT_2}) $\hat{\rho}$ is the electron density operator. In this paper we are interested in how to properly include the modifications of the electronic levels induced by the atomic vibrations. In particular, by assuming the harmonic approximation for the phonons, we will develop a dynamical theory of the electronic motion. To this end we follow a purely diagrammatic approach\cite{mahan} to present a short but accurate review of the derivation of the Fan\cite{fan1951} self--energy and of the much less known Debye--Waller (DW) correction\cite{marini_2012}. If we now consider a configuration of lattice displacements $\hat{\bf u}_{Is}$, $\widehat H$ can be expressed as a Taylor expansion \begin{multline} \widehat{H}-\overline{\widehat{H}} = \sum_{I s \alpha} \overline{\frac{\partial \widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\)}{\partial{R_{Is\alpha}}}} \hat{u}_{I s \alpha}+\\+\frac{1}{2} \sum_{I s \alpha, J s' \beta} \overline{\frac{\partial^2 \widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\)}{\partial{R_{Is\alpha}}\partial{R_{Js'\beta}} }} \hat{u}_{I s \alpha}\hat{u}_{J s' \beta}, \label{eq:sec_diagrams_1} \end{multline} where $\alpha$ and $\beta$ are the Cartesian coordinates. The link with the perturbative expansion is readily made by transforming Eq.\,(\ref{eq:sec_diagrams_1}) from the space of the lattice displacements to the space of the canonical lattice vibrations by means of the identity\cite{mattuck}: \begin{multline} \hat{u}_{I s \alpha}=\sum_{{\bf q} \lambda} \(2 N_q M_s \omega_{{\bf q} \lambda}\)^{-1/2} \xi_{\alpha}\({\bf q} \lambda|s\) e^{i {\bf q}\cdot\({\bf R}_I+\tau_s\)}\times\\ \times\(\hat{b}^{\dagger}_{-{\bf q} \lambda}+\hat{b}_{{\bf q} \lambda}\), \label{eq:sec_diagrams_2} \end{multline} where $N_q$ is the number of cells (or, equivalently, the number of ${\bf q}$--points) used in the simulation and $M_s$ is the atomic mass of the $s$ atom in the unit cell. $\xi_{\alpha}\({\bf q} \lambda|s\)$ is the phonon polarization vector and $\hat{b}^{\dg}_{-\qq \gl}$ and $\hat{b}_{\qq \gl}$ are the bosonic creation and annihilation operators. By inserting Eq.\,(\ref{eq:sec_diagrams_2}) into Eq.\,(\ref{eq:sec_diagrams_1}) we get \begin{multline} \widehat{H}-\overline{\widehat{H}} = \frac{1}{\sqrt{N_q}}\sum_{{\bf k} n n^{'} {\bf q} \lambda} g^{\qq \gl}_{n n' {\bf k}} \hat{c}^{\dagger}_{n{\bf k}} \hat{c}_{n'{\bf k}-{\bf q}} \(\hat{b}^{\dagger}_{-{\bf q} \lambda} +\hat{b}_{{\bf q} \lambda} \)+\\ +\frac{1}{N_q}\sum_{n n^{'}{\bf k}}\sum_{{\bf q} \lambda, {\bf q}' \lambda'} \Lambda^{\qq \gl,{\bf q}'\lambda'}_{nn' {\bf k}} c^{\dagger}_{n{\bf k}} c_{n'{\bf k}-{\bf q}-{\bf q}'} \times\\ \times\(\hat{b}^{\dagger}_{-{\bf q} \lambda} +\hat{b}_{{\bf q} \lambda} \) \(\hat{b}^{\dagger}_{-{\bf q}' \lambda'} +\hat{b}_{{\bf q}' \lambda'} \). \label{eq:sec_diagrams_3} \end{multline} In Eq.\,(\ref{eq:sec_diagrams_3}) we have introduced the first--order ($g^{\qq \gl}_{n' n {\bf k}}$) and the second--order ($\Lambda^{\qq \gl,{\bf q}'\lambda'}_{n' n {\bf k}}$) electron--phonon matrix elements, which will be shortly defined. For this purpose we rewrite $\widehat V_{scf}$ making explicit its dependence on the atomic positions: \begin{align} \widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\)=\sum_{Is} \widehat{V}_{scf} \({\bf r}-{\bf R}_{Is}\).
\label{eq:sec_diagrams_3a} \end{align} From Eq.\,(\ref{eq:sec_diagrams_3a}) it follows that the second order derivatives in the atomic positions are diagonal, $\frac{\partial^2}{\partial {\bf R}_{Is} \partial {\bf R}_{Js'}}\widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\)\propto \delta_{IJ}\delta_{ss'} \frac{\partial^2}{\partial {\bf R}^2_{Is}}\widehat{V}_{scf} \[\{{\bf R}\}\]\({\bf r}\) $. By using Eq.\,(\ref{eq:sec_diagrams_3a}), the summation on ${\bf R}_{I}$ appearing in Eq.\,(\ref{eq:sec_diagrams_1}) leads to momentum conservation both in the first and in the second order terms. At first order this leads to the definition of the electron--phonon matrix elements \begin{multline} g^{\qq \gl}_{n n' {\bf k}}=\sum_{s \alpha} \(2 M_s \omega_{\qq \gl}\)^{-1/2} e^{i{\bf q}\cdot\tau_s} \times \\ \times \langle n{\bf k} |\frac{\partial \widehat{V}_{scf} ^{\(s\)}\({\bf r}\)}{\partial{R_{s\alpha}}} | n' {\bf k}-{\bf q} \rangle \xi_{\alpha}\({\bf q} \lambda|s\), \label{eq:sec_diagrams_4} \end{multline} where we have used the short form $R_{s \alpha}=\left. R_{Is\alpha}\right|_{I=0}$. A similar derivation can be followed to obtain the 2$^{nd}$ order term \begin{multline} \Lambda^{\qq \gl,{\bf q}'\lambda'}_{n n' {\bf k}}= \frac{1}{2}\sum_{s}\sum_{\alpha,\beta} \frac{ \xi^{*}_{\alpha}\({\bf q} \lambda|s\) \xi_{\beta}\({\bf q}' \lambda'|s\)} {2M_s\(\omega_{\qq \gl} \omega_{{\bf q}' \lambda'} \)^{1/2}} \times \\ \times \langle n{\bf k} |\frac{\partial^2 \widehat{V}_{scf} ^{\(s\)}\({\bf r}\)}{\partial{R_{s\alpha}}\partial{R_{s\beta}}} | n' {\bf k}-{\bf q}-{\bf q}' \rangle. \label{eq:sec_diagrams_5} \end{multline} This second--order term is, in general, neglected, as it is assumed to be small compared to the first--order term. Although this is correct at the level of the Hamiltonian, it is no longer true even at the lowest order of perturbation theory. Indeed the different terms in the Taylor expansion of the Hamiltonian defined in Eq.\,(\ref{eq:sec_diagrams_3}) induce a wealth of diagrams of increasing complexity and order. If we restrict ourselves to the lowest non--vanishing order we have two diagrams: the Fan\cite{fan1950} and the DW one. These are represented by diagrams $(a)$ and $(b)$ in Fig.\,(\ref{fig:sec_diagrams_1}). In the same figure two fourth order (in the displacements) diagrams, ($c$) and ($d$), are also shown. They are of the same order and, like the Fan and DW diagrams, they result from the perturbative treatment of the first order and second order terms in Eq.\,(\ref{eq:sec_diagrams_3}). The actual calculation of the Fan and DW diagrams is straightforward. The Fan diagram is similar to the one generated by the electronic correlation in the so-called GW approximation\cite{strinati}, where the screened electronic interaction is replaced by a phonon propagator of wave vector ${\bf q}$ and branch $\lambda$\cite{mahan}.
\begin{figure}[h] \begin{center} \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1a.eps,width=3cm}\\ ${\cal G}_{n {\bf k}}^{(0)}(\omega_{n})$ \end{center} } \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1b.eps,width=3cm}\\ ${\cal D}_{{\bf q} \lambda}^{(0)}(\omega_{j})$ \end{center} }\\ \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1c.eps,width=4cm}\\ \text{(a) 2$^{nd}$ order $\Sigma^{Fan}$} \end{center} } \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1d.eps,width=3cm}\\ \text{(b) 2$^{nd}$ order $\Sigma^{DW}$} \end{center} }\\ \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1e.eps,width=4cm}\\ \text{(c) 4$^{th}$ order $\Sigma^{Fan}$} \end{center} } \parbox[c]{4cm}{ \begin{center} \epsfig{figure=Fig1f.eps,width=3cm}\\ \text{(d) 4$^{th}$ order $\Sigma^{DW}$} \end{center} } \end{center} \caption{\footnotesize{ The self-energy diagrams corresponding to the first and second order terms in the Taylor expansion of $\widehat{H}-\overline{\widehat{H}}$ (see Eq.\,(\ref{eq:sec_diagrams_1})), treated at different orders of the perturbative expansion. For example the well--known Fan self-energy is formally obtained as a 2$^{nd}$ order expansion of the first order term in Eq.\,(\ref{eq:sec_diagrams_1}). However the second term of Eq.\,(\ref{eq:sec_diagrams_1}), treated at first order, gives the 2$^{nd}$ order $\Sigma^{DW}$, which is of the same order as the Fan term and, consequently, cannot be neglected. The diagram (c) is obtained as a $4^{th}$ order expansion of the first order term in Eq.\,(\ref{eq:sec_diagrams_1}), while (d) comes from the second term of Eq.\,(\ref{eq:sec_diagrams_1}) treated at the $2^{nd}$ order. }} \label{fig:sec_diagrams_1} \end{figure} Applying the finite temperature diagrammatic rules it is possible to define the Fan self-energy operator $\Sigma^{Fan}_{n{\bf k}}\(i\omega_{i},T\)$, recovering the expression originally evaluated by Fan\cite{fan1950}: \begin{multline} \Sigma^{Fan}_{n{\bf k}}\(i\omega_{i},T\)=-\frac {1}{\gb} \frac {1}{N_{q}} \sum_{{\bf q} \lambda} \sum_{n'} {\mid g^{\gql}_{n n' \kk} \mid}^2 \times\\ \times\sum^{+\infty}_{j=-\infty} D^{(0)}_{\qq \gl}\(i\omega_{j}\) G_{n'{\bf k}-{\bf q}}^{(0)}\(i\omega_{i}-i\omega_{j}\), \label{eq:sec_diagrams_7} \end{multline} where $\beta=\frac{1}{k_B T}$ ($k_B$ is the Boltzmann constant) and $T$ is the temperature of the phonon bath. By using the standard definitions of the electronic and phononic Green's functions, $G_{n'{\bf k}}^{(0)}\(i\omega_{i}\)=\(i\omega_{i}-\varepsilon_{n'{\bf k}}+\mu\)^{-1}$ (where $\mu$ is the chemical potential) and $D_{\qq \gl}^{(0)}\(i\omega_{j}\)=\(\(i\omega_{j}-\omega_{\qq \gl}\)^{-1}-\(i\omega_{j}+\omega_{\qq \gl}\)^{-1}\)$, and summing over the Matsubara frequencies, we get the final expression for the Fan self--energy \begin{multline} \Sigma^{Fan}_{n{\bf k}}\(i\omega,T\) = \sum_{n'\qq \gl} \frac {{\mid g^{\gql}_{n n' \kk} \mid}^2}{N_q} \times \\ \times\[ \frac{N_{{\bf q}\lambda}\(T\)+1-f_{n'{\bf k}-{\bf q}}}{i\omega-\varepsilon_{n' {\bf k}-{\bf q}} -\omega_{\qq \gl} -i0^{+}} \right. + \\ + \left. \frac{N_{{\bf q}\lambda}\(T\)+f_{n' {\bf k}-{\bf q}}}{i\omega-\varepsilon_{n' {\bf k}-{\bf q}}+\omega_{\qq \gl} -i0^{+}}\], \label{eq:sec_diagrams_10} \end{multline} where $N_{{\bf q}\lambda}\(T\)$ is the Bose distribution function of the phonon mode $\({\bf q},\lambda\)$ at temperature $T$ and $f_{n'{\bf k}-{\bf q}}$ is the electronic occupation of the state $\mid n' {\bf k}-{\bf q}\rangle$. A similar expression can be derived for the frequency independent DW self--energy $\Sigma_{n\kk}^{DW}$.
This term comes from the equal time contractions of the $\(\hat{b}^{\dagger}_{-{\bf q} \lambda} +\hat{b}_{{\bf q} \lambda} \) \(\hat{b}^{\dagger}_{-{\bf q}' \lambda'} +\hat{b}_{{\bf q}' \lambda'} \)$ operators: \begin{multline} \langle \(\hat{b}^{\dagger}_{-{\bf q} \lambda} +\hat{b}_{{\bf q} \lambda} \) \(\hat{b}^{\dagger}_{-{\bf q}' \lambda'} +\hat{b}_{{\bf q}' \lambda'} \) \rangle =\\ \delta_{-{\bf q},{\bf q}'}\delta_{\lambda,\lambda'} \[N_{{\bf q}'\lambda}\(T\)+N_{{\bf q}\lambda}\(T\)+1\]. \label{eq:sec_diagrams_11} \end{multline} The corresponding diagram ($b$) in Fig.\,\ref{fig:sec_diagrams_1} can be easily found to be \begin{align} \Sigma^{DW}_{n\kk}\(T\)=\frac{1}{N_q}\sum_{\qq \gl} \Lambda^{{\bf q}\lambda,-{\bf q}\lambda}_{n n {\bf k}} \(2 N_{{\bf q}\lambda}\(T\) +1\). \label{eq:sec_diagrams_12} \end{align} Both the Fan and DW self--energies have already been derived in the framework of the Heine--Allen--Cardona\,(HAC) theory~\cite{allen1976,allen1983,Cardona2006}. The HAC approach is based on static Rayleigh-Schr\"{o}dinger perturbation theory. More precisely the $\hat{u}_{Is\alpha}$ are used as scalar variables on which a static perturbation theory is applied. As we will mention in Sec.\,\ref{sec:DW}, the second--order derivatives appearing in the definition of the DW term can be rewritten in terms of the first--order derivatives by imposing the translational invariance of the correction to the electronic levels. On the other hand the Fr\"{o}hlich and the Holstein Hamiltonians usually neglect the DW term (diagram ($b$) in Fig.\,\ref{fig:sec_diagrams_1}), even if it is of the same order as the Fan term. The many body formulation represents the dynamical extension of the HAC approach, which is recovered from Eq.\,(\ref{eq:sec_diagrams_10}) by using $\omega\approx \varepsilon_{n{\bf k}}$ (the on--the--mass--shell\,(OMS) limit) and $\left|\varepsilon_{n{\bf k}}-\varepsilon_{n' {\bf k}-{\bf q}}\right|\gg \omega_{{\bf q}\lambda}$ (the adiabatic limit) and by considering only the real part of the self--energy. It turns out, therefore, that in the HAC approach the temperature dependent change in the single--particle energies is given by \begin{align} \Delta \varepsilon^{HAC}_{n{\bf k}}\(T\)=\Sigma^{DW}_{n\kk}(T)+ \sum_{n'\qq \gl} \frac {{\mid g^{\gql}_{n n' \kk} \mid}^2}{N_q} \frac{2 N_{{\bf q}\lambda}\(T\)+1}{\varepsilon_{n{\bf k}}-\varepsilon_{n' {\bf k}-{\bf q}} }. \label{eq:sec_diagrams_13} \end{align}
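Once the electron--phonon matrix elements, the phonon frequencies and $\Sigma^{DW}_{n\kk}$ are tabulated, Eq.\,(\ref{eq:sec_diagrams_13}) reduces to a direct sum. A minimal numerical sketch (in Python; the flattened array layout and all names are our own illustration) is:
\begin{verbatim}
import numpy as np

def bose(w, T, kB=8.617333e-5):
    # Bose occupation N(w,T); energies in eV, T in K; N = 0 at T = 0.
    return np.zeros_like(w) if T == 0.0 else 1.0 / np.expm1(w / (kB * T))

def hac_shift(eps_nk, eps_int, g2, w_ph, sigma_dw, Nq, T):
    # Static HAC correction (label eq:sec_diagrams_13):
    #   Sigma_DW(T) + (1/Nq) sum |g|^2 (2N+1) / (eps_nk - eps_{n'k-q}),
    # with eps_int, g2, w_ph flattened over the (n', q, lambda) sums.
    N = bose(w_ph, T)
    return sigma_dw + np.sum(g2 * (2.0 * N + 1.0) / (eps_nk - eps_int)) / Nq
\end{verbatim}
At $T=0$ the $2N_{{\bf q}\lambda}\(T\)+1$ factor reduces to $1$, which is the zero--point--motion contribution discussed in the Introduction.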
The soundness of the HAC approach is then, from a MBPT perspective, connected to the validity of the on--the--mass--shell and of the adiabatic approximations. We will prove in Sec.\,\ref{sec:DynamicalSEeffects} that these approximations are not always well motivated. Dynamical and non--adiabatic corrections can be so large as to invalidate the applicability of the HAC approach. \section{Second order derivatives of $\widehat V_{scf}$ and the actual calculation of the Debye Waller self-energy } \label{sec:DW} The general expressions, Eq.\,(\ref{eq:sec_diagrams_3}) and Eq.\,(\ref{eq:sec_diagrams_5}), for the perturbed Hamiltonian require the knowledge of the second--order gradients of the self--consistent potential \begin{align} \Delta^{s\alpha\beta}_{n'{\bf p},n{\bf k}}= \langle n'{\bf p} |\frac{\partial^2 \widehat{V}_{scf} ^{\(s\)}\({\bf r}\)}{\partial{R_{s\alpha}}\partial{R_{s\beta}}} | n {\bf k} \rangle, \label{eq:sec_DW_1} \end{align} where ${\bf p}$ here replaces the ${\bf k}+{\bf q}+{\bf q}'$ vector appearing in Eq.\,(\ref{eq:sec_diagrams_3}) and Eq.\,(\ref{eq:sec_diagrams_5}). These terms are extremely cumbersome to calculate and the task easily becomes prohibitive when higher orders are included. Their evaluation is, however, crucial because the ${\bf k}={\bf p}$ case is needed to calculate the lowest--order DW self-energy, while the finite momenta matrix element defines higher order diagrams (like diagram $(d)$ in Fig.\,(\ref{fig:sec_diagrams_1})). In the case of the simpler $\Sigma^{DW}_{n\kk}$ (diagram $(b)$ in Fig.\,(\ref{fig:sec_diagrams_1})) we know, from Eq.\,(\ref{eq:sec_diagrams_5}) and Eq.\,(\ref{eq:sec_DW_1}), that \begin{multline} \Lambda^{{\bf q}\lambda,-{\bf q}\lambda}_{n n {\bf k}}= \frac{1}{2}\sum_{s}\sum_{\alpha,\beta} \frac{ \xi^{*}_{\alpha}\({\bf q} \lambda|s\) \xi_{\beta}\(-{\bf q} \lambda|s\)} {2M_s\omega_{\qq \gl}} \Delta^{s\alpha\beta}_{n{\bf k},n{\bf k}}. \end{multline} In order to evaluate the $\Delta^{s\alpha\beta}_{n'{\bf p},n{\bf k}}$ factors the HAC theory uses the fact that if all atoms were shifted by the same amount no physical quantity would change. In other terms, since $\Delta \varepsilon^{HAC}_{n{\bf k}}\(T\)$ is an explicit functional of the atomic positions (Eq.\,(\ref{eq:sec_diagrams_3a})), which are treated classically, it is possible to impose the following translational invariance condition \begin{align} \Delta \varepsilon^{HAC}_{n{\bf k}}\[\{u_{Is\alpha}\}\]\(T\)=\Delta \varepsilon^{HAC}_{n{\bf k}}\[\{u_{Is\alpha}+d_{\alpha}\}\]\(T\). \label{eq:sec_DW_2} \end{align} From this condition it follows~\cite{allen1976,allen1983,Cardona2006} that in order to calculate the $\Lambda^{{\bf q}\lambda,-{\bf q}\lambda}_{n n {\bf k}}$ that defines the DW self-energy (see Eq.\,(\ref{eq:sec_diagrams_12})) only the matrix element $\Delta^{s\alpha\beta}_{n{\bf k},n{\bf k}}$ is needed. This can be rewritten as \begin{multline} \Delta^{s\alpha\beta}_{n{\bf k},n{\bf k}}= -\sum_{n'\neq n}\frac{1}{\varepsilon_{n{\bf k}}-\varepsilon_{n'{\bf k}}} \times\\ \times \[ \(\sum_{s'} \langle n{\bf k} |\frac{\partial \widehat{V}_{scf} ^{\(s'\)}\({\bf r}\)}{\partial{R_{s'\alpha}}} | n' {\bf k} \rangle\) \langle n'{\bf k} |\frac{\partial \widehat{V}_{scf} ^{\(s\)}\({\bf r}\)}{\partial{R_{s\beta}}} | n {\bf k} \rangle \right. + \\\left. + \langle n{\bf k} |\frac{\partial \widehat{V}_{scf} ^{\(s\)}\({\bf r}\)}{\partial{R_{s\alpha}}} | n' {\bf k} \rangle \(\sum_{s'} \langle n'{\bf k} |\frac{\partial \widehat{V}_{scf} ^{\(s'\)}\({\bf r}\)}{\partial{R_{s'\beta}}} | n {\bf k} \rangle\) \].
\label{eq:sec_DW_3} \end{multline} The condition given by Eq.\,(\ref{eq:sec_DW_2}) is, however, intrinsically ill--defined in the diagrammatic approach: the correction to the energy levels is, in fact, a quantity obtained from the self-energy operator which, in turn, can be defined {\em only} when the displacement operators are quantized and a second quantized form of the Hamiltonian change (Eq.\,(\ref{eq:sec_diagrams_3})) is introduced. This inconsistency can, indeed, be cured in a fully MBPT framework~\cite{marini_2012}, but it requires introducing the constant displacement vector $d_\alpha$ as an operator. This leads to the definition of new kinds of diagrams that depend on powers of $d_\alpha$. By imposing that diagrams of the same order cancel each other it is possible to obtain a general expression for the matrix element $\Delta^{s\alpha\beta}_{n'{\bf p},n{\bf k}}$. As discussed by X. Gonze\cite{gonze}, the local dependence on the atomic positions in Eq.\,(\ref{eq:sec_diagrams_3a}) assumes that the electronic screening of the ionic potential, which defines $\widehat{V}_{scf} $, depends only smoothly on ${\bf R}_{Is}$. By taking fully into account this intrinsic dependence on the atomic positions a correction to the Debye--Waller term, named the non--diagonal Debye--Waller correction, can be defined. This correction has been reported to be important for isolated molecules and atoms\cite{gonze}. Its effect in solids and, more generally, in extended systems is expected to be weakened by the efficient screening properties. \section{Dynamical Self-Energy Effects beyond the Quasi Particle Approximation} \label{sec:DynamicalSEeffects} In section \ref{sec:MBPT} we showed that the HAC theory represents the static and adiabatic limit of the dynamical electron--phonon self-energy. The better the on--the--mass--shell and adiabatic approximations are satisfied, the sounder the applicability of the HAC approach is from a MBPT perspective. We want to prove in this section that these approximations are not always well motivated, and that dynamical and non--adiabatic corrections to the HAC approach cannot be neglected {\it a priori}. The HAC approach is grounded on the concept of a well defined QP state: the charge carriers are assumed to be concentrated on electronic levels, characterized by a well defined energy and wave-function. The QP concept can be firmly introduced in a many-body Green's function theory\,\cite{mahan}, where the definition embodies, at the same time, its limitations, as will become clear in the following. The fully interacting propagator can be written, for real energies $\omega$, in terms of the self-energy (Eq.\,(\ref{eq:sec_diagrams_7})) as \begin{equation} G_{n\kk}\(\omega,T\)=\frac {1}{\omega-\varepsilon_{n\kk}-\Sigma^{Fan}_{n\kk}\(\omega,T\)-\Sigma^{DW}_{n\kk}\(T\)}. \label{eq:Dyson_withSE} \end{equation} The rotation from the imaginary to the real axis has been easily performed by a Wick rotation\cite{mattuck}, as the energy dependence of the Fan self--energy is explicit. The single particle excitations are then the complex poles of Eq.\,(\ref{eq:Dyson_withSE}) \begin{multline} \omega-\varepsilon_{n\kk}-\Sigma^{DW}_{n\kk}\(T\)-\Re \[\Sigma^{Fan}_{n\kk}\(E_{n\kk}\(T\),T\)\]+\\ -i\Im \[\Sigma^{Fan}_{n\kk}\(E_{n\kk}\(T\),T\)\]=0. \label{eq:FindingPoles} \end{multline} As is clear from Eq.\,(\ref{eq:FindingPoles}), a genuine QP state should have a zero line-width, that is, a zero imaginary part of the self-energy.
In practice this is never completely true. Nevertheless, when the frequency dependence of the self-energy is smooth, Eq.\,(\ref{eq:Dyson_withSE}) can be rewritten by using two simple and intuitive approximations: the OMS and the QP approximations. In the specific case of a constant and real self-energy (as in the HAC case) one can introduce the OMS approximation, where the solution of Eq.\,(\ref{eq:FindingPoles}) is given by \begin{align} E_{n\kk}\(T\) = \varepsilon_{n\kk} + \Sigma^{DW}_{n\kk}\(T\) + \Sigma^{Fan}_{n\kk}\(\varepsilon_{n\kk},T\). \label{eq:energy_OMS} \end{align} We notice, from Eqs.\,(\ref{eq:FindingPoles}) and (\ref{eq:energy_OMS}), that $E_{n\kk}\(T\)$ is complex. Even in the case where the self-energy is not constant, if the bare energy $\varepsilon_{n\kk}$ is far from a pole of the self-energy, then $\Sigma^{Fan}_{n\kk}\(\omega,T\)$ can be Taylor expanded, up to first order, around $\varepsilon_{n\kk}$: \begin{multline} E_{n\kk}\(T\) = \varepsilon_{n\kk}+\Sigma^{DW}_{n\kk}\(T\)+\Sigma^{Fan}_{n\kk}\(\varepsilon_{n\kk},T\)+\\ +\left. \frac{\partial \Sigma^{Fan}_{n\kk}\(\omega,T\)}{\partial \omega}\right|_{\omega=\varepsilon_{n\kk}}\(E_{n\kk}\(T\)-\varepsilon_{n\kk}\). \label{eq:QP_newton} \end{multline} Eq.\,(\ref{eq:QP_newton}) corresponds to the QP approximation. The bare energy is then renormalized because of the virtual scatterings, which are described by the real part of the self-energy. This renormalization is easily described by the solution of Eq.\,(\ref{eq:QP_newton}): \begin{multline} E_{n\kk}\(T\) = \varepsilon_{n\kk} + \\+ Z_{n\kk}\(T\) \[\Sigma^{Fan}_{n\kk}\(\varepsilon_{n\kk},T\)+\Sigma^{DW}_{n\kk}\(T\)\], \label{eq:QP_energy} \end{multline} with $Z_{n\kk}\(T\) = \(1-\left. \frac{\partial \Sigma^{Fan}_{n\kk}\(\omega,T\)}{\partial \omega}\right|_{\omega=\varepsilon_{n\kk}}\)^{-1}$ the renormalization factor. From Eq.\,(\ref{eq:QP_energy}) it is evident that $E_{n\kk}$ is complex and its imaginary part $\Gamma_{n\kk}\(T\)=\Im\[E_{n\kk}\(T\)\]$, the QP line-width, is proportional to $\Im \[ Z_{n\kk}\(T\)\Sigma^{Fan}_{n\kk} \(\varepsilon_{n\kk},T\)\]$. A small $\Gamma_{n\kk}\(T\)$ indicates a stable QP, which slowly decays because of real scatterings with the other particles and with the phonon modes. By assuming the QP approximation to be valid, Eq.\,(\ref{eq:Dyson_withSE}) can be re-written as $G_{n\kk}\(\omega,T\)=Z_{n\kk}\(T\)\(\omega-E_{n\kk}\(T\)\)^{-1}$.
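As an illustration, the QP solution of Eq.\,(\ref{eq:QP_energy}) can be evaluated from a self--energy tabulated on a frequency grid. A minimal sketch (in Python; the names and the finite--difference derivative are our own illustrative choices) is:
\begin{verbatim}
import numpy as np

def qp_energy(eps, sigma_fan, sigma_dw, omega):
    # QP approximation (label eq:QP_energy):
    #   E = eps + Z [Sigma_Fan(eps) + Sigma_DW],
    #   Z = 1 / (1 - dSigma_Fan/domega at omega = eps).
    # sigma_fan: complex array tabulated on the grid 'omega'.
    s = (np.interp(eps, omega, sigma_fan.real)
         + 1j * np.interp(eps, omega, sigma_fan.imag))
    ds = np.gradient(sigma_fan, omega)        # numerical derivative
    dse = (np.interp(eps, omega, ds.real)
           + 1j * np.interp(eps, omega, ds.imag))
    Z = 1.0 / (1.0 - dse)
    return eps + Z * (s + sigma_dw), Z
\end{verbatim}
When the self--energy oscillates rapidly near $\varepsilon_{n\kk}$, as in the cases discussed below, the derivative, and hence $Z_{n\kk}$, becomes ill--behaved; this is precisely the regime where the QP approximation breaks down.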
\begin{figure} [h] \begin{center} \parbox[c]{7cm}{ \begin{center} \epsfig{figure=Fig2a.eps,width=7.0cm,clip=,bbllx=28,bblly=200,bburx=715,bbury=530} \end{center} }\\[-.5cm] \parbox[c]{7cm}{ \begin{center} \epsfig{figure=Fig2b.eps,width=7.0cm,clip=,bbllx=28,bblly=200,bburx=715,bbury=530} \end{center} } \end{center} \vspace{-.2cm} \caption{\footnotesize{{\it Trans}--polyacetylene. Spectral function (upper frame) and self--energy (lower frame) corresponding to the state $\mid n=1,{\bf k}=\Gamma \rangle$. In the self--energy frame both the real (solid line) and imaginary (dashed line) parts are shown. The three arrows represent the bare electronic energy $\varepsilon_{n\kk}$ and the two solutions ($E^{\(1\)}_{n\kk}$,$E^{\(2\)}_{n\kk}$) of Eq.\,(\ref{eq:FindingPoles}). The thin solid straight line represents instead the function $\omega-\varepsilon_{n\kk}-\Sigma_{n\kk}^{DW}$. As the imaginary part of the self--energy shows a clear, intense and wide peak at around $-16.5$\,eV (i.e. very close to $\varepsilon_{n\kk}$), the real part is dominated by a rapid oscillation that cannot be captured at all by the linearization of the energy dependence and causes the appearance of two solutions of Eq.\,(\ref{eq:FindingPoles}).}} \label{fig:SF_b1k1} \end{figure} Angle Resolved Photoemission Spectroscopy (ARPES) provides a definitive tool to verify if the QP approximation is accurate. A true QP, if it existed, would appear as a peak in the photoemission spectrum with a Lorentzian lineshape. Indeed one finds that the spectral function\,(SF) $A_{n\kk}\(\omega,T\)\equiv\pi^{-1}\mid\Im\[G_{n\kk}\(\omega,T\)\]\mid$ is given, in the QP approximation and in the simple case of a purely real $Z_{n\kk}$, by \begin{align} A^{\(qp\)}_{n{\bf k}}\(\omega,T\)=\frac{Z_{n\kk}\(T\)|\Gamma_{n\kk}\(T\)|}{\pi\[\(\omega-\Re\[E_{n\kk}\(T\)\]\)^2+\Gamma^2_{n\kk}\(T\)\]}. \label{eq:SQ_qp} \end{align} In this case the QP energy and width give the peak position and the spectral peak width. It is worth noticing that a complex value of $Z_{n\kk}\(T\)$ would cause the spectral function to have an asymmetric lineshape. The SF gives a physical interpretation and a clear validation of the QP approximation. Indeed $A^{\(qp\)}_{n{\bf k}}\(\omega,T\)$ is the probability of finding an electron in the state $n\kk$ with energy $\omega$, and the total electronic charge associated with the QP state is $Z_{n\kk}\(T\)$, which corresponds to the integral of $A^{\(qp\)}_{n\kk}\(\omega,T\)$. The renormalization factor represents, therefore, the QP charge. When $Z_{n\kk}\(T\)=1$ and $\Im\[ \Sigma_{n\kk}(\varepsilon_{n\kk},T)\]\rar 0$ the SF reduces to a delta function, the SF of a particle with energy $\Re\[E_{n\kk}\(T\)\]$. It is clear that a direct comparison of $A^{\(qp\)}_{n{\bf k}}\(\omega,T\)$ with the true SF corresponding to a given self--energy, or with the ARPES lineshape, provides the ultimate validation of the QP picture. A paradigmatic example that well explains the basic mechanism for the breakdown of the QP picture is given in Fig.\,\ref{fig:SF_b1k1}, where the zero temperature SF for the $\mid n=1,{\bf k}=\Gamma \rangle$ state of {\it trans}--polyacetylene\,\,is shown in the upper frame. It is clear that the SF of this state is far from being well represented by a Lorentzian lineshape. Indeed two peaks clearly appear at energies $E^{\(1\)}_{n\kk}$ and $E^{\(2\)}_{n\kk}$. These two peaks are the signature of a breakdown of the QP picture, because the condition that a single pole collects most of the weight is not satisfied. We will come back to the physical interpretation of such structures in the next section. Nevertheless the origin of these two peaks is evident if we analyze the energy dependence of the imaginary and real parts of the corresponding self--energy, shown in the lower frame of the same figure. In this specific case the solution of Eq.\,(\ref{eq:FindingPoles}) admits two roots as a consequence of the rapid oscillation of $\Re\[\Sigma^{Fan}_{n\kk}\(\omega,T=0\)\]$ around $-16.5$\,eV (the bare electronic energy of this state). This oscillation is, in turn, induced by an intense peak appearing in $\Im\[\Sigma^{Fan}_{n\kk}\(\omega,T=0\)\]$. We deduce that in this case a naive application of the QP approximation in the form of a linearization of $\Sigma^{Fan}_{n\kk}\(\omega, T\)$ (Eq.\,(\ref{eq:QP_newton})) may produce a non physical energy dependence of the self--energy. As a consequence this leads to meaningless (negative or enormously large) values of $Z_{n\kk}$.
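No such pathology arises if the full frequency dependence of the self--energy is kept: the SF can be evaluated directly from Eq.\,(\ref{eq:Dyson_withSE}) without any linearization, as in the following short sketch (same illustrative conventions as above):
\begin{verbatim}
import numpy as np

def spectral_function(omega, eps, sigma_fan, sigma_dw):
    # Full SF, A = |Im G| / pi, from the Dyson equation
    # (label eq:Dyson_withSE); no QP linearization involved.
    G = 1.0 / (omega - eps - sigma_fan - sigma_dw)
    return np.abs(G.imag) / np.pi
\end{verbatim}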
In contrast to the case of purely electronic self--energies, the QP approximation is known to lead to too rough a description of the electron--phonon spectral function for low--energy electrons in the homogeneous electron gas\,(jellium). As discussed by Engelsberg and Schrieffer\,\cite{engelsberg1963}, this failure, although not as dramatic as the one found in the present case, is linked to the mixing of electronic and phononic excitations. More precisely the authors identify three kinds of excitations that appear as poles of the Green's function when the energy of the electronic level is increased well above the Debye energy and the strength of the electron--phonon coupling is also increased. One is a purely QP state where the electron is dressed by a phonon cloud. The others lie in the continuum of electron--phonon pairs, composed of clothed electrons and clothed phonons excited with a constant momentum sum. \section{Breakdown of the Quasi Particle approximation: the case of trans--polyacetylene and polyethylene} \label{sec:brakdownQPapprox} In the previous section we showed that the structures appearing in the SF of {\it trans}--polyacetylene\,\,rule out any description of the coupled electron--phonon system in terms of QPs. More importantly, the rich structure of peaks appearing in the SF\,\,described in Fig.\,\ref{fig:SF_b1k1} is not a fortuitous case. It is actually a general trend both in {\it trans}--polyacetylene\,\,and in polyethylene. Indeed, in Figs.\,\ref{fig:Fig3} and \ref{fig:Fig4}, the bare electronic band structure and the corresponding SFs\,\, for a fixed ${\bf k}$ vector are shown in the upper frame. The position of the ${\bf k}$ vector in the Brillouin Zone\,(BZ) is represented by a horizontal line in the lower frame, where the valence bands of the two polymers are also reported. In the upper frame of Figs.\,\ref{fig:Fig3} and \ref{fig:Fig4} there is also a sketch of the atomic structure of the polymers that helps to understand the main differences between them. Both systems are linear polymers. {\it Trans}--polyacetylene\, is a conjugated polymer where each carbon atom forms four nearest--neighbour bonds. Three of the four carbon valence electrons are in $sp^2$ hybridized orbitals and two of the $\sigma$-type bonds connect neighbour carbons along the one--dimensional backbone, while the third forms a bond with the hydrogen side group. Polyethylene\,, instead, is a $\sigma$-bonded, non--conjugated polymer. The atoms in the unit cell do not lie on a single plane like in {\it trans}--polyacetylene. The C atoms are sp$^3$ hybridized and, as in {\it trans}--polyacetylene\,, each C atom has four bonds. From the upper panels of Figs.\,\ref{fig:Fig3} and \ref{fig:Fig4} it is evident that the EP interaction dramatically affects the spectral functions. The most striking aspect is that the SFs exhibit a multiplicity of structures. Although in some cases a single and strong peak can be observed, the general trend is a very complex ensemble of peaks that makes it impossible to apply the QP approximation. We will give a more formal and mathematical description of the internal structure of these peaks in the next section. Here we would like to underline some of their general aspects. A remarkable aspect is that some SFs are so largely structured that they span a large energy range, as large as $3$\,eV (see the $6^{th}$ band of polyethylene, Fig.\,\ref{fig:Fig4}).
Spanning such a large energy range, the SFs in some cases end up overlapping each other (like the $4^{th}$ and $5^{th}$ band SFs in {\it trans}--polyacetylene). The crucial and straightforward consequence is that it becomes difficult to associate a single and well defined energy with the electron and, more importantly, different bands energetically merge, pointing to a non trivial mixing of the electronic states. When a SF covers a large energy range, one peak may be more distant in energy from the others than the Debye energy ($0.4$\,eV in {\it trans}--polyacetylene\,\, and polyethylene). For this reason each peak cannot be simply interpreted as a main QP peak plus a phonon replica. This point will be further discussed in Appendix A by using a two band model. Nevertheless it is reasonable to speculate that the formation of more than one peak suggests reformulating the problem in a different framework where bare electrons are mixed with phonons, and not simply screened by phonons. Each peak appearing in the SF would then be identified by a mixed electron--phonon state. Since the many body\, framework does not easily provide information about the composition of the ``new'' mixed states, we will reformulate the problem in the next section by mapping it into a Hamiltonian representation. \begin{figure} [t] \epsfig{figure=Fig3.eps,width=7.0cm} \vspace{0.1cm} \caption{\footnotesize{(color online). The SFs of the last four occupied states in {\it trans}-polyacetylene (upper frame) are shown together with the DFT--LDA bands (lower frame). The horizontal line in the bands frame represents the position of the {{\bf k}}--point and the energy range along which the SFs are displayed. The vertical arrows in the upper frame represent the energy position of the unperturbed DFT levels. Since these states correspond to in plane orbitals they are strongly affected by the in plane atomic vibrations. The result is that the bare electronic levels are split into several polaronic states.}} \label{fig:Fig3} \end{figure} \begin{figure} [t] \epsfig{figure=Fig4.eps,width=7.0cm} \vspace{0.1cm} \caption{\footnotesize{(color online). Like in Fig.(\ref{fig:Fig3}), in the polyethylene\,\,case. }} \label{fig:Fig4} \end{figure} \begin{figure} [t] \begin{center} \epsfig{figure=Fig5.eps,width=8.8cm} \end{center} \vspace{-0.5cm} \caption{\footnotesize{(color online). Two dimensional plot of the spectral functions $\Delta Z_{n\kk}(\omega)$ for polyethylene in the region of the last four occupied bands. The DFT and the polaronic bands are appropriately labeled. In general the electronic levels acquire a large energy indetermination if compared to the DFT bands, represented by solid black lines. The EP interaction moves the last two occupied bands up by about $300$\,meV, leading to an increase of the band width. Moreover the 6$^{th}$ band near the $X$ point shows a large energy indetermination that makes it almost disappear.}} \label{fig:PA_Ank^2} \end{figure} A global view of the effect of the zero--point motion\,(ZPM) on the electronic structure of polyethylene\,\,is given in Fig.\,\ref{fig:PA_Ank^2} in the energy range of the last 4 occupied valence bands. The DFT electronic bands are drawn as a reference for the electronic band structure before switching on the EP interaction. By defining $\Delta Z_{n\kk}\(\omega\)\equiv A_{n\kk}\(\omega\)\Delta \omega$, the probability of finding an electron $\mid n\kk \rangle$ in the small energy range $\Delta \omega = 50$\,meV, we construct a two--dimensional representation of the probability amplitude.
This is shown in Fig.\,\ref{fig:PA_Ank^2} by using a color scale that goes from white (the least intense peaks) to black (the most intense ones). As a consequence this picture gathers all the information about the energy range covered by the SFs\,\,and the intensity of all the peaks. In particular we observe that the $6^{th}$ band of polyethylene\,\,moves up close to the $\Gamma$--point, where it almost completely disappears. The resulting zero point renormalization of the gap at the $\Gamma$ point of polyethylene\,\,is $280$\,meV, larger than in the trans-polyacetylene case~\cite{cannuccia}. Such a difference is ascribed to the peculiar shape of the {\it trans}--polyacetylene\, orbital at the $X$ point, whose $\pi$ character corresponds to states perpendicular to the polymer axis. Thus these states are less affected by the in--plane vibrations. On the other hand, for polyethylene\,\,at $\Gamma$ the electrons are localized along the $C-C$ bond, where the zero--point motion effect on the electronic gap is sizable. As far as the deeper states far from the gap are concerned, the effect of the electron-phonon coupling is equally strong. In fact, as they are in--plane orbitals, they are directly affected by the in--plane atomic vibrations. We also observe that each band has a different energy width, which evolves differently moving from $\Gamma$ to $X$. An increase of the bandwidth is normally associated with a corresponding increase of the delocalization of the orbitals. This fact can be used to link the effect of the EP coupling to an increased electronic mobility mediated by the polaronic states. In the next section we will go beyond this picture by introducing a general framework to link the poles of the electron--phonon Green's functions to coupled packets of electron--phonon pairs. \section{Internal structure of the polaronic states via a Hamiltonian representation} \label{sec:frohlich} In order to gain more insight into the complex structures that appear in the SFs of both {\it trans}--polyacetylene\, and polyethylene\, we propose, in this section, a mapping of the Many--Body problem into an equivalent Hamiltonian representation. In this representation the poles of the SF will appear as eigenvalues of a fictitious el--ph Hamiltonian. When this is solved in a specific restricted sub--space of the entire Fock space, it will reproduce the same SF obtained by solving the Dyson equation within the Fan and Debye--Waller approximations for the self--energy. In order to show this we start by rewriting the SF\,\,by using the well--known Lehmann representation\cite{mahan}: \begin{align} A_{n\kk}\(\omega,T\)=\sum_{I{\bf k}} {\mid \langle \Psi_0 \mid c^{\dagger}_{n\kk}\mid I{\bf k}\(T\) \rangle \mid}^2 \delta\(\omega-E_{I{\bf k}}\(T\)\), \label{eq:H_1} \end{align} where $\mid I{\bf k}\(T\) \rangle$ are the true eigenstates of the system with energies $E_{I{\bf k}}\(T\)$ which, in turn, represent the true and real poles of the GF. As our initial Hamiltonian, Eq.\,(\ref{eq:sec_MBPT_1}), is composed of electrons and phonons, the states $\mid I{\bf k}\(T\) \rangle$ live in an extended Fock space composed of electrons and phonons. In the QP approximation the distribution of peaks appearing in Eq.\,(\ref{eq:H_1}) is approximated with a Lorentzian distribution centered at the QP energy, with a width equal to the QP line-width.
Thus, Eq.\,(\ref{eq:H_1}) already underlines that the origin of the multiple poles in the SFs\,\,shown in Fig.\,\ref{fig:Fig3} and Fig.\,\ref{fig:Fig4} is connected to the existence of more than one intense state $\mid I {\bf k} \rangle$ belonging to the same state $\mid n\kk \rangle$. We now assume that Eq.\,(\ref{eq:H_1}) remains valid also when the exact self--energy is approximated by the Fan and Debye--Waller terms. In order to link the states $\mid I{\bf k}\(T\) \rangle$ to a Hamiltonian problem, we start by rewriting Eq.\,(\ref{eq:sec_MBPT_1}) in second quantization: \begin{align} \widehat H=\widehat H_{el}+\widehat H_{ph}+\widehat H_{el-ph}, \label{eq:H_2} \end{align} where $\widehat H_{el}$ is the electronic Hamiltonian, $\widehat H_{ph}$ is the independent--phonon Hamiltonian and $\widehat H_{el-ph}$ is the EP interaction Hamiltonian. These three terms, written in second quantization, read \begin{gather} \widehat H_{el}=\sum_{n\kk}\tilde{\varepsilon}_{n\kk}c^{\dagger}_{n\kk}c_{n\kk}, \label{eq:H_3}\\ \widehat H_{ph}=\sum_{{\bf q},\lambda}\omega_{\qq \gl}( b^{\dagger}_{{\bf q} \lambda} \hat{b}_{\qq \gl}+\frac {1}{2}), \tag{\ref{eq:H_3}$'$}\\ \widehat{H}_{el-ph}=\frac{1}{N_q}\sum_{\substack{n,n',{\bf k},\\{\bf q},\lambda}} {g^{{\bf q},\lambda}_{nn'{\bf k}} c^{\dagger}_{n{\bf k}} c_{n'{\bf k}-{\bf q}} (b^{\dagger}_{-{\bf q}\lambda}+b_{{\bf q}\lambda})}. \tag{\ref{eq:H_3}$''$} \end{gather} In Eq.\,(\ref{eq:H_3}) $\tilde{\varepsilon}_{n\kk}$ is a single particle energy that we will shortly define. $c^{\dagger}_{n{\bf k}}$ and $c_{n'{\bf k}-{\bf q}}$ are the electronic creation and annihilation operators, while $b^{\dagger}_{{\bf q} \lambda}$ and $\hat{b}_{\qq \gl}$ are the creation and annihilation operators for phonons with energy $\omega_{\qq \gl}$ and wave vector ${\bf q}$. $g^{{\bf q},\lambda}_{nn'{\bf k}}$ are the EP coupling matrix elements (see Eq.\,(\ref{eq:sec_diagrams_4})). We now want to use Eqs.\,(\ref{eq:H_3}) to calculate the states $|I{\bf k}\(T\)\rangle$. To this end we note, from frame (a) of Fig.\,\ref{fig:sec_diagrams_1}, that the Fan self-energy makes an initial state $|n{\bf k}\rangle$ scatter with a phonon state $|{\bf q}\lambda\rangle$, with population $N_{{\bf q}\lambda}\pm 1$, into a final state $|n'{\bf k}-{\bf q}\rangle$. Only one phonon is exchanged, and in the self--energy loop the intermediate states are the composite pairs $ \mid n' {\bf k}-{\bf q} \rangle \otimes \mid N_{{\bf q}\lambda}\pm 1 \rangle$ with energy $\varepsilon_{n'{\bf k}-{\bf q}}\pm \omega_{{\bf q}\lambda}$. This energy, indeed, appears in the denominator of Eq.\,(\ref{eq:sec_diagrams_10}). Physically this means that, if we introduce the general state product of an electronic and a phononic part, $\mid n' {\bf k-q} \rangle \otimes \mid N_{{\bf q}\lambda} \pm 1 \rangle$, the intermediate states of the self--energy are all possible combinations with different ${\bf q}$ and $\lambda$. It follows that we can guess, at a given temperature, \begin{multline} \mid I{{\bf k}}\(T\) \rangle= \sum_n A^I_{n\kk}\(T\) \mid n\kk \rangle + \\ + \sum_{n'{\bf q}\lambda} B^{I\lambda}_{n'{\bf k}-{\bf q}}\(T\) \mid n' {\bf k-q} \rangle \otimes \mid N_{{\bf q}\lambda}\(T\)\pm 1 \rangle. \label{eq:H_4} \end{multline} The coefficients $A^I$ and $B^{I\lambda}$ can be found by diagonalizing Eq.\,(\ref{eq:H_2}) in the space of electron--phonon states spanned by the definition given in Eq.\,(\ref{eq:H_4}).
More precisely, in order to expand the matrix form of Eq.\,(\ref{eq:H_2}) we notice that, due to Eq.\,(\ref{eq:H_4}), the basis set will be composed of the following elements \begin{align} \mid n\kk \rangle \mid N_{{\bf q}\lambda} \rangle,\,\,\mid n{\bf k}-{\bf q} \rangle \mid N_{{\bf q}\lambda}\pm 1 \rangle. \end{align} At zero temperature the basis set is reduced to \begin{align} \mid n\kk \rangle \mid 0_{ph}\rangle,\,\,\mid n{\bf k}-{\bf q} \rangle \mid 1_{{\bf q}\lambda} \rangle, \end{align} reflecting the fact that at $T=0\,K$ there are no phonons in the ground state. In this restricted basis the Hamiltonian reads \begin{align} \widehat H = \begin{bmatrix} \ddots & & & & \\ & \tilde{\varepsilon}_{n\kk}\delta_{nn'} & & H^{ep}_{nn'{\bf q}\lambda} & \\ & & \ddots & & \\ & (H^{ep}_{nn'{\bf q}\lambda})^{\dagger} & & \tilde{\varepsilon}_{n\kk-{\bf q}}\delta_{nn'}+\omega_{\qq \gl} \delta_{{\bf q}\qq'}\delta_{\lambda\gl'} & \\ & & & & \ddots \\ \end{bmatrix} . \label{eq:H_5} \end{align} The fact that $\hat{H}_{el-ph}$ is Hermitian makes $\hat{H}$ Hermitian as well. The equivalence of Eq.\,(\ref{eq:H_1}) with the spectral function calculated in Sec.\,\ref{sec:DynamicalSEeffects} is obtained by imposing $\tilde{\varepsilon}_{n{\bf k}}=\varepsilon_{n{\bf k}}+\Sigma^{DW}_{n{\bf k}}$. This equivalence is proved analytically, in the zero temperature limit, in Appendix~\ref{appendixA} for a simple two--level model. From Eq.\,(\ref{eq:H_5}) we also note that the number of states $\mid I{{\bf k}}\(T\)\rangle$ is equal to the dimension of the matrix and is obtained by multiplying the number of electronic bands times the number of ${\bf q}$ vectors times the number of phononic branches, $\lambda$. As a consequence the number of states $\mid I{{\bf k}}\(T\)\rangle$ is larger than that of $\widehat H_{el}$. This confirms the fact that, in Eq.\,(\ref{eq:H_1}), the states $|I{\bf k}\(T\)\rangle$ form a continuum that, in the QP approximation, dresses the bare electronic state. This dressing is, {\it de--facto}, represented by a cloud of mixed electron--phonon states surrounding the QP energy with a Lorentzian distribution. The Hamiltonian $\widehat H$ (Eq.\,(\ref{eq:H_5})) is diagonalized for {\it trans}--polyacetylene\,\,including $30$ electronic bands, $10\,{\bf q}$-vectors and $12$ phonon branches. Since the Hamiltonian is written on a complete basis, the coefficients $A^I_{n\kk}$ and $B^{I\lambda}_{n'{\bf k}-{\bf q}}$ satisfy the following condition \begin{align} \sum_{n} \mid A^{I}_{n\kk}\mid^2 + \sum_{n'{\bf q}\lambda} \mid B^{I\lambda}_{n'{\bf k}-{\bf q}} \mid^2 =1, \label{eq:H_6} \end{align} which ensures that the spectral function $A_{n{\bf k}}\(\omega,T\)$ is correctly normalized to 1 when integrated over all frequencies. Once the eigenstates $\mid I{{\bf k}} \rangle$ and the eigenvalues $E_{I{{\bf k}}}$ are known, the SF\,\,can be calculated according to Eq.\,(\ref{eq:H_1}), and all the peaks appearing in the SFs\,\,of the state $\mid n\kk \rangle$ are unambiguously labeled with a particular $\mid I{{\bf k}} \rangle$ state, having $\mid n\kk \rangle$ as the pure electronic component.
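The construction and diagonalization of the matrix in Eq.\,(\ref{eq:H_5}) is straightforward to set up numerically. A minimal zero--temperature sketch (in Python, restricted for simplicity to a single electronic component in the $A^I$ sector; all names and the truncation are our own illustrative choices) is:
\begin{verbatim}
import numpy as np

def polaron_spectrum(eps_nk, eps_int, w_ph, g):
    # Zero-T matrix (label eq:H_5) for one state |n k> coupled to
    # the one-phonon sector |n' k-q>|1_{q lambda}>.
    # eps_nk  : DW-shifted energy of |n k> (scalar)
    # eps_int : DW-shifted energies of |n' k-q>, shape (P,)
    # w_ph    : phonon energies w_{q lambda}, shape (P,)
    # g       : couplings g / sqrt(Nq), shape (P,)
    P = len(eps_int)
    H = np.zeros((P + 1, P + 1), dtype=complex)
    H[0, 0] = eps_nk
    H[0, 1:] = g
    H[1:, 0] = np.conj(g)
    H[1:, 1:] = np.diag(eps_int + w_ph)
    E, V = np.linalg.eigh(H)        # polaronic eigenvalues E_{I k}
    Z = np.abs(V[0, :])**2          # weights |A^I_{n k}|^2
    return E, Z
\end{verbatim}
The returned weights play the role of the residues entering Eq.\,(\ref{eq:H_1}) and, by construction, sum to one, consistently with Eq.\,(\ref{eq:H_6}).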
Let us consider the $\mid n=4, {\bf k}=0.2(\frac{2\pi}{a},0,0)\rangle$ state of {\it trans}--polyacetylene\,\,as an example. The corresponding zero temperature SF is shown in Fig.\,\ref{fig:band4_k1_Impulsi}. \begin{figure} [t] \begin{center} \vspace{-0.3cm} \epsfig{figure=Fig6.eps,width=7.0cm,clip=,bbllx=61,bblly=197,bburx=712,bbury=522} \end{center} \vspace{-0.7cm} \caption{\footnotesize{{\it Trans}--polyacetylene. The SF\,\,of the state $\mid n=4, {\bf k}=0.2(\frac{2\pi}{a},0,0)\rangle$ is decomposed in polaronic states, each labeled by $\mid I {\bf k} \rangle$. Several structures appear, thus ruling out the QP approximation.}} \label{fig:band4_k1_Impulsi} \end{figure} The poles and the corresponding residues are indicated in Fig.\,\ref{fig:band4_k1_Impulsi} by bars with different heights. The residues are given by $\mid A^{I}_{n\kk} \mid^2$, that is, the probability of finding the polaronic state in the pure electronic $\mid n\kk \rangle$ state. This recalls the physical meaning of the $Z_{n\kk}$ factors, and we use this similarity to define $Z^{I}_{n\kk}=\mid A^{I}_{n\kk} \mid^2$. Nevertheless from Eq.\,(\ref{eq:H_4}) it is evident that the smaller $Z^{I}_{n\kk}$ is, the less the polaronic state can be assimilated to an electron. It means that the mixed EP\,contribution in Eq.\,(\ref{eq:H_4}) indeed weighs the most. In the case of Fig.\,\ref{fig:band4_k1_Impulsi} the $\mid A^{I}_{n\kk} \mid^2$ can even be as small as 0.2. These small values of $Z^{I}_{n\kk}$ represent a general trend. In Fig.\,\ref{fig:Charge_vs_Epolaronic} $Z^{I}_{n\kk}$ is plotted as a function of the polaronic eigenvalues. \begin{figure} [t] \begin{center} \vspace{0.2cm} \epsfig{figure=Fig7.eps,width=7.0cm,clip=,bbllx=8,bblly=39,bburx=714,bbury=522} \end{center} \vspace{-0.7cm} \caption{\footnotesize{{\it Trans}--polyacetylene. The projection of the polaronic states onto the corresponding pure electronic state, $Z^I_{n\kk}$, is shown. The dashed line represents $Z^I_{n\kk}$ of a pure electron state, as a reference value.}} \label{fig:Charge_vs_Epolaronic} \end{figure} It can be noted that only a few polaronic states have $Z^I_{n\kk}\simeq1$. Most of them are below $0.5$, instead. It means that the mixed EP part of the eigenstate, shown in Eq.\,(\ref{eq:H_4}), plays a dominant role. A small value of $Z^I_{n\kk}$ points to a non trivial physical property of the polaronic states. When $Z^I_{n\kk}\rightarrow 0$ it is evident that only the mixed terms in the sum, where the electrons and the phonons appear together, are non zero. This means that the electrons cannot move in the system alone, even if dressed by an electron--phonon cloud, but need to build up true bound electron--phonon states. This is a clear fingerprint of the breakdown of the QP approximation. The generic definition of the polaronic state, Eq.\,(\ref{eq:H_4}), makes it possible to calculate the mean value of any observable that lives in the mixed electron--phonon space. For example we can evaluate the matrix elements of the atomic indetermination operator as follows \begin{multline} u^2_{\alpha I{\bf k} s }\equiv\langle I{\bf k} \mid u^2_{\alpha,s} \mid I{\bf k} \rangle = \\ \sum_{{\bf q}\lambda}\(\frac{1}{2N_q\omega_{\qq \gl} M_s}\) \xi_{\alpha}\({\bf q}\lambda|s\)\xi^{\ast}_{\alpha}\({\bf q}\lambda|s\)\times\\ \times\[\sum_{n} \mid A^{I}_{n\kk}\mid^2 + 3 \sum_{n'}\mid B^{I\lambda}_{n'{\bf k}-{\bf q}} \mid^2 \]. \label{eq:H_8} \end{multline} By using these $u^2_{\alpha I {\bf k} s }$ we can associate an average quantum size with the atoms.
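Given the eigenvector coefficients of the previous sketch, the evaluation of Eq.\,(\ref{eq:H_8}) is again a direct sum over the phonon modes; a schematic version (same illustrative conventions as above) is:
\begin{verbatim}
import numpy as np

def atomic_spread(A2_sum, B2, w_ph, xi2_over_M, Nq):
    # <I k| u^2_{alpha,s} |I k> (label eq:H_8):
    # sum_{q,lam} |xi|^2/(2 Nq w M_s) [sum_n |A|^2 + 3 sum_n' |B|^2]
    # A2_sum     : sum_n |A^I_{n k}|^2 (scalar)
    # B2         : sum_n' |B^{I lam}_{n' k-q}|^2, one entry per (q, lam)
    # xi2_over_M : |xi_alpha(q lam|s)|^2 / M_s, same shape as B2
    return np.sum(xi2_over_M / (2.0 * Nq * w_ph) * (A2_sum + 3.0 * B2))
\end{verbatim}
The square root of this quantity gives the average quantum size reported in Tab.\,\ref{tab:AtomIndetermination}.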
\begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{}&\multicolumn{2}{c|}{{\it Trans}--polyacetylene}&\multicolumn{2}{c|}{polyethylene} \\ \hline & $ C $ & $ H $ & $ C $ & $ H $ \\ & $a.u. $ & $ a.u.$ & $a.u. $ & $ a.u.$ \\ \hline $\hat x$ & $0.18 $ & $0.55 $ & $0.1 $ & $0.32 $ \\ \hline $\hat y$ & $0.13 $ & $0.36 $ & $0.07 $ & $0.21 $ \\ \hline $\hat z$ & $0.11 $ & $0.56 $ & $0.07 $ & $0.34 $ \\ \hline \end{tabular} \caption{\footnotesize{Atomic amplitudes obtained by evaluating the matrix elements of the operator $\sqrt {\mathbf u^2_{I{\bf k} s}}$. Because of its smaller mass, the hydrogen quantum atomic size is three times larger than that of the carbon atom. Nevertheless the constraints imposed by the different geometries make the deviations in the two polymers appreciably different.}} \label{tab:AtomIndetermination} \end{center} \end{table} The values for the $C$ and $H$ species, calculated from Eq.\,(\ref{eq:H_8}), are shown in Tab.\,\ref{tab:AtomIndetermination}. These values point to the fact that the atoms acquire a larger indetermination along the polymer axis. Since $H$ is lighter than $C$, its atomic quantum size is larger. The different constraints created by the geometries are the cause of the different $\sqrt {u^2_{\alpha I {\bf k} s}}$ values in the two polymers. The values of the atomic indetermination suggest that electrons and phonons exert a cooperative effect on each other. The charge density spreads all along the polymer, while the atoms squeeze along the $\hat y$ and $\hat z$ directions, widening along $\hat x$. This cooperation can cause, for example, an enhancement of the mobility, therefore opening new perspectives for future investigations and applications of polymers. \section{Conclusions} \label{sec:conclusions} In this work we have shown that the Heine--Allen--Cardona approach suffers from some severe limitations when applied to predict the zero temperature energy correction in low dimensional systems. We have extensively described a fully dynamical extension of the Heine--Allen--Cardona approach to show that the zero point motion effect severely questions the reliability of the QP picture in {\it trans}-polyacetylene and polyethylene. The single particle spectral functions, indeed, exhibit multiple structures at $T=0\,K$. The formation of additional structures caused by the strong electron--phonon interaction is interpreted in terms of composite electron--phonon states, which we call in this work polaronic states. These states are precisely defined by mapping the structures of the Many--Body spectral functions into the solution of an eigenvalue problem. Thanks to this important mapping, the non perturbative nature of the polaronic states appears as a coherent superposition of electron--phonon pairs, and the cooperative dynamics between electrons and atoms in these states rules out any description in terms of bare atoms and quasiparticles. The resulting coupled electronic and atomic dynamics paves the way for new investigations in polymers and, more generally, in low dimensional nanostructures. The cooperative dynamics of electrons and phonons in the polaronic states can have potential physical implications, as, for example, an enhancement of the electronic mobility. More generally, the breakdown of the quasiparticle picture imposes a critical analysis of previous results obtained using purely electronic theories.
\section{Introduction} (Classical) spinor fields are well known to be elements in the carrier space of the irreducible representations of the Spin group on any given spacetime that admits a spin structure, namely, when the second Stiefel--Whitney class vanishes. Spinor fields are, in particular, employed for constructing the so called bilinear covariants, consisting of tensorial quadratic forms involving the spinors. The bilinear covariants were shown to be the homogeneous parts of a multivector Fierz aggregate \cite{BBR}. Particularizing to the 4D Minkowski spacetime case, spinor fields were classified with respect to their bilinear covariants in the Dirac--Clifford algebra, by the so called Lounesto spinor field classification \cite{lou2}. Further lattice generalizations in the context of quantum Clifford algebras were also studied \cite{Ablamowicz:2014rpa}. The bilinear covariants are not independent, but constrained by the Fierz identities \cite{lou2,Fabbri:2016msm}. Reciprocally, given the bilinear covariants, their associated spinor fields can be re-obtained, up to a phase, by the reconstruction theorem \cite{Mosna:2003am,Cra}. Lounesto's classification is based upon the U(1) gauge symmetry of the first-order equations of motion that rule spinor fields in each spinor class. However, a more general classification has been proposed in Ref. \cite{Fabbri:2017lvu}, encompassing spinor multiplets as realizations of (non-Abelian) gauge fields. In this more general classification, composed flagpoles, dipoles, and flag-dipoles naturally descend within fourteen disjoint classes of spinor fields, under the gauge symmetry SU(2) $\times$ U(1). In this setup, the spinor fields in the standard Lounesto classification were shown to be a limiting case, equivalent to Pauli singlets \cite{Fabbri:2017lvu}. Further spinor representations were studied in Refs. \cite{123,HoffdaSilva:2017vic}, together with other proposals to construct the bilinear covariants for flagpole spinors \cite{HoffdaSilva:2016ffx}. An analogous classification in the framework of second quantization, together with a quantum reconstruction algorithm, was also proposed in Ref. \cite{Bonora:2017oyb}, where the Feynman propagator was extended to regular and singular spinor fields. In any fixed spacetime dimension, $n$, and signature, $(p,q)$, the very construction of the bilinear covariants depends on the existence of either real, or complex, or even quaternionic structures. Hence, the existence of non null bilinear covariants can be impeded by the geometric Fierz identities. Despite the natural obstructions due to the existence of algebraic and geometric structures on a given spacetime dimension/signature, the Lounesto spinor field classification on 4D Minkowski spacetime was successfully generalized to other spacetime dimensions and signatures of relevance in applications, such as the emergence of fermionic fields in the respective spacetime compactifications. Spinor fields on the 7-sphere $S^7$, as an Einstein space composing the compactification AdS$_4\times S^7$, were studied in Ref. \cite{BBR}, where new spinor classes were derived. On the other hand, new spinor field classes in the compactification AdS$_5\times S^5$ were derived and investigated in Ref. \cite{brito}, representing recently obtained fermionic solutions in string theory. More precisely, Ref. \cite{BBR} proposed new classes of spinor fields on $S^7$, based on the geometric Fierz identities in Ref. \cite{1}.
The underlying structure of the geometric Fierz identities on $S^7$ was shown to sternly restrict the number of non null bilinear covariants on $S^7$. Nevertheless, three new emergent classes of fermionic fields on $S^7$ were found. From a more physical point of view, investigating these new classes of spinors on $S^7$ may afford new fermionic solutions of first order equations of motion, which can play an important role in supergravity. In fact, one of the spontaneous compactification schemes in $n = 11$ supergravity can be implemented by the so called Freund--Rubin--Englert solution, obtained on a product manifold AdS$_4\times S^7$ \cite{Gursey:1983yq}. As important as the standard $S^7$, the so called parallelizable $S^7$, a curvatureless manifold that has torsion, emerges when the antisymmetric gauge field strength in Englert's solution exceeds the Freund--Rubin one, being identified with the Cartan--Schouten torsion on the 7-sphere. Our main aim here is to construct new fermionic fields on the parallelizable $S^7$, obtained by lifting the new classes of $S^7$ spinor fields onto the parallelizable $S^7$. This paper is organised as follows: in Sect. II, after briefly reviewing how the geometric Fierz identities are used to derive additional spinor field classes on $S^7$, we propose a reconstruction procedure for obtaining the spinor fields, in these new classes, from the bilinear covariants and the geometric Fierz identities. Sect. III is then devoted to a brief review of the parallelizable sphere, whose torsion is defined with respect to the non-associative $X$-product on the octonionic bundle. The geometric Fierz identities are used to derive the spinor field classes on $S^7$, which are then lifted onto the parallelizable $S^7$, whereon new fermionic fields can then be constructed through the introduction of a generalized octonionic law of transformation. \section{Geometric Fierz Identities and Bilinear Covariants} Let $(M,g)$ be a manifold endowed with a metric tensor. The exterior bundle $\Upomega(M)=\oplus_{i=0}^\infty\Upomega_i(M)$ has endomorphisms that come from the tensor algebra quotient construction. Given a $k$-form field\footnote{{\color{black}{One calls a $k$-form field a section of a homogeneous component of the exterior bundle.}}} $a \in \sec \Upomega^k(M)$, the grade involution, $\hat{a}=(-1)^{k}a$, is an automorphism; the reversion, $\tilde{a}=(-1)^{[k/2]}a,$ with $[k/2]$ denoting the integer part of $k/2$, is an antiautomorphism. The composition of these two morphisms defines the conjugation, denoted by $\bar{a}$. The Clifford bundle can be obtained by equipping the exterior bundle with the universal Clifford product $u \diamond a = u \wedge a+ u \lrcorner a$, for all 1-forms $u \in \sec\Upomega^1(M)$, where $\lrcorner$ is the left contraction. The spinor bundle of the Minkowski spacetime $\mathbb{R}^{1,3}$ is composed of spinor fields, $\psi$, carrying the ${\left(\frac12,0\right)}\oplus{\left(0,\frac12\right)}$ representations of the Lorentz group. The bilinear covariants are sections of the exterior bundle $\Upomega(M)$.
With respect to a {basis $\{e^\mu\}$}, they read \begin{subequations} \begin{eqnarray} \upsigma &=& \bar{\psi}\psi\in\sec\Upomega^0(M)\,,\label{sigma}\\ \textsc{J}&=&\textsc{J}_{\mu}e^{\mu }\in\sec\Upomega^1(M)\,,\label{J}\\ \textsc{S}&=&S_{\mu\nu }e^{\mu}\wedge e^{ \nu }\in\sec\Upomega^2(M)\,,\label{S}\\ \textsc{K}&=& K_{\mu }e^{\mu }\in\sec\Upomega^3(M)\,,\label{K}\\\omega&=&\bar{\psi}\gamma_{0}\gamma_1\gamma_2\gamma_3\psi\in\sec\Upomega^4(M)\,, \label{fierz.} \end{eqnarray}\end{subequations} where $\textsc{J}_{\mu}=\bar{\psi}\gamma _{\mu }\psi$, $S_{\mu\nu }=\bar{\psi}\upsigma _{\mu \nu }\psi$, $K_{\mu }=i\bar{\psi}\gamma_{0}\gamma_1\gamma_2\gamma_3\gamma _{\mu }\psi \,,$ are the respective components in Eqs. (\ref{J}) -- (\ref{K}); $\gamma_5:=i\gamma_0\gamma_1\gamma_2\gamma_3$ and $\bar\psi=\psi^\dagger\gamma_0$. Besides, $\upsigma_{\mu\nu}:=\frac{i}{2}[\gamma_\mu, \gamma_\nu]$. The gamma matrices satisfy the Clifford algebra {\color{black}{named}} ${\mathcal{C}}\ell_{1,3}$, $\gamma_{\mu }\gamma _{\nu }+\gamma _{\nu }\gamma_{\mu }=2g_{\mu \nu }\mathbf{1}$, {where $g_{\mu\nu}$ denotes the Minkowski spacetime metric components.} When $\upsigma$ and $\omega$ do not both vanish, the bilinear covariants are governed by the Fierz identities \cite{lou2} \begin{equation}\label{fifi} \textsc{K}^2+{\rm J}^2 =0={\rm J}\cdot\textsc{K},\qquad(\omega+\upsigma\gamma_{0}\gamma_1\gamma_2\gamma_3)\textsc{S}={\rm K}\wedge\textsc{J},\qquad \omega^{2}+\upsigma^{2}={\rm J}^2\,. \end{equation} \noindent Lounesto derived, from the bilinear covariants, a classification of spinor fields \cite{lou2}, for ${\rm J}\neq0$. However, this condition, which was firstly motivated by the Dirac electron theory, can be circumvented: three additional classes were recently derived in Minkowski spacetime \cite{EPJC}, conjectured to consist of ghost spinors. Apart from these classes of ghost spinors, the original Lounesto classification splits the spinor fields on Minkowski spacetime into six disjoint classes. In Eqs. (\ref{ppo}) -- (\ref{ppl}) below, we just denote the bilinear covariants that do not vanish: \begin{subequations} \begin{eqnarray} &&1)\;\;\omega\neq0,\;\;\; \upsigma\neq0,\;\;\;\textsc{K}\neq 0, \;\;\;\textsc{S}\neq0\;\;\label{ppo} \\ &&2) \;\;\upsigma\neq0,\label{dirac1}\;\;\;\textsc{K}\neq 0, \;\;\;\textsc{S}\neq0\;\;\;\\ &&3)\;\;\omega \neq0, \;\;\;\textsc{K}\neq 0,\;\;\;\textsc{S}\neq0 \;\;\;\;\\ &&4)\;\;\textsc{K}\neq 0, \;\;\;\textsc{S}\neq0 \;\;\quad\qquad\\ &&5)\;\;\textsc{K}=0,\;\;\; \textsc{S}\neq0 \;\; \quad\qquad\\ &&6)\;\;\textsc{K} \neq 0,\;\;\; \textsc{S}=0 \;\;\;\quad\qquad\label{ppl} \end{eqnarray} \end{subequations} Classes 1, 2, and 3 consist of regular spinor fields, since the scalar and the pseudoscalar do not both vanish. Classes 4, 5, and 6 realize singular spinor fields, where both $\upsigma$ and $\omega$ are null. Refs. \cite{Ablamowicz:2014rpa,esk,EPJC,Cavalcanti:2014uta} illustrate a vast range of applications of these classes in quantum field theory and gravity. Ref. \cite{Fabbri:2016msm} introduced two more exclusive classes into the Lounesto classification, through a generally-relativistic gauge classification, whereas the most general spinor field in each class was derived in Ref. \cite{Cavalcanti:2014wia} as a prominent computational tool for the reconstruction theorem. The Fierz identities (\ref{fifi}) are well known not to be valid for the case of singular spinors.
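Before turning to the singular case, a concrete illustration of the classification may be useful (a standard textbook-style check of ours, computed in the Dirac representation of the gamma matrices, with $\gamma_0={\rm diag}(1,1,-1,-1)$): for the constant spinor $\psi=(1,0,0,0)^\intercal$, the only non-vanishing bilinear components are
\begin{equation}
\upsigma=1,\qquad J_{0}=1,\qquad S_{12}=-S_{21}=1,\qquad K_{3}=-1,
\end{equation}
with $\omega=0$. One can verify that $\omega^2+\upsigma^2=1={\rm J}^2$, $\textsc{K}^2+{\rm J}^2=0$ and ${\rm J}\cdot\textsc{K}=0$, in agreement with Eqs. (\ref{fifi}). Since $\upsigma\neq0$, $\omega=0$, $\textsc{K}\neq0$ and $\textsc{S}\neq0$, this spinor belongs to class 2, as expected for the rest-frame solution of the Dirac equation.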
In the singular case, based upon the Fierz aggregate \begin{eqnarray} {\rm Z}= \frac12(\omega\gamma_{0}\gamma_1\gamma_2\gamma_3+i\textsc{K}\gamma_{0}\gamma_1\gamma_2\gamma_3+i\textsc{S}+ {\rm J} +\upsigma) \,, \label{Z}\label{zsigma} \end{eqnarray} the Fierz identities (\ref{fifi}) can be replaced by \begin{subequations}\begin{eqnarray} \label{nilp}{\rm Z}^{2}{} &=&\upsigma{} {\rm Z}{},\\ {\rm Z}{}\gamma_{\mu}{\rm Z}{}&=&J_{\mu}{}{\rm Z}{},\\ {\rm Z}{}\sigma_{\mu\nu}{\rm Z}{}&=&S_{\mu\nu}{}{\rm Z}{},\\ {\rm Z}{}i\gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}\gamma_{\mu}{\rm Z}{} &=&K_{\mu}{}{\rm Z}{},\\ - {\rm Z}{}\gamma_{0}\gamma_{1}\gamma_{2}\gamma_{3}{\rm Z}{}&=&\omega{} {\rm Z}{}.\label{nilp1} \end{eqnarray}\end{subequations} Fierz aggregates that are self-adjoint multivectors, $\gamma_{0}{\rm Z}\gamma_{0}={\rm Z}^{\dagger}$, are better known as boomerangs \cite{lou2}. Given any spinor $\upupsilon\in\mathbb{C}^4$ such that $\bar\upupsilon\gamma_0\psi\neq 0$, the nontrivial spinor $\psi$ can then be reconstructed by the inversion theorem, as $ \psi=\frac{1}{2\sqrt{\bar\upupsilon\mathbf{Z}\upupsilon}}\;e^{-i\alpha}\mathbf{Z}\upupsilon, \label{31}% $ for an arbitrary phase $\alpha$, such that $-i\alpha=\ln\left({2}\sqrt{\bar\upupsilon\psi\bar\upupsilon\mathbf{Z}\upupsilon}\right)$. In particular, any regular spinor can be reconstructed as \cite{Cra,Mosna:2003am} \begin{eqnarray} \psi = \frac12\sqrt{J_0+\sigma - K_3 + S_{12}}\; {\rm Z} e^{i\alpha}(1,0,0,0)^\intercal. \end{eqnarray} {\color{black}{ Heretofore, spinor fields were approached without mentioning the spinor bundle. We denoted the Minkowski spacetime manifold by $M\simeq \mathbb{R}^{1,3}$. Since it is an affine space, being isomorphic to its own tangent spaces, several important underlying structures were left implicit throughout the text, for simplicity. Nevertheless, to approach spinor fields in higher dimensions, we should recall the spinor structures of Minkowski spacetime. Spinor fields are sections of the so called spinor bundle. To define it, some underlying structures are introduced in the Appendix \ref{app}. }} {\color{black}{Now, to}} define and construct analogous classifications on spacetimes of any dimension and signature, when possible, the geometric Fierz identities can be analyzed whenever a spin structure endows a manifold $M$. For this, the so called K\"ahler--Atiyah bundle is introduced, which consists of the exterior bundle endowed with the Clifford product, denoted in this section by $\diamond$.{\color{black}{ Considering our case of interest, consisting of the 7-sphere $S^7$}}, its spin bundle $S$ is thus equipped with the induced product $\Ganz: S\to S$, accordingly \cite{1}. {\color{black}{This composition just indicates the product between spinor fields, usually denoted by juxtaposition, when Minkowski spinor fields are regarded.}} {\color{black}{Denoting by ${\rm End}(S)$ all the linear mappings from $S$ to $S$ and by ``sec'' any section of a bundle}}, a bilinear pairing $B: \sec S\times \sec S\to\mathbb{R}$ can define a bilinear mapping \cite{1,BBR}. Indeed, given sections $\psi,\uppsi$ of the spin bundle, a bilinear mapping $B_0: \sec S\times \sec S\to \mathbb{R}$, on the $S^7$ spinor bundle, reads \cite{1} \begin{eqnarray} \!\!\!\!\!\!B_0(\psi,\uppsi)\!=\! B\!\left(\!{}_{\Re}\psi,\!{}_{\Re}\uppsi\right)\!-\!
B\!\left(\!{}_{\Im}\psi,\!{}_{\Im}\uppsi\right)\!+\!i\!\left[B\!\left(\!{}_{\Re}\psi,\!{}_{\Im}\uppsi\right)\!+\! B\!\left(\!{}_{\Im}\psi,\!{}_{\Re}\uppsi\right)\right],\label{formaa} \end{eqnarray}\noindent {\color{black}{for the real, ${}_{\Re}\psi$, and the imaginary, ${}_{\Im}\psi$}}, components of the spinor field $\psi$ \cite{1}. {\color{black}{This bilinear mapping is the one that generalizes, to the 7-sphere, the scalar components of the bilinear covariants (\ref{ppo}) -- (\ref{ppl}) constructed on $\mathbb{R}^{1,3}$. This can be implemented by the following bilinear mapping on the {\color{black}{$S^7$}} spin bundle: }} \begin{eqnarray}\label{formab} B_k(\psi,\uppsi)=B(\psi,\gamma_{\tau_1}\dots\gamma_{\tau_k} \uppsi)= {\bar{\psi}}{\gamma}_{\tau_1}\dots\gamma_{\tau_k} {\uppsi}\,. \end{eqnarray}\noindent To define new spinor classes on $S^7$, the bilinear mapping $ B (\psi,\gamma^{\tau_1}\ldots\gamma^{\tau_k}\uppsi)$ must not vanish when $k$ is odd \cite{1}. Defining \begin{equation} \mathcal{A}_{\uppsi |\Uppsi}(\psi):= B (\psi,\Uppsi)\uppsi\,,\quad \text{for all}\;\;\;\;\; \psi,\uppsi,\Uppsi \in \sec S\,, \end{equation}\noindent given $\mathring\psi,\mathring\uppsi,\mathring\Uppsi \in \sec S$, the (geometric) Fierz identities read \cite{1} \begin{equation} \mathcal{A}_{\uppsi |\Uppsi}\,\Ganz\, \mathcal{A}_{\mathring\uppsi |\mathring\Uppsi}= B (\mathring\uppsi,\Uppsi)\mathcal{A}_{\uppsi |\mathring\Uppsi}\,. \end{equation} {\color{black}{Given the structure $D$ that defines the complex conjugate on $S$ by $D(\psi)={}_{\Im}\psi$}}, the elements $\mathcal{A}_{\psi |\uppsi}$ {\color{black}{are differential forms that}} can always be split into $ \mathcal{A}_{\psi |\uppsi}=D\,\Ganz\, \mathcal{A}_D^{\psi |\uppsi}+\mathcal{A}^{\psi |\uppsi}$ \cite{1}, where \begin{subequations} \begin{eqnarray} \mathcal{A}^{\psi |\uppsi} &=&\sum_{k=0}^7\frac{1}{k!}(-1)^{k} B (\psi, \gamma_{\tau_1}\ldots\gamma_{ \tau_k}\uppsi)e^{\tau_1}\wedge\cdots\wedge e^{\tau_k}\,,\label{eoo1} \\ \mathcal{A}_D^{\psi |\uppsi} &=&\sum_{k=0}^7 \frac{(-1)^{k}}{k!} B (\psi, D\,\Ganz\, \gamma_{\tau_1}\ldots\gamma_{ \tau_k}\uppsi)e^{\tau_1}\wedge\cdots\wedge e^{ \tau_k}\,. \label{eoo2} \end{eqnarray} \end{subequations} The geometric Fierz identities then {\color{black}{follow}} for $S^7$ \cite{1}: \begin{subequations}\begin{eqnarray} \hat{\mathcal{A}}^{\uppsi |\Uppsi}\diamond \mathcal{A}_D^{\mathring\uppsi |\mathring\Uppsi} + \mathcal{A}_D^{\uppsi |\Uppsi}\diamond \mathcal{A}^{\mathring\uppsi |\mathring\Uppsi} &=& B (\mathring\uppsi,\Uppsi) \mathcal{A}_D^{\uppsi |\mathring\Uppsi}\,,\label{fp1}\\ \mathcal{A}^{\uppsi |\Uppsi}\diamond \mathcal{A}^{\mathring\uppsi |\mathring\Uppsi} + (-1)^{k} \hat{\mathcal{A}}_D^{\uppsi |\Uppsi}\diamond \mathcal{A}_D^{\mathring\uppsi |\mathring\Uppsi} &=& B (\mathring\uppsi,\Uppsi) \mathcal{A}^{\uppsi |\mathring\Uppsi}\,.\label{fp2} \end{eqnarray}\end{subequations} These equations are the equivalent of Eqs. (\ref{fifi}) for $S^7$. {\color{black}{Moreover, the bilinear covariants on $S^7$ emulate the ones of Minkowski spacetime, (\ref{ppo}) -- (\ref{ppl}), by \begin{eqnarray} \label{ddd} \upphi_k= \frac{1}{k!}B (\psi,\gamma_{\tau_1}\ldots\gamma_{ \tau_k}\psi)e^{\tau_1}\wedge\cdots\wedge e^{\tau_k}\,. \end{eqnarray}\noindent It is worth emphasizing that the construction of the bilinear covariants on $\mathbb{R}^{1,3}$ is not obstructed by a dimensional accident.
However, on $S^7$ (and also on other specific dimensions), the geometric Fierz identities (\ref{fp1}, \ref{fp2}) severely obstruct the very existence of homogeneous bilinear covariants \cite{1}. In fact, }} spinors on $S^7$ have all the bilinear covariants $\upphi_k$ equal to zero, with the exceptions $k=0$ and $k=4$ \cite{1,BBR}, namely, \begin{eqnarray} \upphi_0&=& B (\psi,\psi),\\ \label{phi4} \upphi_4&=&\frac{1}{4!} B (\psi,\gamma_{\tau_1}\gamma_{\tau_2}\gamma_{\tau_3}\gamma_{ \tau_4}\psi)\;e^{\tau_1}\wedge e^{\tau_2}\wedge e^{\tau_3}\wedge e^{\tau_4}\,. \end{eqnarray} Then, the geometric Fierz identities yield a single class of Majorana spinors on $S^7$, given by $\upphi_0\neq 0$ and $\upphi_4\neq 0$, all other bilinears being $\upphi_k=0$, for $k\neq 0,4$. A higher order generalization of Eq. (\ref{formab}) is then necessary, to encompass new classes of fermionic fields on $S^7$: \begin{eqnarray}\label{formac} \upbeta_k(\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\uppsi)&=& B\left({}_{\Re}\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\,{}_{\Re}\uppsi\right)- B\left({}_{\Im}\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\,{}_{\Im}\uppsi\right)\\&&\nonumber\qquad\qquad+i\left[ B\left({}_{\Re}\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\,{}_{\Im}\uppsi\right)+B\left({}_{\Im}\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\,{}_{\Re}\uppsi\right)\right]\,.\end{eqnarray}\noindent Hence, the complex bilinear covariants can be defined \cite{BBR}, \begin{eqnarray} \Upphi_k= \frac{1}{k!}\upbeta_k(\psi,\gamma_{\tau_1}\ldots\gamma_{\tau_k}\psi)e^{ \tau_1}\wedge\cdots\wedge e^{\tau_k},\end{eqnarray} yielding three (non-trivial) classes of spinor fields on the $S^7$ spin bundle \cite{BBR}, \begin{subequations} \begin{eqnarray} &1)\;\;\;\;\; \Upphi_0=0,\quad\Upphi_4\neq0,\label{c12}\\ &2)\;\;\;\;\; \Upphi_0\neq0,\quad\Upphi_4=0,\label{c13}\\ &3)\;\;\;\;\; \Upphi_0\neq0,\quad\Upphi_4\neq0\,.\label{c14}\end{eqnarray} \end{subequations} {\color{black}{The Fierz aggregate (\ref{zsigma}) of the $\mathbb{R}^{1,3}$ Minkowski spacetime can now be emulated for the 7-sphere. In fact, }} the reconstruction theorem can then be employed for constructing the original spinor field as a section of the spin bundle, from the corresponding Fierz aggregate \begin{eqnarray}\label{aggg} \check{\rm Z}=\Upphi_0+\Upphi_4, \end{eqnarray} \noindent which is simpler than its 4D Minkowski counterpart, defined in Eq. (\ref{zsigma}). Hence, when an arbitrary spinor $\xi\in S^7$ satisfies $\xi^{\dagger}(\upsigma_2\otimes\gamma_{0})\psi\neq0$, where $\upsigma_2=\scriptsize{\begin{pmatrix}0&-i\\i&0\end{pmatrix}}$, the original $S^7$ spinor $\psi$ can be obtained from its Fierz aggregate (\ref{aggg}), \begin{equation} \psi=\frac{1}{2}\left({{\langle\xi^{\dagger}(\upsigma_2\otimes\gamma_{0})\check{\rm Z}\xi}\rangle_0}\right)^{-1/2}\;e^{-i\theta}\check{\rm Z}\xi, \label{3}% \end{equation} \noindent where $e^{-i\theta}={2}(\langle{\xi^{\dagger}(\upsigma_2\otimes\gamma_{0})\check{\rm Z}\xi}\rangle_0)^{-1/2}\langle\xi^{\dagger}(\upsigma_2\otimes\gamma_{0})\psi\rangle_0$. \section{Lifting new spinor fields on the parallelizable $S^7$} Heretofore, new classes of $S^7$ spinors, Eqs. (\ref{c12}) -- (\ref{c14}), were derived; their representative spinor fields can be reconstructed by Eq. (\ref{3}).
These representative spinor fields are now aimed to be lifted onto the so called parallelizable $S^7$, which can be regarded as the manifold of unit octonions. Among the parallelizable spheres, $S^7$ is the sole one that does not carry a Lie group structure, carrying, instead, a Moufang loop structure. The (sub)bundle of octonionic sections on $S^7$ presents an underlying Moufang loop structure, fiberwise. From a geometric point of view, the $S^7$ algebra is a natural stage to generalize the concept of a Lie algebra, wherein the structure constants are substituted by the parallelizing torsion. {\color{black}{The octonionic algebra $\mathbb{O}$ is constituted by an 8-dimensional vector space, with basis $\{e_0,\ldots,e_7\}\subset \mathbb{R}^8$, with $e_0^2=1$, $e_a^2=-1$, for $a=1,\ldots,7$, endowed with the octonionic multiplication, denoted by ``$\circ$'', which is ruled by ${e}_a\circ {e}_b=f^{c}_{ab}{e}_c-\delta_{ab}$, where $f^{c}_{ab}=1$ for the cyclic permutations $\{(abc)\}=\{(126),(237),(341),(452),(563),(674),(715)\}.$ Every octonion $X\in\mathbb{O}$ can thus be written as $X=X^0e_0+\sum_{a=1}^7X^ae_a$. Instead of the vector space $\mathbb{R}^8$, one can take the paravector space}} $V_7:=\mathbb{R}\oplus\mathbb{R}^{0,7}$ endowed with the octonionic standard product $\circ \colon V_7 \times V_7 \to V_7$. In fact, the scalar part, {\color{black}{$X^0$}}, corresponds to the real part of an octonion, whereas the vector component, {\color{black}{$\sum_{a=1}^7X^ae_a$, }} regards the imaginary part. {\color{black}{In this case, the}} identity ${e}_0=1$ and an orthonormal basis $\left\{{e}_a\right\}^{7}_{a=1}$ of $V_7 \hookrightarrow {\mathcal{C}}\ell_{0, 7}$ generate the octonion algebra \cite{baez}. The octonionic product can be emulated within the Clifford algebra ${\mathcal{C}}\ell_{0, 7}$ as \begin{eqnarray} \label{201}A\circ B=\left\langle AB(1-\mho)\right\rangle_{0\oplus 1}, \quad A,B \in V_7 ,\end{eqnarray} where $\mho={e}_7{e}_1{e}_5+{e}_6{e}_7{e}_4+{e}_5{e}_6{e}_3+{e}_4{e}_5{e}_2+{e}_3{e}_4{e}_1+{e}_2{e}_3{e}_7+{e}_1{e}_2{e}_6$ {\color{black}{is a 3-form}}, and the juxtaposition denotes the Clifford product. The symbol $\langle \chi\rangle_{0\oplus 1}$ denotes the projection of a multivector $\chi\in{\mathcal{C}}\ell_{0,7}$ onto its paravector components. For the underlying Lie algebra $\mathfrak{g}_7$, the Lie bracket satisfies \begin{eqnarray} [[{e}_{i},{e}_{j}],{e}_k]+[[{e}_{k},{e}_{i}],{e}_j]+[[{e}_{j},{e}_{k}],{e}_i] = (\delta_{i[k|}\delta_{j|p]}+\upepsilon_{mij}\upepsilon_{mkp}){e}_{p},\end{eqnarray}\noindent where $A_{[ab]}=\frac12(A_{ab}-A_{ba})$, for any tensor $A_{ab}$, and Einstein's summation convention is used from here on. The (Clifford) conjugation of $X=X^0+X^b{e}_b \in \mathbb{O}$ reads $\bar{X}=X^0-X^b{e}_b$, for real coefficients $X^0, X^b$. Given $X \in S^7$, the \textit{$X$-product} is defined by \cite{ced} \begin{equation}\label{205} A\circ_X B:=(A\circ X)\circ(\bar{X}\circ B). \end{equation} The expressions below are shown in, e.g., \cite{ced} \begin{equation}\label{3p14} (A\circ X)\circ(\bar{X}\circ B)=X\circ ((\bar{X}\circ A)\circ B)=(A\circ(B\circ X))\circ \bar{X}.
\end{equation} As we dealt with bundles in the previous sections, the octonion bundle \begin{equation}\mathbb{O}S^7\simeq (\mathbb{R}\times S^7)\oplus TS^7,\end{equation} {\color{black}{shall be employed,}} where $TS^7$ denotes the tangent bundle on $S^7$, with fibers $\mathbb{R}\oplus T_X S^7$ \cite{Grigorian:2015jwa}. {\color{black}{Hence, given $A,B,C\in\mathbb{O}S^7$, and }}the associator $ [A,B,C] = A\circ (B\circ C)- (A\circ B)\circ C,$ one can write \cite{Grigorian:2015jwa} \begin{equation}\label{205b} A\circ_X B=A\circ B + [A,B,\bar{X}]\circ X. \end{equation} Although $A\circ_X B\neq A\circ B$ in general, choosing $X$ to be an element of the following sets of vector fields, $\{\pm {e}_{b}\}$, $\{(\pm{e}_{a}\pm {e}_{b})/\sqrt{2}\}$, $\{(\pm {e}_{a}\pm {e}_{b}\pm {e}_{c}\pm {e}_{d})/2\,|\, \; {e}_{a}\circ({e}_{b}\circ({e}_{c}\circ{e}_{d}))=\pm 1\}$, makes the equality $A\circ_X B= A\circ B$ hold {\color{black}{for such particular values of $X$}} \cite{dix}. Eq.~(\ref{3p14}) shows that {\color{black}{the octonionic field}} $X\in \sec(\mathbb{O}S^7)$ determines two endomorphisms of the octonionic algebra, $f_1, f_2\in$ End($\sec(\mathbb{O}S^7)$), defined by $ A \circ_X B = f_1(A \circ f_1^{-1}(B)) = f_2(f_2^{-1}(A) \circ B)$, for all $A, B \in \sec(\mathbb{O}S^7).$ The \textit{quasi-alternativity} of the $\circ_X$-multiplication then follows as \begin{gather} A\circ_X (A\circ_X B)=(A\circ A)\circ_X B, \qquad (A\circ_X B)\circ_X B=A\circ_X (B\circ B). \end{gather} The $X$-product can thus be seen as an isomorphic copy of the original octonionic product. In fact, there exists an orthogonal mapping $T\in {\rm SO}(\mathbb{R}^{0,7})$, such that the mapping $\rho: (V_7, \circ)=\mathbb{O}\to(V_7, \circ_X) = \mathbb{O}_X$, given by $a+ {\bf v} \overset{\rho}{\mapsto} a+ T({\bf v})$, is an isomorphism, for all $a\in \mathbb{R}$ and ${\bf v} \in \mathbb{R}^{0,7}$ \cite{daRocha:2012tw}. The reciprocal statement is, up to now, a conjecture. Besides, an orbit whose elements are isomorphic copies of $\mathbb{O}$, obtained out of any fixed copy of $\mathbb{O}$, is the orbifold $ S^7/\mathbb{Z}_2=\mathbb{R} \mathrm{P}^7$, being diffeomorphic to SO(7)/$G_2$. In fact, identifying two antipodal points on $S^7$ yields $A\circ_{-X}B=A\circ_X B$. One of the most natural ways of obtaining a parallelizable $S^7$ is choosing two non-canonical connections on Spin(7)/$G_2$ \cite{Lukierski:1983qg}. Besides, the sphere $S^7$ plays a prominent role in the (quaternionic\footnote{We denote hereon by $\mathbb{H}$ the ring of quaternions.}) Hopf fibration $ S^3\hookrightarrow S^7\overset{p}{\rightarrow} S^4$ \cite{jayme}. In this sense, $S^7$ can be realized as the set $\{(q_1,q_2)\in \mathbb{H}^2\,|\,\|q_1\|^2+\|q_2\|^2=1\}$, where $p:S^7\to S^4$ maps the pair $(q_1,q_2)$ to $q_1/q_2$, an element of the projective line $\mathbb{H P}^1\approx S^4$. Thus, each fiber is represented by a torsor that is parametrized by quaternions of unit norm, defining $S^3$. A construction of this Hopf fibration was also realized using regular spinors, being the most important realization with respect to Lounesto's spinor field classification, in Refs. \cite{jayme,daRocha:2008we,EPJC}. More generally speaking, without considering just the $S^7$ manifold, an $n$-manifold $M$ is said to have the property of global parallelizability if there are $n$ linearly independent vector fields defined on $M$. Thereupon, for each $X\in M$, one can linearly combine these fields to obtain an orthonormal basis for $T_XM$.
Given one of these bases, since vectors are linear combinations of such elements, their covariant derivatives at different points can be taken in a natural way, which results in path independence of the parallel transport. In fact, it follows that \begin{equation} [\mathring{D}_{\mu},\mathring{D}_{\nu}]=0=R_{\mu\nu}, \end{equation} where $\mathring{D}$ denotes the covariant derivative defined with respect to this parallel transport, {\color{black}{whereas $R_{\mu\nu}$ denotes the curvature tensor}}. As usual, $ \mathring{D}=\partial+\mathring{\Gamma}=D-T,$ where $T$ denotes the parallelizing torsion and $\mathring{\Gamma}$ is the parallelizing connection. Let ${e_{\nu}}^{\rm a}$ indicate the \textit{vielbein}, related to a non-coordinate basis, wherein roman letters indicate the indices of the tangent spaces, accordingly. As $D_{\mu}{e_{\nu}}^{\rm a}=0$, the covariant derivative of the \textit{vielbein} yields $ \mathring{D}_{\mu}{e_{\nu}}^{\rm a}={-T_{\mu\nu}}^{\rm a} $. Now, one can look at a manifold $M$, which in our case is $S^7$, and consider the infinitesimal translations determined by the covariant derivatives. As may be seen, it is straightforward that these translations configure a closed algebra \cite{ced}: \begin{equation} [D_{\rm a},D_{\rm b}]=[{e_{\rm a}}^{\mu}D_{\mu},{e_{\rm b}}^{\nu}D_{\nu}]=2{e_{\rm a}}^{\mu}[D_{\mu},{e_{\rm b}}^{\nu}]D_{\nu}=2{e_{\rm a}}^{\mu}{T_{\mu \rm b}}^{\nu}D_{\nu}=2{T_{\rm ab}}^{\rm c}D_{\rm c}. \end{equation} When the manifold is also a group manifold, it is evident that the parallelizing torsion does not depend on the point chosen and is, thus, only expressed by the structure constants. Nonetheless, $S^7$ must be carefully considered, for the torsion varies at each point of the manifold. This fact is intrinsically related to the non-associativity of $\mathbb{O}$, as it can be seen in Ref. \cite{ced}. For {\color{black}{a field}} $X\in\sec(\mathbb{O} S^7)$, one can construct a parametrization of $S^7$ with respect to unitary octonionic fields {\color{black}{$\frac{X}{|X|}\in\sec(\mathbb{O} S^7)$}}. The tangent space $T_XS^7$ is spanned by the usual octonionic basis as $\{X\circ {e}_i\}_{i = 1}^7$. Now, as introduced in {\color{black}{Ref. }}\cite{ced}, let us consider the infinitesimal operator $\delta_A$, where $A\in\sec(\mathbb{O} S^7)$ is now a pure imaginary octonionic field, acting on $X$ as $\delta_A X=X\circ A$. This transformation defines the parallel transport on the basis spanned by the choice of $X$. An explicit derivation can be carried out \cite{ced} to find the commutator of the defined transformations: \begin{eqnarray} [\delta_A,\delta_B]X &\equiv& \delta_A(\delta_B X)-\delta_B(\delta_A X) =X\circ\bigl(\bar{X}\circ((X\circ\,B\,)\circ A)-\bar{X}\circ((X\circ A)\circ\,B\,)\bigr). \end{eqnarray} It can be shown that the parameter $ \bar{X}\circ((X\circ\,B\,)\circ A)-\bar{X}\circ((X\circ A)\circ\,B\,)\!=\! 2\{\bar{X}\circ((X\circ\,B\,)\circ A)\} $ is twice the negative of the parallelizing torsion \cite{roo}. Componentwise, \begin{equation}\label{123123} T_{\rm abc}(X)=[(\bar{{e}}_{\rm a}\circ\bar{X})\circ(X\circ {e}_{\rm b})\circ {e}_{\rm c}] \quad \mbox{and} \quad [\delta_A,\delta_B]=2T_{\rm abc}(X)\delta_{\rm c}, \end{equation} presenting, thus, a Moufang loop (or Moufang quasigroup) structure in the second equation in (\ref{123123}). Therefore, one can see that the operator $\delta$ and the parallelizing covariant derivative are, in fact, in a one-to-one correspondence.
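To make the octonionic structures above concrete, the multiplication law ${e}_a\circ {e}_b=f^{c}_{ab}{e}_c-\delta_{ab}$ and the $X$-product of Eq. (\ref{205}) can be implemented numerically. The following is a minimal sketch (the Python helper names are ours, and the snippet is an illustration, not part of the formal construction):
\begin{verbatim}
import numpy as np

# Cyclic triples (a,b,c) with f^c_{ab} = 1, as listed in the text
TRIPLES = [(1,2,6),(2,3,7),(3,4,1),(4,5,2),(5,6,3),(6,7,4),(7,1,5)]

def build_table():
    # table[(a,b)] = (sign, c) such that e_a o e_b = sign * e_c
    table = {}
    for a in range(8):
        table[(0, a)] = (1, a)   # e_0 is the identity
        table[(a, 0)] = (1, a)
    for a in range(1, 8):
        table[(a, a)] = (-1, 0)  # e_a^2 = -1
    for (a, b, c) in TRIPLES:
        for (x, y, z) in [(a, b, c), (b, c, a), (c, a, b)]:
            table[(x, y)] = (1, z)    # cyclic order:   e_x o e_y =  e_z
            table[(y, x)] = (-1, z)   # reversed order: e_y o e_x = -e_z
    return table

TABLE = build_table()

def omul(A, B):
    """Octonionic product of coefficient vectors A, B in R^8."""
    C = np.zeros(8)
    for i in range(8):
        for j in range(8):
            s, k = TABLE[(i, j)]
            C[k] += s * A[i] * B[j]
    return C

def conj(A):
    """Conjugation: X^0 - X^b e_b."""
    C = -np.asarray(A, dtype=float).copy()
    C[0] = -C[0]
    return C

def xmul(A, B, X):
    """X-product of Eq. (205): (A o X) o (Xbar o B), X a unit octonion."""
    return omul(omul(A, X), omul(conj(X), B))
\end{verbatim}
A quick check with these helpers reproduces, for instance, the associator $[{e}_1,{e}_2,{e}_3]={e}_1\circ({e}_2\circ {e}_3)-({e}_1\circ {e}_2)\circ {e}_3=-2{e}_5$, confirming non-associativity, as well as $A\circ_{e_0}B=A\circ B$, while for a generic unit $X$ the two products differ; this mirrors the point dependence of the torsion in Eq. (\ref{123123}).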
Now, take another field $\zeta\in\mathbb{O}$, with ${\zeta\over |\zeta|}\!=\! Y\in \sec(\mathbb{O} S^7)$, over the same orientation given by the choice of $X$, and transform it in such a way that the relations in Eq. (\ref{123123}) are preserved; these properties preclude the straightforward \emph{ansatz} $\delta_A Y\!=\! Y \circ \,A\,$ \cite{ced}. The two regarded fields on $S^7$ must, thus, transform according to another rule, which may seem, at first glance, not the simplest choice. Ref. \cite{ced} derived the appropriate transformation rule for fermionic fields on $S^7$, taking into account its underlying parallelizing torsion, as $ \delta_AY=Y\circ_X A.$ Now, the new classes of spinors on $S^7$ can be lifted onto the parallelizable $S^7$. In fact, to do so we need to recall the equivalence between the classical and the algebraic spinor fields. Going back to the 4D Minkowski spacetime, the standard Dirac spinor $\psi$ was identified, e.g., in Ref. \cite{lou2} with an element of the minimal left ideal $(\mathbb{C}\otimes {\mathcal{C}}\ell _{1,3}){\rm f}$ associated with the Dirac--Clifford algebra $(\mathbb{C}\otimes {\mathcal{C}}\ell _{1,3})$, generated by the primitive idempotent $ {\rm f}=\frac{1}{4}(1+\gamma_{0})(1+i\gamma_{1}\gamma_{2})$: any $\psi \in (\mathbb{C}\otimes {\mathcal{C}}\ell _{1,3}){\rm f}$ is an algebraic spinor \cite{lou2}. Hence, using the Dirac representation of the gamma matrices, the algebraic spinor \begin{equation} \psi = \begin{pmatrix} \psi_1 & 0 & 0 & 0 \\ \psi_2 & 0 & 0 & 0 \\ \psi_3 & 0 & 0 & 0 \\ \psi_4 & 0 & 0 & 0 \end{pmatrix}\in (\mathbb{C}\otimes {\mathcal{C}}\ell _{1,3}){\rm f} \simeq \mathcal{M}(4,\mathbb{C}){\rm f}, \end{equation} is equivalent to the classical spinor $\psi=(\psi_1, \psi_2, \psi_3, \psi_4)^\intercal \in \mathbb{C}^{4}.$ Now, this concept can be extended to the parallelizable $S^7$, emulating the transformation $\delta_A \psi =\psi\circ_X A$, which can encompass algebraic spinor fields. To this end, let us consider the Clifford algebra ${\mathcal{C}}\ell_{0,7}$ on a tangent space $T_X S^7$, at a point $X\in S^7$. According to the Radon--Hurwitz theorem in the Appendix \ref{app1}, for $k = 7 - r_7 = 4$, one seeks a set $\{e_{I_1}, {e}_{I_2}, {e}_{I_3}, {e}_{I_4}\}\subset {\mathcal{C}}\ell_{0,7}$ of elements that commute and square to the identity \cite{Vaz:2016qyw}. Identifying, for example \cite{daRocha:2012tw}, ${e}_{I_1} = {e}_{1}{e}_{2}{e}_{3}$, ${e}_{I_2} = {e}_{1}{e}_{4}{e}_{5}$, ${e}_{I_3} = {e}_{1}{e}_{6}{e}_{7}$, and ${e}_{I_4} = {e}_{3}{e}_{4}{e}_{7}$, yields that the idempotent \begin{equation*} f = \frac{1}{16}(1+{e}_1{e}_2{e}_3) (1+{e}_1{e}_4{e}_5) (1+{e}_1{e}_6{e}_7) (1+{e}_3{e}_4{e}_7)\in{\mathcal{C}}\ell_{0,7} \end{equation*} is a primitive one. Hence, a spinor $\psi$ on $S^7$ has its algebraic version as the element $\mathring\psi f$ of the left ideal ${\mathcal{C}}\ell_{0,7}f$, for some multivector $\mathring\psi\in{\mathcal{C}}\ell_{0,7}$. This is done precisely to introduce the $S^7$ spinor into the Clifford bundle on $S^7$ itself. Now, to write the correct transformation of a fermionic field on the parallelizable $S^7$, given an element of the vector space underlying ${\mathcal{C}}\ell_{0,7}$, a non-associative product called the $\xi$-product was introduced in \cite{JA} as a natural generalization of the $X$-product.
For homogeneous multivectors $\xi=u_1\wedge\ldots \wedge u_k\in\sec\Lambda^k(\mathbb{R}^{0, 7})\hookrightarrow\sec{\mathcal{C}}\ell_{0, 7}$, where $\left\{u_p\right\}^{k}_{p=1}\subset\sec T\mathbb{R}^{0, 7}$ and $A\in\sec(\mathbb{O}S^7)$, the products $\bullet_\llcorner$ and $\bullet_\lrcorner$ are defined (and extended by linearity) by \cite{JA,daRocha:2012tw} \begin{eqnarray} \bullet_\llcorner \colon \sec(\mathbb{O}S^7)\times \sec\Lambda^k(\mathbb{R}^{0, 7})&\to& \sec(\mathbb{O}S^7)\nonumber\\ (A, \xi)&\mapsto& A\underset{\tiny{X}}{\bullet_\llcorner} \xi=(\cdots((A\circ_X u_1)\circ u_2)\circ \cdots)\circ u_k, \label{209}\\ \bullet_\lrcorner \colon \sec\Lambda^k(\mathbb{R}^{0, 7})\times \sec(\mathbb{O}S^7)&\to&\sec(\mathbb{O}S^7)\nonumber\\ (\xi, A)&\mapsto& \xi\underset{\tiny{X}}{\bullet_\lrcorner} A=u_1\circ (u_2\circ(\cdots \circ (u_k \circ A))\cdots ).\label{210}\end{eqnarray} Hence, within the above constructions, the transformation of the spinor field on $S^7$ reconstructed from its bilinear covariants in Eq. (\ref{3}), which is a representative of the new classes (\ref{c12}) -- (\ref{c14}) of spinor fields on $S^7$, can be defined as \begin{eqnarray}\label{ferm1} \delta_A\psi=\psi\underset{\tiny{X}}{\bullet_\lrcorner} A, \quad \forall A\in\sec(\mathbb{O}S^7). \end{eqnarray} In this way, the previous new classes of $S^7$ spinors are lifted onto the parallelizable $S^7$. This transformation is compatible with the ones defined in Ref. \cite{top}. \section{Conclusions} We have established the reconstruction theorem for the new classes of spinor fields on $S^7$ using the generalized Fierz aggregate, for each recently found class of spinor fields on the $S^7$ spin bundle, according to their bilinear covariants. Besides, this categorization has enabled the construction of new fermionic fields on the parallelizable $S^7$, promoting the new classes of classical spinor fields on $S^7$ to new classes of algebraic ones. Hence, the correct transformation of these elements, generating a Moufang loop structure on the parallelizable $S^7$, was derived. Aiming at this procedure, we briefly reviewed the parallelizability property of $S^7$, wherein the parallel transport could be analyzed with respect to the torsion. Therein, the non-associativity of the octonionic bundle on $S^7$ was related to the torsion tensor on the parallelizable $S^7$, as a point-dependent function on $S^7$, via the $X$-product. In this way, additional classes of fermionic (spinor) fields on the parallelizable $S^7$ have been constructed, according to the classes obtained heretofore, lifted from the $S^7$ spin bundle, with the correct behaviour under infinitesimal transformations. Our results thus generalize the ones in Ref. \cite{ced}, also proposing new classes of fermionic fields that may play the role of solutions in compactifications of supergravity. \subsubsection*{Acknowledgments} AYM thanks FAPESP (Grant No. 2016/14021-8); RdR~is grateful to CNPq (Grant No. 303293/2015-2) and to FAPESP (Grant No.~2017/18897-8), for partial financial support.
\section{Introduction} \label{sec::Intro} Among the open challenges in the field of fluid mechanics, the accurate prediction and control of the turbulent flows around bluff bodies is a timely subject in the current development of automated, data-informed urban areas. A bluff body is an immersed solid (such as a vehicle or a building) for which the interaction between the surrounding flow and its shape is responsible for the emergence of large, energetic wakes. When high Reynolds numbers are considered, the global aerodynamic interactions are characterized by complex concurring phenomena such as shear layers, flow separation, and reattachment and recirculation regions. The prediction of such features is a complex task, owing to the large range of active dynamic scales that can be observed in fully developed turbulence. The computational resources required to completely represent turbulent flows via direct numerical simulations are prohibitive for Reynolds numbers observed in realistic applications dealing with urban settings. Reduced-order Computational Fluid Dynamics (CFD) approaches such as RANS \cite{Pope2000_cambridge,Wilcox2006_DCW} can provide a description of complete urban areas with affordable resources, but the accuracy of such predictions is strongly affected by the features of the turbulence model needed to close the dynamic equations. Such models, which are driven by a number of coefficients classically determined via empirical approaches, usually fail to represent interactions of different physical phenomena triggered by turbulence. Experimental approaches, which rely on measurements that can be obtained by various techniques, such as pressure sensors and hot wires, can provide a virtually exact characterization of the flow features, in the form of pressure and velocity measurements. However, experimental data may be local in space and time, and a full volume representation of flows is prohibitively expensive. In addition, experiments may be affected by difficulties in sensor positioning, which can preclude sampling in sensitive regions of the flow. Studies in the last two decades have tried to create a solid bridge between numerical simulations and experiments, in order to exploit the intrinsic advantages of both methods. Among the several proposals in the literature, Data Assimilation \cite{Daley1991_cambridge} is naturally suited to combining experimental and numerical data, in order to obtain a more accurate prediction of the flow. Sequential DA uses tools from probability and statistics to target physical states with minimal uncertainty (state estimation), once different sources of information and their related levels of confidence are provided. Among these methods, one can include the Kalman Filter (KF) \cite{Kalman1960_jbe} and its ensemble version, the Ensemble Kalman Filter (EnKF) \cite{Evensen2009_Springer,Asch2016_SIAM}, which is de facto among the most powerful tools available for DA. The EnKF can not only obtain a precise state estimation, but also use this physical state to train an underlying model, such as a CFD solver, to perform better in operating conditions. In this article, a RANS simulation is augmented via the integration of experimental data for the flow around a rectangular building. The augmentation is performed by optimizing a number of global free constants that determine the behaviour of the turbulence closure. Both the time-averaged pressure and the time-averaged velocity, which are sampled at a limited number of sensors, are used for this purpose.
Despite the fact that a number of data-driven analyses to optimize RANS modelling are reported in the literature, using DA \cite{Xiao2016_jcp,Zhang2020_cf} or tools based on machine learning \cite{Duraisamy2019_arfm,Schmelzer2020_ftc}, most of those approaches employ high-fidelity numerical data as the reference. The reason is that numerous additional difficulties are expected when using experimental results, including overfitting, which can lead to model divergence. It will be shown that approaches based on the EnKF, owing to the smoothing characteristics of the filter, are suitable for a robust integration of experimental data within the reduced-order CFD formalism. In section \ref{sec::Num}, the numerical strategies and the algorithms used in this work are presented. This includes a description of the numerical solver used, as well as a presentation of the data-driven strategies, which are integrated into a specific C++ library. In section \ref{sec:DA_exp}, the setup of the DA analysis is presented, and the different techniques which will be compared are detailed. In section \ref{sec:rez} the results obtained are compared with data from a high-fidelity simulation. At last, in section \ref{sec:conclusions} the final remarks are drawn, and future perspectives are discussed. \section{Numerical Ingredients} \label{sec::Num} \subsection{Numerical code: OpenFOAM} Numerical simulations in this work are performed using a C++ open-source library known as \textit{OpenFOAM}. This library includes a number of solvers based on Finite Volume (FV) discretization \cite{Ferziger1996_springer}, as well as a number of utilities for preprocessing, postprocessing, and data manipulation. Owing to the free license and the very large number of modules available, allowing for extended multi-physics analyses, this code has been extensively used in the literature for research work in fluid mechanics \cite{Constant2017_cf,Meldi2012_pof}. For this work, the FV numerical discretization is performed for the RANS Navier-Stokes equations for incompressible flows and Newtonian fluids \cite{Pope2000_cambridge}: \begin{eqnarray} \overline{u}_j \frac{\partial \overline{u}_i}{\partial x_j} &=& - \frac{\partial \overline{p}}{\partial x_i} + \frac{\partial \overline{\tau}_{ij}}{\partial x_j} - \frac{\partial \tau^T_{ij}}{\partial x_j} \qquad i=1,2,3 \label{eq:Mom}\\ \nabla^2 \overline{p} &=& - \frac{\partial \overline{u}_{j}}{\partial x_i} \frac{\partial \overline{u}_{i}}{\partial x_j} - \frac{\partial}{\partial x_i} \left( \frac{\partial \tau^T_{ij}}{\partial x_j} \right) \label{eq:Poisson} \end{eqnarray} where eq. \ref{eq:Mom} is the momentum equation and eq. \ref{eq:Poisson} is the Poisson equation. The variables used are the velocity $\mathbf{u}=[u_1,u_2,u_3]$, the normalized pressure $p$, the viscous stress tensor $\tau_{ij}$ (which is modelled using the Newtonian fluid hypothesis) and the Reynolds stress tensor $\tau^T_{ij}$. The overbar indicates the averaging operation performed to obtain eqs. \ref{eq:Mom} - \ref{eq:Poisson}. Within the RANS framework, a turbulence closure must be used for $\tau^T_{ij}$.
The $\mathcal{K} - \varepsilon$ model \cite{Launder1974_lhmt,Wilcox2006_DCW} uses the eddy viscosity hypothesis to create a link between $\tau^T_{ij}$ and the gradient of the averaged velocity $\overline{\mathbf{u}}$: \begin{equation} - \tau^T_{ij} = 2 \nu_T \overline{S}_{ij} - \frac{2}{3} \mathcal{K} \delta_{ij} \label{eq:ReyStressConst} \end{equation} where $\nu_T$ is the turbulent viscosity, $\mathcal{K}$ is the turbulent kinetic energy and $\overline{S}_{ij}$ is the mean strain rate: \begin{equation} \overline{S}_{ij} = \frac{1}{2} \left( \frac{\partial \overline{u}_i}{\partial x_j} + \frac{\partial \overline{u}_j}{\partial x_i} \right) \label{eq:meanStrainRate} \end{equation} In the $\mathcal{K} - \varepsilon$ model, $\nu_T$ is expressed as an algebraic function of $\mathcal{K}$ and the energy dissipation rate $\varepsilon$: \begin{equation} \nu_T = C_{\mu} \frac{\mathcal{K}^2}{\varepsilon} \end{equation} where $C_{\mu}$ is a model constant to be calibrated. To close the problem, two model equations for $\mathcal{K}$ and $\varepsilon$ must be included: \begin{eqnarray} \frac{\partial \mathcal{K}}{\partial t} + \overline{u}_j \frac{\partial \mathcal{K}}{\partial x_j} &=& \frac{\partial}{\partial x_j} \left( (\nu+\frac{\nu_T}{\sigma_\mathcal{K}}) \frac{\partial \mathcal{K}}{\partial x_j} \right) + \mathcal{P} - \varepsilon \\ \frac{\partial \varepsilon}{\partial t} + \overline{u}_j \frac{\partial \varepsilon}{\partial x_j} &=& \frac{\partial}{\partial x_j} \left( (\nu+\frac{\nu_T}{\sigma_\mathcal{\varepsilon}}) \frac{\partial \varepsilon}{\partial x_j} \right) + C_{\varepsilon 1} \frac{\varepsilon}{\mathcal{K}} \mathcal{P} - C_{\varepsilon 2} \frac{\varepsilon^2}{\mathcal{K}} \end{eqnarray} where the production term is $\mathcal{P}=\nu_T \overline{S}^2$, with $\overline{S}=\sqrt{2 \overline{S}_{ij} \overline{S}_{ij}}$. The model is complete once the five constants $C_\mu$, $C_{\varepsilon 1}$, $C_{\varepsilon 2}$, $\sigma_{\mathcal{K}}$ and $\sigma_{\varepsilon}$ are determined. Launder and Sharma \cite{Launder1974_lhmt} provided values (the commonly quoted standard set is $C_\mu=0.09$, $C_{\varepsilon 1}=1.44$, $C_{\varepsilon 2}=1.92$, $\sigma_{\mathcal{K}}=1.0$, $\sigma_{\varepsilon}=1.3$) which were determined via the analysis of academic test cases, such as the free decay of homogeneous isotropic turbulence or the turbulent plane channel. However, these coefficients are not true constants, but rather functions of the local dynamics of the flow and of their interaction with global features of the flow (see discussion in Refs. \cite{Margheri2014_caf,Xiao2019_pas,Duraisamy2019_arfm}). \subsection{Observation: experimental data obtained in wind tunnels}\label{S:OD} The experiments are conducted in the atmospheric boundary-layer wind tunnel of the Ruhr-University Bochum, Germany. The wind tunnel has a cross-section of $1.6\times1.8 \; m^2$ and a test section length of 9.4 m. Fig. \ref{FIG:WindTunnelModelAndInflow} a) shows the wooden building model mounted on a rotating table in the wind tunnel. The boundary layer flow is generated in the wind tunnel using both spires at the tunnel inlet and roughness elements. The mean wind profile matches that of a power law with an exponent of 0.2, as shown in Fig. \ref{FIG:WindTunnelModelAndInflow} b). This is representative of terrain category II \citep{EN2005}, simulating realistic conditions for the flow around isolated high-rise buildings.
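Explicitly, the matched profile is of the form $U(z)/U_{ref}=\left(z/z_{ref}\right)^{0.2}$, where $z_{ref}$ is a reference height and $U_{ref}=U(z_{ref})$ (notation ours, introduced for clarity).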
\begin{figure}[ht] \centering \includegraphics[scale=0.65]{WTmodelAndInflow.jpg} \caption{Wind tunnel test section used to produce experimental data} \label{FIG:WindTunnelModelAndInflow} \end{figure} The sampled data is heterogeneous, as different sensors are used to capture features of the velocity field above the roof and the pressure on the surface of the building. The velocities above the roof are mainly measured at three different heights ($z/D=0.075$, $0.3$, and $0.45$) above the points marked in red in Fig. \ref{FIG:ModelAndMeasurementPoints} a). In addition, above the centre of the roof, marked P36 in Fig. \ref{FIG:ModelAndMeasurementPoints}, ten heights are considered with a spacing of $z/D=0.075$. The measurements are performed using a hot-wire anemometer, which consists of two crossed wires allowing the measurement of both the stream-wise and the vertical velocity components. All velocity data are sampled at a frequency of $2000$ Hz. \begin{figure}[ht] \centering \includegraphics[scale=1.05]{ModelAndMeasurementPoints.jpg} \caption{Geometry of the high-rise building with (a) main dimensions and coordinate system; (b) top view with pressure tap locations; (c) facades with pressure tap locations; (d) velocity observations measured over the rooftop.} \label{FIG:ModelAndMeasurementPoints} \end{figure} In addition to the velocity measurements, the surface pressure is also sampled at different locations. The surface pressure on the model is measured using 64 pressure taps, distributed as shown in Fig. \ref{FIG:ModelAndMeasurementPoints} b) and c) and marked with light gray circles. Surface pressures are acquired with a sampling frequency of 1000 Hz using a multi-channel simultaneous scanning measurement system. The tubing effects are numerically compensated \citep{Neuhaus2010}. More details about the wind tunnel experiments and the analysis of the flow around the high-rise building, with a special focus on the region above the roof, are presented in \citep{Hemida2020}. In this analysis, experimental data is used to improve the predictive capabilities of stationary RANS models. Therefore, the time series available for the velocity components and the pressure have been averaged in time. \begin{figure}[ht] \centering \includegraphics[scale=0.6]{Grid.jpg} \caption{View of the grid used for the RANS calculations. The central vertical plane and a horizontal plane are shown.} \label{FIG:Grid} \end{figure} \subsection{Test case} \label{test_case} The considered high-rise building case is a numerical representation of the wind tunnel tests \citep{Hemida2020}. The model has a square cross-section with edge $B=133.33 \; mm$ and a building height $H=400 \; mm$, which represents a 120 m tall building at full scale. The building has a flat roof, and a 0° wind direction is investigated, so that the asymptotic velocity is aligned with the streamwise direction $x$. The lateral direction is $y$ and the normal direction is $z$. The dimensions of the computational domain are chosen adopting the best practice guidelines given by \cite{Tominaga2008}. The upstream domain length is $5H$. The resulting dimensions of the domain are length ($x$) $\times$ width ($y$) $\times$ height ($z$) $= 15.5H \times 4.5H \times 4H = 6.2 \times 1.8 \times 1.6 \; m^3$. For the $z$ direction, the height has been chosen to match the height of the wind tunnel. A structured grid is used near the high-rise building surfaces, as shown in Fig. \ref{FIG:Grid}.
The distance from the centre of the wall-adjacent cells to the building leads to an average $y^+ = 141$ and a minimum $y^+ = 40$, which ensures that such points are placed in the logarithmic layer. The total number of mesh elements used to discretize the domain is equal to $513 \, 266$ cells. A grid dependency study was performed by comparing the results against a finer grid. The finer grid is composed of 4.3 million cells, characterized by a spatial resolution that is 2 times higher near the building model than in the coarse case. The mean pressure predicted by the coarse and fine grid simulations is compared at the locations of the pressure taps on the roof. The comparison showed that 86\% of the points on the building surface have a relative difference below 10\%. In the simulations, the inlet boundary conditions, i.e. the mean velocity $\overline{\mathbf{u}}$, the turbulent kinetic energy $\mathcal{K}$ and the turbulence dissipation rate $\varepsilon$, are based on the incident vertical profiles of the mean wind speed $U$ and of the longitudinal turbulence intensity $I_u$. The turbulent kinetic energy $\mathcal{K}$ is calculated from $U$ and $I_u$: \begin{align} \mathcal{K}(z) &= a(I_u(z)U(z))^2 \label{eq:k}\\ \varepsilon(z) &= \frac{u^{*3}_{ABL}}{\kappa (z+z_0)} \label{eq:eps} \end{align} where $a \in[0.5, \, 1.5]$ \cite{Norton2010}, $u^{*}_{ABL}$ is the friction velocity of the atmospheric boundary layer and $z_0$ is the aerodynamic roughness length. In this study, $a = 1$ is chosen, as recommended by \citep{Tominaga2008}. The turbulence dissipation rate $\varepsilon$ is given by Eq. \ref{eq:eps}, with the von Karman constant $\kappa = 0.42$. The SIMPLE algorithm was used for the pressure-velocity coupling. Classical choices have been made for the numerical schemes. First-order upwind schemes have been used for the convection terms, while second-order centered schemes have been used for the viscous terms. Pressure interpolation from cell centers to face centers has been obtained via the second-order linear schemes native to the OpenFOAM solver. \subsection{Data Assimilation: Ensemble Kalman Filter} Data assimilation (DA) \cite{Daley1991_cambridge,Asch2016_SIAM} is a family of tools allowing the combination of several sources of information to obtain an \textit{augmented prediction} exhibiting increased accuracy. Classical applications usually rely on: \begin{itemize} \item a \textit{model}, which provides a (quasi) continuous representation of the physical phenomenon investigated. Physics-based models such as CFD solvers are an example of \textit{model} for fluid mechanics applications \item some \textit{observation}, which is usually more accurate than the model, but local in space and time. In fluid mechanics, this data may come from high-fidelity numerical simulations or from experiments \end{itemize} The \textit{augmented prediction} obtained via manipulation of the sources of information can also be actively used to infer an optimized parametric description of the \textit{model}, with the aim of obtaining a predictive tool that can provide accurate predictions without having to rely on \textit{observation}. DA has been traditionally used in environmental and weather sciences, but applications in fluid mechanics have seen a rapid rise in recent times \cite{Foures2014_jfm, Rochoux2014_nhess, Xiao2016_jcp, Meldi2017_jcp, Meldi2018_ftc, Labahn2019_pci, Chandramouli2020_jcp, Zhang2020_cf, Mons2021_jfm, Moldovan2021_jcp}.
A great variety of methods exists, but two groups can be identified \cite{Asch2016_SIAM, carrassi2018_wcc}: \begin{description}[topsep=0.4cm] \item[Variational methods:] Methods for which the goal is to minimize a cost function defined for the case studied. This minimum, which is usually reached via a parametric optimization of the model, corresponds to an accurate flow state. \item[Statistical (sequential) methods:] Methods for which the goal is to minimize the variance of the solution (i.e. increase the confidence in the prediction). \end{description} Variational methods such as the 4DVar have been extensively used for applications in fluid mechanics \cite{Artana2012_jcp,Foures2014_jfm,Mons2017_jfm,Chandramouli2020_jcp}, in particular with steady state simulations. While sequential tools are supposedly more appropriate for the prediction of nonstationary phenomena, applications to steady flows are reported in the literature \cite{Meldi2017_jcp,Zhang2020_cf,Zhang2020_jcp}. In the present work, we will focus on tools derived from the Kalman Filter, a well known sequential method. \subsubsection{The Kalman Filter} The Kalman filter (KF) \cite{Kalman1960_jbe} is a sequential DA method based on the Bayes theorem. It provides a solution to the linear filtering of time-dependent discrete data. The classical formulation for the analysis of a physical quantity $\mathbf{x}$ relies on the combination of results produced via a discrete model $\mathbf{M}$, which is linear in the original KF, and some observation $\mathbf{y}$. Within the framework of the KF, both the model and the observation are affected by errors/uncertainties, which are here referred to as $v$ and $w$, respectively. One of the central hypotheses of the Kalman Filter is that these uncertainties can be accurately described by Gaussian distributions, i.e. $v\sim\mathcal{N}(0,\mathbf{Q})$ and $w\sim\mathcal{N}(0,\mathbf{R})$. $\mathbf{Q}$ and $\mathbf{R}$, which are also functions of time, represent the error covariances of the model and of the observation, respectively. Considering that these errors can be described by a Gaussian distribution, the solution is completely determined by the first two moments of the state, i.e. the physical quantity $\mathbf{x}$ and the error covariance matrix $\mathbf{P}=\mathbb{E}((\mathbf{x}-\mathbb{E}(\mathbf{x}))(\mathbf{x}-\mathbb{E}(\mathbf{x}))^T)$. Let us consider the time advancement of the physical system between the instants $k$ and $k+1$. For the latter, an observation $\mathbf{y}_{k+1}$ is available. In this case, the data assimilation procedure consists of two phases (see Algorithm~\ref{alg:KF}): \begin{enumerate} \item A \textit{forecast} step (superscript $f$), where the physical state and the error covariance matrix at the time $k$ are advanced in time using the model: $\mathbf{x}_{k+1}^f=\mathbf{M} \mathbf{x}_{k}^a$, $\mathbf{P}_{k+1}^f=\mathbf{M} \mathbf{P}_{k}^a \mathbf{M}^T + \mathbf{Q}_{k+1}$ \item An \textit{analysis} step, where observation and forecast are combined to obtain the \textit{augmented prediction}: $\mathbf{K}_{k+1}=\mathbf{P}_{k+1}^f \mathbf{H}^T \left(\mathbf{H} \mathbf{P}_{k+1}^f \mathbf{H}^T + \mathbf{R}_{k+1}\right)^{-1}$, $\mathbf{x}_{k+1}^a=\mathbf{x}_{k+1}^f + \mathbf{K}_{k+1} ( \mathbf{y}_{k+1} - \mathbf{H} \mathbf{x}_{k+1}^f)$, $\mathbf{P}_{k+1}^a=(\mathbf{I} - \mathbf{K}_{k+1} \mathbf{H}) \mathbf{P}_{k+1}^f$ \end{enumerate} where $\mathbf{H}$ is a linear operator mapping the model results to the observation space and $\mathbf{K}$ is the Kalman gain.
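For illustration purposes, the two steps above can be condensed in a few lines of code. The following is a minimal sketch (ours, with hypothetical function and variable names) of one KF forecast/analysis cycle for small, dense systems:
\begin{verbatim}
import numpy as np

def kalman_step(x_a, P_a, y, M, H, Q, R):
    """One forecast + analysis cycle of the linear Kalman filter.
    x_a, P_a : previous analysis state and error covariance
    y        : observation at the new time instant
    M, H     : (linear) model and observation operators, as matrices
    Q, R     : model and observation error covariances
    """
    # Forecast phase
    x_f = M @ x_a
    P_f = M @ P_a @ M.T + Q
    # Analysis phase
    K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)  # Kalman gain
    x_a_new = x_f + K @ (y - H @ x_f)
    P_a_new = (np.eye(len(x_f)) - K @ H) @ P_f
    return x_a_new, P_a_new
\end{verbatim}
The explicit storage and inversion of the dense covariance matrices in this sketch is precisely what becomes intractable for CFD-sized state vectors, as discussed below.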
The Kalman gain takes into account the correlations between the values of the state vector and the values of the observations, and it is the central element providing the final state estimation of the physical system. The main drawbacks of this algorithm for complex applications in fluid mechanics are that i) it is designed for linear models $\mathbf{M}$ and ii) the size of $\mathbf{P}$ is directly linked with the number of degrees of freedom of the problem investigated. While the first issue can be bypassed with ad-hoc improvements of the data-driven strategy, which are included for example in the \textit{extended} KF \cite{Asch2016_SIAM, carrassi2018_wcc}, the second one is more serious. In fact, $\mathbf{P}$ must be advanced in time like the physical variables. In addition, during the \textit{analysis} phase, extended manipulation of $\mathbf{P}$ is required, including a matrix inversion. For the numbers of degrees of freedom used in CFD, which are usually in the range $10^6 - 10^8$, this leads to prohibitive requirements in terms of RAM and computational resources. \begin{algorithm} \setstretch{1.2} \caption{Algorithm for the Kalman Filter} \label{alg:KF} \textbf{Forecast steps}\\ \nl \qquad$\mathbf{x}_{k+1}^f = \mathbf{M}\mathbf{x}_k^a$\\ \nl \qquad$\mathbf{P}_{k+1}^f = \mathbf{M}\mathbf{P}_k^a\mathbf{M}^T + \mathbf{Q}_{k+1}$\\ \textbf{Analysis steps}\\ \nl \qquad$\mathbf{K}_{k+1} = \mathbf{P}_{k+1}^f\mathbf{H}^T\left(\mathbf{H}\mathbf{P}_{k+1}^f\mathbf{H}^T + \mathbf{R}_{k+1}\right)^{-1}$\\ \nl \qquad$\mathbf{x}_{k+1}^a = \mathbf{x}_{k+1}^f + \mathbf{K}_{k+1}(\mathbf{y}_{k+1}-\mathbf{H}\mathbf{x}_{k+1}^f)$\\ \nl \qquad$\mathbf{P}_{k+1}^a = (\mathbf{I}-\mathbf{K}_{k+1}\mathbf{H})\mathbf{P}_{k+1}^f$ \end{algorithm} \subsubsection{The (stochastic) Ensemble Kalman Filter} The Ensemble Kalman filter (EnKF) \cite{Evensen2009_IEEE,Asch2016_SIAM} is a popular data-driven strategy based on the KF which provides an efficient solution to the issues previously discussed. The idea is that the error covariance matrix $\mathbf{P}$ is not advanced in time anymore, but is approximated via an ensemble of model runs. This strategy allows one to fully account for nonlinearities of the model and it virtually eliminates the computational burdens associated with the manipulation of $\mathbf{P}$. The complete structure of the EnKF, which is summarized in Algorithm~\ref{alg:EnKF}, is now discussed. The EnKF relies on $N_e$ realizations of the model, which form the model ensemble. At a given instant $k$, the realizations can be assembled in a state matrix $\mathbf{X}_S$ of size $[N,N_e]$, where $N$ is the number of degrees of freedom of the physical problem investigated. Therefore, keeping the usage of the superscripts $f$ and $a$ introduced for the KF, each column of $\mathbf{X}_S$ corresponds to the state $\mathbf{x}_{i,k+1}^f$ of the $i^{th}$ member, where $i \in [1, N_e]$.
An approximation of the error covariance matrix $\mathbf{P}_e$ can be obtained by exploiting the hypothesis of statistical independence of the ensemble members: \begin{equation} \mathbf{P}_e^f = \mathbf{X}^f(\mathbf{X}^f)^T \end{equation} where $\mathbf{X}^f$ is the anomaly matrix, which represents the deviation of all the values of the state vectors from their ensemble mean: \begin{equation} \mathbf{X}_{k+1} = \frac{\mathbf{x}_{i,k+1}-\overline{\mathbf{x}_{k+1}}}{\sqrt{N_e-1}} \; , \qquad \overline{\mathbf{x}_{k+1}} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\mathbf{x}_{i,k+1} \end{equation} The sampled observation, which consists of $N_o$ elements, is also expanded to obtain $N_e$ sets of values. To do so, a Gaussian noise based on the covariance matrix of the measurement error $\mathbf{R}_{k+1}$ is added to the observation vector: \begin{equation} \label{eqn:EnKF_obs} \mathbf{y}_{i,k+1} = \mathbf{y}_{k+1} + \mathbf{e}_{i,k+1},\;\text{with}\; \mathbf{e}_{i,k+1} \thicksim \mathcal{N}(0, \mathbf{R}_{k+1}) \end{equation} Finally, the model realizations are projected to the observation space $N_e$ times, similarly to the classical KF: \begin{equation} \label{eqn:EnKF_Projobs} \mathbf{s}_{i,k+1} = \mathbf{H} \mathbf{x}_{i,k+1} \end{equation} All of these elements together allow for the determination of the Kalman gain: \begin{eqnarray} \mathbf{S}_{k+1}= \frac{\mathbf{s}_{i,k+1}-\overline{\mathbf{s}_{k+1}}}{\sqrt{N_e-1}} \; , \qquad \overline{\mathbf{s}_{k+1}} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\mathbf{s}_{i,k+1} \\ \mathbf{E}_{k+1}= \frac{\mathbf{e}_{i,k+1}-\overline{\mathbf{e}_{k+1}}}{\sqrt{N_e-1}} \; , \qquad \overline{\mathbf{e}_{k+1}} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\mathbf{e}_{i,k+1} \\ \label{eqn:EnKF_gain_R} \mathbf{K}_{k+1} = \mathbf{X}_{k+1}^f(\mathbf{S}_{k+1}^f)^T \left[\mathbf{S}_{k+1}^f(\mathbf{S}_{k+1}^f)^T + \mathbf{E}_{k+1}(\mathbf{E}_{k+1})^T\right]^{-1} \end{eqnarray} In the limit of an infinite ensemble size, $\mathbf{E}_{k+1}(\mathbf{E}_{k+1})^T$ tends to the matrix $\mathbf{R}$ of the Kalman filter. In practice, the ensemble size is limited; thus, the product of the perturbations is replaced by the diagonal matrix $\mathbf{R}_{k+1}$, gaining in simplicity and computational cost \cite{carrassi2018_wcc, Hoteit2015_mwr}. In addition, $\mathbf{P}_e$ can be directly estimated from the ensemble members at each analysis phase, so there is no need for memory storage/time advancement. Finally, the update of the state vectors is performed in the same way as in the classical KF, the only difference being that $N_e$ updates have to be performed: \begin{equation} \mathbf{x}_{i,k+1}^a = \mathbf{x}_{i,k+1}^f + \mathbf{K}_{k+1}(\mathbf{y}_{i,k+1}-\mathbf{s}_{i,k+1}^f) \end{equation} The EnKF can also be used to optimize the parametric description of the model. The underlying idea is that the parameters are updated at the end of the analysis phase so that the model can provide a more accurate prediction of the physical phenomenon investigated, reducing the difference between the model-predicted state and the final state estimation. Several strategies are proposed in the literature \cite{Asch2016_SIAM} and, among those, one that is efficient for a relatively small set of parameters (referred to as $\theta$) and easy to implement is the so-called \textit{extended state} approach.
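Before discussing the extended state, the operations above can be condensed into a short sketch. The following Python/NumPy fragment (illustrative only: the sizes, the observation operator and the data are hypothetical placeholders, not those of the present test case) implements the stochastic EnKF analysis step with the common simplification $\mathbf{E}_{k+1}(\mathbf{E}_{k+1})^T \approx \mathbf{R}_{k+1}$ mentioned above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def enkf_analysis(Xf, y, H, R):
    # Xf has shape (N, Ne): one ensemble member per column
    N, Ne = Xf.shape
    # Anomaly matrices of the forecast and of the projected observations
    X_anom = (Xf - Xf.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    Sf = H @ Xf
    S_anom = (Sf - Sf.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)
    # Kalman gain, with E E^T replaced by the prescribed R
    K = X_anom @ S_anom.T @ np.linalg.inv(S_anom @ S_anom.T + R)
    # Perturbed observations, one set per member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, Ne).T
    return Xf + K @ (Y - Sf)

# Hypothetical sizes: N = 6 degrees of freedom, Ne = 20 members, No = 2 sensors
N, Ne, No = 6, 20, 2
Xf = rng.normal(size=(N, Ne))
H = np.zeros((No, N)); H[0, 0] = H[1, 3] = 1.0  # observe two components
R = 0.05 * np.eye(No)
y = np.array([0.5, -0.2])
Xa = enkf_analysis(Xf, y, H, R)
\end{verbatim}
Note that no covariance matrix of size $N \times N$ is ever formed: all operations involve matrices of size at most $[N, N_e]$ or $[N_o, N_o]$, which is what makes the EnKF affordable for CFD.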
In the extended-state strategy, the steps of the EnKF are performed for a state $\mathbf{x}^\star$ which is defined as: \begin{equation} \mathbf{x}^\star = \begin{bmatrix} \mathbf{x} \\ \theta \end{bmatrix} \end{equation} That is, the state used for the EnKF includes both the physical state and the parametric description of the model. For this very simple algorithm, the size of the global state is now equal to $N^{\star}=N + N_\theta$, where $N_\theta$ is the number of parameters to be optimized. This modification brings a negligible increase in computational costs if $N_\theta \ll N$, and it allows us to obtain simultaneously an updated state estimation and an optimized parametric description of the model at the end of the analysis phase. \begin{algorithm}[H] \caption{Algorithm for the Ensemble Kalman Filter} \label{alg:EnKF} \textbf{For $N_e$ the number of members in the ensemble, $i = 1,...,N_e$:} \nl Advancement in time of the state vectors:\\ \qquad $\boldsymbol{x}_{i,k+1}^f = \mathcal{M}\boldsymbol{x}_{i,k}^a$ \\ \nl Creation of an observation matrix from the observation data by introducing errors:\\ \qquad$\boldsymbol{y}_{i,k+1} = \boldsymbol{y}_{k+1} + \boldsymbol{e}_{i}$, with $\boldsymbol{e}_{i} \thicksim \mathcal{N}(0,\boldsymbol{R})$\\ \nl Calculation of the predicted observation:\\ \qquad$\boldsymbol{s}_{i,k+1}^f = \mathcal{H}\boldsymbol{x}_{i,k+1}^f$\\ \nl Calculation of the ensemble means:\\ \qquad$\overline{\boldsymbol{x}_{k+1}^f} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\boldsymbol{x}_{i,k+1}^f$,\, $\overline{\boldsymbol{s}_{k+1}^f} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\boldsymbol{s}_{i,k+1}^f$,\\ \qquad$\overline{\boldsymbol{e}_{k+1}} = \frac{1}{N_e}\sum_{i = 1}^{N_e}\boldsymbol{e}_{i,k+1}$\\ \nl Calculation of the anomaly matrices:\\ \qquad$\boldsymbol{X}_{k+1} = \frac{\boldsymbol{x}_{i,k+1}-\overline{\boldsymbol{x}_{k+1}}}{\sqrt{N_e-1}}$,\, $\boldsymbol{S}_{k+1} = \frac{\boldsymbol{s}_{i,k+1}-\overline{\boldsymbol{s}_{k+1}}}{\sqrt{N_e-1}}$,\, $\boldsymbol{E}_{k+1} = \frac{\boldsymbol{e}_{i,k+1}-\overline{\boldsymbol{e}_{k+1}}}{\sqrt{N_e-1}}$\\ \nl Calculation of the Kalman gain:\\ \qquad$\boldsymbol{K}_{k+1} = \boldsymbol{X}_{k+1}^f(\boldsymbol{S}_{k+1}^f)^T \left[\boldsymbol{S}_{k+1}^f(\boldsymbol{S}_{k+1}^f)^T + \boldsymbol{R}_{k+1}\right]^{-1}$\\ \nl Update of the state matrix:\\ \qquad$\boldsymbol{x}_{i,k+1}^a = \boldsymbol{x}_{i,k+1}^f + \boldsymbol{K}_{k+1}(\boldsymbol{y}_{i,k+1}- \boldsymbol{s}_{i,k+1}^f)$ \end{algorithm} \subsubsection{Inflation} The classical EnKF exhibits a number of shortcomings, such as sampling errors due to the limited number of members available in the ensemble. This is especially true for applications in fluid mechanics and in particular with CFD, where every simulation may need significant computational resources and storage space. Therefore, the total number of ensemble members realistically affordable for three-dimensional runs is around $N_e \in [40, 100]$. As this error is carried over the assimilation steps, one way of reducing this problem is to inflate the error covariance matrix $\mathbf{P}_{k+1}$ by a factor $\lambda^2$ \cite{Asch2016_SIAM}. This coefficient $\lambda > 1$ drives the so-called \emph{multiplicative inflation}, which can be applied to the analyzed state matrix. It is responsible for an increased variability of the state estimation: \begin{equation} \mathbf{x}_{i}^a \longrightarrow \overline{\mathbf{x}^a} + \lambda(\mathbf{x}_{i}^a-\overline{\mathbf{x}^a}) \end{equation} Clearly, for $\lambda=1$ the results from the classical EnKF are obtained.
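In code, multiplicative inflation amounts to a one-line rescaling of the analyzed ensemble around its mean. The following minimal Python/NumPy sketch (names and sizes illustrative) follows the formula above:
\begin{verbatim}
import numpy as np

def inflate(Xa, lam):
    # Spread each member away from the ensemble mean by the factor lam
    mean = Xa.mean(axis=1, keepdims=True)
    return mean + lam * (Xa - mean)

# lam = 1.0 recovers the classical EnKF; lam > 1 increases the variability
Xa = np.random.default_rng(1).normal(size=(6, 20))
Xa_inflated = inflate(Xa, lam=1.1)
\end{verbatim}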
Similarly, the optimization via EnKF of the set of inferred parameters $\theta$ can collapse very rapidly towards a local optimum, providing a sub-optimal result. Inflation can be used to mitigate an overly fast collapse of the parametric description of the model, artificially increasing the variability of the parameters and allowing the procedure to target a global optimum. \subsubsection{Localization} The classical EnKF establishes a correlation between the observation and the degrees of freedom of the model, but this correlation is not affected by the distance between them. With the limited ensemble sizes used in CFD, this can lead to spurious effects on the update of the state matrix for large domains. In practice, errors due to the finite-ensemble approximation can be significantly larger than the real physical correlation, which naturally decays with distance in continuous systems. Given the computational limitations on using more members in the ensemble, one way to avoid these spurious effects is to apply a corrective multiplicative term to the values of the covariance matrix $\mathbf{P}_{k+1}$ that takes into account the physical distance between observation sensors and mesh elements of the state. This strategy is known as \emph{covariance localization}. Like inflation, localization is effective in improving the accuracy of the calculation and in reducing the probability of divergence of the EnKF. The principle of covariance localization relies on a coefficient-wise multiplication of the covariance matrix $\mathbf{P}_{k+1}$ with a corrective matrix that is here called $\mathbf{L}$. This type of operation is known as a Schur product, thus it is also called \emph{Schur localization}. This leads to the expression of the localized Kalman gain in Equation \ref{localized_EnKF}. \begin{equation} \label{localized_EnKF} [\mathbf{P}_{k+1}]_{i,j}[\mathbf{L}]_{i,j} \longrightarrow {\mathbf{K}_{k+1}^{loc} = [\mathbf{L}]_{i,j}[\mathbf{X}_{k+1}^f(\mathbf{S}_{k+1}^f)^T]_{i,j}\left([\mathbf{L}]_{i,j}[\mathbf{S}_{k+1}^f(\mathbf{S}_{k+1}^f)^T]_{i,j} + \mathbf{R}_{k+1}\right)^{-1}} \end{equation} As the matrix $\mathbf{R}_{k+1}$ has a limited impact on the operation, this expression is simplified for convenience in the algorithm. The localized Kalman gain becomes: \begin{equation} [\mathbf{P}_{k+1}]_{i,j}[\mathbf{L}]_{i,j} \longrightarrow {\mathbf{K}_{k+1}^{loc} = [\mathbf{L}]_{i,j}[\mathbf{K}_{k+1}]_{i,j}} \end{equation} The structure of the matrix $\mathbf{L}$ must be set by the user, and it should represent the real physical correlation. In continuous systems, the correlation between physical variables decreases rapidly in space. Therefore, a generally used structure for the localization matrix is an exponential decay form: \begin{equation} \label{loc_matrix} \mathbf{L}(i,j) = e^{-\Delta^2_{i,j}/\mu} \end{equation} where $\Delta_{i,j}$ is the distance between the given observation sensor and the point of evaluation of the model (the center of the mesh element in CFD), and $\mu$ is a decay coefficient that can be tuned according to the characteristics of the test case. \subsection{CONES: Coupling OpenFOAM with Numerical EnvironmentS} CONES (Coupling OpenFOAM with Numerical EnvironmentS) is a C++ library designed to couple the CFD software OpenFOAM with any other kind of open-source code. It is currently employed to carry out sequential DA techniques and, more specifically, advanced data-driven methods based on the EnKF.
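Before detailing the structure of CONES, a minimal Python/NumPy sketch of the Schur localization of Eqs.~\ref{localized_EnKF} and \ref{loc_matrix} is given below. The coordinates, sizes and the stand-in gain matrix are purely illustrative and not taken from the present test case:
\begin{verbatim}
import numpy as np

def localization_matrix(cells, sensors, mu):
    # L[i, j] = exp(-d_ij^2 / mu), d_ij = distance from cell i to sensor j
    d2 = ((cells[:, None, :] - sensors[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / mu)

rng = np.random.default_rng(2)
cells = rng.uniform(0.0, 1.0, size=(100, 3))  # hypothetical cell centers
sensors = rng.uniform(0.0, 1.0, size=(5, 3))  # hypothetical sensor positions
L = localization_matrix(cells, sensors, mu=0.1)
K = rng.normal(size=(100, 5))                 # stand-in for the EnKF gain
K_loc = L * K                                 # element-wise (Schur) product
\end{verbatim}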
The communications between the EnKF-based code and OpenFOAM are performed by CWIPI (Coupling With Interpolation Parallel Interface) \cite{Reflox2011_aerlab}, which is an open-source code coupler for massively parallel multi-physics/multi-component applications and dynamic algorithms. The main favorable features of CONES in performing DA with OpenFOAM are the following: \begin{itemize} \item There is no need to modify the installation of OpenFOAM; only user-made functions must be compiled. \item The intrusive coupling between the different codes is performed preserving the original structure of the existing CFD solvers. Every CONES-related function is contained in a modified Pstream library (part of OpenFOAM); hence, data exchange is done at the end of the solver loop by calling specific functions, and the calculation loop remains unmodified. \item It is very efficient in exchanging information about the physical state and the mesh (arrays of millions of elements can be sent and received simultaneously and rapidly). \item Direct HPC communications between multiple processors, which handle partitions of the numerical simulations and the DA processes. \item Simulations and DA are run simultaneously \textit{online}, i.e. there is no need to stop the simulations at each analysis phase. This last point allows for enormous gains in terms of computational resources required. \end{itemize} The coupler CWIPI, developed by ONERA and CERFACS, has been chosen due to its powerful management of fully parallel data exchanges based on a distributed mesh definition and its ability to interpolate between non-coincident meshes (very useful for some advanced tools based on the EnKF, like the MGEnKF \cite{Moldovan2021_jcp}). Most of its uses are related to gas turbine design \cite{Duchaine2015_csd, Legrenzi2016_AIAA}, but it has also recently been employed in the field of aeroacoustics with OpenFOAM \cite{Moratilla-Vega2022_cpc}. The CWIPI communication protocol is based on the MPI library. Thus, the MPI and CWIPI environments must be initialized within the codes. This allows the use of CWIPI primitives to exchange information between two codes. In Figure \ref{fig:CWIPI}, direct communications between two codes through CWIPI and some of the main primitives are illustrated. \begin{figure}[ht] \centering \subfloat[{\footnotesize CWIPI between two codes}]{\label{fig:CWIPIaction}{\includegraphics[width=0.45\textwidth]{CWIPI.pdf}}}\hfill \hspace{0.3cm} \subfloat[{\footnotesize Main primitives in CWIPI}]{\label{fig:primitives}{\includegraphics[width=0.50\textwidth]{CWIPI_Primitives.pdf}}} \caption {Functioning of CWIPI} \label{fig:CWIPI} \end{figure} In this work, CONES couples the solver SimpleFOAM, which is designed to simulate flows using RANS, with a sequential DA library developed by the team. The structure of a single run is exemplified in Fig. \ref{fig:CONES}. The MPI communications and the coupler CWIPI are initialized in both codes (in OpenFOAM and in the DA library). \begin{figure} \centering \includegraphics[width=1\textwidth]{EnKF_Cones.pdf} \caption{Scheme of CONES for steady simulations} \label{fig:CONES} \end{figure} Despite the fact that applications of EnKF-based tools are tied to the time advancement of the solution, the application to stationary flows is straightforward. An analysis virtual time window is fixed in terms of the number of iterative steps of the code.
Once that number of time steps is performed simultaneously by the $N_e$ ensemble members (CFD runs), they send their information to the EnKF code and wait online for the updated flow field / parametric description. Currently, the information exchanged is the velocity field $\mathbf{u}^f$ and the parameters of the model $\theta^f$. Hence, the state matrix, composed of as many state vectors as members (CFD simulations) in the ensemble, is the one expressed in Eq. \ref{eqn:state_vector} for the DA cycle at time $k+1$. \begin{equation} \mathbf{x}_{i,k+1} = \begin{bmatrix} \mathbf{u}_{i,k+1} \\ \theta_{i,k+1} \end{bmatrix} \label{eqn:state_vector} \end{equation} An additional piece of information provided by the simulations is the set of values $\mathbf{s}_{i,k+1}$, i.e. the projection of the model solution on the coordinates of the sensors for each ensemble member. Considering that the sensor placement does not necessarily coincide with the center of a mesh element, interpolation of the flow field has to be performed. OpenFOAM possesses several functions to interpolate cell-center quantities to arbitrary points. The accuracy of these interpolation methods \cite{Leonard2021_ihpbc} has been taken into account. The nature of the observations $\mathbf{y}$ is analyzed in more detail in Section \ref{sec:DA_exp}, but CONES can work with sensors measuring both the pressure $p$ and the velocity field $\mathbf{u}$. In this specific case dealing with stationary simulations, the observation is constant and is loaded once, but it could be updated at each analysis phase in the case of an unsteady flow. Thus, the DA code receives information from the model and the sensors, and it produces an updated set of states and parameters ($\mathbf{u}^a$, $\theta^a$), which are sent back to the OpenFOAM simulations. The pressure $p$ is updated for each ensemble member via a Poisson equation, and this complete set of data is used to start a new set of iterative steps. Once the convergence of the model parameters complies with a threshold set by the user, the coupling is deleted, and both MPI and CWIPI environments are finalized. \section{DA experiments} \label{sec:DA_exp} CONES is here used to study the flow around a building using the numerical test case presented in Sec. \ref{test_case}. In particular, the DA tools are used to optimize the value of the five global constants driving the $\mathcal{K} - \varepsilon$ model, with the aim of minimizing the discrepancy between the RANS results and the high-fidelity observation provided. A similar analysis was recently performed by Zhao et al. \cite{Zhao2022_be}, but they used numerical results from a high-fidelity simulation. Here, the observation is taken from time-averaged measurements from experiments. The first key aspect to take into account is the determination of a suitable prior state for the velocity and pressure fields, as well as for the parametric description. For the latter, the values found by Margheri et al. \cite{Margheri2014_cf} using propagation of epistemic uncertainties are preferred to the classical values obtained by Launder and Sharma \cite{Launder1974_lhmt}. These baseline values, which are shown in Tab. \ref{tab:Prior_EnKF}, are the initial mean of the $N_e$ ensemble simulations.
Each value of the parameters for each CFD run is initially determined using a bounded Gaussian distribution $\mathcal{N}(\mu_N, \sigma_N)$, where $\mu_N$ is the parameter mean value and $\sigma_N$ is chosen to provide a sufficiently large initial variability of the parametric space, based on the work of Margheri et al. \cite{Margheri2014_cf}. The normal distributions are bounded between $\sigma_N$ and $7\sigma_N$; the limit has been empirically set depending on the sensitivity of each coefficient. For example, $C_{\varepsilon1}$ is bounded by $1.25\sigma_N$ but $\sigma_\varepsilon$ is bounded by $7\sigma_N$. The initial physical state for each ensemble member is obtained from a single run using the values of the model constants in Ref. \cite{Margheri2014_cf}. The number of ensemble members $N_e = 40$ is chosen considering other works in the literature relying on CFD for the model part of the EnKF \cite{katzfuss2016_as, Mons2021_prf, Moldovan2021_jcp}. The observation is obtained from time-averaged data from a total of $120$ sensors. Among these, $90$ sensors are pressure taps, and $30$ sensors are hot wires measuring two components of the velocity field, the streamwise velocity $u_x$ and the spanwise velocity $u_z$. This adds up to $150$ time-averaged observation values. The data is loaded at the beginning of the analysis phase in the following format: $\mathbf{y}= \begin{bmatrix} u_{x1} & \dots & u_{x30} & u_{z1} & \dots & u_{z30} & p_{31} & \dots & p_{120} \end{bmatrix}^T$, and does not change throughout the calculation. Similarly, the covariance matrix $\mathbf{R}_{k+1}$ is taken constant, leading to $\mathbf{R} = \sigma_m \mathbf{I}$, with $\sigma_m$ the confidence given to the measurements. Three independent DA experiments are performed. The variations do not deal with details of the model or the observation, but they consider different features of the DA procedure. More precisely, the cases analyzed are: \begin{itemize} \item Case A: classical EnKF. \item Case B: EnKF with covariance localization. \item Case C: EnKF with covariance localization and inflation. \end{itemize} \begin{table}[!ht] \centering \begin{tabularx}{\textwidth}{|C|C|C|C|C|} \hline \multirow{3}{*}{\textbf{Parameter}} & \multirow{2}{*}{\thead{$\mathbf{\mathcal{K}}$\textbf{-}$\mathbf{\varepsilon}$ \textbf{model}\\ \textbf{default}\\ \textbf{values}}} & \multicolumn{3}{c|}{\textbf{Prior of the EnKF}}\\ \cline{3-5} & & \thead{$\mathbf{\mu_N}$} & \thead{$\mathbf{\sigma_N}$ \\ for cases A,B} & \thead{$\mathbf{\sigma_N}$ \\ for case C}\\ \hline \hline $C_\mu$ [-] & 0.09 & 0.1 & 0.01 & 0.005\\ $C_{\varepsilon1}$ [-] & 1.44 & 1.575 & 0.1 & 0.05\\ $C_{\varepsilon2}$ [-] & 1.92 & 1.9 & 0.1 & 0.05\\ $\sigma_\mathcal{K}$ [-] & 1.0 & 1.0 & 0.1 & 0.05\\ $\sigma_\varepsilon$ [-] & 1.3 & 1.6 & 0.1 & 0.05\\ \hline \end{tabularx} \caption{Comparison between conventional constants from the RANS $\mathcal{K}-\varepsilon$ model and the initial parameters employed for the EnKF ($N_e = 40$)} \label{tab:Prior_EnKF} \end{table} For case studies A and B, the width of the virtual assimilation window has been fixed to $150$ iterations. For case C, it has been fixed to $100$, considering the additional evolution of the parameters caused by inflation. Those values have been chosen by observing results from preliminary analyses, which pointed out that at least $50$ iterations were needed to obtain a clear signature of the new parametric setting on the physical quantities.
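The sampling of the bounded Gaussian priors can be sketched as follows (illustrative Python/NumPy; the means and standard deviations are those of Tab.~\ref{tab:Prior_EnKF} for cases A and B, while the bounds, which the text exemplifies only for two coefficients, are placeholders):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def bounded_normal(mu, sigma, bound, size):
    # Draw from N(mu, sigma), redrawing any sample outside mu +/- bound
    x = rng.normal(mu, sigma, size)
    while np.any(np.abs(x - mu) > bound):
        bad = np.abs(x - mu) > bound
        x[bad] = rng.normal(mu, sigma, bad.sum())
    return x

Ne = 40
C_mu   = bounded_normal(0.100, 0.01, 7.00 * 0.01, Ne)
C_eps1 = bounded_normal(1.575, 0.10, 1.25 * 0.10, Ne)
\end{verbatim}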
For localization in cases B and C, the domain is also clipped to a volume sufficiently large around the observations, similarly to what Moldovan et al. \cite{Moldovan2022_jfm} did for the BARC geometry. Besides reducing the number of degrees of freedom in the EnKF, and thus the computational cost, the goal of the clipping is to eliminate regions which are far from the sensors and which would only be marginally updated by the DA procedure. Fig. \ref{fig:sensors_clipping} shows the cutting region defined for the physical localization. The coefficient $\mu$ of the localization matrix $\mathbf{L}$ is set so that the values of the matrix are close to $0$ at the border of the clipping. This avoids discontinuity problems in the physical field. The parameters of the model $\theta$ are not affected by localization. \begin{figure}[h!] \centering \includegraphics[width=0.7\textwidth]{building.pdf} \caption{Clipping box used for localization: pressure sensors are represented in red and velocity sensors are displayed in green (data in $mm$) } \label{fig:sensors_clipping} \end{figure} \subsection{Case A: classical EnKF} In this first case, the classical Ensemble Kalman Filter is used. The run ends when convergence of the parameters is reached, which happens in this case after $60$ analysis phases (i.e. a total of $6000$ CFD iterations). The evolution of the mean value of the five parameters of the $\mathcal{K} - \varepsilon$ model is shown in Fig. \ref{fig:coeff_withoutBoth}. One can see that the final results obtained by the EnKF are significantly different from the baseline values, and that the speed at which the parameters converge varies considerably. In particular, the evolution of $\sigma_{\varepsilon}$ deserves some comments. This coefficient controls the magnitude of the turbulent diffusion term in the equation for $\varepsilon$, $D_{\varepsilon} = \nu_t/\sigma_{\varepsilon}+\nu$, which is associated with non-homogeneous conditions (see Sec. \ref{sec::Num}). The optimization performed by the EnKF targets very low values for $\sigma_{\varepsilon}$ during the calculation, increasing the relevance of $D_{\varepsilon}$ in the equation. However, noise propagated by the Kalman gain can drive the value for some ensemble members negative, resulting in a divergence of the calculation. Therefore, we have imposed a constraint on this parameter so that values cannot be lower than a small but positive prescribed value. For the other parameters, one can see that $C_\mu$ and $C_{\varepsilon1}$ converge to values close to $1/3$ of the initial estimate, $\sigma_\mathcal{K}$ does not exhibit large variations, and $C_{\varepsilon2}$ becomes roughly three times larger. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{withoutBoth_2.pdf} \caption{Convergence of the $\mathcal{K} - \varepsilon$ coefficients without inflation and localization} \label{fig:coeff_withoutBoth} \end{figure} \subsection{Case B: EnKF with covariance localization} In this case, the calculation is performed with covariance localization. The evolution of the five coefficients is shown in Figure \ref{fig:coeff_withoutInfl}. The trends, and in particular the evolution of $\sigma_\varepsilon$, are similar to the ones observed for case A; the remarks given for $\sigma_{\varepsilon}$ in case A hold here as well. However, larger fluctuations can be observed before convergence.
Also, a significantly larger number of iterations ($100$ analysis phases) is required to reach a good convergence of the parameters. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{withoutInfl_2.pdf} \caption{Convergence of the $\mathcal{K} - \varepsilon$ coefficients with localization but without inflation} \label{fig:coeff_withoutInfl} \end{figure} \subsection{Case C: EnKF with both inflation and localization} The DA calculation is here performed relying on deterministic inflation for the model's parameters and on covariance localization. This is the most advanced run in terms of complexity of the DA algorithm. The evolution of the five parameters is shown in Figure \ref{fig:coeff_withBoth}. To ensure the robustness of the simulation during the first time steps, the inflation coefficient $\lambda$ is gradually increased from 1.05 to 1.3 and, later, removed to obtain convergence ($\lambda=1.05$ for $k\in [1,40]$, $\lambda=1.1$ for $k\in [41,120]$, $\lambda=1.2$ for $k\in[121,160]$, $\lambda=1.3$ for $k\in[161,200]$, and $\lambda=1$ for $k>200$). Some coefficients, such as $C_\mu$ and $\sigma_\mathcal{K}$, show a higher sensitivity to changes in the value of $\lambda$, highlighting the importance of inflation in identifying a suitably large parametric space for the optimization. For this reason, convergence is reached significantly later in this case. Also, the threshold value for $\sigma_\varepsilon$ is increased here, in order to avoid stability problems that could easily be triggered by the higher variability associated with the parametric inflation. The impact on the physical prediction of the three different parametric descriptions, which are reported in Table \ref{tab:opti_values}, is investigated in Section \ref{sec:rez}. \begin{figure}[h!] \centering \includegraphics[width=1\textwidth]{bothInflLoc.pdf} \caption{Convergence of the $\mathcal{K}-\varepsilon$ coefficients with inflation and localization} \label{fig:coeff_withBoth} \end{figure} \begin{table}[h!] \centering \begin{tabularx}{\textwidth}{|C|C|C|C|} \hline \multirow{2}{*}{\textbf{Parameter}} & \multicolumn{3}{c|}{\textbf{Optimized values}}\\ \cline{2-4} & \thead{\textbf{Case A}} & \thead{\textbf{Case B}} & \thead{\textbf{Case C}}\\ \hline $C_\mu$ [-] & 0.021 & 0.032 & 0.020 \\ $C_{\varepsilon 1}$ [-] & 0.418 & 0.165 & 0.162 \\ $C_{\varepsilon 2}$ [-] & 4.629 & 4.080 & 7.978 \\ $\sigma_\mathcal{K}$ [-] & 0.837 & 0.476 & 0.355 \\ $\sigma_\varepsilon$ [-] & 0.1 & 0.1 & 0.1\\ \hline \end{tabularx} \caption{Final values of the $\mathcal{K}-\varepsilon$ model coefficients obtained with the EnKF} \label{tab:opti_values} \end{table} \section{Results} \label{sec:rez} Results obtained by the three DA runs are now compared with the prior (classical RANS using the $\mathcal{K}-\varepsilon$ model) and the time-averaged experimental results. \subsection{Velocity field} The analysis of the velocity field is performed first. Velocity is an explicit variable in segregated solvers for incompressible flows. Therefore, the performance of the DA strategies can be assessed by the qualitative improvement obtained in the prediction of this quantity. Figure \ref{FIG:VelProfiles} shows the streamwise velocity profiles $u_x$ and the normal velocity profiles $u_z$ for several locations corresponding to the positions of the hot wires. For $u_x$, one can see that the accuracy of the field predicted via DA is noticeably improved at each location. Very minor differences can be observed when comparing the three different DA strategies.
On the other hand, the prediction of the normal velocity $u_z$ is very similar to the prior. This is not surprising, considering that the match between the prior and the experimental data is already good. An interesting result can be observed for Point $20$, where the maximum difference between the prior and the experimental data is observed for $u_z$. In this case, one can see that the DA prediction gets closer to the experiments, confirming that the EnKF is able to provide a statistically more accurate prediction of the flow, down to the confidence indicated for the different sources of information. \begin{figure}[ht] \centering \includegraphics[scale=1.6]{VelProfilesAll.jpg} \caption{Vertical and streamwise velocity profiles above the marked red locations on the roof (Points: 18, 20, 22, 36, 38, 50, 54); comparison between wind tunnel data (WT), the k-epsilon model (k-eps), and the three Ensemble Kalman Filter cases.} \label{FIG:VelProfiles} \end{figure} The features of the velocity field are further assessed in figure \ref{FIG:Streamlines}, where streamlines on a vertical plane at the center of the high-rise building are shown. Here, the prior and the DA runs are compared with a validated LES study. One can see that the RANS parametric variation, obtained by integrating local experimental information in the roof region, is responsible for a significant reduction of the recirculation region behind the building. While the size of this region is now smaller than in the reference LES, a similar topological organization can be observed at mid-height, which is not captured by the RANS prior. A zoom of the roof area, which is shown in figure \ref{FIG:recircBubble}, shows how the assimilation of the velocity field is beneficial in improving the prediction of the recirculation region. \begin{figure}[ht] \centering \includegraphics[scale=1.1]{StreamLines.jpg} \caption{Flow structures at the middle plane; comparison between the validated Large Eddy Simulation (validated LES) published in \citep{vranesevic_furthering_2022} and the k-epsilon model (k-eps) on the left side of the Figure, and the three Ensemble Kalman Filter cases on the right side of the Figure.} \label{FIG:Streamlines} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=1]{StreamLinesZoom.jpg} \caption{Close view of the flow structures above the roof top; the red arrow indicates the reattachment position of the separation bubble for the simulation with the k-epsilon model, and the green arrow indicates the reattachment position of the separation bubble for the validated LES simulation} \label{FIG:recircBubble} \end{figure} \subsection{Pressure field} The behaviour of the pressure field is now investigated. This physical quantity is significantly more difficult to predict, because the Poisson equation resolved in the numerical solver uses the pressure as a Lagrangian multiplier. Therefore, the pressure field is simultaneously used as a physical variable and as a Lagrangian constraint to grant incompressibility of the flow. Hence, the analysis of this quantity is crucial to assess the stability and the precision of the algorithms. In figure \ref{FIG:PMatrix} the mean pressure coefficient is shown in terms of performance metrics, in comparison with experimental data.
If one looks only at the best-performance region, i.e. errors below $10\%$, one could be erroneously led to conclude that the prior ($12\%$ of the occurrences with less than $10\%$ error) behaves better than the DA runs (between $6\%$ and $10\%$ of the occurrences with less than $10\%$ error). This information is misleading, though, as a large number of occurrences for the DA runs are just outside this interval. In fact, as larger margins of error are considered, one can see that the DA runs outperform the prior RANS. For a $20\%$ error threshold, an improvement of around $5\%$ in occurrences ($22.5-24.5\%$ against $18.4\%$) is observed with the use of DA. The gap rises to around $15\%$ when a $30\%$ error threshold is considered. This trend is confirmed by analyzing the mean pressure coefficients calculated at the pressure taps on the roof, shown in figure \ref{FIG:POnLines}. One can see that the DA prediction almost always performs better than the prior, even if the gains for the pressure field are less important than those observed for the velocity field. \begin{figure}[ht] \centering \includegraphics[scale=0.7]{PressureMetrix.jpg} \caption{Scatter plot of the mean pressure coefficient for all numerical cases, with performance metrics in comparison with experimental data.} \label{FIG:PMatrix} \end{figure} \begin{figure}[ht] \centering \includegraphics[scale=1.25]{PressuresOnLines.jpg} \caption{Mean pressure coefficient at the pressure tap locations along the red lines marked on the roof; comparison between wind tunnel data (WT), the k-epsilon model (k-eps), and the three Ensemble Kalman Filter cases} \label{FIG:POnLines} \end{figure} Some remarks should be made about the performance of the three DA runs. Despite the differences in the techniques used and the apparently different results obtained for the parameter optimization, the predictions of the physical variables are very similar. The values of the model parameters have probably converged towards a robust optimum, where the sensitivity of the solution to further parametric variation is very low. This aspect, which needs further investigation, may indicate that robust optimization can be obtained by setting a suitable confidence interval for the observation. In this scenario, the application of localization has proven effective. The reduction of degrees of freedom in the DA process, which significantly decreases the computational resources required for each analysis phase, does not degrade the results. On the other hand, probably because of the features of the parametric optimum region found, the inflation techniques have not improved the results. \section{Conclusions} \label{sec:conclusions} The newly developed platform CONES has been used to perform a data-driven investigation of the flow around a high-rise building. More precisely, heterogeneous experimental samples, in the form of data from pressure taps and hot wires, have been integrated with RANS CFD runs performed using the open-source code OpenFOAM. The coupling has been performed using techniques based on the Ensemble Kalman Filter (EnKF), including advanced manipulations such as localization and inflation. The augmented state estimation obtained via the EnKF has also been employed to improve the predictive features of the model via an optimization of the five free global model constants of the $\mathcal{K}-\varepsilon$ turbulence model used to close the equations. Therefore, a relatively small uncertainty space has been chosen, the same employed by Ben Ali et al.
\cite{BenAli2022_jweia} using variational data assimilation. The results have shown a global improvement for the physical quantities investigated, and the results obtained with the different DA strategies are equivalent. Regarding this last point, physical and covariance localization appear to be effective for the study of complex flows. The reduction of the degrees of freedom of the DA problem has not affected the quality of the results, while globally reducing the time needed for the data-driven procedures. On the other hand, the usage of inflation has not produced better results, in particular owing to the increase of computational resources required. The analysis of the velocity field shows that the EnKF allows the error in the streamwise and normal directions to be reduced by more than $50\%$. The effects of the parametric inference are also observed in the recirculation region behind the building. In this case, the physical topology of the flow becomes more similar to the reference LES validated with experimental data, even if the recirculation bubble is overly reduced in size. For the pressure field, improvements are observed even if they are not as quantitatively important as for the velocity field. This may be due to the segregated structure of the CFD solver, which employs the pressure as a Lagrangian multiplier to impose the incompressibility constraint. Potentially, more sophisticated coupled solvers could provide improved results when used in DA tools using pressure data as observation. Future investigations include more sophisticated parametric descriptions of the turbulence modelling employed, including coupling between DA tools and machine learning applications. This work was granted access to the HPC resources of GENCI in the framework of the resources requested in A12 for project A0122A01741 on the IRENE supercomputer (TGCC). Florent Duchaine and Miguel Ángel Moratilla-Vega are warmly acknowledged for the help provided during the early stages of development of CONES.
\section*{Contents} \noindent \S\ref{s:intro}. Introduction \noindent \S\ref{s:glmh}. Log mixed Hodge structures with group action \noindent \S\ref{s:lha}. Log higher Albanese manifolds \noindent \S\ref{s:example}. Description of a result of Deligne by log higher Albanese map \noindent \S\ref{s:appen}. Summary of a result of Deligne in \cite{D89} \renewcommand{\thefootnote}{\fnsymbol{footnote}} \footnote[0]{Primary 14C30; Secondary 14D07, 32G20} \renewcommand{\thefootnote}{\arabic{footnote}} \setcounter{section}{-1} \section{Introduction}\label{s:intro} We review the results of \cite{KNU17}: - General theory of log mixed Hodge structures with polarizable graded quotients endowed with group actions. - Description of the functors represented by higher Albanese manifolds in terms of tensor functors. - Toroidal partial compactifications of higher Albanese manifolds to get log higher Albanese manifolds, and descriptions of the functors represented by them. We describe a result of Deligne in \cite{D89} about polylogarithms, which appear in higher Albanese maps, in terms of the log higher Albanese maps. The advantage of our formulation is that log higher Albanese maps have $q$-expansions at the boundary points, over which we directly observe $\zeta(n)$ $(n\ge2)$ as values of polylogarithms. For readers' convenience, we add as an Appendix a summary of the related result of Deligne in \cite{D89}. Actually, for the present description in Section 3, it is enough to use the formulation of spaces of nilpotent orbits in \cite{KNU} Part III. Their formulation in \cite{KNU17} is reviewed in Sections 1 and 2 for further study in the case of higher Albanese manifolds with non-trivial Hodge structures. \section{Log mixed Hodge structures with group action}\label{s:glmh} \medskip We review general formulations and results on log mixed Hodge structures with group action from \cite{KNU17} and \cite{KNU} Parts III, IV, in a minimal form for later use in this paper. The full version will appear in \cite{KNU} Part V. \begin{para}\label{log} A {\it log structure} on a ringed space $(S,\cO_S)$ consists of a sheaf of monoids $M$ on $S$ and a homomorphism $\alpha:M\to\cO_S$ such that $\alpha$ induces an isomorphism $\alpha^{-1}(\cO_S^\times)\overset\sim\to\cO_S^\times$. \medskip \begin{sbeg} Let $S={\mathbb{C}}$ and let $\{0\}$ be a divisor. The associated log structure is $M=\{ f\in\cO_S\,|\,\text{$f$ is invertible on $S\smallsetminus \{0\}$}\}$. $S^{\log}$ is defined to be the set of all pairs $(s,h)$ consisting of a point $s\in S$ and an argument function $h$, which is a homomorphism $M_s\to {\bold{S}}^1$ of monoids whose restriction to $\cO_{S,s}^{\times}$ is $u\mapsto u(s)/|u(s)|$. In this case, the ringed space $(S^{\log},\cO_S^{\log})$ is explained as follows. Let $\tilde S^{\log}:={\mathbb{C}}\cup({\mathbb{R}}\times i\infty)={\mathbb{R}}\times i({\mathbb{R}}\cup\infty)$ endowed with the coordinate function $z=x+iy$ $(-\infty< y\le\infty)$. Let $S^{\log}:=({\mathbb{C}}\cup({\mathbb{R}}\times i\infty))/{\mathbb{Z}}$, and consider the maps $\tilde S^{\log}\to S^{\log}\to S:$ $z=x+iy\mapsto(e^{-2\pi y}, e^{2\pi ix})\mapsto q:=e^{2\pi iz}$. Note that $(e^{-2\pi y}, e^{2\pi ix})$ is a polar coordinate extended over $-\infty< y\le\infty$, and $S^{\log}\to S$ is a real oriented blowing-up at $\{0\}$, which is proper. The homomorphism $h:M_0\to\bold S^1$ in $t:=(0,h)\in S^{\log}$ sends $q$ to $e^{2\pi ix}$.
Since $z$ is considered as a branch of $(2\pi i)^{-1}\log(q)$, we have $\cO_{S,t}^{\log}=\cO_{S,0}[z]$, which is isomorphic to a polynomial algebra $\cO_{S,0}[T]$ in one indeterminate $T$ over $\cO_{S,0}$ under $z\leftrightarrow T$ (\cite{KU09} 2.2.5). \end{sbeg} For a more general and finer treatment, see \cite{KN99}, \cite{KU09} 2.2. \end{para} \begin{para}\label{Gsetting} Let $G$ be a linear algebraic group over ${\mathbb{Q}}$. Let $G_u$ be the unipotent radical of $G$ and let $G_{{{\rm red}}}=G/G_u$. Let ${\operatorname{Rep}}(G)$ be the category of finite-dimensional linear representations of $G$ over ${\mathbb{Q}}$. \end{para} \begin{para}\label{k0} Let $k_0:{\mathbb{G}}_m\to G_{{{\rm red}}}$ be a ${\mathbb{Q}}$-rational and central homomorphism. Assume that, for one (hence every) lifting ${\mathbb{G}}_{m,{\mathbb{R}}}\to G_{\mathbb{R}}$ of $k_0$, the adjoint action of ${\mathbb{G}}_{m,{\mathbb{R}}}$ on ${\rm {Lie}}\,(G_u)_{\mathbb{R}}={\mathbb{R}}\otimes_{\mathbb{Q}}{\rm {Lie}}\,(G_u)$ is of weight $\le-1$. Then, for any $V\in {\operatorname{Rep}}(G)$, the action of ${\mathbb{G}}_m$ on $V$ via a lifting ${\mathbb{G}}_m\to G$ of $k_0$ defines an increasing filtration $W$ on $V$ over ${\mathbb{Q}}$, called the {\it weight filtration}, which is independent of the lifting. \end{para} \begin{para}\label{type k0} Assume that we are given a homomorphism $k_0:{\mathbb{G}}_m\to G_{{{\rm red}}}$ as in \ref{k0}. A {\it $G$-mixed Hodge structure} ({\it $G$-MHS}, for short) {\it of type $k_0$} is an exact $\otimes$-functor (\cite{D90} 2.7) from ${\operatorname{Rep}}(G)$ to the category of ${\mathbb{Q}}$-mixed Hodge structures keeping the underlying vector spaces with weight filtrations. \end{para} \begin{para}\label{SCR} As in \cite{D71}, let $S_{{\mathbb{C}}/{\mathbb{R}}}$ be the Weil restriction of scalars of ${\mathbb{G}}_m$ from ${\mathbb{C}}$ to ${\mathbb{R}}$. It represents the functor $A\mapsto({\mathbb{C}}\otimes_{\mathbb{R}} A)^\times$ for commutative rings $A$ over ${\mathbb{R}}$. In particular, $S_{{\mathbb{C}}/{\mathbb{R}}}({\mathbb{R}})={\mathbb{C}}^\times$; that is, $S_{{\mathbb{C}}/{\mathbb{R}}}$ is ${\mathbb{C}}^\times$ regarded as an algebraic group over ${\mathbb{R}}$. Let $w:{\mathbb{G}}_m\to S_{{\mathbb{C}}/{\mathbb{R}}}$ be the homomorphism induced from the natural map $A^\times\to({\mathbb{C}}\otimes_{\mathbb{R}} A)^\times$. \end{para} \begin{para}\label{hd} The following (1) and (2) are equivalent: (1) A finite-dimensional linear representation of $S_{{\mathbb{C}}/{\mathbb{R}}}$ over ${\mathbb{R}}$. (2) A finite-dimensional ${\mathbb{R}}$-vector space $V$ with a decomposition $V_{\mathbb{C}}:={\mathbb{C}}\otimes_{\mathbb{R}} V=\bigoplus_{p,q\in{\mathbb{Z}}}V_{\mathbb{C}}^{p,q}$ such that, for any $p,q$, $V_{\mathbb{C}}^{q,p}$ is the complex conjugate of $V_{\mathbb{C}}^{p,q}$ (Hodge decomposition). For a finite-dimensional linear representation $V$ of $S_{{\mathbb{C}}/{\mathbb{R}}}$, the corresponding decomposition is defined by $$ V_{{\mathbb{C}}}^{p,q}:=\{v\in V_{{\mathbb{C}}}\;|\;[z]v=z^p\bar z^qv\;\text{for}\;z\in{\mathbb{C}}^\times\}. $$ Here $[z]$ denotes $z\in{\mathbb{C}}^\times$ regarded as an element of $S_{{\mathbb{C}}/{\mathbb{R}}}({\mathbb{R}})$. \end{para} \begin{para}\label{ah} Let $H$ be a $G$-MHS of type $k_0$ (\ref{type k0}).
By \ref{hd} and Tannaka duality (cf.\ \cite{D90} 1.12 Th\'eor\`eme), the Hodge decompositions of $\gr^W$ of $H(V)$ for $V\in{\operatorname{Rep}}(G)$ give a homomorphism $S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ such that the composite ${\mathbb{G}}_m\overset{w}\to S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ coincides with $k_0$. We call this $S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ the {\it homomorphism associated with $H$}. \end{para} \begin{para}\label{D} Let $k_0:{\mathbb{G}}_m\to G_{{{\rm red}}}$ be as in \ref{k0}. Fix a homomorphism $h_0:S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ such that $h_0\circ w=k_0$. A {\it $G$-mixed Hodge structure of type $h_0$} is a $G$-mixed Hodge structure of type $k_0$ (\ref{type k0}) whose associated homomorphism $S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ (\ref{ah}) is $G_{{{\rm red}}}({\mathbb{R}})$-conjugate to $h_0$. The {\it period domain $D=D(G,h_0)$} associated to $(G,h_0)$ is defined to be the set of isomorphism classes of $G$-mixed Hodge structures of type $h_0$. \end{para} \begin{para}\label{usual} The usual period domains of Griffiths \cite{G} and their generalization for mixed Hodge structures \cite{U} are special cases of the present period domains. Let $\Lambda=(H_0, W, (\langle \;,\,\rangle_w)_w, (h^{p,q})_{p,q})$ be the usual Hodge data, as in \cite{KNU} Part III. Let $G$ be the subgroup of ${\rm {Aut}}(H_{0,{\mathbb{Q}}}, W)$ consisting of elements which induce {\it similitudes for $\langle\;,\,\rangle_w$} for each $w$. That is, $ G:= \{g\in {\rm {Aut}}(H_{0,{\mathbb{Q}}}, W)\;|\;$ for any $w$, there is a $t_w\in {\mathbb G}_m$ such that $\langle gx,gy\rangle_w = t_w\langle x, y \rangle_w$ for any $x,y \in \gr^W_w\}. $ Let $G_1:={\rm {Aut}}(H_{0,{\mathbb{Q}}}, W, (\langle \;,\,\rangle_w)_w) \subset G$. Let $D(\Lambda)$ be the period domain of \cite{U}. Then $D(\Lambda)$ is identified with an open and closed part of $D$ in this paper as follows. Assume that $D({\Lambda})$ is not empty and fix an ${{\bold r}}\in D(\Lambda)$. Then the Hodge decomposition of $\gr^W{{\bold r}}$ induces $h_0: S_{{\mathbb{C}}/{\mathbb{R}}} \to (G_{\text{red}})_{\mathbb{R}}$. (We have $\langle [z]x, [z]y\rangle_w= |z|^{2w}\langle x, y\rangle_w$ for $z\in {\mathbb{C}}^\times$ (see \ref{SCR} for $[z]$).) Consider the associated period domain $D$ (\ref{D}). Then $D$ is a finite disjoint union of $G_1({\mathbb{R}})G_u({\mathbb{C}})$-orbits which are open and closed in $D$. Let $\cD$ be the $G_1({\mathbb{R}})G_u({\mathbb{C}})$-orbit in $D$ consisting of points whose associated homomorphisms $S_{{\mathbb{C}}/{\mathbb{R}}}\to (G_{\text{red}})_{\mathbb{R}}$ are $(G_1/G_u)({\mathbb{R}})$-conjugate to $h_0$. Then the map $H\mapsto H(H_{0,{\mathbb{Q}}})$ gives a $G_1({\mathbb{R}})G_u({\mathbb{C}})$-equivariant isomorphism $\cD\simeq D(\Lambda)$. \end{para} \begin{para}\label{Y} Fix a homomorphism $h_0:S_{{\mathbb{C}}/{\mathbb{R}}}\to(G_{{{\rm red}}})_{\mathbb{R}}$ as in \ref{D}. Let $\cC$ be the category of triples $(V,W,F)$ consisting of a finite-dimensional ${\mathbb{Q}}$-vector space $V$, an increasing filtration $W$ on $V$ (called the weight filtration), and a decreasing filtration $F$ on $V_{\mathbb{C}}$ (called the Hodge filtration). Let $Y$ be the set of all isomorphism classes of exact $\otimes$-functors from ${\operatorname{Rep}}(G)$ to $\cC$ preserving the underlying vector spaces with weight filtrations.
Then $G({\mathbb{C}})$ acts on $Y$ by changing the Hodge filtration $F$. We have $D\subset Y$, and $D$ is a $G({\mathbb{R}})G_u({\mathbb{C}})$-orbit in $Y$ (cf.\ \cite{KNU17} Proposition 3.2.5). We define $\check D:=G({\mathbb{C}})D$ in $Y$. Thus $$ D\subset \check D=G({\mathbb{C}})D \subset Y. $$ $\Dc$ is a $G({\mathbb{C}})$-homogeneous space and $D$ is an open subspace. Hence $\Dc$ and $D$ are complex analytic manifolds. \end{para} \begin{para}\label{pol} Let $h_0: S_{{\mathbb{C}}/{\mathbb{R}}}\to (G_{{{\rm red}}})_{\mathbb{R}}$ be as in \ref{D}. Let $C$ be the image of $i\in {\mathbb{C}}^\times = S_{{\mathbb{C}}/{\mathbb{R}}}({\mathbb{R}})$ under $h_0$ in $(G_{{{\rm red}}})({\mathbb{R}})$ (Cartan involution). We say that $h_0$ is {\it ${\mathbb{R}}$-polarizable} if $\{a\in (G_{{{\rm red}}})'({\mathbb{R}})\;|\; Ca=aC\}$ is a maximal compact subgroup of $(G_{{{\rm red}}})'({\mathbb{R}})$. Here $(G_{{{\rm red}}})'$ denotes the commutator subgroup of $G_{{{\rm red}}}$. \end{para} \begin{para}\label{Gamma1} Let $h_0: S_{{\mathbb{C}}/{\mathbb{R}}}\to (G_{{{\rm red}}})_{\mathbb{R}}$ be ${\mathbb{R}}$-polarizable (\ref{pol}). Let $\Gamma$ be a subgroup of $G({\mathbb{Q}})$ for which there are a faithful $V\in {\operatorname{Rep}}(G)$ and a ${\mathbb{Z}}$-lattice $L$ in $V$ such that $L$ is stable under the action of $\Gamma$. Then the following holds (\cite{KNU17} Proposition 3.3.4): $(1)$ The action of $\Gamma$ on $D$ is proper, and the quotient space $\Gamma \operatorname{\backslash} D$ is Hausdorff. $(2)$ If $\Gamma$ is torsion-free and if $\gamma p=p$ with $\gamma\in \Gamma$ for some $p\in D$, then $\gamma=1$. $(3)$ If $\Gamma$ is torsion-free, then the projection $D\to \Gamma \operatorname{\backslash} D$ is a local homeomorphism. \end{para} \begin{para}\label{nilp} Let $(G,h_0)$ be as above. A {\it nilpotent cone} is a cone $\sigma$ over ${\mathbb{R}}_{\ge0}$ in ${\rm {Lie}}\,(G)_{\mathbb{R}}$ generated by a finite number of mutually commuting elements such that, for any $V\in{\operatorname{Rep}}(G)$, the image of $\sigma$ under the induced map ${\rm {Lie}}\,(G)_{\mathbb{R}}\to {\rm {End}}_{\mathbb{R}}(V)$ consists of nilpotent operators. For $F\in\check D$ and a nilpotent cone $\sigma$, the pair $(\sigma,\exp(\sigma_{\mathbb{C}})F)$ is a {\it nilpotent orbit} if it satisfies the following conditions. Take generators $N_1,\dots,N_n\in{\rm {Lie}}\,(G)_{\mathbb{R}}$ of the cone $\sigma$. (1) (admissibility) There is a faithful $V\in{\operatorname{Rep}}(G)$ such that the relative monodromy weight filtrations $M(N_j,W)$ on $V$ exist for all $1\le j\le n$. (2) (Griffiths transversality) $N_jF^p{\subset} F^{p-1}$ for any $1\le j\le n$, $p\in {\mathbb{Z}}$. (3) (limit mixed Hodge property) $\exp(\sum_{j=1}^n iy_jN_j)F\in D$ if $y_j \in {\mathbb{R}}_{>0}$ are sufficiently large. \medskip \end{para} This is well-defined, i.e., independent of the choice of generators $N_1,\dots,N_n$. Note that, for admissibility, the above condition (1) is enough under the assumption of ${\mathbb{R}}$-polarizability (cf.\ \cite{Kas86}, \cite{KNU} III Proposition 1.3.4, Remark in 2.2.2, \cite{Kat14} Proposition 2.1.10).
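\medskip The following classical example (the Tate degeneration of elliptic curves; added here for the reader's orientation and not taken from \cite{KNU17}) illustrates the notion of a nilpotent orbit. Let $V={\mathbb{Q}} e_1\oplus{\mathbb{Q}} e_2$ with the symplectic form $\langle e_1,e_2\rangle=1$, let $G={\rm{SL}}(2)$ act on $V$, and take
$$
N=\begin{pmatrix}0&1\\0&0\end{pmatrix}\in{\rm {Lie}}\,(G)_{\mathbb{R}},\qquad
\sigma={\mathbb{R}}_{\ge0}N,\qquad
F^1={\mathbb{C}} e_2\subset F^0=V_{\mathbb{C}}.
$$
Then $NF^1={\mathbb{C}} e_1\subset F^0$ (Griffiths transversality), $\exp(zN)F^1={\mathbb{C}}(e_2+ze_1)$, and, with suitable sign conventions for the polarization, $\exp(iyN)F\in D$ for all $y>0$, so that $(\sigma,\exp(\sigma_{\mathbb{C}})F)$ is a nilpotent orbit. The monodromy weight filtration is $0\subset M_0=M_1={\mathbb{Q}} e_1\subset M_2=V$, recovering the limit mixed Hodge structure of a degenerating elliptic curve. \medskip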
\begin{para}\label{DSig} A {\it weak fan $\Sigma$ in ${\rm {Lie}}\,(G)$} is a nonempty set of sharp rational nilpotent cones satisfying the conditions that it is closed under taking faces and that any ${\sigma}, {\sigma}' \in \Sigma$ coincide if they have a common interior point and if there is an $F\in \Dc$ such that both $(\sigma,\exp(\sigma_{\mathbb{C}})F)$ and $(\sigma',\exp(\sigma'_{\mathbb{C}})F)$ are nilpotent orbits. For a weak fan $\Sigma$ in ${\rm {Lie}}\,(G)$, let $D_\Sigma$ be the set of all nilpotent orbits $(\sigma,\exp(\sigma_{\mathbb{C}})F)$ with $\sigma\in\Sigma$ and $F\in\Dc$. \end{para} \begin{para}\label{sc} Let $\Gamma$ be a subgroup of $G({\mathbb{Q}})$ as in \ref{Gamma1}. A weak fan $\Sigma$ in \ref{DSig} is said to be {\it strongly compatible with $\Gamma$} if $\Sigma$ is stable under the adjoint action of $\Gamma$ and each cone $\sigma\in\Sigma$ is generated over ${\mathbb{R}}_{\ge0}$ by the logarithms of $\exp(\sigma)\cap\Gamma$. \end{para} \begin{para}\label{cB} ${\cal {B}}$ denotes the category of locally ringed spaces with a covering by open sets each of which has the strong topology in an analytic space. ${\cal {B}}(\log)$ denotes the category of objects of ${\cal {B}}$ endowed with an fs log structure. For the precise definitions of these, see \cite{KU09} 3.2.4, \cite{KNU} Part III 1.1. \end{para} \begin{para}\label{LMH} Let $S\in{\cal {B}}(\log)$. A ${\mathbb{Q}}$-{\it log mixed Hodge structure} ({\it ${\mathbb{Q}}$-LMH}, for short) {\it with ${\mathbb{R}}$-polarizable graded quotients on $S$} is a quadruple $(H_{\mathbb{Q}},W,H_\cO,F)$ consisting of a locally constant sheaf $H_{\mathbb{Q}}$ with an increasing filtration $W$ of $H_{\mathbb{Q}}$ on $(S^{\log},\cO_S^{\log})$, a locally free sheaf $H_\cO$ with a decreasing filtration $F$ of $H_\cO$ on $(S,\cO_S)$ such that $\gr_F^p$ is locally free for all $p$, and a specified isomorphism $\cO_S^{\log}\otimes_{\mathbb{Q}} H_{{\mathbb{Q}}}\simeq\cO_S^{\log}\otimes_{\cO_S}H_\cO$, whose pullbacks to each fs log point $s\in S$ satisfy the following conditions (1)--(3). (1) (admissibility) Let $N_1,\dots,N_n$ be generators of the local monodromy cone $C(s):=\Hom(M_s/\cO_s^\times, {\mathbb{R}}_{\geq 0})\subset\pi_1(s^{\log})$. The relative monodromy weight filtrations $M(N_j, W)$ exist for all $1\le j\le n$. (2) (Griffiths transversality) $ \nabla F^p {\subset} \omega_s^{1,\log}\otimes_{\cO_s}F^{p-1} \quad \text{for all $p\in{\mathbb{Z}}$}.$ (3) (${\mathbb{R}}$-polarizability on graded quotients) For each $w\in{\mathbb{Z}}$, there is a non-degenerate $(-1)^w$-symmetric bilinear form $\langle\;,\,\rangle_w:H(\gr^W_w)_{\mathbb{R}}\times H(\gr^W_w)_{\mathbb{R}}\to{\mathbb{R}}$ over ${\mathbb{R}}$ such that the quadruple $(H(\gr^W_w), \langle\;,\,\rangle_w, H(\gr^W_w)_\cO, F(\gr^W_w))$ is an ${\mathbb{R}}$-polarized log Hodge structure of weight $w$ on $s$. The last part means the following. Let $q_1,\dots,q_r\in M_s\smallsetminus\cO_s^\times$ be elements whose classes generate the monoid $M_s/\cO_s^\times$. For $t\in s^{\log}$ and $a\in\text{sp}(t)$ with $\exp(a(\log(q_j)))$ sufficiently small for all $1\le j\le r$, $(H(\gr^W_w), \langle\;,\,\rangle_w, H(\gr^W_w)_\cO, F(\gr^W_w)(a))$ is an ${\mathbb{R}}$-polarized Hodge structure. Here we use $H(\gr^W_w)_{\mathbb{R}}:={\mathbb{R}}\otimes_{\mathbb{Q}} H(\gr^W_w)$, $H(\gr^W_w)_\cO:=\cO_s\otimes_{\mathbb{Q}} H(\gr^W_w)$, $F(\gr^W_w):=F(H(\gr^W_w)_{\cO})$. \medskip Note that, in \cite{KU09} Definition 2.4.7, \cite{KNU} Part III 1.3.2, rational polarizations on graded quotients were used.
But, in the present paper, we use ${\mathbb{R}}$-polarizability on graded quotients. Even under this latter condition, the proof of \cite{KNU} Part III Proposition 1.3.4 works. \end{para} \begin{para}\label{LMH1} {\bf Definition.} Given $(G,h_0)$ and $\Gamma$ as in \ref{Gamma1}. Let $S\in{\cal {B}}(\log)$. A {\it $G$-log mixed Hodge structure with a $\Gamma$-level structure on $S$} is a pair $(H,\mu)$ consisting of an exact $\otimes$-functor $H:{\operatorname{Rep}}(G)\to\text{LMH}(S); (V,W)\mapsto(V,W,F)$ and a global section $\mu$ of the quotient sheaf $\Gamma\backslash {\Cal I}$. \medskip Here $\Cal I$ is the following sheaf on $S^{\log}$. For an open set $U$ of $S^{\log}$, ${\Cal I}(U)$ is the set of all isomorphisms $H_{\mathbb{Q}}|_U\overset{\sim}\to \text{id}$ of $\otimes$-functors from ${\operatorname{Rep}}(G)$ to the category of local systems of ${\mathbb{Q}}$-modules on $U$ preserving the weight filtration $W$; here $\text{id}$ denotes the functor sending $V$ to the constant local system $V$ on $U$. \medskip \end{para} \begin{para}\label{Stype} Let $(G,h_0)$ be as in \ref{Gamma1} and let $\Gamma$ and $\Sigma$ be as in \ref{sc}. A $G$-LMH $H$ on $S$ with a $\Gamma$-level structure $\mu$ is said to be {\it of type $(h_0,\Sigma)$} if the following (i) and (ii) are satisfied for any $s\in S$ and any $t\in s^{\log}$. Take a $\otimes$-isomorphism $\tilde \mu_t: H_{{\mathbb{Q}},t}\overset\sim\to \;\text{id}$ which belongs to $\mu_t$. (i) There is a ${\sigma}\in \Sigma$ such that the logarithm of the action of the cone $\Hom((M_S/\cO^\times_S)_s, {\mathbb{N}})\subset \pi_1(s^{\log})$ on $H_{{\mathbb{Q}},t}$ is contained, via $\tilde \mu_t$, in $\sigma \subset {\rm {Lie}}\,(G)_{\mathbb{R}}$. (ii) Let ${\sigma}\in \Sigma$ be the smallest cone satisfying (i). Let $a: \cO_{S,t}^{\log}\to {\mathbb{C}}$ be a ring homomorphism which induces the evaluation $\cO_{S,s}\to {\mathbb{C}}$ at $s$, and consider the Hodge filtration $F$ of the functor $V\mapsto {\tilde \mu}_t a(H(V))$ in $Y$. Then this functor belongs to $\Dc$ and $({\sigma}, F)$ generates a nilpotent orbit. \medskip If $(H, \mu)$ is of type $(h_0,\Sigma)$, we have a map $S \to \Gamma \operatorname{\backslash} D_{\Sigma}$, called the {\it period map} associated to $(H, \mu)$, which sends $s\in S$ to the class of the nilpotent orbit $({\sigma}, Z)\in D_{\Sigma}$ obtained in (ii). \end{para} \begin{para}\label{strong} Let $(G,h_0)$ be as in \ref{Gamma1} and let $\Gamma$ and $\Sigma$ be as in \ref{sc}. Introduce on $\Gamma\operatorname{\backslash} D_\Sigma$ the {\it strong topology}, that is, the strongest topology for which the period map $S\to\Gamma\operatorname{\backslash} D_\Sigma$ is continuous for all $(S,H,\mu)$, and introduce a sheaf of holomorphic functions $\Cal O$ and a log structure $M$. \end{para} \begin{thm}\label{thm1} Let $(G, h_0, \Gamma, \Sigma)$ be as in \ref{strong}. Assume that $h_0$ is ${\mathbb{R}}$-polarizable (\ref{pol}). Then $(1)$ $\Gamma \operatorname{\backslash} D_{\Sigma}$ is Hausdorff. \smallskip Hereafter, assume that $\Gamma$ is neat. $(2)$ $\Gamma \operatorname{\backslash} D_{\Sigma}$ is a log manifold ({\rm \cite{KNU} Part III 1.1.5}). In particular, $\Gamma \operatorname{\backslash} D_{\Sigma}$ belongs to ${\cal {B}}(\log)$. $(3)$ $\Gamma \operatorname{\backslash} D_{\Sigma}$ represents the contravariant functor from ${\cal {B}}(\log)$ to {\rm (Set):} $S\mapsto \{$isomorphism classes of $G$-LMH over $S$ with a $\Gamma$-level structure of type $(h_0,\Sigma)$ $\}$.
$(4)$ Let $S$ be a connected, log smooth, fs log analytic space, and let $U$ be the open subspace of $S$ consisting of all points of $S$ at which the log structure of $S$ is trivial. Assume that $S\smallsetminus U$ is a smooth divisor. Let $(H,\mu)$ be a $G$-MHS over $U$ of type $h_0$ (\ref{D}) endowed with a $\Gamma$-level structure (\ref{LMH1}). Let $\varphi: U\to {\Gamma}\operatorname{\backslash} D$ be the associated period map. Assume that $(H,\mu)$ extends to a $G$-LMH over $S$ with a $\Gamma$-level structure of type $(h_0,\Sigma)$. Then, $\varphi$ extends to a morphism $\overline{\varphi}$ in ${\cal {B}}(\log)$ in the following commutative diagram: $$ \CD S@>{\overline{\varphi}}>>{\Gamma} \operatorname{\backslash} D_\Sigma\\ \bigcup&&\bigcup\\ U@>{\varphi}>>{\Gamma} \operatorname{\backslash} D. \endCD $$ \end{thm} \section{Log higher Albanese manifolds}\label{s:lha} \medskip We review here the formulations and results on higher Albanese manifolds from \cite{HZ87} and on log higher Albanese manifolds from \cite{KNU17}. \begin{para}\label{Lie} Let $X$ be a connected smooth algebraic variety over ${\mathbb{C}}$. Fix $b\in X$. Let $\Gamma$ be a quotient group of $\pi_1(X, b)$ which is torsion-free and nilpotent. \smallskip Let $\cG=\cG_\Gamma$ be the unipotent algebraic group over ${\mathbb{Q}}$ whose Lie algebra is defined as follows: Let $I$ be the augmentation ideal $\text{Ker}({\mathbb{Q}}[\Gamma]\to {\mathbb{Q}})$ of ${\mathbb{Q}}[\Gamma]$. Then ${\rm {Lie}}\,(\cG)$ is the ${\mathbb{Q}}$-subspace of ${\mathbb{Q}}[\Gamma]^{\wedge}:=\varprojlim_n {\mathbb{Q}}[\Gamma]/I^n$ generated by all $\log(\gamma)$ ($\gamma \in \Gamma$). The Lie product of ${\rm {Lie}}\,(\cG)$ is defined by $[x,y]= xy-yx$. We have $\Gamma \subset \cG({\mathbb{Q}})$. \end{para} \begin{para}\label{hAlb} Let $\pi_1=\pi_1(X, b)$. Let $J$ be the augmentation ideal $\text{Ker}({\mathbb{Q}}[\pi_1]\to {\mathbb{Q}})$ of ${\mathbb{Q}}[\pi_1]$. For a positive integer $n$, let $\Gamma_n$ be the image of $\pi_1\to{\mathbb{Q}}[\pi_1]/J^n$. Then ${\rm {Lie}}\,(\cG_{\Gamma_n})$ has a mixed Hodge structure induced from de Rham theory on the path spaces over $X$ by Chen's iterated integrals. For a given $\Gamma$ as in \ref{Lie}, there exists $n\ge1$ such that $\Gamma$ is a quotient of $\Gamma_n$. Hereafter we assume that ${\rm {Lie}}\,(\cG_{\Gamma})$ carries the quotient mixed Hodge structure of the one on ${\rm {Lie}}\,(\cG_{\Gamma_n})$. Note that this mixed Hodge structure on ${\rm {Lie}}\,(\cG_{\Gamma})$ is independent of the choice of $n$. We note that there is an insufficient statement on the mixed Hodge structure on ${\rm {Lie}}\,(\cG_{\Gamma})$ in \cite{KNU17} 6.1.2. The authors of \cite{KNU17} agreed to correct this part, so as to assume the existence of this mixed Hodge structure on ${\rm {Lie}}\,(\cG_{\Gamma})$ as above in the present paper. Let $\cG=\cG_\Gamma$. Let $F^0{\rm {Lie}}\,(\cG)_{\mathbb{C}}$ be the $0$-th step of the Hodge filtration on ${\rm {Lie}}\,(\cG)_{\mathbb{C}}$ and let $F^0\cG({\mathbb{C}})$ be the corresponding subgroup of $\cG({\mathbb{C}})$. The {\it higher Albanese manifold} associated to $(X,\Gamma)$ is defined in \cite{HZ87} as $$ A_{X,\Gamma}:= \Gamma\operatorname{\backslash} \cG({\mathbb{C}})/F^0\cG({\mathbb{C}}). $$ \end{para} \begin{para}\label{G} Take a ${\mathbb{Q}}$-MHS $V_0$ with polarizable $\gr^{W}$ having the ${\mathbb{Q}}$-MHS on ${\rm {Lie}}\,(\cG)_{\mathbb{Q}}$ with $\cG=\cG_\Gamma$ in \ref{hAlb} as a direct summand.
Let $Q\subset {\rm {Aut}}(V_{0,{\mathbb{Q}}})$ be the {\it Mumford-Tate group} associated to $V_0$, i.e., the Tannaka group of the Tannaka category $\langle V_0 \rangle$ generated by $V_0$: $\langle V_0 \rangle \overset\sim\to {\operatorname{Rep}}(Q)$. Explicitly, it is the smallest ${\mathbb{Q}}$-subgroup $Q$ of ${\rm {Aut}}(V_{0,{\mathbb{Q}}})$ such that $Q_{\mathbb{R}}$ contains the image of the homomorphism $h:S_{{\mathbb{C}}/{\mathbb{R}}} \to {\rm {Aut}}(V_{0,{\mathbb{R}}})$ and such that ${\rm {Lie}}\,(Q)_{\mathbb{R}}$ contains $\delta$. Here $h$ and $\delta$ are determined by the canonical splitting of the ${\mathbb{Q}}$-MHS $V_0$ (\cite{CKS87}, \cite{KNU} Part II 1.2). \bigskip The action of $Q$ on ${\rm {Lie}}\,(\cG)$ induces an action of $Q$ on $\cG$. By this, define a semi-direct product $G$ of $Q$ and $\cG$: $$ 1\to \cG\to G\to Q\to 1. $$ We have $\cG\subset G_u$. We have $h_0: S_{{\mathbb{C}}/{\mathbb{R}}}\to (Q_{{{\rm red}}})_{\mathbb{R}}= (G_{{{\rm red}}})_{\mathbb{R}}$ by using the Hodge decomposition of $\gr^WV_0$. \end{para} \begin{para}\label{moduli} Let $D_G$ (resp.\ $D_Q$) be the period domain $D$ for $G$ (resp.\ $Q$) and $h_0$ in \ref{G}. We have a canonical map $\Gamma \operatorname{\backslash} D_G\to D_Q$ induced by $G\to Q$. Let $b_Q\in D_Q$ be the isomorphism class of the evident functor ${\operatorname{Rep}}(Q)\to\text{${\mathbb{Q}}$-MHS}$ given by the definition in \ref{G}, and let $b_G\in D_G$ be the isomorphism class of ${\operatorname{Rep}}(G)\to{\operatorname{Rep}}(Q)\overset{b_Q}\to\text{${\mathbb{Q}}$-MHS}$ via the section $Q\hookrightarrow G$. Let $\cD$ be the fiber of the map $D_{G} \to D_{Q}$ over $b_Q$. \end{para} \begin{thm}\label{t:quot} The map $G_u({\mathbb{C}}) \to D_G\; ;\; g \mapsto g\cdot b_G$ induces an isomorphism $A_{X,\Gamma}=\Gamma\operatorname{\backslash} \cG({\mathbb{C}})/F^0\cG({\mathbb{C}})\simeq\Gamma\operatorname{\backslash} \cD$ of analytic manifolds. \end{thm} \begin{para}\label{HZ} Let $\cC_{X,\Gamma}$ be the category of variations of ${\mathbb{Q}}$-MHS $\cH$ on $X$ satisfying the following three conditions: (1) For any $w\in{\mathbb{Z}}$, $\gr^W_w\cH$ is a constant polarizable Hodge structure. (2) $\cH$ is {\it good at infinity} in the sense of \cite{HZ87} (1.5), i.e., there exists a smooth compactification $\overline{X}$ of $X$ with normal crossing boundary divisor $\overline{X}\smallsetminus X$ such that the Hodge filtration bundles extend to sub-bundles of the canonical extension of the $\cO$-module of $\cH$ which induce the corresponding extensions for each $\gr^W_w\cH$, and that, for the nilpotent logarithm $N_j$ of a local monodromy transformation about a component of $\overline{X}\smallsetminus X$, the relative monodromy weight filtration $M(N_j,W)$ exists. (3) The monodromy action of $\pi_1(X,b)$ factors through $\Gamma$. \end{para} Hain and Zucker showed: \begin{thm}\label{t:HZ}{\rm (\cite{HZ87} (1.6) Theorem).} The category $\cC_{X,\Gamma}$ is equivalent to the category of ${\mathbb{Q}}$-MHS $V$ with polarizable $\gr^WV$ endowed with an action of ${\rm {Lie}}\,(\cG)$ such that ${\rm {Lie}}\,(\cG)\otimes V\to V$ is a homomorphism of MHS. \end{thm} \begin{para}\label{FG} Define a contravariant functor $\cF_{\Gamma}:{\cal {B}}(\log) \to\text{Sets}$ as follows: For $S\in{\cal {B}}(\log)$, $\cF_{\Gamma}(S)$ is the set of isomorphism classes of pairs $(H,\mu)$ of an exact $\otimes$-functor $H:\cC_{X,\Gamma} \to \text{MHS}(S)$ and a $\Gamma$-level structure $\mu$ satisfying the following condition (i).
Here a {\it $\Gamma$-level structure} means a global section of the sheaf $\Gamma \operatorname{\backslash} \Cal I$, where $\Cal I$ is the sheaf of functorial $\otimes$-isomorphisms $H(\cH)_{{\mathbb{Q}}} \overset{\sim}\to \cH(b)_{{\mathbb{Q}}}$ of ${\mathbb{Q}}$-local systems preserving weight filtrations. (i) For any ${\mathbb{Q}}$-MHS $h$, we have a functorial $\otimes$-isomorphism $H(h_X) \cong h_S$ such that the induced isomorphism of local systems $H(h_X)_{\mathbb{Q}}\cong h_{\mathbb{Q}}=h_X(b)_{\mathbb{Q}}$ belongs to $\mu$. Here $h_X$ (resp.\ $h_S$) denotes the constant variation (resp.\ family) of ${\mathbb{Q}}$-MHS over $X$ (resp.\ $S$) associated to $h$. \begin{thm}\label{t:FG} Let the notation be as in \ref{FG}. The functor $\cF_{\Gamma}$ is represented by $A_{X,\Gamma}\simeq\Gamma\operatorname{\backslash}\cD$. \end{thm} This follows from Theorem \ref{t:quot} and Theorem \ref{t:HZ}. Let $\varphi:X\to A_{X,\Gamma}$ be the higher Albanese map. \end{para} \begin{para}\label{FGS} Let $\Sigma$ be a weak fan in ${\rm {Lie}}\,(G)$ such that ${\sigma}\subset {\rm {Lie}}\,(\cG)_{\mathbb{R}}$ for any ${\sigma}\in \Sigma$. Assume that $\Sigma$ and $\Gamma$ are strongly compatible. Let $\Gamma \operatorname{\backslash} D_{G,\Sigma}\to D_Q$ be the canonical morphism induced by $G\to Q$. Define $$ \text{$A_{X, \Gamma,\Sigma}$:= (the fiber of $\Gamma\operatorname{\backslash} D_{G,\Sigma}\to D_Q$ over $b_Q$) $\in{\cal {B}}(\log)$.} $$ Define a contravariant functor $\cF_{\Gamma,\Sigma}:{\cal {B}}(\log) \to\text{Sets}$ as follows: For $S\in{\cal {B}}(\log)$, ${\cF}_{\Gamma,\Sigma}(S)$ is the set of isomorphism classes of pairs $(H,\mu)$ consisting of an exact $\otimes$-functor $H:\cC_{X,\Gamma}\to{\rm {LMH}}(S)$ and a $\Gamma$-level structure $\mu$ satisfying the condition (i) in \ref{FG} and also the following condition (ii). (ii) The following (ii-1) and (ii-2) are satisfied for any $s\in S$ and any $t\in s^{\log}$. Let $\tilde \mu_t: H(\cH)_{{\mathbb{Q}},t}\cong \cH(b)_{\mathbb{Q}}$ be a functorial $\otimes$-isomorphism which belongs to $\mu_t$. (ii-1) There is a ${\sigma}\in \Sigma$ such that the logarithm of the action of the local monodromy cone $\Hom((M_S/\cO^\times_S)_s, {\mathbb{N}})\subset \pi_1(s^{\log})$ on $H_{{\mathbb{Q}},t}$ is contained, via $\tilde \mu_t$, in $\sigma \subset {\rm {Lie}}\,(\cG)_{\mathbb{R}}$. (ii-2) Let ${\sigma}\in \Sigma$ be the smallest cone which satisfies (ii-1) and let $a: \cO_{S,t}^{\log}\to {\mathbb{C}}$ be a ring homomorphism which induces the evaluation $\cO_{S,s}\to {\mathbb{C}}$ at $s$. Then, for each $\cH\in \cC_{X,\Gamma}$, $({\sigma}, \tilde \mu_t(a(H(\cH))))$ generates a nilpotent orbit in the sense of \cite{KNU} Part III, 2.2.2. \begin{thm}\label{t:FGS} Let the notation be as in \ref{t:FG} and \ref{FGS}. $(1)$ The functor $\cF_{\Gamma,\Sigma}$ is represented by $A_{X, \Gamma,\Sigma}$. $(2)$ Let $\overline{X}$ be a smooth algebraic variety over ${\mathbb{C}}$ which contains $X$ as a dense open subset such that the complement $\overline{X}\smallsetminus X$ is a smooth divisor. Endow $\overline{X}$ with the log structure associated to this divisor. Assume that $\Sigma$ is the fan consisting of all rational nilpotent cones in ${\rm {Lie}}\,(\cG)_{\mathbb{R}}$ of rank $\le1$ (denoted by $\Xi$ in \cite{KNU17} 6.2.5). Then, the {\rm higher Albanese map} $\varphi: X \to A_{X,\Gamma}$ extends uniquely to a morphism $\overline{\varphi}: \overline{X} \to A_{X,\Gamma,\Sigma}$ of log manifolds.
\end{thm} Since an object of $\cC_{X, \Gamma}$ is good at infinity (\ref{HZ}), it extends to an LMH over $\overline{X}$. Hence (2) follows from (1) and the general theorem \ref{thm1} (4). \end{para} \section{Description of a result of Deligne by log higher Albanese map}\label{s:example} For a group $\Gamma^{(n)}$ in \ref{Gamma^n} below, Deligne \cite{D89} showed that polylogarithms appear in the higher Albanese map $X\to A_{X,\Gamma^{(n)}}$ (cf.\ Section A below). Here we describe them in our framework in \cite{KNU17} (Section 2 in the present paper). \begin{para}\label{bsetting} Let $X:={\mathbb{P}}^1({\mathbb{C}})\smallsetminus\{0,1,\infty\}\subset\overline{X}:={\mathbb{P}}^1({\mathbb{C}})$ with affine coordinate $x$. Let $b:=(0,1)$ be the \lq\lq tangential base point" over $0\in\overline{X}$ with tangent $v_0\in T_0(\overline{X})=\Hom_{\mathbb{C}}(m_0/m_0^2,{\mathbb{C}})$ defined by $v_0(x)=1$, as in \cite{D89} Section 15. This is understood in log geometry in the following way. Let $y=(0,h)\in\overline{X}^{\log}$ be the point lying over $0\in\overline{X}$, where $h:M^{\gp}_{\overline{X},0}=\cO_{\overline{X},0}^{\times}\times x^{\mathbb{Z}} \to {\bf S}^1$ is the argument function which is a group homomorphism sending $f\in\cO_{\overline{X},0}^{\times}$ to $f(0)/|f(0)|$ and $x$ to $v_0(x)/|v_0(x)|=1$ (\cite{KNU17} 6.3.7). Let $u_0\in\cO_{\overline{X},y}^{\log}$ be the branch of $\log(x)$ having real value on ${\mathbb{R}}_{>0}$. (This $u_0$ is the branch denoted by $f\in\cO_{\overline{X},y}^{\log}$ in \cite{KNU17} 6.3.7 (ii), and $u_0$ can also be regarded as the function $2\pi iz$ on $\tilde S^{\log}$ in 1.1.1.) Then the corresponding base point in the boundary in our sense is $b=(y,a)$, where $a:\cO^{\log}_{\overline{X},y}={\mathbb{C}}\{x\}[u_0]\to {\mathbb{C}}$ is the specialization which is a ring homomorphism sending $x$ to $0$ and $u_0$ to $a(u_0)=\log(v_0(x))=\log(1)=0$ (\cite{KNU17} 6.3.7 (ii)). \medskip See \cite{KNU17} 6.3.6, 6.3.7 for a more general description of the above correspondence of boundary points. \end{para} \begin{para}\label{Gamma} The inclusion $X\subset{\mathbb{G}}_m({\mathbb{C}})={\mathbb{C}}^{\times}$ induces $\pi_1(X,b)\to\pi_1({\mathbb{G}}_m({\mathbb{C}}),b)={\mathbb{Z}}(1)$. Let $K$ be its kernel, and let $\Gamma:=\pi_1(X,b)/[K,K]$ and $\Gamma_1:=K/[K,K]$. Then $$1\to\Gamma_1\to\Gamma\to{\mathbb{Z}}(1)\to1.$$ \end{para} \begin{para}\label{Gamma^n} Let $Z^n\Gamma$ be the descending central series of $\Gamma$ defined by $Z^{n+1}\Gamma:=[Z^n\Gamma,\Gamma]$ starting with $Z^1\Gamma=\Gamma$. Let $\Gamma^{(n)}:=\Gamma/Z^{n+1}(\Gamma)$ and $\Gamma_1^{(n)}:={\rm {Image}}\,(\Gamma_1\to\Gamma^{(n)})$. Let $\gamma_0,\gamma_1\in\Gamma^{(n)}$ be the classes of small loops anticlockwise around $0$ and clockwise around $1$, respectively. Then, we have \vskip5pt $\Gamma^{(n)}=\langle \gamma_{0},\gamma_{1}\rangle$, the elements $({\rm ad}\,\gamma_{0})^{k-1}\gamma_{1}$ $(1\le k\le n)$ commute with one another, and $\Gamma_{1}^{(n)}=\sum_{k=1}^n{\mathbb{Z}}({\rm ad}\,\gamma_{0})^{k-1}\gamma_{1}$. \end{para} \begin{para}\label{DLam} Let $\Lambda=(V,W,(\langle\;,\rangle_w)_{w\in{\mathbb{Z}}},(h^{p,q})_{p,q\in{\mathbb{Z}}})$ be as follows. $V$ is a free ${\mathbb{Z}}$-module with basis $e_1,e_2,e_3,\dots,e_{n+1}$. $W$ is a weight filtration on $V_{\mathbb{Q}}$ defined by $$ W_{-2n-1}=0\subset W_{-2n}=W_{-2n+1}={\mathbb{Q}} e_1 \subset W_{-2n+2}=W_{-2n+3}=W_{-2n+1}+{\mathbb{Q}} e_2 $$ $$ \subset\cdots\subset W_0=W_{-1}+{\mathbb{Q}} e_{n+1}=V_{{\mathbb{Q}}}.
$$ $\langle\;,\,\rangle_w : \gr^W_w(V_{\mathbb{Q}}) \times \gr^W_w(V_{\mathbb{Q}})\to {\mathbb{Q}}$ ($w\in{\mathbb{Z}}$) are the ${\mathbb{Q}}$-bilinear forms characterized by $\langle e_{n+1+k},e_{n+1+k}\rangle_{2k}=1$ for $k=0,-1,\dots,-n$. $h^{k,k}=1$ for $k=0,-1,\dots,-n$, and $h^{p,q}=0$ for the other $(p,q)$. Let $D(\Lambda)$ be the period domain in \cite{KNU} Part III with universal Hodge filtration $F$: $$ F^{1}=0\subset F^0={\mathbb{C}}(e_{n+1}+\sum_{n\ge j\ge1}a_{j,n+1}e_j) \subset F^{-1}=F^0+{\mathbb{C}}(e_n+\sum_{n-1\ge j\ge1}a_{j,n}e_j) $$ $$ \subset\cdots\subset F^{-n}=F^{-n+1}+{\mathbb{C}} e_1=V_{{\mathbb{C}}}. $$ \end{para} \begin{para}\label{action} Let $\cG$ be the unipotent group $\cG$ in \ref{Lie} for $\Gamma^{(n)}$. Define an action of ${\rm {Lie}}\,(\cG)$ on $V_{\mathbb{Q}}$ by $N_0=\log(\gamma_0)$, $N_1=\log(\gamma_1)$: $$ N_0e_j=e_{j-1}\; (j=2,\dots,n),\;\;N_0e_j=0\;(j=1,n+1), $$ $$ N_1e_{n+1}=-e_n,\;\; N_1e_j=0\;(j=1,2,\dots,n). $$ Then $$ (-N_0+N_1)^j=(-N_0)^j+(-{\rm{Ad}} N_0)^{j-1}N_1\quad (1\le j\le n+1). $$ \end{para} \begin{para}\label{hAlb2} Let $X$ be as in \ref{bsetting} and $\Gamma^{(n)}$ be as in \ref{Gamma^n}. We consider the higher Albanese manifold $A_{X,\Gamma^{(n)}}$ of $X$ by using the base point $b$ in \ref{bsetting}. The ${\mathbb{Q}}$-MHS on ${\rm {Lie}}\,(\cG)$ is as follows: $N_0$ and $N_1$ are of Hodge type $(-1,-1)$ and compatible with bracket and hence $F^0\cG({\mathbb{C}})=\{1\}$. Thus the higher Albanese manifold is $$ A_{X,\Gamma^{(n)}}=\Gamma^{(n)}\operatorname{\backslash}\cG({\mathbb{C}}). $$ \end{para} \begin{lem}\label{l:MHSact} Let $F$ and $N_j$ $(j=0,1)$ be as in \ref{DLam} and in \ref{action}. \noindent {\rm(i)} We have the following. $(1)$ $(N_0,F)$ satisfies the Griffiths transversality if and only if $$ a_{k,n+1}=0 \quad (2\le k\le n);\quad a_{1,k}=a_{l-k+1,l}\quad(2\le k< l\le n). $$ $(2)$ $(N_1,F)$ satisfies the Griffiths transversality if and only if $$ a_{k,n}=0 \quad (1\le k\le n-1). $$ $(3)$ $(-N_0+N_1,F)$ satisfies the Griffiths transversality if and only if $$ a_{1,k}=a_{l-k+1,l}\quad(2\le k<l\le n+1). $$ \noindent {\rm(ii)} The following three conditions are equivalent. $(1)$ The Lie action ${\rm {Lie}}\,(\cG)\otimes V\to V$ in \ref{action} is a homomorphism of MHS with respect to the MHS on ${\rm {Lie}}\,(\cG)$ in \ref{hAlb2} and the MHS $(V,W,F)$ in \ref{DLam}. $(2)$ For $j=0$ and $1$, $(N_j,F)$ satisfies the Griffiths transversality. $(3)$ $a_{j,k}=0$ unless $(j,k)=(1,n+1)$. \end{lem} The assertions are easily verified by direct computation. \begin{para}\label{hAlb3} For any fixed $a\in{\mathbb{C}}$, denote by $F(a)$ the Hodge filtration in \ref{l:MHSact} (ii) (3) with $a_{1,n+1}=a$. By the action in \ref{action}, we define $$ \cD:=\exp\left({\mathbb{C}} N_0+\sum_{k=1}^n{\mathbb{C}}({\rm{Ad}} N_0)^{k-1}N_1\right)F(a)\subset D(\Lambda). $$ Then, this $\cD$ coincides with $\cD$ in \ref{moduli}. Hence $\cG({\mathbb{C}})\simeq\cD$ and $A_{X,\Gamma^{(n)}}\simeq\Gamma^{(n)}\operatorname{\backslash}\cD$ as complex analytic manifolds. \end{para} \begin{para}\label{eq} Let $\varphi:X\to A_{X,\Gamma^{(n)}}\simeq\Gamma^{(n)}\operatorname{\backslash}\cD$ be the composite of higher Albanese map and the isomorphism in \ref{hAlb3}. Let $F(x)$ be the pullback by $\varphi$ of the universal Hodge filtration on $\Gamma^{(n)}\operatorname{\backslash}\cD$. Since $F(x)$ is rigid by Theorem \ref{t:HZ}, we consider a connection equation: $$ dF(x)=\omega F(x),\quad \omega:=(2\pi i)^{-1}\frac{dx}{x}N_0+(2\pi i)^{-1}\frac{dx}{1-x}N_1. 
$$ That is, $$ da_{k-1,k}(x)=(2\pi i)^{-1}\frac{dx}{x}\quad (2\le k\le n), $$ $$ da_{n,n+1}(x)=-(2\pi i)^{-1}\frac{dx}{1-x}, $$ $$ da_{j,k}(x)=(2\pi i)^{-1}a_{j+1,k}(x)\frac{dx}{x}\quad (3\le k\le n+1,\;1\le j\le k-2). $$ \end{para} \begin{para}\label{sol} This system is solved by iterated integrals. The solutions are $$ a_{j,k}(x)=\frac{1}{(k-j)!}((2\pi i)^{-1}\log (x))^{k-j}\quad (2\le k\le n,\;1\le j\le k-1), $$ $$ a_{j,n+1}(x)=-(2\pi i)^{-n-1+j}l_{n+1-j}(x)\quad(1\le j\le n). $$ Here the $l_j(x)$ are polylogarithms, in particular $l_1(x)=-\log(1-x)$. \end{para} \begin{para}\label{table} Table of solutions: $$ \left(\CD 1\;\;&a_{1,2}\;\;&\ldots&a_{1,n}&a_{1,n+1}\\ 0&1&\ddots&\vdots&\vdots\\ \vdots&0&\ddots\;\;&\;a_{n-1,n}\;\;&a_{n-1,n+1}\\ \vdots&\vdots&\ddots&1&a_{n,n+1}\\ 0&0&\ldots&0&1 \endCD\right)= \left(\CD 1\;\;&(2\pi i)^{-1}\log(x)\;&\ldots\;\;&\frac{((2\pi i)^{-1}\log(x))^{n-1}}{(n-1)!}\;\;&-(2\pi i)^{-n}l_n(x)\\ 0&1&\ddots&\vdots&\vdots\\ \vdots&0&\ddots&(2\pi i)^{-1}\log(x)&-(2\pi i)^{-2}l_2(x)\\ \vdots&\vdots&\ddots&1&-(2\pi i)^{-1}l_{1}(x)\\ 0&0&\ldots&0&1 \endCD\right). $$ Note that, for $1\le j\le n$, $$ \exp((2\pi i)^{-1}\log(x)N_0)e_j =e_j+(2\pi i)^{-1}\log(x)e_{j-1}+\cdots +\frac{1}{(j-1)!}((2\pi i)^{-1}\log (x))^{j-1}e_1, $$ for $j=n+1$, $$ \exp\left(-\sum_{n\ge k\ge1}(2\pi i)^{-k}l_{k}(x)({\rm{Ad}} N_0)^{k-1}N_1\right) e_{n+1} =e_{n+1}-\left(\sum_{n\ge k\ge1}(2\pi i)^{-k}l_{k}(x)e_{n+1-k}\right). $$ \end{para} \begin{para}\label{Hfilt} For $\alpha,\beta,\lambda_{2},\dots,\lambda_{n}\in{\mathbb{C}}$, let $F=F(\alpha,\beta,\lambda_{2},\dots,\lambda_{n})$ be a Hodge filtration: $$ F^{1}=0\subset F^0={\mathbb{C}}(e_{n+1}+\beta e_{n}+ \lambda_{2}e_{n-1}+\dots+\lambda_{n}e_{1}) $$ $$ \subset F^{-1}=F^0+{\mathbb{C}}\left(e_n+\alpha e_{n-1}+\frac{\alpha^{2}}{2!}e_{n-2} +\dots+\frac{\alpha^{n-1}}{(n-1)!}e_1\right)\subset\cdots $$ $$ \subset F^{-n+1}=F^{-n+2}+{\mathbb{C}}(e_2+ \alpha e_1) \subset F^{-n}=F^{-n+1}+{\mathbb{C}} e_1=V_{{\mathbb{C}}}. $$ \end{para} \begin{para}\label{hAmap} Let $\varphi:X\to A_{X,\Gamma^{(n)}}\simeq \Gamma^{(n)}\operatorname{\backslash}\cD$ be the higher Albanese map in \ref{eq}. We have a commutative diagram $$ \CD &&\tilde\varphi(X)&\;&\subset& \cD\\ &\tilde\varphi\nearrow& @VVV&@VVV\\ X@>\sim>\varphi>\varphi(X)&\;&\subset \;&\Gamma^{(n)}\operatorname{\backslash}\cD \endCD $$ where $\tilde\varphi:X\to\cD$ is a multi-valued map corresponding to the Hodge filtration $$ x\mapsto F((2\pi i)^{-1}\log (x),-(2\pi i)^{-1}l_1(x),\dots, -(2\pi i)^{-n}l_{n}(x)) $$ in the notation in \ref{Hfilt}. $\tilde\varphi(X)\to X$ and $\tilde\varphi(X)\to\varphi(X)$ are $\Gamma^{(n)}$-torsors and $\varphi:X\overset\sim\to\varphi(X)$ is an isomorphism. \end{para} \begin{para}\label{loghA} Let $\Sigma$ be the set of all cones of the form ${\mathbb{R}}_{\geq 0} N$ with $N\in{\rm {Lie}}\,(\cG)$. We consider the extended period domain $D({\Lambda})_{\Sigma}$ in \cite{KNU} Part III. This is only a set. By using the strong topology (\cite{KU09} Section 3.1), the quotient $\Gamma^{(n)}\operatorname{\backslash} D(\Lambda)_{\Sigma}$ has a structure of a log manifold. Define $\Gamma^{(n)}\operatorname{\backslash}\cD_\Sigma$ to be the closure of $\Gamma^{(n)}\operatorname{\backslash}\cD$ in $\Gamma^{(n)}\operatorname{\backslash} D(\Lambda)_\Sigma$. This inherits a structure of a log manifold. We have $A_{X,\Gamma^{(n)},\Sigma}\simeq\Gamma^{(n)}\operatorname{\backslash}\cD_\Sigma$ in the category ${\cal {B}}(\log)$. Let $N\in{\rm {Lie}}\,(\cG)$ and $\sigma:={\mathbb{R}}_{\geq 0} N$.
Let $\Gamma_{\sigma}$ be the group generated by the monoid $\Gamma^{(n)}\cap\exp(\sigma)$. If we use as $\Sigma$ the fan consisting of the cone $\sigma$ and $0$, also denoted by $\sigma$ by abuse of notation, we have $A_{X,\Gamma_{\sigma},\sigma}\simeq\Gamma_{\sigma}\operatorname{\backslash}\cD_\sigma$ in the category ${\cal {B}}(\log)$. \end{para} \begin{para}\label{local0} Let $N_0$ be as in \ref{action} and set ${\sigma}_0={\mathbb{R}}_{\geq 0}N_0$. Let $F=F(\alpha,\beta,\lambda_{2},\dots,\lambda_{n})$ be as in \ref{Hfilt}. By Lemma \ref{l:MHSact} (i) (1), $(N_0,F)$ satisfies the Griffiths transversality if and only if $\beta=\lambda_{2}=\dots=\lambda_{n-1}=0$. If this is the case, $(N_0,F)$ generates a $\sigma_0$-nilpotent orbit, since admissibility and ${\mathbb{R}}$-polarizability on $\gr^W$ trivially hold. We describe the local structure of $\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0}$ near the image $p_0$ of this nilpotent orbit. Let $Y:=\{(q, \beta, \lambda_2,\dots,\lambda_n)\in {\mathbb{C}}^{n+1}\;|\; \beta=\lambda_{2}=\dots=\lambda_{n-1}=0 \;\text{if}\; q=0\}$ be the log manifold with the strong topology, with the structure sheaf of rings which is the inverse image of the sheaf of holomorphic functions on ${\mathbb{C}}^{n+1}$, and with the log structure generated by $q$. Then there is an open neighborhood $U$ of $(0,0,\dots,0,\lambda_n)$ in ${\mathbb{C}}^{n+1}$ and an open immersion $$ Y\cap U \hookrightarrow \Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0} $$ of log manifolds which sends $(q, \beta, \lambda_2,\dots,\lambda_n)\in Y\cap U$ with $q\neq 0$ to the class of $F(\alpha,\beta,\lambda_2,\dots,\lambda_n)$, where $\alpha\in {\mathbb{C}}$ is such that $q=e^{2\pi i\alpha}$, and which sends $(0,0,\dots,0,\lambda_n)$ to $p_0$. \end{para} \begin{para}\label{naive0} Near $x=0$, a nilpotent orbit in the naive sense is $$ (1)\hskip40pt \exp((2\pi i)^{-1}\log(x)N_0)F(0,0,\dots,0,\lambda_n^0) =F((2\pi i)^{-1}\log(x),0,\dots,0,\lambda_n^0),\hskip40pt $$ where $\lambda_n^0=-(2\pi i)^{-n}l_n(0)$. The corresponding \lq\lq higher Albanese map" (i.e., local version about $0$ of $\tilde\varphi$ in \ref{hAmap}) is $$ (2)\hskip90pt F((2\pi i)^{-1}\log(x),-(2\pi i)^{-1}l_1(x),\dots,-(2\pi i)^{-n}l_n(x)) \hskip90pt $$ under the condition $l_j(0)=0$ ($1\le j\le n-1$). These two are asymptotic when $x$ goes to the boundary point $b=(y,a)$ with $y=(0,h)\in\overline{X}^{\log}$ and $a$ being the specialization at $y$ in \ref{bsetting}. \end{para} \begin{para}\label{over0} As above, let $u_0$ be the branch of $\log (x)$ in \ref{bsetting} and $T$ an indeterminate over $\cO_{\overline{X},0}$. Then, by 1.1.1, we have an isomorphism $\cO_{\overline{X},y}^{\log}=\cO_{\overline{X},0}[u_0]\simeq \cO_{\overline{X},0}[T]$ of $\cO_{\overline{X},0}$-algebras under $(2\pi i)^{-1}u_0\leftrightarrow T$. Consider an $\cO_{\overline{X},0}$-algebra homomorphism $\cO_{\overline{X},0}[T]\to\cO_{\overline{X},0}$, $T\mapsto x$. Under the initial condition in \ref{naive0} given by the base point $b$ in \ref{bsetting}, we have $$ l_j(x)=\sum_{k=1}^\infty \frac{x^k}{k^j}\quad(1\le j\le n-1),\quad l_n(x)=c+\sum_{k=1}^\infty \frac{x^k}{k^n} $$ on a simply connected neighborhood $\overline{X}_0$ of $0\in\overline{X}$, where $c:=-(2\pi i)^n\lambda_n^0$. Let $\alpha=(2\pi i)^{-1}\log(x)$.
Then, as $x\to0$, $\exp(-\alpha N_0)(\text{$F$ in \ref{naive0} (2)})$ converges to $F(0,0,\dots,0,\lambda_n^0)$ in $\cD$ (\ref{hAlb3}), and hence the class of ($F$ in \ref{naive0} (2)) converges to the class $p_0$ (\ref{local0}) of the nilpotent orbit $(\sigma_0,\exp(\sigma_{0,{\mathbb{C}}})F(0,0,\dots,0,\lambda_n^0))$ in $\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0}$. We thus have an extension of the higher Albanese map over $\overline{X}_0$ (Theorem \ref{t:FGS} (2)): $$ \overline\varphi_0:\overline{X}_0\to\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0}. $$ This is a morphism in the category ${\cal {B}}(\log)$. The log structure on the source (resp.\ the target) is given by $x$ (resp.\ $q$). The pullback of the universal log mixed Hodge structure on the target coincides with the log mixed Hodge structure on the source. \end{para} \begin{para}\label{log0} By using log mixed Hodge theory, \ref{naive0} is described as follows. Taking the images of the nilpotent orbit in the naive sense \ref{naive0} (1) and the \lq\lq higher Albanese map" \ref{naive0} (2), we have their real analytic extensions with boundary $$ \overline\nu_0^{\log}, \; \overline\varphi_0^{\log}:\overline{X}_0^{\log}\to(\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0})^{\log}. $$ Here, $\overline{X}_0^{\log}$ is like Example 1.1.1, and $(\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0})^{\log}$ coincides with the moduli of nilpotent $i$-orbits $\Gamma_{\sigma_0}\operatorname{\backslash}\cD_{\sigma_0}^{\sharp}$ in the present situation (\cite{KNU} III Theorem 2.5.6). Let $\tilde{\overline{X}}_0^{\log}$ be the universal covering of $\overline{X}_0^{\log}$. The above maps further lift to $$ \tilde{\overline\nu}_0^{\log}, \; \tilde{\overline\varphi}_0^{\log}:\tilde{\overline{X}}_0^{\log}\to\cD_{\sigma_0}^{\sharp}. $$ The boundary point $b$ in \ref{naive0} can be understood as the point $b=(z=0+i\infty)=(u_0=-\infty+i0)\in\tilde{\overline{X}}_0^{\log}$. We have $(\exp(-(2\pi i)^{-1}\log(x)N_0)(\ref{naive0} \;(2)))(b)=F(0,0,\dots,0,\lambda_n^0)$, and $$ \tilde{\overline\nu}_0^{\log}(b)=\tilde{\overline\varphi}_0^{\log}(b)= (\text{nilpotent $i$-orbit generated by $(N_0, F(0,0,\dots,0,\lambda_n^0))$})\in\cD_{\sigma_0}^{\sharp}.$$ \end{para} \begin{para}\label{local1} Now let $\sigma_1={\mathbb{R}}_{\ge0}N_1$ for $N_1$ in \ref{action}. Let $F=F(\alpha,\beta,\lambda_{2},\dots,\lambda_{n})$ be as in \ref{Hfilt}. By Lemma \ref{l:MHSact} (i) (2), $(N_1,F)$ satisfies the Griffiths transversality if and only if $\alpha=0$. If this is the case, $(N_1,F)$ generates a $\sigma_1$-nilpotent orbit, since admissibility and ${\mathbb{R}}$-polarizability on $\gr^W$ trivially hold. We have a similar description of the local structure of $\Gamma_{\sigma_1}\operatorname{\backslash} \cD_{\sigma_1}$ near the image $p_1$ of this nilpotent orbit. Let $Y$ be the log manifold $\{(\alpha,q,\lambda_2,\dots,\lambda_n)\in {\mathbb{C}}^{n+1}\;|\; \alpha=0 \;\text{if}\; q=0\}$ with the strong topology, the structure sheaf and the log structure defined by $q$.
Then there is an open neighborhood $U$ of $(0,0,\lambda_2,\dots,\lambda_n)$ in ${\mathbb{C}}^{n+1}$ and an open immersion $$ Y\cap U \hookrightarrow \Gamma_{\sigma_1}\operatorname{\backslash} \cD_{\sigma_1} $$ of log manifolds which sends $(\alpha,q, \lambda_2,\dots,\lambda_n)\in Y\cap U$ with $q\neq 0$ to the class of $F(\alpha, \beta, \lambda_2,\dots,\lambda_n)$, where $\beta\in {\mathbb{C}}$ is such that $q=e^{2\pi i\beta}$, and which sends $(0,0,\lambda_2,\dots,\lambda_n)$ to $p_1$. \end{para} \begin{para}\label{naive1} We assume the initial condition in \ref{naive0}. Near $x=1$, a nilpotent orbit in the naive sense is $$ (1)\hskip30pt\exp((2\pi i)^{-1}\log(1-x)N_{1})\cdot F(0,0,-(2\pi i)^{-2}\zeta(2),\dots,-(2\pi i)^{-n}(c+\zeta(n)))\hskip30pt $$ $$ =F(0,-(2\pi i)^{-1}l_1(x),-(2\pi i)^{-2}\zeta(2),\dots,-(2\pi i)^{-n}(c+\zeta(n))). $$ The corresponding \lq\lq higher Albanese map" (i.e., local version about $1$ of $\tilde\varphi$ in \ref{hAmap}) is $$ (2)\hskip84ptF((2\pi i)^{-1}\log(x),-(2\pi i)^{-1}l_1(x),\dots,-(2\pi i)^{-n}l_n(x)).\hskip84pt $$ These two are asymptotic when $x$ goes to the tangential boundary point $\tilde p_1:=(1,-1)$ with tangent $v_1\in T_1(\overline{X})=\Hom_{\mathbb{C}}(m_1/m_1^2,{\mathbb{C}})$ defined by $v_1(1-x)=-1$. This is the boundary point in our sense described as follows. Let $u_1$ be the branch of $\log(1-x)$ having real value on ${\mathbb{R}}_{<1}$. Then the corresponding point in the boundary in our sense is $\tilde p_1=(y,a)$ with $y=(1,h)\in\overline{X}^{\log}$ such that the argument function $h:M^{\gp}_{\overline{X},1}=\cO_{\overline{X},1}^{\times}\times (1-x)^{\mathbb{Z}} \to {\bf S}^1$ is a group homomorphism sending $f\in\cO_{\overline{X},1}^{\times}$ to $f(1)/|f(1)|$ and $1-x$ to $v_1(1-x)/|v_1(1-x)|=-1$, and the specialization $a:\cO^{\log}_{\overline{X},y}={\mathbb{C}}\{1-x\}[u_1]\to {\mathbb{C}}$ is a ring homomorphism sending $1-x$ to $0$ and $u_1$ to $a(u_1)=-a(-u_1)=\log(v_1(-(1-x)))=\log(1)=0$ (cf.\ \cite{KNU17} 6.3.7 (ii)).
The pullback of the universal log mixed Hodge structure on the target coincides with the log mixed Hodge structure on the source. \end{para} \begin{para}\label{log1} By using log mixed Hodge theory, \ref{naive1} is described as follows. Taking the images of the nilpotent orbit in the naive sense \ref{naive1} (1) and the \lq\lq higher Albanese map" \ref{naive1} (2), we have their real analytic extensions with boundary $$ \overline\nu_1^{\log}, \; \overline\varphi_1^{\log}:\overline{X}_1^{\log}\to(\Gamma_{\sigma_1}\operatorname{\backslash}\cD_{\sigma_1})^{\log}. $$ Here, $\overline{X}_1^{\log}$ is similar to Example 1.1.1 over $x=1$, and $(\Gamma_{\sigma_1}\operatorname{\backslash}\cD_{\sigma_1})^{\log}$ coincides with the moduli of nilpotent $i$-orbits $\Gamma_{\sigma_1}\operatorname{\backslash}\cD_{\sigma_1}^{\sharp}$ in the present situation (\cite{KNU} III Theorem 2.5.6). Let $\tilde{\overline{X}}_1^{\log}$ be the universal covering of $\overline{X}_1^{\log}$. The above maps further lift to $$ \tilde{\overline\nu}_1^{\log}, \; \tilde{\overline\varphi}_1^{\log}:\tilde{\overline{X}}_1^{\log}\to\cD_{\sigma_1}^{\sharp}. $$ The boundary point $\tilde p_1$ in \ref{naive1} can be understood as the point $\tilde p_1=(z_1=0+i\infty)=(u_1=-\infty+i0)\in\tilde{\overline{X}}_1^{\log}$ (where $2\pi iz_1:=u_1$). We have $(\exp(-(2\pi i)^{-1}\log(1-x)N_1)(\ref{naive1}\, (2)))(\tilde p_1)=F(0,0,-(2\pi i)^{-2}\zeta(2),\dots,-(2\pi i)^{-n}(c+\zeta(n)))$, and $\tilde{\overline\nu}_1^{\log}(\tilde p_1)=\tilde{\overline\varphi}_1^{\log}(\tilde p_1)\in\cD_{\sigma_1}^{\sharp}$ is the nilpotent $i$-orbit generated by $(N_1, F(0,0,-(2\pi i)^{-2}\zeta(2),\dots,-(2\pi i)^{-n}(c+\zeta(n))))$. \end{para} \begin{para}\label{eq'} In order to describe the local structure near $x=\infty$, we take a local coordinate $\xi:=x^{-1}$. By abuse of notation, let $F(\xi)$ be the pullback of the universal Hodge filtration by the composite $\varphi:X\to A_{X,\Gamma^{(n)}}\simeq\Gamma^{(n)}\operatorname{\backslash}\cD$ of the higher Albanese map and the isomorphism in \ref{hAlb3}. Since $d\log(x)=-d\log(\xi)$ and $-d\log(x-1)=d\log(\xi)-d\log(1-\xi)$, the connection equation in \ref{eq} now becomes $$ dF(\xi)=\omega F(\xi),\quad \omega:=(2\pi i)^{-1}\frac{d\xi}{\xi}(-N_0+N_1)+(2\pi i)^{-1}\frac{d\xi}{1-\xi}N_1. $$ That is, $$ da_{k-1,k}(\xi)=-(2\pi i)^{-1}\frac{d\xi}{\xi}\quad (2\le k\le n), $$ $$ da_{n,n+1}(\xi)=-(2\pi i)^{-1}\frac{d\xi}{\xi}-(2\pi i)^{-1}\frac{d\xi}{1-\xi}, $$ $$ da_{j,k}(\xi)=-(2\pi i)^{-1}a_{j+1,k}(\xi)\frac{d\xi}{\xi}\quad (3\le k\le n+1,\;1\le j\le k-2). $$ \end{para} \begin{para}\label{sol'} This system is solved by iterated integrals as before, and the solutions are $$ a_{j,k}(\xi)=\frac{1}{(k-j)!}(-(2\pi i)^{-1}\log (\xi))^{k-j}\quad (2\le k\le n,\;1\le j\le k-1), $$ $$ a_{j,n+1}(\xi)=\frac{1}{(n+1-j)!}(-(2\pi i)^{-1}\log (\xi))^{n+1-j} +(-(2\pi i)^{-1})^{n+1-j}l_{n+1-j}(\xi)\quad(1\le j\le n).
$$ \end{para} \begin{para}\label{table'} Table of solutions: $$ \left(\CD 1\;\;&a_{1,2}\;\;&\ldots&a_{1,n}&a_{1,n+1}\\ 0&1&\ddots&\vdots&\vdots\\ \vdots&0&\ddots\;\;&\;a_{n-1,n}\;\;&a_{n-1,n+1}\\ \vdots&\vdots&\ddots&1&a_{n,n+1}\\ 0&0&\ldots&0&1 \endCD\right) $$ $$ =\left(\CD 1\;\;&-(2\pi i)^{-1}\log(\xi)\;\;&\ldots\;\;&\frac{(-(2\pi i)^{-1}\log(\xi))^{n-1}}{(n-1)!}\;\;& \frac{(-(2\pi i)^{-1}\log(\xi))^n}{n!}+(-(2\pi i)^{-1})^{n}l_n(\xi)\\ 0&1&\ddots&\vdots&\vdots\\ \vdots&0&\ddots&-(2\pi i)^{-1}\log(\xi)& \frac{(-(2\pi i)^{-1}\log(\xi))^2}{2!}+(-(2\pi i)^{-1})^{2}l_2(\xi)\\ \vdots&\vdots&\ddots&1&-(2\pi i)^{-1}\log(\xi)-(2\pi i)^{-1}l_{1}(\xi)\\ 0&0&\ldots&0&1 \endCD\right). $$ \end{para} \begin{para}\label{local'} Now let $\sigma_\infty={\mathbb{R}}_{\ge0}N_{\infty}$ with $N_{\infty}:=-N_0+N_1$ for $N_0$, $N_1$ in \ref{action}. Let $F=F(-\alpha',\beta',\lambda_{2}',\dots,\lambda_{n}')$ be as in \ref{Hfilt}. By Lemma \ref{l:MHSact} (i) (3), $(N_{\infty},F)$ satisfies the Griffiths transversality if and only if $\beta'=-\alpha', \lambda_{2}'=\frac{(-\alpha')^2}{2!}, \dots, \lambda_{n-1}'=\frac{(-\alpha')^{n-1}}{(n-1)!}$. If this is the case, $(N_{\infty},F)$ generates a $\sigma_{\infty}$-nilpotent orbit, since admissibility and ${\mathbb{R}}$-polarizability on $\gr^W$ trivially hold. We describe the local structure of $\Gamma_{\sigma_{\infty}}\operatorname{\backslash}\cD_{\sigma_{\infty}}$ near the image $p_{\infty}$ of this nilpotent orbit. Let $Y:=\{(q',\beta', \lambda_2',\dots,\lambda_n')\in {\mathbb{C}}^{n+1}\,|\, \beta'=-\alpha', \lambda_{2}'=\frac{(-\alpha')^2}{2!}, \dots, \lambda_{n-1}'=\frac{(-\alpha')^{n-1}}{(n-1)!} \;\text{if}\; q'=0\}$ be the log manifold with the strong topology, with the structure sheaf of rings which is the inverse image of the sheaf of holomorphic functions on ${\mathbb{C}}^{n+1}$, and with the log structure generated by $q'$. Then there is an open neighborhood $U$ of $(0,0,\dots,0,\lambda_n')$ in ${\mathbb{C}}^{n+1}$ and an open immersion $$ Y\cap U \hookrightarrow \Gamma_{\sigma_{\infty}}\operatorname{\backslash}\cD_{\sigma_{\infty}} $$ of log manifolds which sends $(q',\beta', \lambda_2',\dots,\lambda_n')\in Y\cap U$ with $q'\neq 0$ to the class of $F(-\alpha',\beta',\lambda_2',\dots,\lambda_n')$, where $\alpha'\in {\mathbb{C}}$ is such that $q'=e^{2\pi i\alpha'}$, and which sends $(0,0,\dots,0,\lambda_n')$ to $p_{\infty}$. \end{para} \begin{para}\label{naive'} Near $x=\infty$, i.e., $\xi=0$, a nilpotent orbit in the naive sense is $$ (1)\hskip120pt \exp((2\pi i)^{-1}\log(\xi)N_{\infty})F(0,0,\dots,0,\lambda_n^{\prime\,0}) \hskip120pt $$ $$ =F\left(-(2\pi i)^{-1}\log(\xi),-(2\pi i)^{-1}\log(\xi),\frac{(-(2\pi i)^{-1}\log(\xi))^2}{2!},\dots, \frac{(-(2\pi i)^{-1}\log(\xi))^n}{n!}+\lambda_n^{\prime\,0}\right), $$ where $\lambda_n^{\prime\,0}=(-(2\pi i)^{-1})^nl_n(0)$. The corresponding \lq\lq higher Albanese map" (i.e., local version about $\infty$ of $\tilde\varphi$ in \ref{hAmap}) is $$ (2)\hskip4pt F\left(-(2\pi i)^{-1}\log(\xi),-(2\pi i)^{-1}\log(\xi)-(2\pi i)^{-1}l_1(\xi),\dots,\frac{(-(2\pi i)^{-1}\log(\xi))^n}{n!}+(-(2\pi i)^{-1})^nl_n(\xi)\right)\hskip4pt $$ under the condition $l_j(0)=0$ ($1\le j\le n-1$). These two are asymptotic when $\xi$ goes to the boundary point $b'$ described as follows.
Changing $\infty$ and $\xi$ into $0$ and $x$, respectively, $b'=(\infty,1)$ corresponds to the tangential boundary point $(0,1)$ of Deligne, i.e., $b'$ is the tangential base point over $\infty\in\overline{X}$ with tangent $v'\in T_\infty(\overline{X})=\Hom_{\mathbb{C}}(m_\infty/m^2_\infty,{\mathbb{C}})$ defined by $v'(\xi)=1$. This corresponds to our boundary point $b'=(y',a')$ with $y'=(\infty,h')\in\overline{X}^{\log}$ described as follows. Let $u'$ be the branch of $\log(\xi)$ having real value on ${\mathbb{R}}_{>0}$. The argument function $h':M^{\gp}_{\overline{X},\infty}=\cO_{\overline{X},\infty}^{\times}\times \xi^{\mathbb{Z}} \to {\bf S}^1$ is a group homomorphism sending $f\in\cO_{\overline{X},\infty}^{\times}$ to $f(\xi=0)/|f(\xi=0)|$ and $\xi$ to $v'(\xi)/|v'(\xi)|=1$, and the specialization $a':\cO^{\log}_{\overline{X},y'}={\mathbb{C}}\{\xi\}[u']\to {\mathbb{C}}$ is a ring homomorphism sending $\xi$ to $0$ and $u'$ to $a'(u')=\log(v'(\xi))=\log(1)=0$. \end{para} \begin{para}\label{over'} As above, let $u'$ be the branch of $\log(\xi)$ and $T$ an indeterminate over $\cO_{\overline{X},\infty}$. Then, by 1.1.1, we have an isomorphism $\cO_{\overline{X},y'}^{\log}=\cO_{\overline{X},\infty}[u']\simeq \cO_{\overline{X},\infty}[T]$ of $\cO_{\overline{X},\infty}$-algebras under $(2\pi i)^{-1}u'\leftrightarrow T$. Consider an $\cO_{\overline{X},\infty}$-algebra homomorphism $\cO_{\overline{X},\infty}[T]\to\cO_{\overline{X},\infty}$, $T\mapsto \xi$. Let $\alpha'=(2\pi i)^{-1}\log(\xi)$. Then, as $\xi\to0$, $\exp(-\alpha' N_\infty)(\text{$F$ in \ref{naive'} (2)})$ converges to $F(0,0,\dots,0,\lambda_n^{\prime\,0})$ in $\cD$ (\ref{hAlb3}), and hence the class of ($F$ in \ref{naive'} (2)) converges to the class $p_\infty$ (\ref{local'}) of the nilpotent orbit $(\sigma_\infty,\exp(\sigma_{\infty,{\mathbb{C}}})F(0,0,\dots,0,\lambda_n^{\prime\,0}))$ in $\Gamma_{\sigma_\infty}\operatorname{\backslash}\cD_{\sigma_\infty}$. We thus have an extension of the higher Albanese map over a simply connected neighborhood $\overline{X}_\infty$ of $\infty$ in $\overline{X}$ (Theorem \ref{t:FGS} (2)): $$ \overline\varphi_\infty:\overline{X}_\infty\to\Gamma_{\sigma_\infty}\operatorname{\backslash}\cD_{\sigma_\infty}. $$ This is a morphism in the category ${\cal {B}}(\log)$. The log structure on the source (resp.\ the target) is given by $\xi$ (resp.\ $q'$). The pullback of the universal log mixed Hodge structure on the target coincides with the log mixed Hodge structure on the source. \end{para} \begin{para}\label{log'} By using log mixed Hodge theory, \ref{naive'} is described as follows. Taking the images of the nilpotent orbit in the naive sense \ref{naive'} (1) and the \lq\lq higher Albanese map" \ref{naive'} (2), we have their real analytic extensions with boundary $$ \overline\nu_\infty^{\log}, \; \overline\varphi_\infty^{\log}:\overline{X}_\infty^{\log}\to(\Gamma_{\sigma_\infty}\operatorname{\backslash}\cD_{\sigma_\infty})^{\log}. $$ Here, $\overline{X}_\infty^{\log}$ is like Example 1.1.1, and $(\Gamma_{\sigma_\infty}\operatorname{\backslash}\cD_{\sigma_\infty})^{\log}$ coincides with the moduli of nilpotent $i$-orbits $\Gamma_{\sigma_\infty}\operatorname{\backslash}\cD_{\sigma_\infty}^{\sharp}$ in the present situation (\cite{KNU} III Theorem 2.5.6). Let $\tilde{\overline{X}}_\infty^{\log}$ be the universal covering of $\overline{X}_\infty^{\log}$. The above maps further lift to $$ \tilde{\overline\nu}_\infty^{\log}, \; \tilde{\overline\varphi}_\infty^{\log}:\tilde{\overline{X}}_\infty^{\log}\to\cD_{\sigma_\infty}^{\sharp}.
$$ The boundary point $b'$ in \ref{naive'} can be understood as the point $b'=(z'=0+i\infty)=(u'=-\infty+i0)\in\tilde{\overline{X}}_\infty^{\log}$ (where $2\pi iz':=u'$). We have $(\exp(-(2\pi i)^{-1}\log(\xi)N_\infty)(\ref{naive'}\, (2)))(b')=F(0,0,\dots,0,\lambda_n^{\prime\,0})$, and $$ \tilde{\overline\nu}_\infty^{\log}(b')=\tilde{\overline\varphi}_\infty^{\log}(b')= (\text{nilpotent $i$-orbit generated by $(N_\infty, F(0,0,\dots,0,\lambda_n^{\prime\,0}))$})\in\cD_{\sigma_\infty}^{\sharp}. $$ \end{para} \begin{para}\label{global} For any $\sigma\in\Sigma$, $\Gamma_{\sigma}\operatorname{\backslash}\cD_\sigma\to\Gamma^{(n)}\operatorname{\backslash}\cD_\Sigma$ is a local homeomorphism. This is proved analogously to \cite{KU09} Theorem A (iv). Summing up, we have a global extension over $\overline{X}$ of the higher Albanese map which is an isomorphism onto its image: $$ \overline\varphi:\overline{X}\overset\sim\to\overline\varphi(\overline{X})\subset A_{X,\Gamma^{(n)},\Sigma}\simeq\Gamma^{(n)}\operatorname{\backslash}\cD_\Sigma. $$ \end{para} \begin{para}\label{prob} To study analytic continuations and extensions of polylogarithms in the spaces of nilpotent $i$-orbits $D_{\Sigma}^{\sharp}$, in the spaces of ${\rm{SL}}(2)$-orbits $D_{{\rm{SL}}(2)}$, and in the spaces of Borel-Serre orbits $D_{{\rm {BS}}}$ is an interesting problem. See \cite{KNU} for these extended period domains and their relations, which are described in a fundamental diagram. \end{para} \setcounter{section}{0} \def\Alph{section}{\Alph{section}} \section{Summary of a result of Deligne in \cite{D89}}\label{s:appen} \medskip We add here a summary of a result of Deligne in \cite{D89} for the readers' convenience. \begin{para} Just as in \ref{bsetting}--\ref{Gamma}, consider the situation $X:={\mathbb{P}}^1({\mathbb{C}})\smallsetminus\{0,1,\infty\}\subset\overline{X}:={\mathbb{P}}^1({\mathbb{C}})$. Let $b:=(0,1)$ be the \lq\lq tangential base point" over $0\in\overline{X}$ with tangent $1$. Consider the quotient group $\Gamma$ of $\pi_1(X,b)$ as in \cite{D89} 16.14 (cf.\ \ref{Gamma}): The inclusion $X\subset{\mathbb{G}}_m({\mathbb{C}})={\mathbb{C}}^{\times}$ induces $\pi_1(X,b)\to\pi_1({\mathbb{G}}_m({\mathbb{C}}),b)={\mathbb{Z}}(1)_B$ (the suffix B means Betti, cf.\ \cite{D89}). Let $K:=\Ker(\pi_1(X,b)\to{\mathbb{Z}}(1)_B)$. Let $\Gamma:=\pi_1(X,b)/[K,K]$ and $\Gamma_1:=K/[K,K]$. Then, we have an exact sequence $$1\to\Gamma_1\to\Gamma\to{\mathbb{Z}}(1)_B\to1.$$ \end{para} \begin{para} (\cite{D89} 16.15). Let $\mu_0,\mu_1:{\mathbb{Z}}(1)_B\to\Gamma$ be the monodromies around $0,1$, respectively. Take a generator $u$ of ${\mathbb{Z}}(1)_B$ (e.g.\ $u=2\pi i$), and put $a_j=\mu_j(u)$ $(j=0,1)$. Then, $\Gamma=\langle a_0,a_1\rangle$ with the relation that the conjugates of $a_1$ commute with one another. $\Gamma_1$ is a representation of ${\mathbb{Z}}(1)_B$ with basis (conjugates of $a_1$) under the action $\gamma \mapsto \mu_0(t)\gamma\mu_0(t)^{-1}$ ($\gamma\in\Gamma_1, t\in{\mathbb{Z}}(1)_B$), i.e., $\Gamma_1={\mathbb{Z}}[{\mathbb{Z}}(1)_B]\cdot a_1$, where $\sum_kc_k(a_0^ka_1a_0^{-k})=\sum_kc_k\cdot(2\pi i\cdot k)\cdot a_1$. These are described as $$ \Gamma_1={\mathbb{Z}}[{\mathbb{Z}}(1)_B]\cdot a_1 \simeq{\mathbb{Z}}[u,u^{-1}]\cdot\frac{du}{u},\quad \Gamma={\mathbb{Z}}(1)_B\ltimes\Gamma_1, $$ $$ \sum_kc_k(a_0^ka_1a_0^{-k})=\sum_kc_k\cdot(2\pi i\cdot k)\cdot a_1 \simeq\sum_kc_ku^k\frac{du}{u} $$ (\cite{D89} 16.16). The action of ${\mathbb{Z}}(1)_B$ on $\Gamma_1$ is given by multiplication in ${\mathbb{Z}}[{\mathbb{Z}}(1)_B]={\mathbb{Z}}[u,u^{-1}]$.
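For instance (as one checks directly from the above identification), the commutator $a_0a_1a_0^{-1}a_1^{-1}$, written additively in $\Gamma_1$, corresponds to $u\cdot\frac{du}{u}-\frac{du}{u}=(u-1)\cdot\frac{du}{u}$; this is consistent with the description, given below, of the filtration on $\Gamma_1$ induced by the descending central series of $\Gamma$.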
\end{para} \begin{para} The descending central series of $\Gamma$ induces a filtration on $\Gamma_1$: $$ Z^N(\Gamma)\cap\Gamma_1=((u-1)^{N-1})\cdot\frac{du}{u} \quad (N\geq 1). $$ Let $\Gamma^{(N)}:=\Gamma/Z^{N+1}(\Gamma)$ and $\Gamma_1^{(N)}:={\rm {Image}}(\Gamma_1\to\Gamma^{(N)})$. Then $$ \Gamma_1^{(N)}={\mathbb{Z}}[u,u^{-1}]/(u-1)^N\cdot\frac{du}{u}. $$ Put $u=e^v$ and hence $v=\log u$. Then $$ {\mathbb{Q}}\otimes\Gamma_1^{(N)}={\mathbb{Q}}[u,u^{-1}]/(u-1)^N\cdot\frac{du}{u} ={\mathbb{Q}}[v]/(v^N)\cdot dv $$ and we have $$ \Gamma_1^{(N)}= \left\{\sum_{k=0}^{N-1}c_k\exp(kv)dv\,\bigg|\,c_k\in{\mathbb{Z}}\right\}. $$ \end{para} \begin{para} As groups, identify $$ \varphi:{\mathbb{Q}}[v]/(v^N)\cdot dv\overset\sim\to\prod_1^N{\mathbb{Q}}(n)_B:\quad v^{n-1}dv=\frac{1}{n}dv^{n}\mapsto u^{\otimes n}. $$ Then $$ \sum_{k=0}^{N-1}c_k\exp(kv)dv\overset\varphi\mapsto \hskip180pt $$ $$ \hskip30pt \sum_{k=0}^{N-1}c_k\left(\sum_{n=0}^{N-1}\frac{1}{n!}k^nu^{\otimes n}\right)\otimes u =\sum_{n=1}^N\left(\sum_{k=0}^{N-1}c_k\frac{k^{n-1}}{(n-1)!}\right)u^{\otimes n}. $$ Hence \begin{sbprop} {\rm(\cite{D89} 16.17)}.\quad $(n-1)!\cdot{\rm pr}_n\circ\varphi(\Gamma_1^{(N)})={\mathbb{Z}}(n)_B$. \end{sbprop} \end{para} \begin{para} (\cite{D89} 16.12). Define a Lie algebra action of ${\mathbb{Q}}(1)$ on $\prod_1^N{\mathbb{Q}}(n)$ by $$ a\ast(b_1,b_2,\dots,b_N)=(0,ab_1,\dots,ab_{N-1}), $$ and let ${\mathbb{Q}}(1)\ltimes\prod_1^N{\mathbb{Q}}(n)$ be the associated semi-direct product of Lie algebras. Let $\mu_0, \mu_1:{\mathbb{Q}}(1)\to{\mathbb{Q}}(1)\ltimes\prod_1^N{\mathbb{Q}}(n)$ be morphisms of Lie algebras such that $\mu_0$ is the identity onto the first factor ${\mathbb{Q}}(1)$ and $\mu_1$ is the identity onto the factor ${\mathbb{Q}}(1)$ in the product $\prod_1^N{\mathbb{Q}}(n)$. By abuse of notation, we write $\mu_0,\mu_1:{\mathbb{Q}}(1)\to {\mathbb{Q}}\otimes{\rm {Lie}}\, \Gamma^{(N)}$ also for the induced morphisms. Then there exists a unique Lie algebra isomorphism respecting each $\mu_0$, $\mu_1$: $$ {\mathbb{Q}}(1)\ltimes\prod_1^N{\mathbb{Q}}(n) \overset\sim\to{\mathbb{Q}}\otimes{\rm {Lie}}\, \Gamma^{(N)} ={\mathbb{Q}}(1)\ltimes({\mathbb{Q}}\otimes{\rm {Lie}}\,\Gamma_1^{(N)}) $$ which is given by $\mu_0$ and $\nu_n:=(\operatorname{ad}\mu_0)^{n-1}(\mu_1)$ $(1\le n\le N)$. \end{para} \begin{para}\label{itTate} Let $\operatorname{Lie} U_{\text{DR}}^{(N)}$ be the de Rham realization of the iterated Tate motive in \cite{D89} 16.13. Let $e_\alpha:=\mu_\alpha(1)\in\operatorname{Lie} U_{\text{DR}}^{(N)}$ ($1=\exp(2\pi i)\in{\mathbb{Q}}(1)_{\text{DR}}$, $\alpha=0,1$). Take coordinates $(u,(v_n)_{1\le n\le N})$ of $U_{\text{DR}}^{(N)}$ as follows: $$ (u,(v_n)_n)\mapsto \exp(ue_0)\exp\left(\sum_{n=1}^{N} v_n({\rm{Ad}} e_0)^{n-1}(e_1)\right). $$ \begin{sblem} {\rm(\cite{D89} 19.3.1).} Let $z\in{\mathbb{C}}^{\times}\smallsetminus{\mathbb{R}}_{\ge1}$. The end point of the image in $U_{\text{DR}}^{(N)}({\mathbb{C}})$ of the line segment from $(0,z)$ to $z$ has coordinates $u=0$, $v_n=-l_n(z)$. \end{sblem} \noindent {\it Proof.} Let $z_1,z_2\in{\mathbb{C}}^{\times}\smallsetminus{\mathbb{R}}_{\ge1}$. Take a path from $z_1$ to $z_2$, and take an iterated integral $I_{z_1}^{z_2}$ of $$ dI(t)=\left(\frac{dt}{t}e_0+\frac{dt}{t-1}e_1\right)\cdot I(t) $$ for $I(t)=1+ue_{0}+\sum_{n}v_{n}({\rm{Ad}} e_{0})^{n-1}(e_{1})$. Note $e_0\ast e_0=e_0$, $e_0\ast({\rm{Ad}} e_0)^{n-1}(e_1)=({\rm{Ad}} e_0)^n(e_1)$\;\;$(1\le n\le N)$, $e_1\ast e_0=0$, $e_1\ast e_1=e_1$, $e_1\ast({\rm{Ad}} e_0)^{n-1}(e_1)=0$\;\;$(2\le n\le N)$.
The corresponding differential equation is $$ du=\frac{dt}{t}, \quad dv_1=\frac{dt}{t-1}, \quad dv_n=v_{n-1}\frac{dt}{t}. $$ Take $I(z_1)=\text{identity}\in U_{\text{DR}}^{(N)}({\mathbb{C}})$ as an initial condition and consider $z_2$ as a variable. If $z_1$ is a tangential base point $(0,\tau)$ (\cite{D89} Section 15), replace the initial condition by $$ I(t)\exp\left(-\log\left(\frac{t}{\tau}\right)e_0\right)\to\text{identity}\quad \text{as $t\to0$}. $$ For the line segment from $(0,z)$ to $z$, we have $$ u=\log\left(\frac{t}{z}\right), \quad v_n=-l_n(t). \qed $$ \end{para} \bigskip {\bf Acknowledgements.} The author thanks Kazuya Kato and Chikara Nakayama for joint research. This note grew out of the preparation for, and the discussions during, the workshop \lq\lq Hodge theory and algebraic geometry" held at Tokyo Denki University at the end of August 2017. The author thanks Tetsushi Ito, Satoshi Minabe, Taro Fujisawa, and Atsushi Ikeda for this occasion and for helpful discussions. The author thanks the referee for careful reading and valuable comments. S.\ Usui was partially supported by JSPS Grants-in-Aid for Scientific Research (C) 17K05200.
\section*{Introduction} Let $\HilbScheme{p(t)}{n}$ denote the Hilbert scheme parameterizing all subschemes in a projective space $\mathbb P^n_K$ with Hilbert polynomial $p(t)$ over an infinite field $K$. The Hilbert scheme was first introduced by Grothendieck \cite{Gro} in the $60$s and, although it has been intensively studied by several authors, its structure and features are still quite mysterious. Also the topological structure of a Hilbert scheme is not yet well understood. For instance, in general it is not known how many irreducible components a Hilbert scheme has and which of them are rational. In these directions, only a few special cases have been treated, for example, by J. Fogarty \cite{Fo}, R. Piene and M. Schlessinger \cite{RaSc}, R. Treger \cite{TregerIII}, P. Lella and M. Roggero \cite{LR}. The aim of this paper is to develop algebraic constructive methods in the context of the computation of initial and generic initial ideals, in order to study properties of the Hilbert scheme. Therefore, we often identify a point of $\HilbScheme{p(t)}{n}$ with any ideal in $S:=K[x_0, \dots, x_n]$ defining it as a scheme in $\mathbb P^n_K$ and we endow $S$ with a term order. By our techniques, we give some lower bounds for the number of irreducible components of $\HilbScheme{p(t)}{n}$ and determine the maximal Hilbert function in every irreducible component. Moreover, we obtain some interesting results about the rationality of the components of $\HilbScheme{p(t)}{n}$, in particular for the components that contain a point corresponding to an arithmetically Cohen-Macaulay scheme in codimension two. The notion of generic initial ideal has attracted the attention of many researchers since its first introduction. Indeed, a generic initial ideal $\mathrm{gin}(I)$ of a homogeneous ideal $I\subset S$ contains much information about the ideal $I$ and about the scheme defined by $I$ (e.g.~\cite{Gunnar}). It is noteworthy that $\mathrm{gin}(I)$ can be obtained from $I$ by a flat deformation corresponding to a rational curve on the Hilbert scheme. Furthermore, the set of generic initial ideals coincides with that of Borel-fixed ideals, which already appeared in the $60$s as a useful tool in the investigation of the Hilbert scheme, especially as regards its components \cite{H66}. Indeed, \lq\lq every component and intersection of components of a Hilbert scheme contains at least one Borel-fixed ideal\rq\rq\ \cite{R1}. This property follows essentially from two facts: the generic initial ideals are Borel-fixed and the irreducible components of $\HilbScheme{p(t)}{n}$ are invariant under the action of the general linear group $\mathrm{GL}:=\mathrm{GL}_K(n+1)$ induced by the action on $S$. Every component of $\HilbScheme{p(t)}{n}$ can contain more than one Borel-fixed ideal. Nevertheless, for any given term order in $S$ and for every irreducible component $Y$ of $\HilbScheme{p(t)}{n}$, we identify a special Borel-fixed ideal $\mathbf{G}_Y$, which we call the {\em double-generic initial ideal}, and which gives us information about $Y$. Roughly speaking, $\mathbf{G}_Y$ is the \lq\lq generic initial ideal of the generic (and the general) point of $Y$\rq\rq (see Definition \ref{def:gigi}). The introduction and the investigation of the notion of double-generic initial ideal have been inspired by the ideas of both Gr\"obner strata and marked schemes, which support the main results of this paper, although they do not explicitly appear.
In fact, some of our examples have been obtained by the available computational methods already developed in the study of marked schemes by several authors. It is not easy to get a generic initial ideal: a deterministic computation using parameters can be very heavy, while random changes of coordinates give an uncertain result. The double-generic initial ideal overcomes these difficulties, since it is easy to identify $\mathbf{G}_Y$ starting from the list of Borel-fixed ideals on $Y$: it is sufficient to detect the maximum in this list w.r.t.~a suitable order on the Borel-fixed ideals of $Y$ (see Theorem \ref{th:minoreminore} and Corollary \ref{ginmaggiore}). Throughout the paper, we consider the classical scheme-theoretic embedding of the Hilbert scheme $\HilbScheme{p(t)}{n}$ in the Grassmannian $\GrassScheme{S_m}{q}$, where $S_m$ is the vector space of the homogeneous polynomials in $S$ of degree $m$, for $m$ a sufficiently large degree, and $q:={n+m \choose n}-p(m)$. More precisely, it is sufficient to take $m\geq r$, where $r$ is the Gotzmann number of the Hilbert polynomial $p(t)$ (for more details see for instance \cite{CS}). Since in turn the Grassmannian $\GrassScheme{S_m}{q}$ can be embedded in $\mathbb P(\wedge^q S_m)$ via the Pl\"ucker embedding, a point $V$ of $\HilbScheme{p(t)}{n}$ can be identified with a non-zero totally decomposable tensor, i.e.~an {\it extensor} $f_1\wedge\dots\wedge f_q$, where $f_1,\dots,f_q\in S_m$ are linearly independent polynomials such that the ideal $(f_1,\dots,f_q)$ defines the projective subscheme corresponding to~$V$. In the above setting, we follow the approach presented by D.~Eisenbud in his book \cite{Ei} to deal with the generic initial ideal by means of a suitable total order on the terms of $\wedge^q S_m$, depending on the term order $\prec $ on $S$ (see \eqref{ordEisenbud}). Thus, we associate to every subset $W$ of $\GrassScheme{S_m}{q}$ a suitable set of terms in $\wedge^q S_m$ called the $\Delta$-support of $W$ (see Definition \ref{def:support}), and then introduce the initial extensor $\mathbf{in}(W)$ of $W$ as the maximum of the $\Delta$-support of $W$. Further, we introduce the generic initial extensor $\mathbf{gin}(W)$ of $W$ as the maximum of the $\Delta$-support of the orbit of $W$ under the usual action of $\mathrm{GL}$ on $\GrassScheme{S_m}{q}$ (see Definition \ref{def:extIn}). We prove that $\mathbf{in}(W)$ and $\mathbf{gin}(W)$ do not change when $W$ is replaced by its closure $\overline{W}$ (Proposition \ref{prop:chiusura}). In particular, if $W$ is closed and irreducible, $\mathbf{in}(W)$ and $\mathbf{gin}(W)$ can be read as the initial extensor and the generic initial extensor of either the generic point of $W$ or the set of closed points of $W$ (Proposition \ref{prop:chiusura}, Remark \ref{rem:generic point}). Moreover, exploiting the analogous property for ideals given in \cite[Theorem 15.18]{Ei}, we prove that $\mathbf{gin}(W)$ is fixed by the Borel subgroup of $\mathrm{GL}$, up to multiplication by a non-zero element of $K$ (Theorem \ref{th:eisenbud} and Corollary \ref{cor:chiusura3}). In Section 3, we focus our attention on the subsets $W$ of $\GrassScheme{S_m}{q}$ that are closed and stable under the action of $\mathrm{GL}$ and prove that $\mathbf{in}(W)$ and $\mathbf{gin}(W)$ coincide and the corresponding point of the Grassmannian belongs to $W$ (Proposition \ref{prop:componenti}).
If $W$ is closed and irreducible, then there is a dense open subset of $W$ consisting of points having $\mathbf{gin}(W)$ as generic initial extensor (Proposition~\ref{lemma:UL}). In Section 4, we concentrate on subsets of the Hilbert scheme. We prove that there is a perfect correspondence between the notions of initial and generic initial extensor of any point $V$ of $\HilbScheme{p(t)}{n}$ and those of initial and generic initial ideal of any ideal defining $V$ as a subscheme of $\mathbb{P}^n_K$ (Theorem \ref{gin appartiene} and Corollary \ref{appartenenza}). Furthermore, we prove that if $Y\subseteq \HilbScheme{p(t)}{n}$ is irreducible, closed and invariant under the action of $\mathrm{GL}$, then the ideal associated to $\mathbf{gin}(Y)$ does not depend on the chosen $m\geq r$ for the embedding in a Grassmannian scheme (Corollaries \ref{appartenenza} and \ref{prop:GIGI}). This allows us to define the double-generic initial ideal of $Y$ (Definition \ref{def:gigi}). We also present some relevant subsets of $\HilbScheme{p(t)}{n}$ which are irreducible, closed and invariant under the action of $\mathrm{GL}$ (Examples \ref{ex:singular} and~\ref{ex:gore e punti}). In Section 5, we introduce a suitable partial order $\prec\!\!\prec$ on the terms of $\wedge^q S_m$ and prove that the initial extensor and the generic initial extensor of a closed irreducible subset $W\subseteq \GrassScheme{S_m}{q}$ are, respectively, the maxima of the $\Delta$-supports of $W$ and of its orbit with respect to this partial order (see Definition~\ref{precprec} and Theorem~\ref{th:minoreminore}). Although a partial order might appear less convenient than a total order, this feature of $\prec\!\!\prec$ is in fact a crucial point of the paper, which we exploit in order to obtain some relevant applications. We also explore some of the properties of this partial order, in particular when we consider either the degrevlex term order or a constant Hilbert polynomial (Propositions \ref{prop: m cresce} and \ref{m cresce con polinomio costante}). Finally, in Section \ref{sec:applications}, we present some interesting applications of the previous results. First, we point out a necessary condition for a Borel term to correspond to a point of a given irreducible component of a Hilbert scheme (Proposition \ref{cor:condizione necessaria}). Then, we obtain lower bounds on the number of irreducible components of $\HilbScheme{p(t)}{n}$ simply by counting the maximal elements with respect to $\prec\!\!\prec$ among the extensor terms corresponding to the Borel-fixed ideals in $\HilbScheme{p(t)}{n}$. We observe that this bound depends on the chosen term order on $S$ (Proposition \ref{prop:numero componenti} and Example \ref{ex:4 componenti}). The list of {\em all} the saturated Borel-fixed ideals in $\HilbScheme{p(t)}{n}$ can be obtained by the algorithms presented in \cite{CLMR,PL} in the characteristic zero case, and in \cite{B2014} for every characteristic. Recall that, for every irreducible, closed subset $Y\subseteq \HilbScheme{p(t)}{n}$ that is stable under the action of $\mathrm{GL}$, there is a maximum among the Hilbert functions of its points (Remark \ref{rem:maximum}). We prove that this maximum is reached by the point corresponding to the double-generic initial ideal $\mathbf{G}_Y$, hence by the maximum with respect to $\prec\!\!\prec$, when we choose the degrevlex term order on $S$ (Theorem \ref{funzione massima}). We conjecture that an analogous result holds for minimal Hilbert functions, when the deglex term order is chosen.
We conclude by investigating the rationality of some components of a Hilbert scheme. We prove that, if $Y$ is an isolated irreducible component of $\HilbScheme{p(t)}{n}$ and $\mathbf{gin}(Y)$ corresponds to a smooth point in $Y$, then $Y$ is rational (Theorem \ref{razionale}). In the final Example \ref{ex:tommasino}, we exhibit a Hilbert scheme having two components with the same double-generic initial ideal, whose corresponding point is smooth on each of the two components, but singular on the Hilbert scheme. Thus, by Theorem \ref{razionale} both these components are rational. As a relevant consequence of Theorem \ref{razionale}, by exploiting \cite[Th\'{e}or\`{e}me 2(i)]{Elli} we prove that every component of $\HilbScheme{p(t)}{n}$ containing a Cohen-Macaulay point of codimension $2$ is rational (Corollary \ref{prop:razionale aCM codim 2}). More precisely, the Cohen-Macaulay locus of such a component is the orbit, under the action of $\mathrm{GL}$, of an open subset isomorphic to an affine space. This improves results of J. Fogarty \cite[Theorem 2.4 and Corollary 2.6]{Fo} and R. Treger \cite[Theorem 2.6]{TregerIII} by independent arguments. All these results on rationality give a partial answer to one of the open questions on Hilbert schemes collected at the AIM workshop {\em Components of Hilbert Schemes} (Palo Alto, July 19--23, 2010) \cite[Problem 1.45]{AimQ}. \section{Generalities} \label{sec:generalities} Let $S:=K[x_0,\dots,x_n]$ be the polynomial ring over an infinite field $K$. For every integer $m$, $S_m$ denotes the homogeneous component of degree $m$ of $S$; if $A\subseteq S$, then $A_m$ denotes $A\cap S_m$. Elements and ideals in $S$ will always be assumed to be homogeneous. A term $\tau$ of $S$ is a power product $\tau = x_0^{\alpha_0}\cdot\dots\cdot x_n^{\alpha_n}$. The set of all the terms of $S$ will be denoted by $\mathbb T$. We denote by $\prec$ a given term order on $S$ and assume that $x_0\prec x_1\prec\dots\prec x_n$. In our setting, if $\tau=x_0^{\alpha_0}\dots x_n^{\alpha_n}$ and $\sigma =x_0^{\beta_0}\dots x_n^{\beta_n}$ are two terms in $S$ of the same degree, then \begin{itemize} \item if $\prec$ is the deglex term order, then $\tau\prec \sigma$ if and only if $\alpha_k<\beta_k$, where $k:=\max\{i\in \{0,\dots,n\} \ \vert \ \alpha_i\not= \beta_i\}$; \item if $\prec$ is the degrevlex term order, then $\tau\prec \sigma$ if and only if $\alpha_k>\beta_k$, where $k:=\min\{i\in \{0,\dots,n\} \ \vert \ \alpha_i\not= \beta_i\}$. \end{itemize} If $J$ is a monomial ideal in $S$, $B_J$ denotes the monomial basis of $J$, i.e.~the set of the terms that are minimal generators of $J$. For any non-zero polynomial $f \in S$, the \textit{support} $\textnormal{Supp}\,(f)$ of $f$ is the set of the terms that appear in $f$ with a non-zero coefficient. The maximal term occurring in the support of $f$ with respect to $\prec$ (w.r.t.~$\prec$) is called the {\em initial term} of $f$ w.r.t.~$\prec$ and denoted by $\mathrm{in}_\prec(f)$. If $I$ is an ideal in $S$, the {\em initial ideal} $\mathrm{in}_\prec(I)$ of $I$ w.r.t.~$\prec$ is the ideal generated by the initial terms of the polynomials in $I$. When there is no ambiguity, we will write $\mathrm{in}(f)$ and $\mathrm{in}(I)$ in place of $\mathrm{in}_\prec(f)$ and $\mathrm{in}_\prec(I)$. A set $\{f_1,\dots,f_t\}$ of monic polynomials of an ideal $I$ is the {\em reduced Gr\"obner basis} of $I$ w.r.t.~$\prec$ if $\mathrm{in}(I)=(\mathrm{in}(f_1),\dots,\mathrm{in}(f_t))$ and no term in $\textnormal{Supp}\,(f_i)\setminus \{\mathrm{in}(f_i)\}$ belongs to $\mathrm{in}(I)$.
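The deglex and degrevlex rules stated above are purely combinatorial and translate directly into code. The following Python sketch is our own illustration (it is not part of the cited algorithms): a term is encoded by its exponent tuple $(\alpha_0,\dots,\alpha_n)$, and both functions assume that the two terms have the same degree.
\begin{verbatim}
# Terms of S = K[x_0,...,x_n] as exponent tuples (a_0,...,a_n);
# both tests assume deg(tau) = deg(sigma), as in the text.

def deglex_less(tau, sigma):
    # k = max{ i : a_i != b_i }; tau < sigma iff a_k < b_k
    for i in reversed(range(len(tau))):
        if tau[i] != sigma[i]:
            return tau[i] < sigma[i]
    return False  # equal terms are not strictly smaller

def degrevlex_less(tau, sigma):
    # k = min{ i : a_i != b_i }; tau < sigma iff a_k > b_k
    for i in range(len(tau)):
        if tau[i] != sigma[i]:
            return tau[i] > sigma[i]
    return False

# With x_0 < x_1 < x_2, the two orders disagree on x_0*x_2
# and x_1^2:
assert degrevlex_less((1, 0, 1), (0, 2, 0))  # x_0*x_2 < x_1^2
assert deglex_less((0, 2, 0), (1, 0, 1))     # x_1^2 < x_0*x_2
\end{verbatim}
The final assertions exhibit a pair of terms on which the two orders disagree; this is one reason why several results below depend on the chosen term order.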
We refer to \cite{Mora2005} for an extended treatment of the theory of Gr\"obner bases and related topics. \medskip We consider the general linear group $\mathrm{GL}_K(n+1)$ ($\mathrm{GL}$, for short) of the invertible matrices of order $n+1$ with entries in $K$. If $g=(g_{ij})_{i,j \in \{0,\dots,n\}}$ is a matrix in $\mathrm{GL}$, then $g$ acts on the polynomials of $S$ in the following way: \[ \begin{array}{rcl} g:S&\rightarrow & S\\ f(x_0,\dots,x_n)&\mapsto & g(f)=f(\sum_{j=0}^n g_{0j}x_j,\dots, \sum_{j=0}^n g_{n j}x_j). \end{array} \] For every subset $V \subseteq S$, we let $g(V)=\{g(f(x_0,\dots,x_n))\ \vert\ f \in V\}$. \medskip If $I$ is an ideal of $S$, we can consider $\mathrm{in}(g(I))$. It is well-known that there is a non-empty open subset $\mathcal A$ of $\mathrm{GL}$ such that, for every $g\in \mathcal A$, $\mathrm{in}(g(I))$ is a constant monomial ideal called the {\em generic initial ideal} of $I$ and denoted by $\mathrm{gin}_\prec(I)$ (or $\mathrm{gin}(I)$ when there is no ambiguity). In our setting, $\mathrm{gin}(I)$ is fixed under the action of the Borel subgroup of upper-triangular invertible matrices, hence it is a Borel-fixed ideal (Borel, for short) (see Galligo's Theorem \cite{GA2} for $\mathrm{char}(K)=0$ and \cite[Proposition 1]{BS} for positive characteristic). Note that for every degree $m$: \begin{equation}\label{eq:tagliInGin} \mathrm{in}(I_{ m})=\mathrm{in}(I)_{m}\text{ and }\mathrm{gin}(I_{ m})=\mathrm{gin}(I)_{m}. \end{equation} The saturation of the ideal $I\subset S$ is the ideal $I^{\textnormal{sat}}:=\cup_{k\geq 0}(I:(x_0,\dots,x_n)^k)$ and $I$ is \emph{saturated} if $I=I^{\textnormal{sat}}$. The schemes $\textnormal{Proj}\,(S/I)$ and $\textnormal{Proj}\,(S/I')$ are equal as subschemes of $\mathbb{P}^n_K$ if and only if $I$ and $I'$ have the same saturation. Given a homogeneous ideal $I$ in $S$, we refer to \cite[Chapters 1 and 4]{Ei-syzygies} for the definitions of the Hilbert function (denoted by $H_{S/I}$), Hilbert polynomial and Castelnuovo-Mumford regularity, or simply {regularity} (denoted by $\textnormal{reg}(I)$). Here, we recall that the regularity $\textnormal{reg}(X)$ of the scheme $X=\textnormal{Proj}\,(S/I)$ is the regularity of the ideal $I^{\textnormal{sat}}$ and that $\textnormal{reg}(I)\geq \textnormal{reg}(I^{\textnormal{sat}})$. Moreover, for every $m\geq \textnormal{reg}(I)$, we say that $I$ is {\em $m$-regular}; for such $m$, the Hilbert function $H_{S/I}$ satisfies $H_{S/I}(m)=p(m)$, where $p(t)$ is its Hilbert polynomial. In this case, we will also say that $p(t)$ is the Hilbert polynomial of $I$. Recall that, if $m\geq \textnormal{reg}(I)$, then $\textnormal{Proj}\,(S/I)$ can be completely recovered from the $K$-vector space $I_m$, since $(I_m)^{\textnormal{sat}}=I^{\textnormal{sat}}$. If $\mathrm{char}(K)=0$, the regularity of a Borel-fixed ideal is the maximum of the degrees of its minimal generators. The initial ideal and the generic initial ideal of an ideal $I$ have the same Hilbert function as $I$. However, their regularities satisfy the inequalities $\textnormal{reg}(\mathrm{in}(I))\geq \textnormal{reg}(I)$ and $\textnormal{reg}(\mathrm{gin}(I))\geq \textnormal{reg}(I)$, which are sometimes strict. Moreover, the initial ideal and the generic initial ideal of a saturated ideal need not be saturated. However, if $\prec$ is the degrevlex term order, then $I$ and $\mathrm{gin}(I)$ share the same regularity and each of them is saturated if the other is (see \cite{BS2}).
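When $\mathrm{char}(K)=0$, being Borel-fixed can be checked by the well-known elementary exchange test: a monomial ideal is Borel-fixed if and only if, for every generator $\tau$ and every variable $x_j$ dividing $\tau$ with $j<n$, the term $\tau x_{j+1}/x_j$ still belongs to the ideal. The following Python sketch is our own illustration of this test, with monomials encoded as exponent tuples as in the previous sketch; in positive characteristic this test is no longer sufficient.
\begin{verbatim}
# Monomials as exponent tuples over x_0 < ... < x_n; char(K) = 0.
# A monomial ideal is represented by its monomial basis B_J.

def divides(tau, sigma):
    return all(a <= b for a, b in zip(tau, sigma))

def in_ideal(tau, basis):
    return any(divides(g, tau) for g in basis)

def is_borel_fixed(basis):
    # every elementary move x_j -> x_(j+1) applied to a
    # generator must stay inside the ideal
    for tau in basis:
        for j in range(len(tau) - 1):
            if tau[j] > 0:
                moved = list(tau)
                moved[j] -= 1      # divide by x_j
                moved[j + 1] += 1  # multiply by x_(j+1)
                if not in_ideal(tuple(moved), basis):
                    return False
    return True

# the degrevlex gin from the example that follows:
assert is_borel_fixed([(0, 0, 2), (0, 1, 1), (0, 3, 0)])
# the initial ideal (x_2^2, x_1x_2, x_0^2x_2, x_0^4) is not
# Borel-fixed:
assert not is_borel_fixed([(0, 0, 2), (0, 1, 1), (2, 0, 1),
                           (4, 0, 0)])
\end{verbatim}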
\begin{example}\label{ex:regsale} Let $I$ be the ideal $(x_2^2,x_1x_2+x_0^2)\subset K[x_0,x_1,x_2]$, with $\mathrm{char}(K)=0$ (see \cite{Mayes}). The ideal $I$ is saturated and $\textnormal{reg}(I)=3$, while, for every term order on $S$, $\mathrm{in}(I)=(x_2^2, x_1x_2,x_0^2x_2,x_0^4)$ is not saturated, and $\textnormal{reg}(\mathrm{in}(I))=4$. If $\prec$ is the degrevlex term order, then $\mathrm{gin}(I)=(x_2^2, x_1x_2,x_1^3)$ is a saturated ideal with regularity $3$. If $\prec$ is the deglex term order, then $\mathrm{gin}(I)=(x_2^2,x_1 x_2,x_0^2x_2,x_1^4)$ is not saturated, with $\textnormal{reg}(\mathrm{gin}(I))=4$. \end{example} \section{Initial and generic initial extensors} \label{sec:in e gin} Let $\prec$ be a term order on $S$ and $m$ a positive integer. In the present section, we consider $S_m$ as a $K$-vector space. The \emph{initial space} of a $K$-vector space $V\subset S_m$ is the $K$-vector space $\mathrm{in}(V):=\langle \mathrm{in}(f)\ \vert\ f\in V\rangle$. There is a non-empty open subset $\mathcal A$ of $\mathrm{GL}$ such that, for every $g\in \mathcal A$, $\mathrm{in}(g(V))$ is constant. This constant initial space is called the \emph{generic initial space} of $V$ and is denoted by $\mathrm{gin}(V)$ (see \cite{Gunnar} and the references therein). Let $q$ be a positive integer and $\GrassScheme{S_m}{q}$ the Grassmannian of subspaces of $S_m$ of dimension~$q$. We consider $\GrassScheme{S_m}{q}$ as a subscheme embedded in the projective space $\mathbb P(\wedge^q S_m)$ through the Pl\"ucker embedding (for example, see \cite{Harris1995}). \begin{definition} An \emph{extensor} (of step $q$) on $S_m$ is a non-zero element of $\wedge^q S_m$ of the form $f_1\wedge\dots\wedge f_q$, with $f_1,\dots,f_q\in S_m$. \end{definition} Note that an element $f_1\wedge \dots \wedge f_q \in \wedge^q S_m$ vanishes whenever the vector space generated by $f_1,\dots,f_q$ has dimension less than $q$. Following \cite[Section 15.9]{Ei}, we say that an \emph{extensor term} (or simply a \emph{term}) in $\wedge^{q} S_m$ is an extensor of type $\tau_1\wedge \dots\wedge \tau_{q}$, with $\tau_i\in S_m\cap \mathbb T$; furthermore, we say that a term of $\wedge^q S_m$ is a \emph{normal expression} if $\tau_1\succ \dots\succ \tau_{q}$. We denote by $\mathbf{T}_{S_m}^q$ the set of the normal expression terms and, from now on, whenever we consider a term $\tau_1\wedge \dots \wedge \tau_q\in \wedge^q S_m$, we assume that it belongs to $\mathbf T_{S_m}^q$. Furthermore, $\mathbf T_{S_m}^q$ is the $K$-basis we always consider for the $K$-vector space $\wedge^q S_m$. We can compare the terms in $\mathbf T_{S_m}^q$ lexicographically according to $\prec$, in the following way: \begin{multline} \label{ordEisenbud} \tau_1\wedge \dots\wedge \tau_{q}\prec \sigma_1\wedge \dots\wedge \sigma_{q} \Leftrightarrow\\ \exists \ j\in \lbrace 1,\dots,q\rbrace \ : \ \tau_i=\sigma_i, \ \forall \ i<j, \text{ and } \tau_j\prec \sigma_j. \end{multline} In this setting, for every $L\in \mathbf T_{S_m}^q$, there is a unique Pl\"ucker coordinate $\Delta_L$ on $\GrassScheme{S_m}{q}$ corresponding to $L$, and vice versa. Moreover, $\GrassScheme{S_m}{q}= \textnormal{Proj}\,(K[\Delta_L : L\in \mathbf T_{S_m}^q]) \ =: \ \textnormal{Proj}\,(K[\Delta])$. Therefore, if $V=\langle f_1,\dots,f_q\rangle\subseteq S_m$ is a $K$-vector space of dimension $q$, the extensor $f_1\wedge \dots \wedge f_{q}$ can be written uniquely as $\sum_{L\in \mathbf T_{S_m}^q} c_L L$, with $c_L= \Delta_L(V)\in K$.
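Concretely, up to the orientation conventions, $\Delta_L(V)$ is the maximal minor of the $q\times \dim_K S_m$ matrix of the coefficients of $f_1,\dots,f_q$ on the columns indexed by the terms of $L$. The following Python sketch, a brute-force illustration of ours, lists the extensor terms with non-vanishing Pl\"ucker coordinate for a $K$-point, assuming that the columns are sorted decreasingly w.r.t.~$\prec$.
\begin{verbatim}
from itertools import combinations, permutations

def det(M):
    # Leibniz expansion; adequate for the small q of an example
    n, total = len(M), 0
    for p in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if p[i] > p[j])
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        total += (-1) ** inv * prod
    return total

def delta_support(rows, terms):
    # rows: integer coefficient matrix of f_1,...,f_q w.r.t.
    # the list `terms`, sorted decreasingly by the term order;
    # returns the extensor terms L with Delta_L(V) != 0
    q, support = len(rows), []
    for cols in combinations(range(len(terms)), q):
        if det([[row[c] for c in cols] for row in rows]) != 0:
            support.append(tuple(terms[c] for c in cols))
    return support

# V = <x_2^2, x_1*x_2 + x_0^2> in S_2; degrevlex-sorted terms:
terms = ["x2^2", "x1x2", "x1^2", "x0x2", "x0x1", "x0^2"]
M = [[1, 0, 0, 0, 0, 0],   # x_2^2
     [0, 1, 0, 0, 0, 1]]   # x_1*x_2 + x_0^2
print(delta_support(M, terms))
# [('x2^2', 'x1x2'), ('x2^2', 'x0^2')]
\end{verbatim}
Since the columns are sorted decreasingly, the first member of the returned list is automatically the largest one w.r.t.~the order \eqref{ordEisenbud}, i.e.~it is the initial extensor of $V$ in the sense of Definition \ref{def:extIn} below.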
\medskip For every $L=\tau_1\wedge\dots\wedge \tau_q \in \mathbf T_{S_m}^q$, we will denote by $U_L$ the standard open set of $\GrassScheme{S_m}{q}$ corresponding to $\Delta_L$, namely the locus of points in $\GrassScheme{S_m}{q}$ where $\Delta_L$ is invertible. Moreover, we will denote by $\vs{L}$ the vector space $\langle \tau_1,\dots, \tau_q \rangle$, regarded as a point of $\GrassScheme{S_m}{q}$. \begin{definition}\label{def:support} If $W$ is a non-empty subset of $\GrassScheme{S_m}{q}$, the \emph{$\Delta$-support} of $W$ is the following subset of $\mathbf T_{S_m}^q$: $$\Delta\mathrm{Supp}(W):=\lbrace L \in \mathbf T_{S_m}^q\ \vert\ U_{L}\cap W \not=\emptyset\rbrace.$$ If $W=\lbrace V\rbrace$, we simply write $\Delta\mathrm{Supp}(V)$ for $\Delta\mathrm{Supp}(\lbrace V\rbrace)$. We will apply this definition and the related ones also to subschemes $W$ of $\GrassScheme{S_m}{q}$, meaning that the $\Delta$-support of a scheme is that of its underlying set of points (see also Example \ref{nonridotto}). \end{definition} If $W$ is a non-empty subset of $\GrassScheme{S_m}{q}$, then its $\Delta$-support is non-empty. From now on, we consider subsets of $\GrassScheme{S_m}{q}$ that are non-empty. \begin{proposition}\label{prop:chiudere} Let $W$ be a subset of $\GrassScheme{S_m}{q}$. \begin{enumerate}[(i)] \item \label{prop:chiudere_i} If $\overline W$ is the closure of $W$, then $\Delta\mathrm{Supp}(W)=\Delta\mathrm{Supp}(\overline W)$. \item \label{prop:chiudere_ii} If $W$ is closed and ${\widetilde W}$ is the set of its closed points, then $\Delta\mathrm{Supp}(W)=\Delta\mathrm{Supp}(\widetilde W)$. \item \label{prop:chiudere_iii} If $W$ is closed and irreducible and $V$ is its generic point, then $\Delta\mathrm{Supp}(W)=\Delta\mathrm{Supp}(V)$. \end{enumerate} \end{proposition} \begin{proof} \eqref{prop:chiudere_i} It is immediate that $\Delta\mathrm{Supp}(W)\subseteq \Delta\mathrm{Supp}(\overline W)$, because $W\subseteq \overline W$. We now prove the other inclusion. If $L$ belongs to $\Delta\mathrm{Supp}(\overline W)$, then there is at least one point $V$ in $U_L\cap \overline W$. Thus, every open neighbourhood of $V$ meets $W$ non-trivially; in particular, $U_L$ meets $W$ non-trivially. Hence, $L$ belongs to $\Delta\mathrm{Supp}(W)$. Items \eqref{prop:chiudere_ii} and \eqref{prop:chiudere_iii} are straightforward consequences of \eqref{prop:chiudere_i}, because $\overline{\widetilde W}=W$ (for example, see \cite[Chapter V, Section 3.4, Theorem 3]{bour}) and $\overline{\{V\}}={W}$, in the respective hypotheses. \end{proof} The action of $\mathrm{GL}$ on $S$ defined in Section \ref{sec:generalities} induces an action on $\wedge^q S_m$ in the following natural way: if $H=\tau_1\wedge \dots \wedge \tau_q$ is a term in $\mathbf T_{S_m}^q$, we set $g(H)=g(\tau_1)\wedge\dots \wedge g(\tau_q)\in \wedge^q S_m$, where $g\in \mathrm{GL}$, and then extend the action to every element in $\wedge^q S_m$ by linearity. Note that in general $g(H)$ need not be a term. In a similar natural way, we obtain an action of $\mathrm{GL}$ on $K[\Delta]$ and on $\GrassScheme{S_m}{q}$ (see also \cite[Example 10.18]{Harris1995}): for every $g\in \mathrm{GL}$ and for every $H\in \mathbf T_{S_m}^q$ (hence, every $\Delta_H\in \Delta$), if $g(H)=\sum_{L\in \mathbf T_{S_m}^q} c_L L$, then we set $g(\Delta_H):=\sum_{L\in \mathbf T_{S_m}^q} c_L \Delta_L$.
Thus, for every point $V$ of $\GrassScheme{S_m}{q}$ and for every element $g$ of $\mathrm{GL}$, by $g(V)$ we mean the point of $\textnormal{Proj}\,(K[\Delta])$ corresponding to the prime ideal $g(\mathfrak a)$, with $\mathfrak a$ the prime ideal defining $V$. If $V$ is a $K$-point of $\GrassScheme{S_m}{q}$, i.e.~$V=\langle f_1,\dots,f_q\rangle$ with $f_i\in S_m$, then this action on $V$ is exactly the usual action of $\mathrm{GL}$ on the polynomials generating $V$ as a $K$-vector space. In this context, the orbit of $V\in\GrassScheme{S_m}{q}$ is the set $\O(V):=\lbrace g(V)\ \vert\ g\in \mathrm{GL} \rbrace$. If $W$ is a subset of $\GrassScheme{S_m}{q}$, the orbit of $W$ is $\O(W):=\cup_{V\in W} \O(V)$. \begin{definition}\label{def:extIn} If $W$ is a subset of $\GrassScheme{S_m}{q}$, the \emph{initial extensor} of $W$ is $$\mathbf{in}(W):=\max_{\prec} \Delta\mathrm{Supp}(W)$$ and the \emph{generic initial extensor} of $W$ is \[ \mathbf{gin}(W):=\mathbf{in}(\O(W))=\max_{\prec} \Delta\mathrm{Supp}(\O(W)), \] where the maximum is taken w.r.t.~the order $\prec$, as defined in \eqref{ordEisenbud}. If $W=\lbrace V\rbrace$, we simply write $\mathbf{in}(V)$ for $\mathbf{in}(\lbrace V\rbrace)$ and $\mathbf{gin}(V)$ for $\mathbf{gin}(\lbrace V\rbrace)$. \end{definition} \begin{remark}\label{rem:ginpiugrande} Note that, for every subset $W$ of $\GrassScheme{S_m}{q}$, we have $\mathbf{in}(W)\preceq\mathbf{gin}(W)$, because $\Delta\mathrm{Supp}(W)\subseteq \Delta\mathrm{Supp}(\O(W))$. \end{remark} \begin{definition}\label{borel nel wedge} We say that a term $L\in \mathbf T_{S_m}^q$ is {\em Borel-fixed} ({\em Borel}, for short) if $g(L)\in \langle L\rangle$ for every upper-triangular matrix $g\in \mathrm{GL}$. We will denote by $\mathbf B_{S_m}^q$ the set of Borel terms. \end{definition} Note that a term $L\in\mathbf T_{S_m}^q$ is Borel if and only if the ideal generated by $\vs{L}$ in $S$ is Borel. \begin{theorem}\label{th:eisenbud} Let $V=\langle f_1,\ldots,f_q\rangle$ be a $K$-point of $\GrassScheme{S_m}{q}$ and let $f_1\wedge \dots \wedge f_{q}=\sum_{L\in \mathbf T_{S_m}^q} c_L L$. Then: \begin{enumerate}[(i)] \item\label{th:eisenbud_i} $\Delta\mathrm{Supp}(V)=\{L\in \mathbf T_{S_m}^q\ \vert\ c_L\neq 0\}$. \item $\wedge^q \mathrm{in}(V)= \langle \mathbf{in}(V)\rangle$, or equivalently $\mathrm{in}(V)= \vs{\mathbf{in}(V)}$. \item\label{th:eisenbud_iii} $\wedge^q \mathrm{gin}(V)=\langle \mathbf{gin}(V)\rangle$, or equivalently $\mathrm{gin}(V)= \vs{\mathbf{gin}(V)}$, and $\mathbf{gin}(V)$ is Borel. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate}[(i)] \item As already observed, for a $K$-point $V$ and a term $L$, we have $\Delta_L(V)=c_L \in K$. Hence, $V$ belongs to $U_L$ if and only if $\Delta_L(V)$ is invertible in $K$, thus if and only if $\Delta_L(V)$ is non-zero. \item If $\mathrm{in}(V)=\langle \tau_1,\dots,\tau_q\rangle$, with $\tau_1\succ\dots \succ \tau_q$, we can assume that $f_1,\dots,f_q$ are polynomials such that $\mathrm{in}(f_i)=\tau_i$, for every $i\in\lbrace 1,\dots,q\rbrace$. Then, exploiting item \eqref{th:eisenbud_i} we see that $\mathbf{in}(V)= \max_{\prec}\Delta\mathrm{Supp}(\langle f_1,\dots, f_q\rangle) = \tau_1\wedge \dots\wedge \tau_q$. \item We immediately obtain the equality $\wedge^q \mathrm{gin}(V)=\langle \mathbf{gin}(V)\rangle$ and can conclude by \cite[Theorem 15.18]{Ei}, taking into account also formula \eqref{eq:tagliInGin}.
\end{enumerate} \end{proof} With the following example, we underline that the definition of $\Delta$-support of a subscheme $W$ of $\GrassScheme{S_m}{q}$ does not depend on the possible non-reduced structure of $W$, i.e.~the $\Delta$-support of $W$ coincides with the $\Delta$-support of $W^{red}$. Moreover, whereas our definition of $\Delta$-support can be applied to all the subschemes $W$ of a Grassmannian, the characterization of the $\Delta$-support given in Theorem \ref{th:eisenbud}\eqref{th:eisenbud_i} cannot be extended to every $W$, even when $W$ can be defined by an extensor. \begin{example}\label{nonridotto} Let $S=K[x_0,x_1,x_2]$ be endowed with the degrevlex term order. Let us consider the non-reduced closed subscheme $W \subset \GrassScheme{S_2}{2}=\textnormal{Proj}\,(K[\Delta])$ defined by the ideal $I$ that is generated by $\Delta_L^2$, where $L=x_2^2\wedge x_1x_2$, and by all the other Pl\"ucker coordinates except $\Delta_L$ and $\Delta_{L'}$, where $L'=x_1x_2\wedge x_1^2$. Note that $W$ is a non-empty subscheme of the Grassmannian, since the radical of $I$ is not irrelevant. Indeed, $W$ is a double structure on the closed $K$-point $V'$ whose Pl\"ucker coordinates are all equal to $0$, except $\Delta_{L'}$. Then, $W$ does not intersect the standard open subsets of the Grassmannian, except $U_{L'}$. Thus, we obtain $\Delta\mathrm{Supp}(W)=\Delta\mathrm{Supp}(V')=\{{L'}\}$ and $\mathbf{in}(W)=\mathbf{in}(V')=L'$. Being a double structure over a closed $K$-point, $W$ is isomorphic to the scheme $\mathrm{Spec} (K[\varepsilon])$ (where $\varepsilon^2=0$). Then, using the functorial language, we can see $W$ as an element of $\GrassFunctor{S_2}{2}(K[\varepsilon])$, where $\GrassFunctor{S_2}{2}$ denotes the Grassmann functor (e.g., see \cite[formula (2.1) and Section 5]{BLMR}). Notice that $W$ is a special element of $\GrassFunctor{S_2}{2}(K[\varepsilon])$, since it is given by the rank 2 free (not only locally free) submodule of $S_2\otimes_K K[\varepsilon]$ generated by $f_1=\varepsilon x_2^2-x_1^2$ and $f_2=x_1x_2$. Therefore, we can identify it with the extensor in $\wedge^2 (S_2\otimes_K K[\varepsilon])$ $$f_1\wedge f_2=(\varepsilon x_2^2-x_1^2) \wedge x_1x_2=\varepsilon (x_2^2 \wedge x_1x_2)-(x_1^2 \wedge x_1x_2)=\varepsilon L+L'.$$ Now we can see that Theorem \ref{th:eisenbud}\eqref{th:eisenbud_i} cannot be extended to this case: in fact, the coefficient of $L$ in $\varepsilon L+L'$ is $\varepsilon \neq 0$, but $L$ does not belong to $\Delta\mathrm{Supp}(W)$. \end{example} \begin{proposition}\label{prop:chiusura} Let $W$ be a subset of $\GrassScheme{S_m}{q}$. \begin{enumerate}[(i)] \item \label{prop:chiusura_i} If $\overline W$ is the closure of $W$, then $\mathbf{in}(W)=\mathbf{in}(\overline W)$ and $\mathbf{gin}(W)=\mathbf{gin}(\overline W)$. \item \label{prop:chiusura_ii} If $W$ is closed and ${\widetilde W}$ is the set of its closed points, then $\mathbf{in}(W)=\mathbf{in}(\widetilde W)$ and $\mathbf{gin}(W)=\mathbf{gin}(\widetilde W)$. \item \label{prop:chiusura_iii} If $W$ is closed and irreducible and $V$ is its generic point, then $\mathbf{in}(W)=\mathbf{in}(V)$ and $\mathbf{gin}(W)=\mathbf{gin}(V)$. \end{enumerate} \end{proposition} \begin{proof} For what concerns the initial extensor, the three statements directly follow from Proposition \ref{prop:chiudere}.
For what concerns the generic initial extensor, the statements in \eqref{prop:chiusura_ii} and \eqref{prop:chiusura_iii} are consequences of \eqref{prop:chiusura_i}, since $\overline{\widetilde W}=W$ and $\overline{\{V\}}={W}$ in the respective hypotheses. Then, it remains to prove the statement about the generic initial extensor in \eqref{prop:chiusura_i}. It is sufficient to show that $\Delta\mathrm{Supp}(\O(W))=\Delta\mathrm{Supp}(\O(\overline W))$. We only prove the non-obvious inclusion. Let $L$ be any term in $\Delta\mathrm{Supp}(\O(\overline W))$. By definition, $U_L\cap \O(\overline W)$ is not empty. More precisely, there are an element $g\in \mathrm{GL}$ and a point $V_1\in \overline W$ such that $g(V_1)\in U_L$. Then, $g(V_1)$ belongs to $U_L\cap \overline{g(W)}$, since $g(\overline W)=\overline{g(W)}$. By the definition of closure, this implies $U_L\cap g(W)\neq \emptyset$. Hence, $U_L\cap \O(W)\neq \emptyset$ and $L\in \Delta\mathrm{Supp}(\O(W))$. \end{proof} \begin{remark}\label{rem:generic point} \ \begin{enumerate}[(i)] \item In many cases, we will identify a closed subset $W$ of $\GrassScheme{S_m}{q}$ either with the set $\widetilde W$ of its closed points or, if $W$ is also irreducible, with its generic point. Indeed, by Proposition \ref{prop:chiusura}, $\mathbf{in}(W)$ and $\mathbf{gin}(W)$ can also be read as the initial extensor and the generic initial extensor either of $\widetilde W$ or, if $W$ is irreducible, of the generic point of $W$. These facts will be useful because, up to an extension of the base field $K$ to the residue field $K_V$ (where $V$ is either a suitable point in $\widetilde W$ or the generic point of $W$), they will allow us to reduce our arguments to the case of a rational point. Note that, being $K$ infinite, $\mathrm{GL}$ is Zariski dense in $\mathrm{GL}\otimes_K K'$ for every extension field $K'$ of $K$; hence, the computation of the generic initial extensor does not change after an extension of the base field. Furthermore, if $K$ is algebraically closed and we can identify $W$ with $\widetilde W$, then we do not need to extend the base field. \item If $W$ is closed and irreducible, we can read $\mathbf{in}(W)$ and $\mathbf{gin}(W)$ as the initial extensor and the generic initial extensor of a {\it general point} in $W$. In fact, consider the two non-empty open subsets $W':=W \cap U_{\mathbf{in}(W)}$ and $W'':=W\cap U_{\mathbf{gin}(W)}$ of $W$. The sets $\widetilde{W'}$ and $\widetilde{W''}$ of the closed points in $W'$ and $W''$, respectively, are both dense in $W$. By construction, we have $\mathbf{in}(V)=\mathbf{in}(W)$ for every point $V\in W'$ and $\mathbf{gin}(V)=\mathbf{gin}(W)$ for every point $V\in W''$. \end{enumerate} \end{remark} \begin{corollary} \label{cor:chiusura3} Let $W$ be a subset of $\GrassScheme{S_m}{q}$. Then, $\mathbf{gin}(W)$ belongs to $\mathbf B_{S_m}^q$. \end{corollary} \begin{proof} By Proposition \ref{prop:chiusura}, we can assume that $W$ is closed. As observed in Remark \ref{rem:generic point}, we can replace $W$ by a suitable closed point $V$ of $W$, which can be considered as a $K$-point after a possible extension of the base field. We conclude by Theorem \ref{th:eisenbud}\eqref{th:eisenbud_iii}. \end{proof} \section{Stable subsets under the action of GL} \label{sec:GL} In this section, we focus our attention on the subsets $W$ of $\GrassScheme{S_m}{q}$ that are stable under the action of the group $\mathrm{GL}$, i.e.~such that $g(W)=W$ for every $g\in \mathrm{GL}$.
Under this hypothesis on $W$, we see that the initial extensor and the generic initial extensor of $W$ coincide. If, moreover, we assume that $W$ is also closed and irreducible, then we obtain interesting properties of the two open subsets $W \cap \mathcal V_{\mathbf{in}(W)}$ and $W \cap \mathcal U_{\mathbf{gin}(W)}$ of the points of $W$ having $\mathbf{in}(W) = \mathbf{gin}(W)$ as initial extensor and generic initial extensor, respectively (see formula \eqref{eq:V tondo con L}). Let $V$ be a point in $\GrassScheme{S_m}{q}$. The closure $\overline{\O(V)}$ of its orbit $\O(V)$ is irreducible, because $\O(V)$ is irreducible, and is stable under the action of $\mathrm{GL}$, because $g(\mathcal O(V)) \subseteq \mathcal O(V)$ by definition of orbit, for every $g\in \mathrm{GL}$, and hence $g(\overline{\mathcal O(V)}) \subseteq \overline{g(\mathcal O(V))} \subseteq \overline{\mathcal O(V)}$. In particular, every subset $W$ of $\GrassScheme{S_m}{q}$ that is stable under the action of $\mathrm{GL}$ is a disjoint union of orbits of its points under the action of $\mathrm{GL}$. If, moreover, $W$ is also closed, it is the union of the closures of these orbits. \begin{proposition}\label{prop:componenti} Let $W\subseteq \GrassScheme{S_m}{q}$ be closed and stable under the action of $\mathrm{GL}$. Then: \begin{enumerate}[(i)] \item\label{prop:componenti_i} $\mathbf{in}(W)=\mathbf{gin}(W)$. \item\label{prop:componenti_0} For every $W' \subseteq W$, both $\vs{\mathbf{in}(W')}$ and $\vs{\mathbf{gin}(W')}$ belong to $W$. \item\label{prop:componenti_ii} If $W$ is reducible, then its irreducible components are stable under the action of $\mathrm{GL}$. \item\label{prop:componenti_iii} If $Y_1,\dots, Y_\ell$ are irreducible components of $W$, then every irreducible component of $\cap_{i=1}^\ell Y_i$ is stable under the action of $\mathrm{GL}$. \item\label{prop:componenti_iv} Let $U\subseteq W$ be open and stable under the action of $\mathrm{GL}$; the irreducible components of $W\setminus U$ and those of the closure $\overline U$ of $U$ are stable under the action of $\mathrm{GL}$. \end{enumerate} \end{proposition} \begin{proof} \eqref{prop:componenti_i} To prove $\mathbf{in}(W)=\mathbf{gin}(W)$, it is enough to observe that in the present hypothesis $W=\mathcal O(W)$. \eqref{prop:componenti_0} By Proposition \ref{prop:chiusura}\eqref{prop:chiusura_i} and \eqref{prop:chiusura_ii}, we may assume that $W'$ is closed and choose a closed point $V \in W'$ such that $\mathbf{in}(V)=\mathbf{in}(W')$. Extending the field of scalars, if necessary, we may assume that $V$ is a $K$-point. Exploiting the term order, we can construct a map $\varphi \colon \mathbb A^1_K \rightarrow \GrassScheme{S_m}{q}$ such that $\varphi(1)=V$, $\varphi(0)=\mathrm{in}(V)$ and, for every $c\neq 0,1$, $\varphi(c)=g_c(V)$, where $g_c \in \mathrm{GL}$ corresponds to a diagonal matrix whose diagonal entries are suitable powers of $c$ (see for instance \cite{BM91}). Then we conclude, since $W$ is closed and contains the orbits of its points. This same argument applies to $\mathbf{gin}(W')$. \eqref{prop:componenti_ii} We have to show that every irreducible component $Y$ of $W$ is stable under the action of $\mathrm{GL}$, namely that $g(Y)=Y$ for every $g\in \mathrm{GL}$. By topological arguments, $g(Y)$ is an irreducible component of $W$. Let $V$ be a point in $Y$ not belonging to any other irreducible component of $W$. Then, $Y$ is the only irreducible component of $W$ containing the orbit $\O(V)$.
On the other hand, $\O(V)=g(\O(V))$ is contained in $g(Y)$ and, in particular, $V \in g(Y)$. Therefore, $Y=g(Y)$. Item \eqref{prop:componenti_iii} follows directly from \eqref{prop:componenti_ii}. We now prove \eqref{prop:componenti_iv}. For what concerns $W\setminus U$, it is sufficient to observe that it is closed in $\GrassScheme{S_m}{q}$ and stable under the action of $\mathrm{GL}$. Hence, we can apply \eqref{prop:componenti_ii}. Finally, consider $V\in \overline U\setminus U$. The intersection of every open neighbourhood $A$ of $V$ with $U$ is non-empty. Moreover, for every $g\in \mathrm{GL}$, $g(A)$ is an open neighbourhood of $g(V)$ and its intersection with $U$ is also non-empty, because $U$ is stable under the action of $\mathrm{GL}$. Then, $\overline U$ is stable under the action of $\mathrm{GL}$ too, and we again apply \eqref{prop:componenti_ii}. \end{proof} For every $L\in \mathbf T_{S_m}^q$, consider the following subsets of $\GrassScheme{S_m}{q}$: \begin{equation}\label{eq:V tondo con L} \mathcal V_L:=\lbrace V\in \GrassScheme{S_m}{q} \ \vert\ \mathbf{in}(V)=L\rbrace, \quad \mathcal U_L:=\lbrace V\in \GrassScheme{S_m}{q} \ \vert\ \mathbf{gin}(V)=L\rbrace. \end{equation} Obviously, $\vs{L}$ belongs to $\mathcal V_L$, and $\mathcal U_L$ is non-empty if and only if $L$ belongs to $\mathbf B_{S_m}^q$. Thus, from now on, when we consider $\mathcal U_L$ we assume that $L$ is Borel. It is immediate that $\mathcal U_L$ is stable under the action of $\mathrm{GL}$, while in general $\mathcal V_L$ is not, even when $L$ is Borel-fixed. We also point out that $\mathcal U_L$ need not contain $\vs{\mathbf{in}(V)}$ for every $V\in \mathcal U_L$, even when $\mathbf{in}(V)$ is Borel-fixed, as the following example shows. \begin{example}\label{ex:BorelNonGin} Let us assume $\mathrm{char}(K)=0$ and consider $\GrassScheme{S_2}{3}$ and the degrevlex term order on $S=K[x_0,x_1,x_2]$ with $x_0\prec x_1\prec x_2$. If we take $V=\langle x_2^2,x_0x_2, x_1x_2+x_1^2\rangle$, we obtain $L:=\mathbf{gin}(V)=x_2^2\wedge x_1x_2\wedge x_1^2$ and $L':=\mathbf{in}(V)=x_2^2\wedge x_1x_2\wedge x_0x_2$, both elements of $\mathbf B_{S_2}^{3}$. Hence, $V\in \mathcal U_L$, but $\vs{\mathbf{in}(V)}\notin \mathcal U_L$, because $\mathbf{gin}(\vs{\mathbf{in}(V)})=\mathbf{in}(V)$, since $\mathbf{in}(V)$ is Borel-fixed. On the other hand, $V\in \mathcal V_{L'}$, but $\vs{\mathbf{gin}(V)}\notin \mathcal V_{L'}$. \end{example} For every Borel term $L$ we will now examine the relations between the three subsets $U_L$, $\mathcal V_L$ and $\mathcal U_L$ of $\GrassScheme{S_m}{q}$. It is obvious that $\mathcal V_L \subseteq U_L$, by definition of initial extensor (see Definition \ref{def:extIn}), while in general $\mathcal U_L\nsubseteq U_L$. Furthermore, as shown by Example \ref{ex:BorelNonGin}, we can have both $\mathcal V_L\nsubseteq \mathcal U_L$ and $\mathcal U_L\nsubseteq \mathcal V_L$. Some more detailed relations can be obtained by taking into account the action of $\mathrm{GL}$. \begin{proposition} \label{lemma:UL} \ \begin{enumerate}[(i)] \item \label{lemma:UL_i} For every $L\in \mathbf T_{S_m}^q$, $\mathcal V_{L}= U_{L}\setminus \bigcup_{L'\succ L, L' \in \mathbf T_{S_m}^q } U_{L'}$. \item \label{lemma:UL_ii} For every $L\in \mathbf B_{S_m}^q$, $\mathcal U_{L}= \O(U_{L})\setminus \bigcup_{L'\succ L, L' \in \mathbf B_{S_m}^q } \O(U_{L'})$.
\item \label{lemma:UL_iii} $\{ \mathcal {V}_L\}_{L\in \mathbf T_{S_m}^q}$ and $\{ \mathcal {U}_L\}_{L\in \mathbf B_{S_m}^q}$ are two stratifications of $\GrassScheme{S_m}{q}$ consisting of locally closed subsets. \end{enumerate} Moreover, if $W$ is a closed and irreducible subset of $\GrassScheme{S_m}{q}$, then \begin{enumerate}[(a)] \item\label{lemma:UL_iv} $W\cap \mathcal {V}_{\mathbf{in}(W)}$ is a dense open subset of $W$, while $W\cap \mathcal {V}_{L}$ is empty if $L\succ \mathbf{in}(W)$. \item\label{lemma:UL_v} If $W$ is also stable under the action of $\mathrm{GL}$, then $W\cap \mathcal {U}_{\mathbf{gin}(W)}=W\cap \O(\mathcal V_{\mathbf{gin}(W)}) =W\cap \O(\mathcal V_{\mathbf{in}(W)})$ is a dense open subset of $W$, while $W\cap \mathcal {U}_{L}$ is empty if $L\succ \mathbf{gin}(W)$. \end{enumerate} \begin{proof} \eqref{lemma:UL_i} and \eqref{lemma:UL_ii} follow directly from the definitions of initial extensor and generic initial extensor (see Definition \ref{def:extIn}) and from Corollary \ref{cor:chiusura3}. For \eqref{lemma:UL_iii}, we observe that the two families $\{\mathcal V_L\}_{L\in \mathbf T_{S_m}^q}$ and $\{\mathcal U_L\}_{L\in \mathbf B_{S_m}^q}$ are partitions of the Grassmannian, since every point $V$ of $\GrassScheme{S_m}{q}$ is contained in exactly one set $\mathcal {V}_L$, the one with $L=\mathbf{in}(V)$, and in exactly one set $\mathcal {U}_L$, the one with $L=\mathbf{gin}(V)$. Moreover, by the previous items it follows that $\mathcal {V}_L$ and $\mathcal {U}_L$ are locally closed in $\GrassScheme{S_m}{q}$. \eqref{lemma:UL_iv} The intersection $W\cap U_{L}$ is empty when $L\succ \mathbf{in}(W)$, and $W\cap U_{\mathbf{in}(W)}$ is not empty by definition of initial extensor. Then, exploiting \eqref{lemma:UL_i}, we get that also $W\cap \mathcal V_{L} =\emptyset$ when $L\succ \mathbf{in}(W)$, and that $W\cap \mathcal V_{\mathbf{in}(W)} =W\cap U_{\mathbf{in}(W)}$ is a dense open subset of $W$. To prove \eqref{lemma:UL_v}, we can apply the same arguments as in the previous item, together with Proposition \ref{prop:componenti}\eqref{prop:componenti_i}. \end{proof} \section{Hilbert scheme and double-generic initial ideal of a GL-stable subset} Let $p(t)$ be a Hilbert polynomial and denote by $\HilbScheme{p(t)}{n}$ the Hilbert scheme para\-me\-te\-rizing the set of all subschemes with Hilbert polynomial $p(t)$ in the projective space $\mathbb P^n_K$. From now on, we consider $\HilbScheme{p(t)}{n}$ as a subscheme of $\GrassScheme{S_m}{q}$, where $m$ is an integer larger than or equal to the Gotzmann number $r$ of $p(t)$ and $q:=\binom{n+m}{m}-p(m)$ (for instance, see \cite{CS}). Moreover, let $\prec$ be a term order on $S$. It is well-known that $\HilbScheme{p(t)}{n}$ is invariant under the action of $\mathrm{GL}$, as a consequence of the definition of Hilbert scheme. Thus, in many respects, we can consider the Hilbert scheme simply as a closed subscheme $W$ of the Grassmannian, also stable under the action of $\mathrm{GL}$, and can apply all the results we have obtained in Section \ref{sec:GL} to its irreducible closed subsets that are stable under the action of $\mathrm{GL}$. There is, however, an important issue that comes into play when $\HilbScheme{p(t)}{n}$ is involved. Roughly speaking, it is the relation between the notions of initial and generic initial extensors and the analogous ones for ideals.
Now, we investigate this relation and show that, independently of the integer $m$, there is a well-defined ideal corresponding to the generic initial extensor of a closed irreducible subset of $\HilbScheme{p(t)}{n}$ that is also stable under the action of $\mathrm{GL}$. \medskip From now on, a subset of $\HilbScheme{p(t)}{n}$ that is closed, irreducible, and stable under the action of $\mathrm{GL}$ is called a {\em GL-stable} subset. \medskip Recall that every $K$-point of $\GrassScheme{S_m}{q}$ is a $q$-dimensional $K$-vector space $V$ of $S_m$. It is natural to consider the ideal generated by $V$ in $S$, and we denote it by $I_V$. Exploiting Theorem \ref{th:eisenbud}, we now relate the initial ideal $\mathrm{in}(I_V)$ and the generic initial ideal $\mathrm{gin}(I_V)$ to $\mathbf{in}(V)$ and $\mathbf{gin}(V)$, respectively. In general, if $V$ is any point of $\GrassScheme{S_m}{q}$, $\mathrm{in}(I_V)$ need not coincide with the ideal $I_{\vs{\mathbf{in}(V)}}$ and $\mathrm{gin}(I_V)$ need not coincide with $I_{\vs{\mathbf{gin}(V)}}$, even though their homogeneous parts of degree $m$ do, as shown by the following example. \begin{example}\label{ex:regsale2} Let $V$ be the vector space $\langle x_2^2,x_1x_2+x_0^2\rangle \subset K[x_0,x_1,x_2]_2$. For any term order on $K[x_0, x_1, x_2]$ with $x_0\prec x_1 \prec x_2$, the initial extensor of $V$ is $\mathbf{in}(V)=x_2^2 \wedge x_1x_2$ and the initial ideal of $I_V$ is $\mathrm{in}(I_V)=(x_2^2, x_1x_2,x_0^2x_2,x_0^4)$. It is evident that $x_2^2$ and $x_1x_2$ do not generate $\mathrm{in}(I_V)$. Indeed, the Hilbert polynomial of $K[x_0, x_1, x_2]/(x_2^2, x_1x_2)$ is $t+2$, while the Hilbert polynomial of $K[x_0, x_1, x_2]/I_V$ and of $K[x_0, x_1, x_2]/\mathrm{in}(I_V)$ is $p'(t)=4$. \end{example} If $V$ is a point of a Hilbert scheme, a situation analogous to that of Example \ref{ex:regsale2} cannot occur. \begin{theorem}\label{gin appartiene} Let $V$ be a $K$-point of $\HilbScheme{p(t)}{n}$, and set $V_1:=\vs{\mathbf{in}(V)}$ and $V_2:=\vs{\mathbf{gin}(V)}$. Then, the Hilbert polynomial of $I_{V_1}$ and $I_{V_2}$ is $p(t)$, and $$I_{V_1}=\mathrm{in}(I_V)=(\mathrm{in}(I_V)^{\textnormal{sat}})_{\geq m}, \qquad I_{V_2}=\mathrm{gin}(I_V)=(\mathrm{gin}(I_V)^{\textnormal{sat}})_{\geq m}.$$ \end{theorem} \begin{proof} Recall that $\HilbScheme{p(t)}{n}$ is a closed subscheme of $\GrassScheme{S_m}{q}$ and that it is stable under the action of $\mathrm{GL}$. Hence, we can apply Proposition \ref{prop:componenti}\eqref{prop:componenti_0} to $\HilbScheme{p(t)}{n}$ and get that both $V_1$ and $V_2$ belong to $\HilbScheme{p(t)}{n}$. Therefore, $I_V$, $I_{V_1}$ and $I_{V_2}$ share the same Hilbert polynomial $p(t)$. Thinking of the points of $\HilbScheme{p(t)}{n}$ as subschemes of $\mathbb{P}^n_K$, the regularity of all of them is bounded from above by the Gotzmann number of $p(t)$, in particular by $m$. As a consequence, the saturated ideal in $S$ defining a $K$-point of $\HilbScheme{p(t)}{n}$ can be completely recovered by saturation from its homogeneous part of degree $m$; hence, $(\mathrm{in}(I_V)^{\textnormal{sat}})_{\geq m}=((\mathrm{in}(I_V)^{\textnormal{sat}})_{m})$. Thus, we obtain $I_{V_1}=\mathrm{in}(I_V)$ by observing that these two ideals are generated in degree $m$, that their Hilbert functions coincide in every degree $m'\geq m$, and that $(I_{V_1})_m=\mathrm{in}(V)$ by Theorem \ref{th:eisenbud}. The equality $I_{V_2}=(\mathrm{gin}(I_V)^{\textnormal{sat}})_{\geq m}$ follows by the same arguments.
\end{proof} \begin{corollary}\label{appartenenza} Let $W$ be a closed subset of $\HilbScheme{p(t)}{n}$. Then, the Hilbert polynomial of the ideals $I_{\vs{\mathbf{in}(W)}}$ and $I_{\vs{\mathbf{gin}(W)}}$ is $p(t)$. If, moreover, $W$ is stable under the action of $\mathrm{GL}$, then $I_{\vs{\mathbf{in}(W)}}=I_{\vs{\mathbf{gin}(W)}}$ and this ideal defines a point of $W$. \end{corollary} \begin{proof} By Proposition \ref{prop:chiusura}\eqref{prop:chiusura_ii} and possibly extending $K$ to its algebraic closure as suggested in Remark \ref{rem:generic point}, there is a $K$-point $V$ of $W$ such that $\mathbf{in}(V)=\mathbf{in}(W)$. Then we get the first statement for $\mathbf{in}(W)$ as a consequence of Theorem \ref{gin appartiene}. This same argument applies to $\mathbf{gin}(W)$. The second statement follows directly by applying Proposition \ref{prop:componenti}\eqref{prop:componenti_i} and \eqref{prop:componenti_0} to the GL-stable subset $\overline{\mathcal O(\{V\})}$ of $W$. \end{proof} A relevant and immediate consequence of the above results is that the points of the Hilbert scheme corresponding to the initial extensor and the generic initial extensor of any one of its points do not depend on the Grassmannian in which we embed the Hilbert scheme, recalling that, for every integer $m \geq r$, where $r$ is the Gotzmann number, the Hilbert scheme $\HilbScheme{p(t)}{n}$ can be embedded in $\GrassScheme{S_m}{q}$. If we take $m'\geq r$, $m'\neq m$, we replace $q$ by $q':=\tbinom{n+m'}{n}-p(m')$. \begin{corollary}\label{prop:GIGI} Let $Z$ be a subset of $\HilbScheme{p(t)}{n}$ and denote by $W$ and $W'$ the images of $Z$ by the embeddings of $\HilbScheme{p(t)}{n}$ in $\GrassScheme{S_m}{q}$ and in $\GrassScheme{S_{m'}}{q'}$, respectively, for some $m,m' \geq r$. Then, \[ (I_{\vs{\mathbf{in}(W)}})^{\textnormal{sat}}=(I_{\vs{\mathbf{in}(W')}})^{\textnormal{sat}} \ \text{ and } \ (I_{\vs{\mathbf{gin}(W)}})^{\textnormal{sat}}=(I_{\vs{\mathbf{gin}(W')}})^{\textnormal{sat}}.\] \end{corollary} Due to Corollary \ref{prop:GIGI}, we can finally give the following definition. \begin{definition}\label{def:gigi} Let $Y$ be a {\em GL-stable} subset of $\HilbScheme{p(t)}{n}$. We will denote by $\mathbf{G}_Y$ the ideal $(I_{\vs{\mathbf{gin}(Y)}})^{\textnormal{sat}}$ in $S$ and call it the {\em double-generic initial ideal of $Y$}. \end{definition} \begin{example} Given a Hilbert polynomial $p(t)$ with Gotzmann number $r$, an integer $n$ and a term order $\prec$ on $S$, let $I$ be the ideal generated in $S$ by the greatest $\tbinom{n+r}{n}-p(r)$ terms of degree $r$ w.r.t.~$\prec$. If $p(t)$ is the Hilbert polynomial of $I$, then $I$ is a hilb-segment ideal (e.g.~\cite[Definition 3.7]{CLMR}). If the hilb-segment ideal exists (e.g.~\cite[Proposition 3.17]{CLMR}), it is the double-generic initial ideal $\mathbf{G}_Y$ of every irreducible component $Y$ of $\HilbScheme{p(t)}{n}$ containing it. \end{example} We end this section by observing that, in addition to the irreducible components, there are many other relevant subsets of the Hilbert scheme that are invariant under the action of $\mathrm{GL}$. We now list a few examples, which are obtained by applying Proposition \ref{prop:componenti}. \begin{example}\label{ex:singular}\ \begin{enumerate}[(i)] \item The irreducible components of the singular locus of $\HilbScheme{p(t)}{n}$ are GL-stable.
\item Let $f:\mathbb N\rightarrow \mathbb N$ be the Hilbert function of a subscheme of $\mathbb{P}^n_K$ and let $W$ be the locus of $\HilbScheme{p(t)}{n}$ of points corresponding to subschemes $Z$ of $\mathbb{P}^n_K$ whose Hilbert function $H_Z$ satisfies $H_Z(t) \leq f(t)$ for every $t\in \mathbb N$. The irreducible components of $W$ are GL-stable by Proposition \ref{prop:componenti}. Indeed, $W$ is stable under the action of $\mathrm{GL}$ and is closed by semicontinuity (for example, see \cite{mall}). \item For any given integer $s$, let $W$ be the locus of $\HilbScheme{p(t)}{n}$ of points corresponding to subschemes of $\mathbb P^n_K$ whose regularity is at most $s$, that is, the Hilbert scheme with bounded regularity $\HilbSchemeR{p(t)}{n}{s}$, which is studied in \cite{BBR}. It is obviously stable under the action of $\mathrm{GL}$, and it is open by semicontinuity. Then, the irreducible components of the complement $\HilbScheme{p(t)}{n}\setminus W$ (i.e.~the set of points of $\HilbScheme{p(t)}{n}$ corresponding to subschemes with regularity $\geq s+1$) are GL-stable. \end{enumerate} \end{example} \begin{example}\label{ex:gore e punti} Let $p(t)=c$ be a constant Hilbert polynomial. \begin{enumerate}[(i)] \item The locus of $\HilbScheme{c}{n}$ of points corresponding to schemes that are supported on a unique point is closed and stable under the action of $\mathrm{GL}$. Thus, its irreducible components are GL-stable. \item The locus of $\HilbScheme{c}{n}$ of points corresponding to Gorenstein schemes is an open subset, stable under the action of $\mathrm{GL}$. Thus, the irreducible components of its closure are GL-stable. \item The irreducible components of the closure of the locus of $\HilbScheme{c}{n}$ of the Gorenstein schemes that are supported on a unique point are GL-stable. \item The irreducible components of the locus of $\HilbScheme{c}{n}$ of points corresponding to non-reduced subschemes of $\mathbb{P}^n_K$ are GL-stable. \end{enumerate} Many other examples can be obtained by considering every locus of $\HilbScheme{p(t)}{n}$ that is defined by any property of its points as subschemes of $\mathbb P^n_K$, because such a locus is invariant under the action of $\mathrm{GL}$. \end{example} \section{\texorpdfstring{The partial order {$\prec\!\!\prec$} on the terms of $\wedge^q S_m$}{The partial order << on the terms of the q-th exterior power of S}} \label{sec:minoreminore} In this section, we introduce a partial order $\prec\!\!\prec$ between finite subsets with the same cardinality $q$ of a totally ordered set $T$ and prove some of its properties. Then, we apply these results to the case of lists of terms in $S_m$ and extend them to terms in $\wedge^q S_m$. We obtain that the double-generic initial ideal of a GL-stable subset $Y$ of a Hilbert scheme is the maximum among all the Borel terms in $Y$ with respect to the partial order $\prec\!\!\prec$. This characterization gives a simple and deterministic method to recognize the double-generic initial ideal of $Y$ among all the Borel ideals in $Y$. \begin{definition}\label{partial order} Let $(T,\prec)$ be a totally ordered set and consider $A, B\subseteq T$ containing $q$ distinct elements each. We write $A\prec\!\!\prec B$ if there is a bijection $\omega:A\rightarrow B$ such that $a\preceq \omega(a)$, for every $a\in A$. \end{definition} It is quite obvious that $\prec\!\!\prec$ is a partial order and that, in particular, $A\prec\!\!\prec A$.
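Read literally, the definition asks for one order-respecting bijection among the $q!$ possible ones. For a small finite model of $(T,\prec)$ this can be tested by brute force, as in the following Python sketch (our own illustration, taking $(T,\prec)=(\mathbb Z,<)$); the last two assertions exhibit two sets that are incomparable, so the order is genuinely partial.
\begin{verbatim}
from itertools import permutations

def precprec_bruteforce(A, B, less=lambda a, b: a < b):
    # A << B iff some bijection w: A -> B has a <= w(a)
    return any(all(a == b or less(a, b)
                   for a, b in zip(list(A), perm))
               for perm in permutations(B))

# (T, <) = (Z, <): {1,4} << {2,5}, while {1,4} and {2,3}
# are incomparable
assert precprec_bruteforce({1, 4}, {2, 5})
assert not precprec_bruteforce({1, 4}, {2, 3})
assert not precprec_bruteforce({2, 3}, {1, 4})
\end{verbatim}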
The following technical result allows a better understanding of its meaning. \begin{proposition}\label{intrinseco} Let $(T,\prec)$ be a finite, totally ordered set and $A, B$ be two subsets of $T$ containing $q$ distinct elements each. Further, we index the elements of $A=\lbrace a_1,\dots,a_q\rbrace$ so that $a_i\succeq a_{i+1}$ for every $i \in \lbrace 1,\dots,q-1\rbrace$, and similarly for $B$. The following are equivalent: \begin{enumerate}[(i)] \item \label{intrinseco_i} $A\prec\!\!\prec B$; \item \label{intrinseco_ii} for every element $c \in T$, $\vert \{{a_i} : {a_i} \succeq c \}\vert \leq \vert\{b_j : b_j \succeq c \}\vert$; \item \label{intrinseco_iii} for every $i \in \lbrace 1,\dots,q \rbrace$, ${a_i}\preceq {b_i}$; \item \label{intrinseco_iv} $\lbrace a_1,\dots,a_q \rbrace\setminus \lbrace b_1,\dots,b_q\rbrace \prec\!\!\prec \lbrace b_1,\dots,b_q\rbrace \setminus \lbrace a_1,\dots,a_q \rbrace.$ \end{enumerate} \end{proposition} \begin{proof} We first prove that item (\ref{intrinseco_i}) implies item (\ref{intrinseco_ii}). Observe that we can assume $c\in A\cup B$, by replacing $c$ if necessary by the smallest term w.r.t.~$\prec$ in $A\cup B$ which is greater than $c$. If $c=a_s\in A$, then there are exactly $s$ elements in $A$ bigger than or equal to $c$: more precisely, $a_i\succeq c$ for $i\in\{1,\dots,s\}$. Consider the bijection $\omega: A\rightarrow B$ such that $\omega(a_i)\succeq a_i$, for every $i\in \lbrace 1,\dots, q\rbrace$. Since $\{b_i\in B \ \vert \ b_i\succeq a_s=c\}\supseteq \left\{\omega(a_j)\right\}_{j\in \{1,\dots,s\}}$, it is immediate that $\vert\{b_i\in B \ \vert \ b_i\succeq a_s=c\}\vert\geq s$. Otherwise, if $c=b_s \in B$, then $\vert \{b_j \in B \ \vert \ b_j \succeq b_s=c \}\vert=s$, and for every $j>s$ we have $b_s \succ b_j\succeq \omega^{-1}(b_j)$. Hence $\vert \{a_j \in A \ \vert \ a_j \succeq b_s=c \}\vert \leq s$, since this set is contained in $A\setminus \{ \omega^{-1} (b_{s+1}), \dots , \omega^{-1}(b_q) \}$. Item (\ref{intrinseco_ii}) implies item (\ref{intrinseco_iii}), by contradiction: if there is $j\in \lbrace 1,\dots, q\rbrace$ such that $a_j\succ b_j$, then $\vert \lbrace a_i\in A \ \vert \ a_i\succeq a_j \rbrace \vert=j>\vert \lbrace b_i \in B \ \vert \ b_i\succeq a_j \rbrace \vert$. Finally, if item \eqref{intrinseco_iii} holds, then we can consider the bijection $\omega: A\rightarrow B$ defined as $\omega(a_i)=b_i$, which fulfills Definition \ref{partial order}. The equivalence between items \eqref{intrinseco_ii} and \eqref{intrinseco_iv} is immediate. \end{proof} Proposition \ref{intrinseco}\eqref{intrinseco_iii} points out a \lq\lq natural\rq\rq\ bijection between the sets $A$ and $B$ which fulfills Definition \ref{partial order}, but this does not mean that there are no other such bijections. If, for instance, $b_1 \succeq \dots \succeq b_q\succ a_1\succeq \dots \succeq a_q$, then every bijection from $A$ to $B$ fulfills Definition \ref{partial order}. If $\prec$ is a term order on $S$, then for every integer $m$ the pair $(S_m\cap\mathbb T,\prec)$ is a finite, totally ordered set. From now on, we identify every normal expression $\tau_1\wedge \dots \wedge \tau_q\in \wedge^q S_m$ with the set $\lbrace \tau_1,\dots,\tau_q\rbrace\subset S_m\cap\mathbb T$.
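With terms of a fixed degree encoded as exponent tuples, criterion \eqref{intrinseco_iii} of Proposition \ref{intrinseco} replaces the factorial search of the previous sketch by a single componentwise comparison of the decreasingly sorted sets. A minimal Python sketch, again our own illustration:
\begin{verbatim}
from functools import cmp_to_key

def degrevlex_less(t, s):
    # as in the earlier sketch; same-degree exponent tuples
    for i in range(len(t)):
        if t[i] != s[i]:
            return t[i] > s[i]
    return False

def precprec_leq(A, B, less=degrevlex_less):
    # criterion (iii): sort decreasingly, compare position
    # by position
    if len(A) != len(B):
        return False
    key = cmp_to_key(lambda t, s:
                     -1 if less(t, s) else (0 if t == s else 1))
    A_d = sorted(A, key=key, reverse=True)  # a_1 >= ... >= a_q
    B_d = sorted(B, key=key, reverse=True)
    return all(a == b or less(a, b) for a, b in zip(A_d, B_d))

# {x_2^2, x_0x_2} lies strictly below {x_2^2, x_1x_2} in
# degrevlex:
A = [(0, 0, 2), (1, 0, 1)]
B = [(0, 0, 2), (0, 1, 1)]
assert precprec_leq(A, B) and not precprec_leq(B, A)
\end{verbatim}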
\begin{definition}\label{precprec} For every two terms $\tau_1\wedge \dots \wedge \tau_q$ and $\sigma_1\wedge \dots \wedge \sigma_q\in \mathbf T_{S_m}^q$, we write $\tau_1\wedge \dots \wedge \tau_q\prec\!\!\prec \sigma_1\wedge \dots \wedge \sigma_q$ if and only if $\lbrace \tau_1,\dots,\tau_q\rbrace\prec\!\!\prec \lbrace \sigma_1,\dots,\sigma_q\rbrace$, according to Definition \ref{partial order}. \end{definition} Now, we can apply the partial order $\prec\!\!\prec$ of Definition \ref{precprec} and the results of Proposition \ref{intrinseco} to the terms of $\wedge^q S_m$. If $N$ is a set of terms of $\wedge^q S_m$, by $\max_{\prec\!\!\prec} N$ we denote (if it exists) the maximum of $N$ w.r.t.~the order $\prec\!\!\prec$. In Remark \ref{rem:ginpiugrande}, we observed that, for every point $V$ of $\GrassScheme{S_m}{q}$, we have $\mathbf{in}(V)\preceq\mathbf{gin}(V)$. Now, we observe that there is a stronger relation between $\mathbf{in}(V)$ and $\mathbf{gin}(V)$. More generally, we prove that we can replace the order of \eqref{ordEisenbud} by the order $\prec\!\!\prec$ of Definition \ref{precprec} in the study of initial and generic initial extensors of irreducible closed subsets of $\GrassScheme{S_m}{q}$. \begin{theorem}\label{th:minoreminore} Let $W$ be a subset of $\GrassScheme{S_m}{q}$ such that $\overline W$ is irreducible. Then \begin{enumerate}[(i)] \item \label{th:minoreminore_i} $\mathbf{in}(W)=\max_{\prec\!\!\prec} \Delta\mathrm{Supp}(W)$; \item \label{th:minoreminore_ii} $\mathbf{gin}(W)=\max_{\prec\!\!\prec} \Delta\mathrm{Supp}(\O(W))$; \item \label{th:minoreminore_iii} $\mathbf{in}(W)\prec\!\!\prec \mathbf{gin}(W)$; \item \label{th:minoreminore_iv} $\mathbf{gin}(W)=\max_{\prec\!\!\prec}(\mathbf B_{S_m}^q \cap \overline W)$. \end{enumerate} \end{theorem} \begin{proof} For what concerns statement \eqref{th:minoreminore_i}, thanks to Proposition \ref{prop:chiudere}\eqref{prop:chiudere_i} and Proposition \ref{prop:chiusura}\eqref{prop:chiusura_i}, we can assume that $W$ is closed. Furthermore, if $V$ is the generic point of $W$, then $\mathbf{in}(V)=\mathbf{in}(W)$ and $\Delta\mathrm{Supp}(V)=\Delta\mathrm{Supp}(W)$ by Propositions \ref{prop:chiudere}\eqref{prop:chiudere_iii} and \ref{prop:chiusura}\eqref{prop:chiusura_iii}. Up to an extension of the field of scalars, it is sufficient to prove statement \eqref{th:minoreminore_i} for the $K$-point $V$. Let $\mathbf{in}(V)=\tau_1\wedge\dots\wedge \tau_q$. We show that $\sigma_1\wedge\dots\wedge \sigma_q\prec\!\!\prec \tau_1\wedge\dots\wedge \tau_q$, for every $\sigma_1\wedge\dots\wedge \sigma_q\in \Delta\mathrm{Supp}(V)\setminus\lbrace \mathbf{in}(V)\rbrace$. We can choose the unique basis $f_1,\dots,f_q\in S_m$ of the $K$-vector space $V$ such that $\mathrm{in}(f_i)=\tau_i$ and $f_i=\tau_i+\sum_{\eta_{j_i}\prec \tau_i} c_{i{j_i}} \eta_{j_i}=\sum_{\eta_{j_i}\preceq \tau_i} c_{i{j_i}} \eta_{j_i}$, where the coefficient of $\tau_i$ in the last summation is $1$. Consider $f_1\wedge \dots \wedge f_q=\sum_{L=\sigma_1\wedge\dots\wedge\sigma_q\in \Delta\mathrm{Supp}(V)} \Delta_L(V)\, \sigma_1\wedge\dots \wedge \sigma_q$.
For every normal expression $\sigma_1\wedge\dots\wedge \sigma_q\in \Delta\mathrm{Supp}(V)\setminus\lbrace \mathbf{in}(V)\rbrace$, by construction of $f_1\wedge \dots \wedge f_q$ we have $$\sigma_1\wedge\dots \wedge \sigma_q=\mathrm{sgn}(\gamma)\, \eta_{j_{\gamma(1)}}\wedge \dots \wedge \eta_{j_{\gamma(q)}},$$ for some permutation $\gamma$ of $\lbrace 1,\dots,q\rbrace$, where each $\eta_{j_{\gamma(\ell)}}$ appears with non-null coefficient in $f_{\gamma(\ell)}$. Hence, we can consider the bijection $\omega:\lbrace\sigma_1,\dots,\sigma_q \rbrace\rightarrow \lbrace \tau_1,\dots,\tau_q\rbrace$ such that $\omega(\sigma_\ell)=\tau_{\gamma(\ell)}$. This bijection $\omega$ fulfills Definition \ref{partial order}. To prove \eqref{th:minoreminore_ii}, we observe that \[\mathbf{gin}(W)=\mathbf{in}(\mathcal O(W))= \mathbf{in}(\overline{\mathcal O(W)})= \max_{\prec\!\!\prec} \Delta\mathrm{Supp}(\overline{\O(W)}),\] by definition of generic initial extensor, by Proposition \ref{prop:chiusura} and by item \eqref{th:minoreminore_i}. Then, we conclude because $\Delta\mathrm{Supp}(\overline{\O(W)})= \Delta\mathrm{Supp}(\O(W))$ by Proposition \ref{prop:chiudere}. To prove \eqref{th:minoreminore_iii}, it is now enough to observe that $\Delta\mathrm{Supp}(W)$ is included in $\Delta\mathrm{Supp}(\O(W))$. Item \eqref{th:minoreminore_iv} follows from the previous items and from Theorem \ref{th:eisenbud}\eqref{th:eisenbud_iii}. \end{proof} \begin{remark} It is important to observe that {\em the statement of Theorem \ref{th:minoreminore} does not hold true under the weaker hypothesis that the subset $W$ is only closed}. Indeed, the generic initial extensors of its irreducible components need not be comparable w.r.t.~the partial order $\prec\!\!\prec$. This is a crucial point for the application of this result in Subsection \ref{subsec:1}. \end{remark} Let $I$ and $J$ be two saturated monomial ideals in $S$ and assume that their Hilbert functions are equal in degree $m$, that is, $\dim_K (I_m)=\dim_K (J_m) =q$. If $I_m=\langle \sigma_1,\dots,\sigma_q\rangle$ and $J_m=\langle \tau_1,\dots,\tau_q\rangle$, then we can compare the sets of terms $\{ \sigma_1,\dots,\sigma_q\}$, $\{\tau_1,\dots,\tau_q\}$ w.r.t.~$\prec\!\!\prec$, and, if $\{ \sigma_1,\dots,\sigma_q\} \prec\!\!\prec \{ \tau_1,\dots,\tau_q\}$, by abuse of notation we will simply write $I_m\prec\!\!\prec J_m$. This relation between the degree $m$ components of the two ideals $I$ and $J$ cannot be considered as a relation between the two ideals. Indeed, if $m'\neq m$, then the components of degree $m'$ of $I$ and $J$ may have different dimensions, hence they are no longer comparable using $\prec\!\!\prec$. We now present some interesting cases in which the relation $\prec\!\!\prec$ among the degree $m$ parts of two monomial ideals lying on the same Hilbert scheme is preserved when passing to another degree. The first one concerns the double-generic initial ideal of a GL-stable subset and follows from Theorem \ref{th:minoreminore}. \begin{corollary}\label{ginmaggiore} Let $Y$ be any GL-stable subset of $\HilbScheme{p(t)}{n}$ and $r$ be the Gotzmann number of the Hilbert polynomial $p(t)$. If $J$ is the saturated ideal defining a point of $Y$, then for every $m\geq r$ \begin{equation}\label{minoreminoreOK} J_m \prec\!\!\prec (\mathbf{G}_Y)_m . \end{equation} \end{corollary} \begin{proof} Taking the embedding of $Y$ in $\GrassScheme{S_m}{q}$, the point defined by $J$ is the $K$-vector space $J_m$. Then, $\wedge^q J_m=\mathbf{in}(J_m)$ belongs to $\Delta\mathrm{Supp}(Y)$.
By Theorem \ref{th:minoreminore}, we have $\mathbf{gin}(Y)=\mathbf{in}(Y)=\max_{\prec\!\!\prec} \Delta\mathrm{Supp}(Y)$; hence, in particular, $\wedge^q J_m \ {\prec\!\!\prec} \ \mathbf{gin}(Y)$. We can conclude because, by Corollary~\ref{prop:GIGI}, $\vs{\mathbf{gin}(Y)}$ is the homogeneous component of degree $m$ of the double-generic initial ideal \hskip -0.5mm $\mathbf{G}_Y$ of~$Y$. \end{proof} \begin{remark} Let $Y$ be GL-stable. If $r_Y$ is the maximum among the regularities of the points of $Y$, then \eqref{minoreminoreOK} holds true also for every integer $m'$ with $r_Y\leq m' <r$, by \cite[Theorem 1.2]{BBR}. \end{remark} Let $I$ and $J$ be any two saturated monomial ideals defining points of the same Hilbert scheme. We now show that $J_m \prec\!\!\prec I_m$ is equivalent to $J_{m+1} \prec\!\!\prec I_{m+1}$ if $J$ and $I$ are Borel-fixed and $\prec$ is the degrevlex term order. Moreover, if $p(t)$ \hskip -0.5mm is a constant polynomial, this result holds true for every term order on~$S$. For every term $\tau = x_0^{\alpha_0}\cdot\dots\cdot x_n^{\alpha_n}$, we set $\min(\tau):=\min\{i\in \{0,\dots,n\} : \alpha_i\not= 0\}$. We recall that if the monomial ideal $J$ is Borel-fixed and $J=J_{\geq m}$, with $m\geq \textnormal{reg}(J^{\textnormal{sat}})$, then $J$ is a stable ideal (see \cite{Seiler2009I,Seiler2009II} and the references therein for details about stable ideals and their properties). We extend \cite[Definition 2.7]{mall} to such an ideal: we call {\em growth-vector of $J_m$} the vector $gv(J_m):=(v_0,\dots,v_n)$, with $v_i:=\vert\{ \tau \in J\cap \mathbb T_m : \min(\tau)=i \}\vert$. \begin{lemma}\label{lemma:growth-vector} Let $J$ be any $m$-regular Borel-fixed ideal with Hilbert polynomial $p(t)$. Then the growth-vector of $J_m$ depends only on $p(t)$ and $m$. \end{lemma} \begin{proof} Let $v=(v_0,\dots,v_n)$ be the growth-vector of $J_m$. By \cite[Lemma 1.1]{EK}, for every $t\geq m$, we have \begin{equation} q(t)=\dim S_t -p(t)=\sum _{i=0}^n v_i \tbinom{t-m+i}{i}. \end{equation} Since $J$ is Borel-fixed and $m$-regular, $J_{\geq m}$ is stable and the term $x_n^m$ belongs to $J_m$ \cite[Proposition 4.4]{Seiler2009II}, so we obtain that $v_n=1$. By descending induction, $v_j$ is $j!$ times the leading coefficient of $q(t)-\sum _{i=j+1}^{n} v_i\binom{t-m+i}{i}$. \end{proof} From now on, let $J$ and $I$ be $m$-regular {Borel-fixed} ideals in $S$ such that $\textnormal{Proj}\,(S/J)$ and $\textnormal{Proj}\,(S/I)$ share the same Hilbert polynomial $p(t)$. Given a term $\tau=x_0^{\alpha_0}\dots x_n^{\alpha_n} \in \mathbb T$, in the following we let $\partial_{x_i}(\tau):=\alpha_i$. \begin{lemma}\label{minima} Let $\prec$ be the degrevlex term order on $S$. Assume $J_m \prec\!\!\prec I_m$, and let $\omega\colon J\cap \mathbb{T}_m\rightarrow I\cap \mathbb{T}_m$ be any bijection such that $\tau \preceq \omega(\tau)$ for every $\tau \in J\cap \mathbb{T}_m$. If $\ell:=\min(\tau)$, then $\ell=\min(\omega(\tau))$ and $\partial_{x_\ell} (\tau)\geq\partial_{x_\ell} (\omega(\tau))$. \end{lemma} \begin{proof} Since we are using the degrevlex term order, $\min(\tau)\leq \min(\omega(\tau))$ holds for every $\tau \in J\cap \mathbb{T}_m$. Hence, $\min(\tau)=\min(\omega(\tau))$, because the growth-vector of $J_m$ is the same as that of $I_m$ by Lemma \ref{lemma:growth-vector}. The second part of the statement follows directly from the definition of the degrevlex term order.
\end{proof}

\begin{proposition}\label{prop: m cresce}
If $\prec$ is the degrevlex term order on $S$, then $J_m \prec\!\!\prec I_m$ if and only if $J_{m+1} \prec\!\!\prec I_{m+1}$.
\end{proposition}

\begin{proof}
First, assume that $J_m \prec\!\!\prec I_m$ and let $\omega_m\colon J\cap \mathbb{T}_{m}\rightarrow I\cap \mathbb{T}_{m}$ be a bijective function such that $\tau \preceq \omega_m(\tau)$. Every term in $J_{m+1}$ is of the form $\tau x_\ell$ for a unique $\tau \in J_m$ and $\ell\leq \min(\tau)$, and the same holds for $I_{m+1}$ \cite[Lemma 1.1]{EK}. For every $\tau x_\ell\in J\cap \mathbb{T}_{m+1}$, we define $\omega_{m+1}(\tau x_\ell):=\omega_m(\tau)x_\ell$. Since $\prec$ is a term order, it is immediate that $\omega_{m+1}(\tau x_\ell)\succeq \tau x_\ell$. Then, by Lemma \ref{minima} we see that the function $\omega_{m+1}\colon J\cap \mathbb{T}_{m+1}\rightarrow I\cap \mathbb{T}_{m+1}$ is bijective.

Conversely, assume that $J_{m+1} \prec\!\!\prec I_{m+1}$ and let $\omega_{m+1}\colon J\cap \mathbb{T}_{m+1}\rightarrow I\cap \mathbb{T}_{m+1}$ be a bijective function such that $\tau x_\ell\preceq \omega_{m+1}(\tau x_\ell )$. We now construct $\omega_m\colon J\cap \mathbb{T}_{m}\rightarrow I\cap \mathbb{T}_{m}$. We observe that there is a bijection between the terms in $J_m$ and the subset $J_{m+1}'$ of terms in $J_{m+1}$ that are divisible by the square of their minimal variable; indeed, we obtain such a bijection associating to $\tau \in J_m\cap\mathbb T$ the term $\tau \cdot x_{\min(\tau)}\in J_{m+1}$. The same happens for the subset $I_{m+1}'\subset I_{m+1}$ defined in the same way as $J'_{m+1}$. By Lemma \ref{minima}, we obtain $\omega_{m+1}^{-1}(I_{m+1}') \subseteq J_{m+1}'$. On the other hand, $I_{m+1}'$ and $J_{m+1}'$ have the same cardinality (that of $J\cap \mathbb{T}_{m}$ and $I\cap \mathbb{T}_{m}$). Hence, $\omega_{m+1}^{-1}(I_{m+1}') = J_{m+1}'$ and we obtain the bijection $\omega_m$ by setting $\omega_m(\tau)=\omega_{m+1}(\tau\cdot x_\ell) /x_\ell$, where $\ell:=\min(\tau)$.
\end{proof}

\begin{proposition}\label{m cresce con polinomio costante}
If $p(t)=d$ is a constant Hilbert polynomial, then $J_m \prec\!\!\prec I_m$ if and only if $J_{m+1} \prec\!\!\prec I_{m+1}$, for every term order $\prec$ on $S$.
\end{proposition}

\begin{proof}
Since the Hilbert polynomial is constant, $J_t$ and $I_t$ contain all the terms of degree $t$ in the variables $x_1,\dots,x_n$, for every $t\geq m$. Hence, we can consider only the terms $\tau$ with $\min(\tau)=0$ and conclude by Proposition \ref{intrinseco}\eqref{intrinseco_iv} and Lemma \ref{lemma:growth-vector}.
\end{proof}

\section{Applications}
\label{sec:applications}

We always consider $\HilbScheme{p(t)}{n}$ embedded in $\GrassScheme{S_{m}}{q}$ for some $m\geq r$ and, for the sake of simplicity, we denote by $\mathbf B_{S_m}^q \cap \HilbScheme{p(t)}{n}$ the set of Borel-fixed extensor terms corresponding to points of $\HilbScheme{p(t)}{n}$. Recall that {\em all} the terms of $\mathbf B_{S_m}^q\cap \HilbScheme{p(t)}{n}$ can be obtained by the algorithms presented in \cite{CLMR,PL} in characteristic 0, and in \cite{B2014} for every characteristic.

In this section, we show how the properties of the generic initial ideal and of the partial term order ${\prec\!\!\prec}$ in $\mathbf B_{S_m}^q \cap \HilbScheme{p(t)}{n}$ can be used to investigate the topological structure and the rationality of the irreducible components of a Hilbert scheme. The following result singles out a condition that every Borel-fixed ideal defining a point of a GL-stable subset must satisfy.
\begin{proposition} \label{cor:condizione necessaria}
Let $Y$ be a GL-stable subset of $\HilbScheme{p(t)}{n}$.
\begin{enumerate}[(i)]
\item If $L\in\mathbf B_{S_m}^q \cap \HilbScheme{p(t)}{n}$ and $\vs{L} {\prec\!\!\not\prec } (\mathbf{G}_Y)_m$, then $\vs{L} \notin Y$.
\item If $V$ is a $K$-point of $\HilbScheme{p(t)}{n}$ and $\vs{\mathbf{gin}(V)} {\prec\!\!\not\prec } (\mathbf{G}_Y)_m$, then $V \notin Y$.
\end{enumerate}
\end{proposition}

\begin{proof}
The statement is an immediate consequence of Theorem \ref{th:minoreminore} and of Corollary \ref{ginmaggiore}. Indeed, by these results, the degree $m$ homogeneous component of the double-generic initial ideal of $Y$ determines the maximal term among the Borel terms in $Y$ and, hence, among all the terms in $Y$.
\end{proof}

From Proposition \ref{cor:condizione necessaria}, we therefore have that if $\vs{L}$ belongs to a GL-stable subset $Y$, where $L$ is a term, then necessarily $\vs{L} {\prec\!\!\prec } (\mathbf{G}_Y)_m$.

\subsection{Detection of different components in a Hilbert scheme}
\label{subsec:1}

In this subsection, we see that some interesting lower bounds for the number of irreducible components of a Hilbert scheme arise from the properties of the double-generic initial ideal and of the partial term order ${\prec\!\!\prec}$.

\begin{proposition}\label{prop:numero componenti}
Let $\prec$ be a term order on $S$ and $M_{\prec\!\!\prec}$ be the number of the maximal terms in $\mathbf B_{S_m}^q \cap \HilbScheme{p(t)}{n}$ w.r.t.~$\prec\!\!\prec$. Then, there are at least $M_{\prec\!\!\prec}$ irreducible components in $\HilbScheme{p(t)}{n}$.
\end{proposition}

\begin{proof}
The statement follows directly by Propositions \ref{prop:componenti}\eqref{prop:componenti_0} and \ref{cor:condizione necessaria} and Theorem~\ref{th:minoreminore}.
\end{proof}

Assuming $n>2$, if $J\subset S$ is a Borel-fixed ideal, we denote by ${\textnormal{sat}}_{x_0,x_1}(J)$ the ideal generated by the evaluations at $(1,1,x_2,\dots,x_n)$ of the term generators of $J$ and call it the {\em $x_0,x_1$-saturation of $J$} (see \cite{R1}). Denote by $\Lambda$ the term in $\mathbf B_{S_m}^q\cap \HilbScheme{p(t)}{n}$ corresponding to the unique saturated lex-segment ideal in $S$ whose Hilbert polynomial is $p(t)$.

\begin{corollary}\label{cor:numero componenti}
If $\mathrm{char}(K)=0$ and ${\textnormal{sat}}_{x_0,x_1}(\vs{L})\not= {\textnormal{sat}}_{x_0,x_1}(\vs{\Lambda})$ for every maximal term $L \in\mathbf B_{S_m}^q \cap \HilbScheme{p(t)}{n} \setminus \{\Lambda\}$, then there are at least $M_{\prec\!\!\prec}+1$ irreducible components in $\HilbScheme{p(t)}{n}$.
\end{corollary}

\begin{proof}
It is enough to apply Proposition \ref{prop:numero componenti} and \cite[Theorem 6]{R1}.
\end{proof}

\begin{remark}
If $L$ is a maximal term in $\mathbf B_{S_m}^q\cap \HilbScheme{p(t)}{n}$, then the corresponding ideal $(\vs{L})$ is strongly stable, even if the field $K$ has positive characteristic. Indeed, let $L'=\sigma_1\wedge \dots \wedge \sigma_q$ be a Borel-fixed extensor term whose corresponding ideal is not strongly stable. Then, over any field of characteristic zero, $\tau_1\wedge \dots \wedge \tau_q:=\mathbf{gin}(\langle \sigma_1, \dots, \sigma_q\rangle )$ is a Borel term corresponding to a strongly stable ideal with Hilbert polynomial $p(t)$ and $\tau_1\wedge \dots \wedge \tau_q\succ\!\!\succ \sigma_1\wedge \dots \wedge \sigma_q$.
Therefore, in $\mathbf B_{S_m}^q\cap \HilbScheme{p(t)}{n}$ there is at least one term which is $\succ\!\!\succ L'$. Following the terminology introduced in \cite{CS2013}, the ideal $(\tau_1, \dots , \tau_q)$ is the {\it zero-generic initial ideal of} $(\sigma_1,\dots , \sigma_q)$.
\end{remark}

The bounds of Proposition \ref{prop:numero componenti} and of Corollary \ref{cor:numero componenti} are not meaningful in two cases. The first is when $p(t)$ is a constant Hilbert polynomial, because then all the Borel-fixed ideals are on the same component, hence for every term order on $S$ we will find a unique maximal term w.r.t.~$\prec\!\!\prec$. The second case is when the given term order $\preceq$ on $S$ is the deglex one, because there is a unique maximal term, corresponding to the lex-segment ideal. Nevertheless, in general we can get useful information, although the lower bound depends on the term order given on $S$, as the following example shows.

\begin{example}\label{ex:4 componenti}
For $n=3$ and $p(t)=7t-5$, the Gotzmann number is $r=16$. We get the complete list of the $112$ strongly stable ideals in $\HilbScheme{7t-5}{3}$ by \cite{CLMR,PL} and compare their intersections with $S_{16}$ w.r.t.~$\prec\!\!\prec$ for several term orders on $S$. As just observed, there is only one maximal element for the lexicographic term order. If we consider the term order on $S$ given by the weight vector $[w_0=1,w_1=2,w_2=9,w_3=12]$ (and ties broken by lex), we obtain two maximal terms corresponding to the ideals with saturations $\mathfrak b_1:=(x_3^3,x_3^2x_2,x_3x_2^2,x_3^2x_1,x_2^5)$ and $\mathfrak b_2:=(x_3^2,x_3x_2^3,x_2^4)$, respectively. Computing the $x_0,x_1$-saturation, we see that neither of them lies on the component containing the lex-segment ideal, because $(\vs{\Lambda})^{{\textnormal{sat}}}=(x_3,x_2^8,x_2^7x_1^9)$. Thus, there are at least $3$ irreducible components in $\HilbScheme{p(t)}{n}$ by Corollary \ref{cor:numero componenti}.

If we choose the degrevlex term order, we find $4$ maximal terms corresponding to the ideals with the following saturations: the ideals $\mathfrak b_1$, $\mathfrak b_2$ previously considered and
\begin{equation*}
\begin{split}
&\mathfrak b_3:=(x_3^3,x_3^2x_2^2,x_3x_2^3,x_3^2x_2x_1,x_3x_2^2x_1,x_3^2x_1^2, x_3x_2x_1^2, x_3x_1^3,x_2^7),\\
&\mathfrak b_4:=(x_3^3,x_3^2x_2,x_3x_2^2,x_3^2x_1^2,x_3x_2x_1^2,x_2^6).
\end{split}
\end{equation*}
We conclude that there are at least $4$ irreducible components in $\HilbScheme{p(t)}{n}$ by Proposition \ref{prop:numero componenti}, since in this case the hypothesis of Corollary \ref{cor:numero componenti} does not hold.
\end{example}

\subsection{Maximal Hilbert function in a GL-stable subset}\label{subsec:2}

In this subsection, if $f$ and $g$ are two numerical functions, we say that {\em $f$ is greater than $g$} if $f(t)\geq g(t)$ for every $t\in\mathbb N$, and write $f\geq g$. As we have already recalled in Section \ref{sec:minoreminore}, if a monomial ideal $J$ is Borel-fixed, and $J=(J_m)$, with $m\geq \textnormal{reg}(J^{\textnormal{sat}})$, then $J$ is stable.

\begin{theorem}\label{funzione massima}
Let $\prec$ be the degrevlex term order on $S$.
If $V$ and $V'$ are two $K$-points of $\HilbScheme{p(t)}{n}$ such that $J:=I_V$ and $I:=I_{V'}$ are Borel-fixed ideals, then
$$V \prec\!\!\prec V' \ \Rightarrow \ \dim_K(J^{{\textnormal{sat}}})_t\geq \dim_K(I^{{\textnormal{sat}}})_t, \ \forall \ t\geq 0.$$
In particular, if $Y$ is a GL-stable subset of $\HilbScheme{p(t)}{n}$, the Hilbert function of $\mathrm{Proj}(S/\mathbf{G}_Y)$ is the maximum among the Hilbert functions of $\mathrm{Proj}(S/H)$, where $H$ varies among the saturated ideals defining points of $Y$.
\end{theorem}

\begin{proof}
It is enough to prove $\dim_K(J^{{\textnormal{sat}}})_t\geq \dim_K(I^{{\textnormal{sat}}})_t$ for every $t<m$, because $V$ and $V'$ are points of the same Hilbert scheme and $m$ is an upper bound for the regularities of both $J$ and $I$. For every $t<m$, $\dim_K(J^{\mathrm{sat}})_t$ is the number of terms of $J_m$ which are divisible by $x_0^{m-t}$, because $J$ is Borel-fixed. Since $V'\succ \!\!\succ V$, we can apply Proposition \ref{intrinseco}\eqref{intrinseco_ii} to $c=x_n^{t}x_0^{m-t}$ and see that the number of terms in $\vs{V}$ divisible by $x_0^{m-t}$ is greater than or equal to the number of such terms in $\vs{V'}$. Hence, we obtain $\dim_K(J^{\mathrm{sat}})_t\geq \dim_K(I^{\mathrm{sat}})_t$.

The last statement follows from Theorem \ref{th:minoreminore}\eqref{th:minoreminore_ii} and the fact that, for every homogeneous polynomial ideal $H$, $\mathrm{gin}_{\prec}(H^{{\textnormal{sat}}})=\mathrm{gin}_{\prec}(H)^{{\textnormal{sat}}}$, since $\prec$ is the degrevlex term order.
\end{proof}

\begin{remark}
By Theorems \ref{funzione massima} and \ref{th:minoreminore}\eqref{th:minoreminore_ii}, we get another method to find different irreducible components of a Hilbert scheme, which consists in detecting the maximal Hilbert functions of projective schemes with a given Hilbert polynomial. This method might be easier to use than the detection of the maximal Borel terms w.r.t.~the partial order $\prec\!\!\prec$. However, the detection of the maximal Hilbert functions gives a lower bound on the number of irreducible components which is far from being sharp: for instance, in Example \ref{ex:4 componenti} we find $4$ maximal Borel-fixed terms but there are only $2$ maximal Hilbert functions.
\end{remark}

\begin{remark} \label{rem:maximum}
The existence of the maximum among the Hilbert functions on a GL-stable subset of $\HilbScheme{p(t)}{n}$ can be proved by semicontinuity in the following way, although we observe that Theorem \ref{funzione massima} gives a constructive answer. Let $Y$ be a GL-stable subset of $\HilbScheme{p(t)}{n}$ and let $m$ be an upper bound on the Ca\-stel\-nuo\-vo-Mumford regularity of points in $Y$. We define the following numerical function $f:\mathbb N\rightarrow\mathbb N$:
\[f(t)=\begin{cases} \max\lbrace H_{S/I}(t)\ \vert\ I\in Y\rbrace, & \text{if } 1\leq t\leq m,\\ p(t), & \text{otherwise.} \end{cases} \]
For every $1\leq s\leq m$, the subset $A_s=\lbrace I\in Y\ \vert\ H_{S/I}(s)\geq f(s)\rbrace$ of $Y$ is open by semicontinuity \cite[Remark 12.7.1 in Chapter III]{H77}. Hence $\cap_{s=1}^m A_s$ is an open subset of $Y$ and it is non-empty, because every $A_s$ is non-empty by construction of $f$. Thus, there is an open subset of ideals $I\in Y$ having maximal Hilbert function $f$.
\end{remark}

It would be nice to find a result analogous to that of Theorem \ref{funzione massima} for the deglex term order. We state the following conjecture.
\begin{conj}\label{conjLex}
Let $Y$ be a GL-stable subset of $\HilbScheme{p(t)}{n}$ and $\preceq$ be the deglex term order. Then, the Hilbert function of $\mathrm{Proj}(S/\mathbf{G}_Y)$ is the minimum among the Hilbert functions of $\mathrm{Proj}(S/H)$, where $H$ varies among the ideals defining a point of $Y$.
\end{conj}

\subsection{Rational components of a Hilbert scheme}\label{subsec:rat}

The following results deal with the rationality of irreducible components in a Hilbert scheme. The main tools we use are some of the features of double-generic initial ideals together with arguments introduced in \cite{FR,LR,RT}.

\begin{theorem}\label{razionale}
Let $Y$ be an isolated irreducible component of $\HilbScheme{p(t)}{n}$. If $\mathbf{gin}(Y)$ corresponds to a smooth point in $Y$, then $Y$ is rational.
\end{theorem}

\begin{proof}
Let $L:=\mathbf{gin}(Y)$ and recall that $\mathcal V_L$ is the set of points of $\GrassScheme{S_m}{q}$ having $L$ as initial extensor (see \eqref{eq:V tondo con L}). By Proposition \ref{prop:componenti}\eqref{prop:componenti_i}, $L$ is equal to $\mathbf{in}(Y)$ and, by Proposition \ref{lemma:UL}, $\mathcal V_L \cap Y$ is a dense open subset of $Y$.

By \cite[Lemma 3.2]{RT} and \cite{LR}, the set of ideals in $S$ having a given monomial ideal $J$ as initial ideal w.r.t.~any given term order can be endowed with the structure of a homogeneous scheme $X$ w.r.t.~a non-standard grading. The reduced scheme structure $X^{red}$ and the isolated irreducible components of $X$ turn out to be homogeneous too \cite[Corollary 2.7]{FR}. Moreover, $X$ is connected, because every isolated irreducible component contains $J$, and every isolated irreducible component of $X$ that is smooth at $J$ is isomorphic to an affine space \cite[Corollary 3.3]{FR}.

We now apply these results to the monomial ideal $J=(\vs{L})$. Note that it is enough to consider the reduced scheme structure $X^{red}$ because we deal with the isolated irreducible components of $X$ that are smooth at the point $J$. By definition of Hilbert scheme, $X^{red}$ and $(\mathcal V_L \cap \HilbScheme{p(t)}{n})^{red}$ are isomorphic. Moreover, by Proposition \ref{lemma:UL}\eqref{lemma:UL_iv}, the set $(\mathcal V_L \cap \HilbScheme{p(t)}{n})^{red}$ contains the open subset $Y':=\mathcal V_L\cap Y$ of $Y$, which is one of the irreducible components of $(\mathcal V_L \cap \HilbScheme{p(t)}{n})^{red}$ and, hence, inherits a homogeneous scheme structure. Since $\vs{L}$ is a smooth point of $Y$, it is also a smooth point of $Y'$. Thus, $Y'$ is isomorphic to an affine space and $Y$ is rational.
\end{proof}

As an application of Theorem \ref{razionale}, we recover the well-known fact that the irreducible component of a Hilbert scheme containing the unique saturated lex-segment ideal is rational (e.g.~\cite{LR}). We now present a new result in the following corollary, where we focus on the points of a Hilbert scheme corresponding to 2-codimensional arithmetically Cohen-Macaulay subschemes.

\begin{corollary}\label{prop:razionale aCM codim 2}
Let $p(t)$ be a Hilbert polynomial of degree $n-2$. Every irreducible component $Y$ of $\HilbScheme{p(t)}{n}$ containing a point $V$ corresponding to an arithmetically Cohen-Macaulay subscheme is rational.\\
Furthermore, if $L=\mathbf{gin}(V)$, then $\mathcal V_L$ is isomorphic to an affine space and $\mathcal U_L$ is the Cohen-Macaulay locus of $Y$.
\end{corollary}

\begin{proof}
Recall that the subset $C$ of $\HilbScheme{p(t)}{n}$ formed by the points corresponding to arithmetically Cohen-Macaulay subschemes is an open subset containing $V$ \cite[Th\'{e}or\`{e}me (12.2.1)(vii)]{Gro}. Moreover, $V$ and every other point defining an arithmetically Cohen-Macaulay subscheme of codimension $2$ correspond to smooth points in $\HilbScheme{p(t)}{n}$ (see \cite{Fo} for $n=2$ and \cite[Theorem 2(i)]{Elli} for $n\geq 3$). Hence, $C\cap Y$ is a smooth, non-empty open subset of $Y$.

It is well-known that if we choose the degrevlex term order, then $\mathrm{gin}(I_V)^{{\textnormal{sat}}}=\mathrm{gin}(I_V^{{\textnormal{sat}}})$ also defines an arithmetically Cohen-Macaulay subscheme of $\mathbb{P}^n_K$. By Theorem \ref{gin appartiene}, the ideal $\mathrm{gin}(I_V^{{\textnormal{sat}}})$ is the double-generic initial ideal $\mathbf{G}_Y$ of $Y$, and its homogeneous component of degree $m$ is $\vs{L}$ with $L=\mathbf{gin}(Y)=\mathbf{gin}(V)$. Thus, Theorem~\ref{razionale} allows us to conclude that $\mathcal V_L$ is an affine space, and we observe that, by definition, $V$ belongs to $\mathcal U_L$. Note that the result we obtain holds for every $V\in C\cap Y$, hence $\mathcal U_L=C\cap Y$.
\end{proof}

\begin{example} \label{ex:tommasino}
In this example we apply Theorem \ref{razionale} to the double-generic initial ideal $\mathbf{G}_Y$ of an irreducible component $Y$ of a Hilbert scheme, which corresponds to a point that is smooth for $Y$ but not smooth for the Hilbert scheme. Let us consider the Hilbert scheme $\HilbScheme{3t+2}{3}$ over a field of characteristic zero. There are $4$ saturated Borel ideals corresponding to points on this Hilbert scheme:
$$\mathfrak b_1:=(x_3,x_2^4,x_1^2x_2^3), \quad \mathfrak b_2:=(x_3^2,x_2x_3,x_1x_3,x_2^4,x_1x_2^3),$$
$$\mathfrak b_3:=(x_3^2,x_2x_3,x_2^3,x_1^2x_3), \quad \mathfrak b_4:=(x_3^2,x_2x_3,x_2^3,x_1x_2^2).$$
The ideal $\mathfrak b_1$ corresponds to the lex-segment point of $\HilbScheme{3t+2}{3}$. It is well-known that such a point belongs to a unique component, which we denote by $Y_1$, and that this component is rational. By a direct computation, we find that the dimension of $Y_1$ is $18$ and that its general point corresponds to the union of a plane cubic curve and two isolated points. By \cite[Theorem 6]{R1}, we see that also $\mathfrak b_2$ and $\mathfrak b_3$ define points of $Y_1$, while $\mathfrak b_4$ does not.

If we choose the degrevlex term order, we find that $\mathfrak b_4$ is the maximum w.r.t.~${\prec\!\!\prec}$ among the Borel ideals. By a direct computation involving marked schemes (see \cite{CR11,BCLR,BLR}), using for instance either the Singular library \cite{michela,DGPS} or the algorithm described in \cite{BCR2}, we obtain that $\mathfrak b_4$ is contained in two irreducible components $Y_2$ and $Y_3$. Therefore, the point corresponding to $\mathfrak b_4$ is not smooth for the Hilbert scheme. However, it turns out to be smooth for both $Y_2$ and $Y_3$, and by Theorem \ref{razionale} we get that $Y_2$ and $Y_3$ are both rational. To complete the description, by a direct computation we find that the dimension of $Y_2$ is $12$ and its general point corresponds to the disjoint union of a conic and a line.
Here are the generators of one of the ideals we obtain after a random specialization of the $12$ free parameters:
{\small\begin{enumerate}[]
\item $f_1=x_3^2 + 990x_0^2-x_2^2-3 x_1x_2-x_1x_3-2 x_1^2+67 x_0 x_3-23 x_0 x_2-68 x_0 x_1$
\item $f_2= x_2x_3 + 484x_0^2-x_2^2-2 x_1x_3+4 x_1^2+22 x_0x_3-88 x_0x_1$
\item $f_3=x_2^3 -6538x_0^3+3 x_0x_2^2+4 \, x_1^2x_3-x_2x_1^2+30 x_0x_1x_2+8 x_0x_1x_3+286 x_0x_1^2+6\,x_1^3-4x_0^2 x_3-711x_0^2 x_2-386 x_0^2x_1$
\item $f_4= x_1x_2^2 + 1913x_0^3-3x_0x_2^2+x_1^2x_3-3 x_0x_1^{2}x_2-3x_1x_2+2x_0x_1x_3+69 x_0 x_1^2+5x_1^3-x_0^2x_3+22x_0^2 x_2-815 x_0^2 x_1$
\end{enumerate}}
and the primary decomposition of this ideal:
{\small\begin{enumerate}[]
\item $\mathfrak p_1=(3x_1+x_3+23x_0,x_2-2x_1+22x_0)$
\item $\mathfrak p_2=(-2x_1+x_3+22x_0-x_2,7x_1^2-2x_1x_2+72x_0x_1-7x_0x_2+x_2^2-645x_0^2)$.
\end{enumerate}}

The dimension of $Y_3$ is $15$ and its general point corresponds to the union of a twisted cubic curve and a point. Here are the generators of one of the ideals we obtain after a random specialization of the $15$ free parameters:
{\small\begin{enumerate}[]
\item $f'_1 = x_3^2 + 37x_0^2-18x_1^2-x_2^2-3x_1x_3-3x_1x_2+2x_0x_3-6x_0x_2+15x_0x_1$
\item $f'_2= x_2x_3 + 31x_0^2-12x_1^2-x_2^2-2x_1x_3+2x_1x_2+x_0x_3+16x_0x_1$
\item $f'_3=x_2^3 -150x_0^3+86x_0x_1^2-24x_1^3-10x_1^2x_3 +37x_0x_1x_3+x_2x_1^2-7x_0x_1x_2-30x_0^2x_3-x_0^2x_2-11x_0^2x_1$
\item $f'_4=x_1x_2^2 -x_0^3-2x_0x_1^2+x_0x_2^2-2x_1^2x_3+3x_0x_1x_3-x_2x_1^2-x_0x_1x_2+5x_0^2x_3-3x_0^2x_1$
\end{enumerate}}
and the primary decomposition of this ideal:
{\small\begin{enumerate}[]
\item $\mathfrak p'_1=(x_0+x_1,x_2-8x_0,x_3-7x_0)$
\item $\mathfrak p'_2=(-x_1x_2-2\,x_1x_3-2\,x_0x_1-x_0^2+x_2^2+5\,x_0x_3, x_2x_3+30x_0^2-4\,x_1x_3+x_1x_2-12\,x_1^2+6\,x_0x_3+14\,x_0x_1, x_3^2+36x_0^2-5\,x_1x_3-4\,x_1x_2-18\,x_1^2+7\,x_0x_3-6\,x_0x_2+13\,x_0x_1)$.
\end{enumerate}}

Finally, computing the marked schemes on $\mathfrak b_2$ and $\mathfrak b_3$, we check that $Y_1$, $Y_2$ and $Y_3$ are the only irreducible components of $\HilbScheme{3t+2}{3}$.
\end{example}

\section*{Acknowledgments}
The authors are grateful to Anthony Iarrobino for his generous suggestions and inspiring comments on the topic and the writing of this paper.
1,314,259,994,054
arxiv
\section{Introduction}

The behavior of quantum systems under the interchange of identical particles goes under the name of ``statistics.'' In elementary quantum mechanics, the interchange of identical particles is assumed to affect the wavefunction of a many-particle system only by the introduction of a phase, and interchange is modelled in terms of the symmetric group, $S_n$. Thus for a system of $n$ identical particles one assumes that there is a character $\chi \colon S_n \to U(1)$ describing the statistics. For $n \ge 2$, $S_n$ has two characters: either $\chi(\sigma) = 1$ for all $\sigma \in S_n$, or $\chi(\sigma) = {\rm sgn}(\sigma)$. Thus interchanging two particles must give rise to a phase of $1$ or $-1$, corresponding to bosonic and fermionic statistics, respectively. Statistics in which particle interchange gives rise to higher-dimensional unitary representations of $S_n$ have also been studied; these go under the name of ``parastatistics'' or ``nonabelian statistics'' \cite{GM,DR}.

More recently, in the physics of two-dimensional systems, it has been noted that a more precise description of the interchange of identical particles should keep track of the braid traced out by particles as they are interchanged \cite{WilczekWu}. For structureless point particles on the plane this amounts to replacing the symmetric group by the Artin braid group, $B_n$. The group $B_n$ has a circle's worth of characters; that is, interchanging two particles gives rise to an arbitrary phase $\exp(i\theta) \in U(1)$. These statistics are known as ``anyonic'' or ``fractional.'' Anyonic statistics, as well as nonabelian anyonic statistics, appear to play a crucial role in the fractional quantum Hall effect \cite{MRPGXGWen,ShWi}. Anyons have also been proposed as a mechanism for superconductivity \cite{Wilczek2}.

There are further variations on the theme of statistics which we would like to address here. If one considers structureless point particles on the space $X$, the proper analog of the Artin braid group is the group $B_n(X)$ of the braids on $X$ (defined below). The implications for statistics have already been studied by a number of authors \cite{TW,IIS}. For particles with internal structure, however, one must also keep track of the rotation of the particles as one interchanges them. Thus it is natural to work with the group $FB_n(X)$ of {\it framed} braids on $X$. This has the braid group of $X$ as a quotient group, that is, there is a surjective homomorphism
$$ \pi \colon FB_n(X) \to B_n(X) . $$
A unitary representation of $FB_n(X)$ corresponds to a choice not only of statistics, but also of a certain aspect of spin, namely, the phase induced by rotating a particle by $2\pi$.

In the case $X = {\bf R}^2$, spin and statistics may be chosen independently, at least at the level of many-particle quantum mechanics. The mathematical reason is that {\it in this case} there is a homomorphism
$$ \iota \colon B_n(X) \to FB_n(X) $$
such that $\pi (\iota (g)) = g$ for all $g \in B_n(X)$. In general, however---for example, when $X = S^2$---this is {\it not} true, so certain relations between spin and statistics arise. These are not to be confused with the spin-statistics relations arising in quantum field theory.

For nonlinear sigma models, further constraints exist on the spin and statistics of topological solitons.
This has been noted for the 2+1-dimensional $O(3)$ nonlinear sigma model with Hopf term by Wilczek and Zee \cite{WZ}, and subsequently elaborated on by Wu and Zee \cite{WuZ}, Wen \cite{JWen}, and others. Here we describe a general framework to study fields from spacetime, ${\bf R} \times X$, to a target manifold $M$. (In the $O(3)$ nonlinear sigma model, $M = S^2$.) Topological solitons correspond to elements of $\pi_d(M)$, where $d$ is the dimension of $X$. Any topological soliton $\alpha$ defines a certain quotient group ${\rm Stat}_n(X,\alpha)$ of $FB_n(X)$, and a representation $\rho$ of $FB_n(X)$ corresponds to an allowed choice of spin and statistics for solitons of type $\alpha$ if and only if $\rho$ factors through ${\rm Stat}_n(X,\alpha)$, that is, if there exists a representation $\widetilde \rho$ of ${\rm Stat}_n(X,\alpha)$ such that $$ \rho = \widetilde\rho j $$ where $j \colon FB_n(X)\to {\rm Stat}_n(X,\alpha)$ is the quotient map. An analysis along these lines leads to some interesting results. For example, using a braid group analysis Thouless and Wu \cite{TW} concluded that the phase $\exp(i \theta)$ from interchanging two particles on $S^2$ must satisfy $$ \theta = {k\pi\over(n-1)}, \qquad k \in { \bf Z} $$ when $n$ particles are present. This is correct for structureless point particles. However, a study of the $O(3)$ nonlinear sigma model gives a different result for solitons with unit topological charge on $S^2$, namely: $$ \theta = {k\pi\over n} ,\qquad k\in { \bf Z}. $$ As alluded to above, in quantum field theory the axioms of locality, Poincar\'e-invariance, energy positivity, and so on give rise to additional relationships between spin and statistics. These are of a different character than our results, which are derived in the context of many-particle quantum mechanics by purely topological methods. For spin-statistics theorems in quantum field theory, we refer the reader to the original arguments of Fierz and Pauli \cite{FierzPauli}, the theorems proved using the Garding-Wightman \cite{SW} and C*-algebraic axioms \cite{DR,DHR}, and the more recent extensions to field theory in 2 or 3 dimensions, in which anyonic statistics arise \cite{FMFGFGMRS}. \section{Spin, Statistics, and Framed Braids} The relations between spin, statistics, and braid groups treated here arise naturally from considering quantization of systems for which the classical configuration space $C$ is not simply connected \cite{LMSo}. While the most obvious choice of the quantum Hilbert space is simply the space $L^2(C)$ of square-integrable functions on $C$, one may equally well use $L^2(C,E)$, the space of $L^2$ sections of a flat line bundle $E$ over $C$. Isomorphism classes of flat line bundles over $C$ are in one-to-one correspondence with characters of the fundamental group $\pi_1(C)$. For any group $G$, let $G^\ast$ denote the group of characters, or one-dimensional unitary representations, of $G$. Then each element of $\pi_1(C)^\ast$ gives a different quantization. (Of course, if $C$ is infinite-dimensional there are severe analytical difficulties in defining the appropriate $L^2$ spaces, which we do not address here.) 
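As a concrete illustration of how a character of $\pi_1(C)$ selects a quantization (this toy example and the discretization choices below are ours, not part of the original discussion), consider a single free particle on a circle, where $C = S^1$ and $\pi_1(C) = { \bf Z}$, so that a character is a single phase $e^{i\theta}$. Sections of the corresponding flat line bundle are functions with twisted periodicity $\psi(x + 2\pi) = e^{i\theta}\psi(x)$, and the spectrum of the kinetic energy operator depends on $\theta$:
\begin{verbatim}
import numpy as np

def spectrum(theta, N=400):
    # Finite-difference -d^2/dx^2 on [0, 2*pi) with twisted
    # boundary condition psi(x + 2*pi) = exp(i*theta) * psi(x).
    h = 2 * np.pi / N
    H = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)).astype(complex) / h**2
    # Crossing the seam at x = 0 picks up the holonomy phase.
    H[N - 1, 0] = -np.exp(1j * theta) / h**2
    H[0, N - 1] = -np.exp(-1j * theta) / h**2
    return np.linalg.eigvalsh(H)[:4]   # H is Hermitian

for theta in (0.0, np.pi / 2, np.pi):
    exact = sorted((k + theta / (2 * np.pi))**2 for k in range(-3, 4))[:4]
    print(np.round(spectrum(theta), 4), np.round(exact, 4))
\end{verbatim}
The exact eigenvalues are $(k + \theta/2\pi)^2$, $k \in { \bf Z}$, so inequivalent characters give genuinely inequivalent quantum theories; this is the one-particle analogue of the inequivalent statistics discussed in what follows.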
For a system of $n$ indistinguishable structureless point particles on a connected manifold $X$, where we assume that no two particles can be at the same place at the same time, the configuration space is $(X^n - \Delta)/S_n$, where $\Delta \subseteq X^n$ is given by
$$ \Delta = \lbrace (x_1, \dots, x_n) \colon\; \exists j\ne k \;\; x_j = x_k \rbrace , $$
and the symmetric group $S_n$ acts on $X^n - \Delta$ by permutation of the points $(x_1, \dots, x_n)$. Thus quantizing this system involves choosing a character of the {\it braid group of} $X$,
$$ B_n(X) = \pi_1((X^n - \Delta)/S_n) . $$
In other words, elements of $B_n(X)^\ast$ correspond to choices of (abelian) statistics. Similarly, $k$-dimensional unitary representations of $B_n(X)$ correspond to nonabelian statistics, as they define flat $U(k)$-bundles over the configuration space $(X^n - \Delta)/S_n$.

If $\dim X > 2$, the braid group $B_n(X)$ equals $S_n$ when $X$ is simply connected \cite{Birman1}, and in general is the wreath product of $S_n$ and $\pi_1(X)$ \cite{IIS}. The two-dimensional case is more interesting; for example, $B_n({\bf R}^2) = B_n$ is the original braid group due to Artin \cite{ArtinBirman2}, with generators $s_j$, $1 \le j < n$, and relations
\begin{eqnarray} s_j s_k &=& s_k s_j \qquad\qquad |j - k| \ge 2 , \nonumber\cr s_j s_{j+1} s_j &=& s_{j+1} s_j s_{j+1} . \nonumber\end{eqnarray}
Here $s_j$ corresponds to the interchange of the $j$th and $(j+1)$st particle in a counterclockwise manner. Later Fadell and Van Buskirk computed $B_n(S^2)$ \cite{FvB}, Van Buskirk computed $B_n(RP^2)$ \cite{vB}, Birman computed $B_n(T^2)$ \cite{Birman1}, and finally Scott computed $B_n(X)$ for $X$ any compact 2-manifold \cite{Scott}. For example, $B_n(S^2)$ is the quotient of $B_n$ by the additional relation
\begin{eqnarray} s_1s_2 \dots s_{n-1}s_{n-1}s_{n-2} \dots s_1 = 1. \label{sphere}\end{eqnarray}
This is not surprising, since any element of $B_n(S^2)$ arises from a braid in ${\bf R}^2$, which has $S^2$ as its one-point compactification, while the braid on the left side of equation (\ref{sphere}) corresponds to moving the first particle in a counterclockwise fashion around the rest, and one may contract the loop traced out by the first particle around the south pole of $S^2$ (the point at infinity).

It is easy to see that $B_n^\ast = U(1)$, since all characters of $B_n$ are of the form
$$ \chi(s_j) = e^{i\theta} . $$
Equation (\ref{sphere}) implies that $e^{2i(n-1)\theta} = 1$ for any character of $B_n$ that factors through $B_n(S^2)$. Thus $B_n(S^2)^\ast = { \bf Z}_{2(n-1)}$ for $n \ge 2$. This was noted by Thouless and Wu \cite{TW}.

The situation changes when we consider particles with spin, for example, topological solitons in a nonlinear sigma model. As we shall see, in this context one must treat spin and statistics together using framed braids. A framed braid may be thought of as a ``ribbon'' \cite{RT}, but here we prefer to think of it as a ``thickened'' braid. Physically, a thickened braid represents the world-tubes of a number of solitons.

Let $X$ be an oriented connected manifold of dimension $d$, and let $D^d$ denote the closed unit ball in ${\bf R}^d$. Let $e_i \colon D^d\to X$, $1 \le i \le n$, be disjoint oriented balls embedded in $X$.
Let a {\it framed braid} on $X$ be an oriented embedding $F$ of the disjoint union of $n$ solid cylinders $[0,1] \times D^d$ in $[0,1]\times X$, such that $F_i(0,\cdot) = F_i(1,\cdot) = e_i$, where $F_i$ denotes the embedding of the $i$th cylinder, and such that
$$ F_i(t,x) = (t,F_{i,t}(x)) $$
for some function $F_{i,t}\colon D^d \to X$. Let $FB_n(X)$ denote the set of homotopy classes of framed braids on $X$, where the homotopy is required to preserve the above conditions on $F$. One can check that $FB_n(X)$ is independent of the embeddings $e_i$, and there is a canonical quotient map
$$ \pi \colon FB_n(X) \to B_n(X) . $$
The framed braid group keeps track not only of the interchange of particles, but also of their rotation in the process. For example, $FB_n = FB_n({\bf R}^2)$ has generators $s_j$, $1 \le j < n$, and $t_j$, $1 \le j \le n$, and relations
\begin{eqnarray} s_j s_k &=& s_k s_j \qquad\qquad |j - k| \ge 2, \nonumber\cr s_j s_{j+1} s_j &=& s_{j+1} s_j s_{j+1} , \nonumber\cr s_j t_k &=& t_k s_j \qquad\qquad k \ne j, j+1 , \nonumber\cr t_{j+1}s_j &=& s_j t_j , \nonumber\cr t_j s_j &=& s_j t_{j+1} . \nonumber\end{eqnarray}
The element $s_j$ corresponds to the interchange of the $j$th and $(j+1)$st particles as before, while $t_j$ corresponds to a $2\pi$ rotation of the $j$th particle. In this case there is a natural inclusion $\iota \colon B_n \to FB_n$ such that $\pi \circ \iota$ is the identity on $B_n$. In other words, the exact sequence
$$FB_n(X) \to B_n(X) \to 1$$
splits; this occurs whenever the tangent bundle of $X$ is trivializable. Thus every character of $FB_n$ restricts to a character of $B_n$, while conversely every character of $B_n$ extends to a character of $FB_n$. This allows us to describe characters $\chi$ of $FB_n$ in terms of two independent angles $\phi$ and $\theta$, related to spin and statistics, respectively:
$$ \chi(t_j) = e^{i \phi} ,\;\; \chi(s_j) = e^{i\theta} . $$
Thus $FB_n^\ast = U(1) \times U(1)$ for $n \ge 2$. (Note that the character of $FB_n$ only depends on $\phi$ and $\theta$ modulo $2\pi$, so it detects only the spin mod ${ \bf Z}$ of the particle in question.)

On $S^2$, however, spin and statistics are inextricably entangled, because $FB_n(S^2)$ is the quotient of $FB_n$ by the relation
\begin{eqnarray} s_1s_2 \dots s_{n-1}s_{n-1}s_{n-2} \dots s_1 = t_1^2 . \label{sphere2}\end{eqnarray}
In other words, the {\it framed} braid on $S^2$ in which the first particle moves around the rest but does not rotate about its own axis in the process is homotopic to one in which the first particle experiences a rotation by $4\pi$. This is easily visualized using a slight variant of the ``belt trick'' proof that $\pi_1(SO(3)) = { \bf Z}_2$, as in \cite{Kauffman}. Thus there is no inclusion $\iota \colon B_n(S^2) \to FB_n(S^2)$ with $\pi \circ \iota$ equal to the identity on $B_n(S^2)$, for $n \ge 2$. Moreover, the character $\chi$ of $FB_n$ described above factors through a character of $FB_n(S^2)$ if and only if $e^{2i(n-1)\theta} = e^{2i\phi}$. It follows that $FB_n(S^2)^\ast \cong U(1) \times { \bf Z}_{2(n-1)}$.

\section{Nonlinear Sigma Models}

In a nonlinear sigma model, fields are described as maps from spacetime, ${\bf R}\times X$, to a target space $M$. The classical configuration space is thus the space of maps from $X$ to $M$, denoted ${\rm Maps}(X,M)$. The configuration space is a disjoint union of connected components, one for each homotopy class in $[X,M]$.
One may construct a flat line bundle on ${\rm Maps}(X,M)$ from a flat line bundle on each component. A flat line bundle on the component of ${\rm Maps}(X,M)$ containing a given map $f_0 \colon X \to M$ is uniquely determined by a character of $\pi_1({\rm Maps}(X,M), f_0)$. Thus, as described in the previous section, quantizing a given component of ${\rm Maps}(X,M)$ depends on a choice of such a character.

We will study components of $[X,M]$ corresponding to collections of topological solitons. By a topological soliton, we mean a map $g \colon X \to M$ that is constant outside a small ball in $X$. With an appropriate Hamiltonian, solitons behave roughly like ``point particles'' with internal degrees of freedom. To work with topological solitons mathematically, the Thom-Pontryagin construction \cite{Milnor} turns out to be quite important.

Let $X$ be a compact oriented manifold of dimension $d$, and let $M$ be a connected manifold with a chosen basepoint $\ast$. (We will discuss the case where $X$ is noncompact below.) Each element $\alpha$ of $\pi_d(M)$ defines an element $T(\alpha) \in [X,M]$ as follows. Let $e \colon D^d \to X$ be any oriented embedded ball in $X$. Representing $\alpha$ by a map $g \colon D^d \to M$ with $g|_{\partial D^d} = \ast$, we define $f_0 \colon X \to M$ by setting $f_0 = g e^{-1}$ in the ball $e(D^d)$ and $f_0 = \ast$ outside the ball. The homotopy class of $f_0$ is obviously independent of the choice of the representative $g \in \alpha$ and the choice of $e$, so we may define the map
$$ T \colon \pi_d(M) \to [X,M] $$
by letting $T(\alpha) = [f_0]$.

Note that for any $n \ge 0$, $T(n\alpha)$ can be represented by a map from $X$ to $M$ that is constant outside of $n$ small balls, and looks like the map $g$ in each ball. Thus the path component of ${\rm Maps}(X,M)$ corresponding to the element $T(n\alpha)$ is the configuration space for the system consisting of $n$ solitons of type $\alpha$. Consequently, quantizing this system requires a choice of a character of $\pi_1({\rm Maps}(X,M),f_0)$ where $f_0 \in {\rm Maps}(X,M)$ is any map with homotopy class $T(n\alpha)$.

We now show how any choice of topological soliton $\alpha \in \pi_d(M)$ determines a quotient group of the framed braid group $FB_n(X)$. Representations of this quotient group will correspond to allowed spin and statistics for solitons of type $\alpha$. First, choose a map $g \colon D^d \to M$ representing $\alpha$. Then, given any element $[F] \in FB_n(X)$, recall that for $1 \le i \le n$, $F_i \colon [0,1] \times D^d \to [0,1]\times X$ is an embedding; physically the image of $F_i$ will represent the world-tube of the $i$th soliton. We write
$$ F_i(t,x) = (t,F_{i,t}(x)) $$
where $(t,x) \in [0,1] \times D^d$. We define a map $f \colon [0,1] \times X \to M$ by letting $f(t,x) = g(F_{i,t}^{-1}(x))$ if $x \in X$ is in the image of $F_{i,t}$, and $f(t,x) = \ast$ otherwise. Note that $f(t,\cdot) = f_t$ is a loop of maps from $X$ to $M$. Thus $f$ defines an element $[f] \in \pi_1({\rm Maps}(X,M),f_0)$. Note also that $f_0 \in {\rm Maps}(X,M)$ has homotopy class $[f_0] = T(n\alpha)$. Thus the group $\pi_1({\rm Maps}(X,M),f_0)$ only depends, up to isomorphism, on $n$ and $\alpha$. Moreover, it is easy to check that $[f]$ is independent of the choice of $g$ with $[g] = \alpha$ and the choice of framed braid $F \in [F]$. It follows that there is a map
$$ \psi \colon FB_n(X) \to \pi_1({\rm Maps}(X,M),f_0) .
$$
We call the image of $\psi$ the {\it spin-statistics group} of $n$ topological solitons of type $\alpha$, and write this group as ${\rm Stat}_n(X,\alpha)$. It follows that the only allowed choices of (abelian) statistics $\chi \in FB_n(X)^\ast$ for solitons of type $\alpha$ are those which factor through ${\rm Stat}_n(X,\alpha)$:
$$ \chi = \widetilde\chi \psi . $$
Since $\psi$ is onto, abelian statistics are in fact in one-to-one correspondence with elements of ${\rm Stat}_n(X,\alpha)^\ast$. A similar remark holds for nonabelian statistics, as mentioned in the Introduction.

In the above we have assumed that space, $X$, is compact. In the most important noncompact case, $X = {\bf R}^n$, there would be no nontrivial topological solitons according to the definition above, as all maps $f_0 \colon {\bf R}^n \to M$ are homotopic. In discussing solitons on ${\bf R}^n$ it is typical to work instead with its compactification, $S^n$. The reason usually given (which is admittedly somewhat heuristic) is that for non-topological terms in the action for the field $f \colon {\bf R} \times {\bf R}^n \to M$ to be finite, $f$ must be static at spatial infinity; that is, for some point $\ast \in M$,
\begin{equation} \lim_{|\vec x| \to \infty} f(t,\vec x) = \ast \in M \label{static} \end{equation}
for all times $t$. Thus for each $t$, the map $f_t \colon {\bf R}^n \to M$ extends uniquely to a continuous map from $S^n$ to $M$ sending the point at infinity to $\ast$. In other words, the physically relevant configuration space is the space ${\rm Maps}_\ast(S^n,M)$ of {\it basepoint preserving} maps from $S^n$ to $M$, where we take the point at infinity as the basepoint for $S^n$.

The rest of the analysis of spin and statistics of solitons need be only slightly changed in order to take this into account. In general, when $X$ is noncompact, let $\overline X$ be the one-point compactification of $X$, with the point at infinity as basepoint. Then there is a Thom-Pontryagin map
$$ T \colon \pi_d(M) \to [\overline X,M]_\ast , $$
where $[\overline X,M]_\ast$ denotes the basepoint preserving homotopy classes of basepoint preserving maps from $\overline X$ to $M$. Let $\alpha \in \pi_d(M)$. Then for any $f_0 \colon \overline X \to M$ with $[f_0] = T(n\alpha)$, there is a map
$$ \psi\colon FB_n(X) \to \pi_1({\rm Maps}_\ast(\overline X,M),f_0) , $$
defined just as in the compact case, and we define the image of $\psi$ to be ${\rm Stat}_n(X,\alpha)$. (In certain cases compactifications other than the one-point compactification would be more appropriate, but the necessary adjustments are easily made.)

We should note the mathematical resemblance between our techniques and those arising in the use of ``configuration space models'' in homotopy theory for computing the homology of iterated loop spaces $\Omega^n \Sigma^n M$ \cite{Segal}. In particular it is interesting to note the use of creation and annihilation operators for topological solitons \cite{CarMcD} in a purely mathematical context.

\section{Examples}

One nonlinear sigma model for which spin and statistics have been studied in depth is the $O(3)$ nonlinear sigma model with Hopf term \cite{ShWi}. Here fields are given by maps from ${\bf R} \times {\bf R}^2$ to $S^2$. In the study of this model it has been common to compactify space-time to $S^3$.
This technique eliminates from consideration all fields with nonzero soliton number, a deficiency which seems so far only to have been addressed by J.\ Wen \cite{JWen}, although certain aspects of the mathematics seem to be foreshadowed by the work of Ringwood and Woodward \cite{RW}. The framework developed above, of course, treats spin and statistics for arbitrary soliton number. In this section we calculate ${\rm Stat}_n({\bf R}^2,\alpha)$ and ${\rm Stat}_n(S^2,\alpha)$ for the soliton $\alpha$ corresponding to the map of degree one from $S^2$ to itself. In doing so we develop techniques applicable to general simply-connected target manifolds $M$. In particular, we compute the groups
$$ \pi_1({\rm Maps}(S^2,M),f_0) $$
and
$$ \pi_1({\rm Maps}_\ast(S^2,M),f_0) $$
for any $f_0 \colon S^2 \to M$.

In what follows, we work in the category of spaces with basepoint, so ``map'' will mean ``basepoint preserving map.'' Let $M$ be a connected and simply connected space with basepoint. First, note that a loop in ${\rm Maps}_\ast(S^2,M)$ is the same as a map $f \colon S^1 \times S^2/S^1 \times\ast \to M$. Let
$$ \iota \colon S^2 \to S^1 \times S^2/S^1 \times \ast $$
denote the natural inclusion. This map induces a map
$$ \iota^\ast \colon [ S^1 \times S^2/S^1 \times \ast, M] \to \pi_2(M). $$
In physical terms, $\iota^\ast[f] \in \pi_2(M)$ represents the topological charge or ``soliton number'' of the field $f$. In fact the homotopy class of $f$ is completely determined by its soliton number together with its ``instanton number,'' an element of $\pi_3(M)$.

\begin{Theorem}\label{thm1} For any simply connected space $M$, $[ S^1 \times S^2/S^1 \times \ast, M]$ is isomorphic to $\pi_3(M) \oplus \pi_2(M)$ in such a manner that the map $$\iota^\ast\colon [ S^1 \times S^2/S^1 \times \ast, M] \to \pi_2(M) $$ corresponds to the projection $p_2 \colon \pi_3(M) \oplus \pi_2(M) \to \pi_2(M)$. \end{Theorem}

Proof - Identify $S^3$ with the union of two solid tori, $S^3 = D^2 \times S^1 \cup_{S^1 \times S^1} S^1 \times D^2$. Let us write a point in $D^2 \times S^1$ as $(t\vec x, \vec y)$ where $t \in [0,1]$ and $\vec x, \vec y$ are unit vectors in ${\bf R}^2$, and similarly for $S^1 \times D^2$. Let $\rho \colon D^2 \to S^2 = D^2/\partial D^2$ denote the quotient map collapsing the boundary of the disk to the basepoint. Define the map $H \colon S^3 \to S^1 \times S^2 / S^1 \times *$ by
\begin{eqnarray} H(t\vec x, \vec y) &=& * \nonumber\cr H(\vec x, t\vec y) &=& (\vec x, \rho(t\vec y) ). \nonumber\end{eqnarray}
Let $\vee$ denote the wedge of spaces with basepoint. The theorem is a consequence of the following lemma:

\begin{Lemma} \label{Hopf-construction-gives-splitting} The map $\iota \vee H \colon S^2 \vee S^3 \to S^1 \times S^2 / S^1 \times *$ is a homotopy equivalence. Furthermore, ${\rm pr}\circ(\iota \vee H)$ and $q_1$ are homotopic as maps from $S^2 \vee S^3$ to $S^2$, and ${\rm pinch}\circ(\iota \vee H)$ and $q_2$ are homotopic as maps from $S^2 \vee S^3$ to $S^3$, where $q_1$ and $q_2$ are the natural quotient maps of $S^2 \vee S^3$ onto its two summands, ${\rm pr} \colon S^1 \times S^2 / S^1 \times * \to S^2$ is induced by the projection onto the second factor, and ${\rm pinch} \colon S^1 \times S^2 / S^1 \times * \to S^3$ is the pinch map. \end{Lemma}

Proof - This follows easily from homology considerations. \hskip 3em \hbox{\BOX} \vskip 2ex

\begin{Corollary}\label{cor1} For any $f_0 \in {\rm Maps}_\ast(S^2,M)$,
$$ \pi_1({\rm Maps}_\ast(S^2,M),f_0) \cong \pi_3(M) . $$
\end{Corollary}

Proof - This follows directly from the above theorem and the fact that elements of $\pi_1({\rm Maps}_\ast(S^2,M),f_0)$ are the same as elements of $[S^1 \times S^2/S^1\times *,M]$ whose image under $\iota^\ast$ equals $[f_0] \in \pi_2(M)$.
\hskip 3em \hbox{\BOX} \vskip 2ex

In particular, to quantize the theory of maps $f \colon {\bf R} \times {\bf R}^2 \to S^2$ satisfying
$$ \lim_{|\vec x| \to \infty} f(t,\vec x) = \ast $$
and having given soliton number requires a choice of a representation of $\pi_3(S^2) = { \bf Z}$. This extends previous work that only treated the case of vanishing soliton number \cite{WZ}.

The group $\pi_1({\rm Maps}(S^2,M), f_0)$ is a quotient of $\pi_1({\rm Maps}_\ast(S^2,M), f_0)$, and computing it requires an analysis of soliton-instanton interactions. The key topological aspects of these interactions are encoded in the Whitehead product
$$ [\cdot, \cdot] \colon \pi_2(M) \times \pi_2(M) \to \pi_3(M) . $$
Let us briefly recall the definition of this product, referring the reader to standard texts on algebraic topology for more details \cite{Sp,Wh}. The universal Whitehead product is the map
$$ W\colon S^3 \to S^2 \vee S^2 $$
given by
\begin{eqnarray} W(t\vec x, \vec y) &=& \rho(t\vec x) \vee * \nonumber\cr W(\vec x, t\vec y) &=& * \vee \rho(t\vec y). \nonumber\end{eqnarray}
Given $\alpha = [f]$ and $\beta = [g]$ in $\pi_2(M)$, the Whitehead product $[\alpha,\beta] \in \pi_3(M)$ is the class of the map $(f \vee g)\circ W$. In particular, for any $\alpha \in \pi_2(M)$ the Whitehead product defines a homomorphism
$$ [\alpha,\cdot] \colon \pi_2(M) \to \pi_3(M). $$
In our application, $\alpha$ will be the class of $f_0 \colon S^2 \to M$, so we write this homomorphism as $[f_0,\cdot]$.

The natural inclusion $\iota \colon S^2 \to S^1 \times S^2$ induces a map $\iota^* \colon [ S^1 \times S^2, M ]\to \pi_2(M)$. We have:

\begin{Theorem} \label{Z2q} For any simply connected space $M$ and any map $f_0\colon S^2 \to M$, there is a bijection from the group $\pi_3(M)/ {\rm Im}[f_0,\cdot]$ to the inverse image $(\iota^*)^{-1}[f_0] \subseteq [ S^1 \times S^2, M ]$ of the homotopy class $[f_0]$. \end{Theorem}

Proof - Let $\Sigma A$ denote the suspension of a space $A$ and $CA$ the cone over $A$. In general for a cofibration sequence $A \hookrightarrow X \to X/A$ there is a left action of the group $[\Sigma A, M]$ on the set $[X/A, M]$, which has the property that there is a short exact sequence
\begin{equation} \label{basic-action-exact-sequence} [X/A, M]/[\Sigma A,M] \to [X, M] \to [A, M] . \end{equation}
That is, there is an injective map from the orbit space $[X/A, M]/[\Sigma A,M]$ to the set $[X, M]$, whose image consists exactly of the elements which are mapped to the trivial element in $[A,M]$ under $\iota^*$. The action is defined by the coaction map $\theta\colon X\cup_\iota CA \to X/A \vee \Sigma A$ together with the homotopy equivalence $\gamma \colon X\cup_\iota CA \to X/A$. That is, given maps $f\colon X/A \to M$ and $\alpha \colon \Sigma A \to M$ there is a unique homotopy class $\alpha *f\colon X/A \to M$ such that the following two maps are homotopic:
\begin{equation} \label{definition-of-action-via-coaction} (\alpha *f)\circ \gamma \sim (f \vee \alpha )\circ \theta , \end{equation}
see \cite{Sp}.

Now consider the cofibration sequence $S^1 \times * \subset S^1 \times S^2 \to S^1 \times S^2/ S^1 \times *$. Since $\pi_1(M) = 0$, diagram (\ref{basic-action-exact-sequence}) implies that there is an isomorphism
\begin{equation} \label{exact-sequence-for-S2XS1toM} [S^1 \times S^2/ S^1 \times * , M]/\pi_2(M) \simeq [S^1 \times S^2, M] .
\end{equation}
To compute the coaction map, let $\tilde H\colon S^3 \to S^1 \times S^2 \cup C(S^1\times *)$ be the map
\begin{eqnarray} \tilde H(t\vec x, \vec y) &=& t\vec x \in D^2 = C(S^1 \times *) \nonumber\cr \tilde H(\vec x, t\vec y) &=& (\vec x, \rho(t\vec y)) \nonumber\end{eqnarray}
Then
\begin{equation} \label{lift-equivalence-to-mapping-cone} \iota \vee H = j \circ (\iota \vee \tilde H) \end{equation}
where
$$ j \colon S^1 \times S^2 \cup C(S^1 \times *) \to S^1 \times S^2 / S^1 \times * $$
is the natural homotopy equivalence collapsing the cone. The composite
\begin{equation} \label{first-composite-for-theta} ({\rm pr} \vee {\rm id})\circ \theta \circ \tilde H \colon S^3 \to S^2 \vee S^2 \end{equation}
is clearly the universal Whitehead product $W\colon S^3\to S^2 \vee S^2$. On the other hand the composite
\begin{equation} \label{second-composite-for-theta} ({\rm pinch} \vee *) \circ \theta \circ (\iota \vee \tilde H) \colon S^2 \vee S^3 \to S^3 \end{equation}
is homotopic to the quotient map $q_2$ onto the second summand. Therefore, by Lemma \ref{Hopf-construction-gives-splitting}, by the composites (\ref{first-composite-for-theta}) and (\ref{second-composite-for-theta}), and by the Hilton-Milnor theorem \cite{Wh},
\begin{equation} \label{computation-of-theta} (\iota \vee H \vee \iota ) \circ (\iota_1 \vee ([\iota_1, \iota_3] + \iota_2)) \sim k \circ (\iota \vee \tilde H) \end{equation}
as maps from $S^2 \vee S^3$ to $S^1 \times S^2 /S^1 \times *$, where
$$ k \colon S^1 \times S^2 \cup C(S^1 \times *) \to S^1 \times S^2 /S^1 \times * $$
is the natural homotopy equivalence. Lemma~\ref{Hopf-construction-gives-splitting} and diagram (\ref{computation-of-theta}) compute the action of $\pi_2(M)$ on $[S^1 \times S^2/ S^1 \times * , M] \cong \pi_2(M) \oplus \pi_3(M)$ to be
$$ \alpha *(f, \beta ) = (f, \beta + [f, \alpha ]). $$
By the exact sequence (\ref{exact-sequence-for-S2XS1toM}) we have
$$ \pi_2(M) \oplus \pi_3(M) \bigm/\lbrace (f, \beta ) = (f, \beta + [f, \alpha ])\rbrace \cong [S^1 \times S^2, M] , $$
which implies Theorem~\ref{Z2q}. \hskip 3em \hbox{\BOX} \vskip 2ex

\begin{Corollary}\label{cor2} For any $f_0 \in {\rm Maps}(S^2,M)$,
$$ \pi_1({\rm Maps}(S^2,M),f_0) \cong \pi_3(M)/{\rm Im}[f_0,\cdot] . $$
\end{Corollary}

Proof - This follows from the theorem above and the fact that $\pi_1({\rm Maps}(S^2,M),f_0) \cong (\iota^*)^{-1}[f_0] \subseteq [S^1 \times S^2,M]$. \hskip 3em \hbox{\BOX} \vskip 2ex

We now compute the spin-statistics groups of the $O(3)$ nonlinear sigma model:

\begin{Corollary}\label{cor3} Let $\alpha \in \pi_2(S^2)$ be a generator and let $f_0 \colon S^2 \to S^2$ have $[f_0] = n\alpha$. Then
$$ \pi_1({\rm Maps}_\ast(S^2,S^2),f_0) = { \bf Z}, \qquad \pi_1({\rm Maps}(S^2,S^2),f_0) = { \bf Z}_{2n},$$
and for all $n \ge 1$,
$$ {\rm Stat}_n({\bf R}^2,\alpha) = { \bf Z}, \qquad {\rm Stat}_n(S^2,\alpha) = { \bf Z}_{2n}. $$
\end{Corollary}

Proof - Note that $\pi_1({\rm Maps}_\ast(S^2,S^2),f_0)$ is independent of $f_0$ and equals $\pi_3(S^2) = { \bf Z}$ by Corollary \ref{cor1}. Thus our choice of $\alpha$ determines a homomorphism $\psi \colon FB_n({\bf R}^2) \to { \bf Z}$ as described in Section 3. Recall that $t_1 \in FB_n({\bf R}^2)$ corresponds to the rotation of the first strand by $2 \pi$ about its axis. We claim that $\psi(t_1) = 1$. It will follow that ${\rm Stat}_n({\bf R}^2,\alpha)$, the image of the map $\psi$, is ${ \bf Z}$. Associate to the framed braid $t_1$ a loop of basepoint preserving maps from $S^2$ to $S^2$, i.e.\ a map $f \colon S^1 \times S^2 /S^1 \times * \to S^2$.
By Lemma \ref{Hopf-construction-gives-splitting}, $\psi(t_1) \in \pi_3(S^2)$ is represented by the map $f \circ H \colon S^3 \to S^2$. We may calculate the Hopf invariant of this map by taking the inverse images of two regular values in $S^2$ and computing their linking number in $S^3$. This is easily seen to be $1$, so $\psi(t_1) = 1$.

Next note that by Corollary \ref{cor2}, $\pi_1({\rm Maps}(S^2,S^2), f_0)$ is the quotient of $\pi_3(S^2)$ by the subgroup generated by $[\alpha,f_0]$. Moreover $[\alpha, f_0] = n[\alpha, \alpha] = 2n$ by the bilinearity of the Whitehead product together with the fact that $[\alpha,\alpha] = 2$. Thus $\pi_1({\rm Maps}(S^2,S^2),f_0) = { \bf Z}_{2n}$.

Recall that $FB_n(S^2)$ is a quotient of $FB_n({\bf R}^2)$. Let $t \in FB_n(S^2)$ be the image of $t_1 \in FB_n({\bf R}^2)$. Then by the same argument as for $t_1$, $\psi(t) = 1 \in { \bf Z}_{2n}$, so ${\rm Stat}_n(S^2,\alpha) = { \bf Z}_{2n}$. \hskip 3em \hbox{\BOX} \vskip 2ex
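The character constraints obtained above are simple enough to verify mechanically. The following sketch (our own illustration; the numerical tolerance and the sample values of $n$ and $\theta$ are arbitrary) checks that the phases $\theta = k\pi/(n-1)$ exhaust the solutions of the constraint $e^{2i(n-1)\theta} = 1$ coming from relation (\ref{sphere}), that relation (\ref{sphere2}) fixes $\phi$ in terms of $\theta$ modulo $\pi$, and lists the finer set $\theta = k\pi/n$ of exchange phases permitted for unit-charge solitons by Corollary \ref{cor3}:
\begin{verbatim}
import numpy as np

n = 3  # number of particles/solitons (arbitrary sample value)

# B_n(S^2) characters: relation (sphere) forces exp(2i(n-1)theta) = 1.
thetas = [k * np.pi / (n - 1) for k in range(2 * (n - 1))]
assert all(abs(np.exp(2j * (n - 1) * t) - 1) < 1e-12 for t in thetas)

# FB_n(S^2) characters: relation (sphere2) forces
# exp(2i(n-1)theta) = exp(2i*phi), fixing phi modulo pi.
theta = 0.7  # an arbitrary statistics angle
for phi in (((n - 1) * theta) % np.pi, ((n - 1) * theta) % np.pi + np.pi):
    assert abs(np.exp(2j * (n - 1) * theta) - np.exp(2j * phi)) < 1e-12

# Unit-charge solitons: Stat_n(S^2, alpha) = Z_{2n} allows theta = k*pi/n.
soliton_thetas = [k * np.pi / n for k in range(2 * n)]
print(np.round(thetas, 4))
print(np.round(soliton_thetas, 4))
\end{verbatim}
The first list reproduces the Thouless-Wu phases and the second the soliton phases quoted in the Introduction.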
1,314,259,994,055
arxiv
\section{Introduction}

Observational correlations between the mass of supermassive black holes (SMBHs) and the properties of their host galaxies, such as the $M-\sigma$ relation \citep{Ferrarese00, Gebhardt00, Tremaine02}, link the evolution of SMBHs and their host bulges. Feedback \citep[e.g.,][]{SilkRees98}, in the form of ultra fast outflows (UFOs), has been invoked to explain and derive the $M-\sigma$ relation from analytical arguments \citep{King03, king05}. The model is very attractive due to its simplicity, reliance on common sense physics (Eddington limit, escape velocity and radiation momentum outflow rate arguments), observational analogy to outflows from massive stars (that are also near their Eddington limits), and finally direct observations of UFOs in nearby bright AGN \citep{PoundsEtal03a,KP03,TombesiEtal10,Tombesi2010ApJ,PoundsVaughan11a}.

Assuming a homogeneous gas distribution following a singular isothermal sphere (SIS) potential (e.g., $\S 4.3.3$b in \citeauthor{BT08}, \citeyear{BT08}), \citet{King03} shows that within the inverse Compton (IC) cooling radius, $R_{\rm IC}\sim 500 M_{8}^{1/2}\sigma_{200}$ kpc (where $M_8$ is the SMBH mass in units of $10^{8}\,{\rm M}_\odot$ and $\sigma_{200}$ is the velocity dispersion in the host, $\sigma$, in units of $200$ km s$^{-1}$; \citealt{ZK12b}), the wind shock, which develops when the UFO collides with the interstellar medium (ISM), can cool effectively via IC scattering. Most of the thermalized wind kinetic energy is lost to this radiation, and only the pre-shock ram pressure impacts the ISM. By considering the equation of motion of the swept up ISM shell, \citet{King03} derived the mass that the SMBH had to attain in order to clear the host galaxy's gas. Beyond the cooling radius, $R_{\rm IC}$, the wind shock cannot cool effectively and retains the wind kinetic energy in the form of thermal energy, and the outflow becomes energy-driven. This regime is much more effective at clearing a galaxy of gas.

The model of \citet{King03} assumes the electrons and ions in the shock share a single temperature at all times, initially the shock temperature $T_{\rm sh}\sim 10^{10}$~K. However, \cite{FQ12a} have shown that, due to the high temperature and low density of the shocked wind, the electron-ion energy equilibration time-scale is long compared with the Compton time-scale. This would imply that the electron temperature is much lower than the ion temperature, i.e.\ $T_{\rm e}\ll T_{\rm ion}$. \citet{Bourne13} point out an observational test to distinguish between outflows with a one-temperature ($1T$; $T_{\rm e} = T_{\rm ion}$) or two-temperature ($2T$) structure, and conclude preliminarily that X-ray observations broadly support the findings of \citet{FQ12a}. This, however, would have significant implications for AGN feedback on host galaxies: most of the UFO's kinetic energy, carried by the ions, is then conserved rather than radiated away. The cooling radius, $R_{\rm IC}$, becomes negligibly small on the scale of the host galaxy, and the outflow is essentially always in the energy-conserving phase. Based on spherically symmetric analytical models \citep[e.g.,][]{king05}, even black holes $\sim 100$ times below $M_{\sigma}$ could clear a galaxy of its gas. It is then not clear (i) how black holes manage to grow so massive, and (ii) why momentum-conserving flows provide such a tight fit to the observed $M-\sigma$ relation \citep{King03}.
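For orientation, the following snippet (our own illustration; the fiducial masses and dispersions are arbitrary) evaluates the one-temperature cooling radius $R_{\rm IC}\sim 500\,M_8^{1/2}\sigma_{200}$~kpc quoted above. In the $1T$ picture $R_{\rm IC}$ comfortably exceeds the gas-rich region of any realistic host, so the outflow there is momentum-driven, whereas in the $2T$ picture $R_{\rm IC}$ is negligible and the flow is energy-driven essentially everywhere:
\begin{verbatim}
def r_ic_kpc(m_bh_msun, sigma_kms):
    # R_IC ~ 500 * (M_BH / 1e8 Msun)^(1/2) * (sigma / 200 km/s) kpc,
    # the one-temperature inverse Compton cooling radius quoted above.
    return 500.0 * (m_bh_msun / 1e8)**0.5 * (sigma_kms / 200.0)

for m_bh, sigma in [(1e7, 120.0), (1e8, 200.0), (1e9, 300.0)]:
    print(f"M_BH = {m_bh:.0e} Msun, sigma = {sigma:4.0f} km/s -> "
          f"R_IC ~ {r_ic_kpc(m_bh, sigma):6.0f} kpc")
\end{verbatim}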
Several recent additional numerical and analytical results however call the spherically symmetric models of AGN feedback into question. In the context of the physically related problem of stellar feedback, \cite{H-CMurray09} modelled the structure of a hot bubble inflated by a cluster of young stars in the Carina Nebula, and showed that models assuming spherical symmetry do not explain the observational data. At the same time, a model in which the ambient ISM is clumpy accounts for the observations much better. \cite{H-CMurray09} built a toy analytical model in which a significant fraction of the energy inside the hot bubble is lost via advection, e.g., adiabatic expansion energy losses, rather than radiative energy losses (which can be directly observed in X-rays in the case of the Carina Nebula, and are much lower than expected in the spherically symmetric models). Physically, the authors argue that the compressed shell of a multiphase ISM has pores through which the hot gas escapes. This deflates the bubble and allows a much better explanation of the bubble size, age and luminosity. \cite{RoggersPittard13} have recently performed 3D numerical simulations of a supernova exploding inside an inhomogeneous giant molecular cloud, and found results consistent with those of \cite{H-CMurray09}: the densest molecular regions turned out to be surprisingly resistant to ablation by the hot gas, which mainly escaped from the region via low-density channels. For the AGN feedback problem that we study here, \citet{wagner12} have found very similar results when studying the interaction of an AGN jet with the multiphase ISM. Furthermore, \citet{Wagner13} studied the interaction of a wide-angle outflow with an inhomogeneous ambient medium, finding again that hot gas mainly streams away through channels between the cold clouds; the latter are impacted by the momentum of the UFO only. These authors also concluded that the opening angle of the UFO at launch appears secondary, since interactions of the UFO with the intervening clouds isotropize the hot bubble, so that the results of a jet and of a UFO running into the inhomogeneous ISM may actually be much more similar than often assumed. In an analytical study, \citeauthor{Nayakshin14} (\citeyear{Nayakshin14}, hereafter N14) also argued that most of the UFO energy leaks out of the porous bulge via the low-density voids, and that the cold gas is affected only by the ram pressure. He argued that the densest cold clouds may continue to feed the AGN via the `chaotic accretion mode' \citep{HobbsEtal11} despite the AGN blowing an energy-driven bubble into the host galaxy, and that the balance between the ram pressure of the UFO on the clouds and cloud self-gravity leads to an $M-\sigma$ correlation very similar in functional form to that of \cite{King03}. Furthermore, \citeauthor{Zubovas14} (\citeyear{Zubovas14}, ZN14 hereafter) presented numerical simulations of AGN feedback impacting elliptical, initially homogeneous ambient gas distributions and showed that the UFO energy escapes via directions of least resistance (along the minor axis of the ellipsoid). They additionally presented a toy analytical model, similar in spirit to that of \cite{H-CMurray09}, which showed that SMBH growth stops when the SMBH reaches a mass of the order of the \cite{King03} result. In this paper we investigate these ideas further numerically. We set up a hot bubble of shocked UFO gas bounded by either one- or two-phase ambient gas, and then study the resulting interaction. 
Our multiphase gas is produced by evolving a Gaussian random velocity field, as is frequently done in numerical models of star formation inside turbulent molecular clouds \citep{Bate09}, similar to earlier work by \cite{HobbsEtal11}. Our numerical methods and initial conditions differ substantially from those of \cite{wagner12,Wagner13} and ZN14, but the results are qualitatively similar. We also find that most of the UFO energy is carried away by hot low-density gas escaping the innermost regions of the host via paths of least resistance, which exist in the clumpy ISM in abundance \citep[e.g.,][]{McKeeOstriker77}. Most of the gaseous mass in our models is in the high-density cold phase of the ISM that occupies a small fraction of the host's volume, and for this reason our host galaxies turn out to be much less vulnerable to AGN feedback than could be thought based on the energy budget arguments alone. \section{Simulation Set-up} \subsection{Numerical method} The simulations presented here make use of a modified version of the N-body/hydrodynamical code GADGET-3, an updated version of the code presented in \citet{Springel05}. We implement the SPHS\footnote{Smoothed particle hydrodynamics with a high-order dissipation switch.} formalism as described in \citet{Read10} and \citet{ReadHayfield12}, in order to correctly treat mixing within multiphase gas, together with a second-order Wendland kernel \citep{Wendland95,DehnenAly12} with 100 neighbours. The SPHS algorithm was developed for the express purpose of capturing instabilities such as Kelvin-Helmholtz and Rayleigh-Taylor, and has been demonstrated to be robust in many test problems \citep{ReadHayfield12} and full galaxy formation simulations \citep{HobbsEtAl13}. The simulations are run in a static isothermal potential with the total mass of the potential within radius $R$ following \begin{equation} M_{\rm pot}(R)= \frac{M_{\rm a}}{a} R\;, \label{MRDM} \end{equation} where $M_{\rm a}=5\times10^{10}\,{\rm M}_\odot$ and $a=4$ kpc. The potential is softened at small radii in order to avoid divergence in the gravitational force as $R$ tends to zero. The one-dimensional velocity dispersion of the potential is $\sigma_{\rm pot} = (GM_{\rm a}/2a)^{1/2}\simeq 164$ km s$^{-1}$. In all simulations we use an ideal equation of state for the gas; the gas pressure is given by $P = \rho k_{\rm B}T/\mu m_{\rm p}$, where $\rho$ is the gas density, $k_{\rm B}$ is the Boltzmann constant, $T$ is the gas temperature and $\mu=0.63$ is the mean molecular weight. An optically thin radiative cooling function for gas ionized and heated by a quasar radiation field (assuming a fixed black hole luminosity of $L_{\rm Edd}=2.5\times 10^{46}$ erg s$^{-1}$), as calculated by \citet{sazonovetal05}, is used for $T>10^{4}$ K. Below $10^{4}$ K, cooling is modelled as in \citet{Mashchenko08}, proceeding through fine structure and metastable lines of C, N, O, Fe, S and Si. For simplicity, we fix metal abundances at solar metallicity. We impose a temperature floor of $100$ K. Gas particles are converted into star particles according to a Jeans instability condition. 
SPH particles with density above a critical density of \begin{equation} \rho_{crit} = \rho_{thresh} + \rho_{J} \end{equation} are turned into star particles, where $\rho_{thresh}=10^{-20}$ g cm$^{-3}$ and $\rho_{J}$ is the local Jeans density given by \begin{equation} \rho_{J} = \left(\frac{\pi k_{B}T}{\mu m_{p}G}\right)^{3}\left(n_{ngb}m_{sph}\right)^{-2}\simeq 1.17\times 10^{-18}\,T_{4}^{3}\ \text{g cm}^{-3}, \label{sfr} \end{equation} where $T_{4}=T/10^{4}$ K, $n_{ngb}=100$ is the typical number of neighbours of an SPH particle and $m_{sph}$ is the SPH particle mass; the numerical prefactor corresponds to the particle mass $m_{sph}\simeq 2250\,{\rm M}_\odot$ used in our runs (see below). The $\rho_{thresh}$ term ensures that only high-density gas is converted into star particles, whilst the second term is the local Jeans density and ensures that stars only form in gas that is unstable towards gravitational collapse\footnote{Strictly speaking, in order to properly follow the collapse of gas one should be able to resolve the local Jeans mass, $M_{J}$, i.e. $n_{ngb}m_{sph} < M_{J}$ \citep{Whitworth98}. Gas with $T=T_{floor}=100$ K has $\rho_{J}\simeq 10^{-24}$ g cm$^{-3}$, leading to some gas having $\rho > \rho_{J}$ but not being converted into stars, and hence we are not resolving the Jeans mass of this gas. However, for the purpose of these simulations we are not particularly interested in studying star formation in detail, and the number of particles for which the above condition is true is negligibly small.}. Removing high-density gas aids in reducing the computation time by removing particles that would otherwise have prohibitively short time-steps. Each newly formed star particle has the same mass as the original gas particle and only interacts with other particles through gravity. \begin{figure} \psfig{file=splash_uniform.pdf,width=0.5\textwidth,angle=0} \caption{Density (top panel) and temperature (bottom panel) slices through the $z=0$ plane at time $t=0$ and $t\simeq 282$ kyr for the homogeneous initial condition simulation H1.} \label{time-evo-uniform} \end{figure} \begin{figure*} \psfig{file=evo_cool.pdf,width=1\textwidth,angle=0} \caption{Same as Fig. \ref{time-evo-uniform} but for the turbulent initial condition simulation T1. Density (top panel) and temperature (bottom panel) slices through the $z=0$ plane evolving in time from $t=0$ to $t\simeq 282$ kyr in steps of $\sim 94$ kyr, from left to right, respectively.} \label{time-evo-rho} \end{figure*} \subsection{Initial conditions} Simulations of isolated galaxies by definition do not model gas inflows into galaxies from larger scales, and therefore idealized initial conditions for the ISM of the host must be used. There is considerable freedom in choosing these initial conditions. In \citet{wagner12} and \citet{Wagner13} (hereafter W12 and W13, respectively), cold, high-density clumps in hydrostatic equilibrium with the hot, low-density phase are introduced at the beginning of the simulation. The initial velocity of the gas is zero everywhere. In the current paper, however, since the epoch we are interested in is one of rapid SMBH growth and star formation in the host galaxy, the ambient gas may be in a very dynamical non-equilibrium state, which we model with an imposed turbulent velocity flow. In doing so we are inspired by numerical studies of star formation in molecular clouds \citep[e.g.,][]{BateEtal03}. In practice, our method for generating two-phase initial conditions is based on earlier work by \citet{HobbsEtal11}, where the importance of high-density gas clumps for SMBH {\it feeding rather than feedback} was studied. 
We seed a sphere of gas (cut from a relaxed, glass-like configuration) with a turbulent velocity field following \citet{DubinskiEtAl95}. A Kolmogorov power spectrum is assumed, $P_{\rm v}(k)\sim k^{-11/3}$, where $k$ is the wavenumber. The gas velocity $\vec{v}$ can be defined in terms of the vector potential $\vec{A}$, whose realization is also a power law with a cutoff at $k_{\rm min}$. Physically, the small-wavenumber cut-off $k_{\rm min}$ defines the largest scale, $\lambda_{\rm max}=2\pi /k_{\rm min}$, on which turbulence is likely to be driven. Here we set $k_{\rm min} \simeq 1/R_{\rm out}$, as the shell becomes distorted for larger $\lambda_{\rm max}$. The statistical realization of the velocity field is generated by sampling the vector potential $\vec{A}$ in Fourier space, drawing the amplitudes of the components of $\vec{A}_{\rm k}$ at each point $\left(k_x, k_y, k_z\right)$ from a Rayleigh distribution with a variance given by $\langle |\vec{A}_{\rm k}|^{2}\rangle$ and assigning phase angles that are uniformly distributed between $0$ and $2\pi$. Finally, we take the Fourier transform of $\vec{v}_{\rm k}=i\vec{k}\times\vec{A}_{\rm k}$ to obtain the velocity field in real space (a minimal illustrative sketch of this sampling recipe is given below). The gas initially follows the SIS potential (meaning that $\rho(R) \propto R^{-2}$) from $R_{\rm in}=0.1$ kpc to $R_{\rm out}=1$ kpc with a gas mass fraction $f_{\rm g}=M_{\rm g}/(M_{\rm g}+M_{\rm pot})=0.5$, where $M_{\rm g}$ and $M_{\rm pot}$ are the gas and potential mass within the shell $0.1\leq R\leq 1$ kpc, respectively. In order to avoid particles at small radii with prohibitively small time steps, we add a sink particle at the centre of the simulation domain with $M_{\rm sink}=2\times 10^{8}\,{\rm M}_\odot$ ($\sim M_{\sigma}/2$). The turbulent velocity is normalized such that the root-mean-square velocity is $v_{\rm turb}\simeq \sigma \simeq 232$ km s$^{-1}$, where $\sigma\simeq (GM_{\rm a}/[2a(1-f_{\rm g})])^{1/2}$ is the velocity dispersion of the potential plus gas component. The initial gas temperature is set to $T\simeq 1\times 10^{6}$ K, such that the shell is marginally virialized, i.e., $(E_{\rm turb}+E_{\rm therm})/|E_{\rm grav}|\sim 1/2$, where $E_{\rm turb}$ and $E_{\rm therm}$ are the total turbulent kinetic energy and total thermal energy of the gas, respectively, and $E_{\rm grav}$ is the gravitational potential energy of the system. The system is allowed to evolve under the action of the turbulent velocity field for a time $\sim\tau_{\rm dyn}/3 = R_{\rm out}/(3\sigma)$, allowing the density inhomogeneities to grow. The resulting gas shell is then re-cut to have an inner radius $R_{\rm in}=0.3$ kpc and outer radius $R_{\rm out}=1$ kpc. The total gas mass is $M_{\rm g}\simeq 5.9\times 10^{9}\,{\rm M}_\odot$, corresponding to a gas fraction of $f_{\rm g}\simeq 0.4$ and giving a velocity dispersion for the system (gas + potential within the shell) of $\sigma\simeq 212$ km s$^{-1}$. The total number of particles in the gas shell is $N_{\rm gas}\simeq 2.6\times 10^{6}$, with a particle mass $m_{\rm gas}\simeq 2250\,{\rm M}_\odot$. 
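As a concrete illustration of the velocity-field sampling described above, the following Python fragment reimplements the recipe in minimal form. It is a sketch under our own assumptions (the grid size, a flat spectrum below $k_{\rm min}$, the mapping of the Kolmogorov slope onto $\langle |\vec{A}_{\rm k}|^{2}\rangle \propto k^{-11/3-2}$ so that $|\vec{v}_{\rm k}|^{2}\propto k^{2}|\vec{A}_{\rm k}|^{2}$ has the desired slope, and the use of the real part in place of an explicit Hermitian symmetrization); it is not the code used for the simulations. \begin{verbatim}
import numpy as np

def turbulent_velocity_field(n=64, k_min=1.0, seed=42):
    # Vector potential A_k: Rayleigh-distributed amplitudes with a
    # power-law variance and uniformly distributed random phases.
    rng = np.random.default_rng(seed)
    k1 = np.fft.fftfreq(n) * n                # integer wavenumbers
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    k[0, 0, 0] = 1.0                          # avoid division by zero
    var = np.maximum(k, k_min) ** (-11.0 / 3.0 - 2.0)
    A = np.empty((3, n, n, n), dtype=complex)
    for c in range(3):
        amp = rng.rayleigh(scale=np.sqrt(var / 2.0))
        phase = rng.uniform(0.0, 2.0 * np.pi, size=k.shape)
        A[c] = amp * np.exp(1j * phase)
    # v_k = i k x A_k: the curl form guarantees a divergence-free field.
    vk = 1j * np.stack([ky * A[2] - kz * A[1],
                        kz * A[0] - kx * A[2],
                        kx * A[1] - ky * A[0]])
    v = np.real(np.fft.ifftn(vk, axes=(1, 2, 3)))
    # Normalize the 3-D rms velocity to 232 km/s (cgs units).
    v_rms = np.sqrt((v ** 2).sum(axis=0).mean())
    return v * (232.0e5 / v_rms)
\end{verbatim} In the actual initial conditions the sampled field is then interpolated onto the particle positions of the relaxed gas sphere. 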
Typical parameters for a UFO give a velocity $v_{\rm out}\sim 0.1$ c, a mass outflow rate $\dot{M}_{\rm out}\sim 0.1\,{\rm M}_\odot$ yr$^{-1}$ and a kinetic energy flux $\dot{M}_{\rm out}v^{2}/2\simeq 0.05L_{\rm Edd}$. Modelling a continuous ejection of fast wind particles by SPH is not currently feasible: at our present mass resolution (which is much higher than that of a typical cosmological simulation), a single SPH particle accounts for all of the UFO mass ejected over $\sim 22.5$ kyr. Fortunately, it is the total energy budget of the hot shocked wind bubble, and not its minuscule mass, that determines the strength of the bubble's impact on the ambient medium \citep[the mass of the UFO is so small compared to the host galaxy that it does not even enter the analytic theory;][]{King10b}. Therefore we rescale the properties of the UFO particles, keeping the hot bubble's energy at the desired value but increasing the outflow's mass, to be able to model the thermalized UFO hydrodynamically and with a reasonable numerical resolution. In particular, the UFO thermalized in the reverse shock is introduced in the initial condition as a hot spherical bubble of radius $R_{\rm bub}=0.3$ kpc centred on the sink particle. We have tested different bubble masses and find that, qualitatively, the main conclusions of our paper remain unchanged. The initial gas density and temperature are assumed constant throughout the bubble, as expected \citep{FQ12a}. The temperature and mass of the bubble are determined based upon the desired energy ratio between the hot bubble and the ambient gas component: \begin{equation} E_{\rm r}=\frac{E_{\rm H}}{E_{\rm amb}}=\frac{M_{\rm H}c_{\rm s}^{2}}{M_{\rm amb}\sigma^{2}}, \end{equation} where $E_{\rm H}$ and $E_{\rm amb}$ are the energy in the hot bubble and in the ambient gas, respectively, $M_{\rm H}$ and $M_{\rm amb}$ are the total masses in the hot and cold components, respectively (we write $M_{\rm amb}$ to avoid confusion with the potential normalization $M_{\rm a}$ of equation \ref{MRDM}), $c_{\rm s}$ is the sound speed in the hot bubble and $\sigma\simeq 212$ km s$^{-1}$ is the velocity dispersion. All simulations presented in this paper use $c_{\rm s}\simeq 3000$ km s$^{-1}$ and $E_{\rm r}=5$; the main conclusions of our paper are independent of $E_{\rm r}$ as long as $E_{\rm r}\gg 1$, as expected for AGN-inflated feedback bubbles \citep{King10b}. The left-most panels in Fig. \ref{time-evo-rho} show the initial density and temperature structure of the system. 
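For reference, inverting the definition of $E_{\rm r}$ with the values adopted here gives the bubble mass (the arithmetic is ours): \[ M_{\rm H}=E_{\rm r}\,M_{\rm amb}\left(\frac{\sigma}{c_{\rm s}}\right)^{2}\simeq 5\times 5.9\times 10^{9}\,{\rm M}_\odot\times\left(\frac{212}{3000}\right)^{2}\simeq 1.5\times 10^{8}\,{\rm M}_\odot, \] i.e. only a few per cent of the ambient gas mass, even though by construction the bubble carries several times the ambient gas energy. 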
As well as the runs with a turbulent medium, we have a control simulation that has not been seeded with turbulence, to contrast the outcomes. The radial gas distribution of the control run follows the same profile as the turbulent shell {\it before} relaxation, so that the gas is homogeneous, but has a mass equal to that of the turbulent shell {\it after} relaxation. The initial radially binned gas distribution is hence identical for the homogeneous and turbulent runs, save for a slight evolution during the relaxation of the latter runs as described above (compare the dashed red and blue curves in Fig. \ref{rad-dist}). It should also be noted that the control run has a low initial temperature $T\simeq 10^{5}$ K, which is subvirial in order to ensure that the gas remains homogeneous during the simulation, this being the regime we wish to study here. Further, since there is no imposed turbulent velocity field that would develop into the turbulent multiphase ISM, there is no need to relax this initial condition before applying the hot bubble. For this reason, the gas has zero initial velocity in the homogeneous control run, unlike in the turbulent run. This difference in initial conditions has a very minor effect on the final outcome of the simulations because the radial velocity gained by the gas in the homogeneous run is much larger than the difference in the initial velocities between the two runs. In what follows we refer to the simulations as the turbulent (T1) and control (homogeneous, H1) runs, respectively. In order to study the direct impact of the hot bubble on the ambient gas and/or to avoid confusion due to the dense gas phase shielding lower-density gas behind it (at larger radii), a number of figures only include the SPH particles that were within $0.3\leq R\leq 0.35$ kpc at $t=0$ kyr. The behaviour of gas initially at larger radii will nevertheless be discussed in some of the figures below. \section{Feedback on turbulent versus homogeneous medium} \begin{figure} \psfig{file=rad_distr_cool.pdf,width=0.5\textwidth} \caption{Histogram of the gas mass in radial bins. The blue and red lines are for the turbulent clumpy (T1) and homogeneous (H1) gas distributions, respectively. The dashed and solid lines correspond to times $t=0$ kyr and $t\simeq 283$ kyr, respectively. Note how little the clumpy distribution evolves: if anything, gas continues to accumulate in the innermost region, whereas it is completely blown away in the H1 run.} \label{rad-dist} \end{figure} \begin{figure} \psfig{file=vel_distr_cool.pdf,width=0.5\textwidth} \caption{Histogram of the radial velocity distributions at $t=70.8$ kyr for SPH particles that belong to one of the three representative density groups, i.e., the highest $10\%$, around the logarithmic mean, and the lowest $10\%$ of SPH particle densities, as labelled in the inset. The particles selected were within $R\leq 0.35$ kpc at $t=0$ kyr, as explained in the text.} \label{vel-dist} \end{figure} Fig. \ref{time-evo-uniform} shows density (top) and temperature (bottom) slices at time $t=0$ (left) and $t\simeq 283$ kyr (right) for the homogeneous density run, H1. Fig. \ref{time-evo-rho} shows the same quantities at four different times for the turbulent initial condition simulation T1. The times of the first and the last snapshots are the same as those for Fig. \ref{time-evo-uniform}. It is immediately obvious that the homogeneous ambient density case, H1, produces a ``boring'' spherically symmetric, dense shell that is expanding under the pressure of the hot bubble in the middle. The bubble also remains spherically symmetric.\footnote{There may be small-scale \cite{Vishniac1983} instabilities developing on the surface of the bubble \citep{NZ12}, but these instabilities grow more slowly than the shell is driven outward in this energy-conserving situation.} Importantly, the bubble drives {\it all} of the ambient gas encountered outward at a high velocity. This is in stark contrast to the turbulent run, as can be seen in Fig. \ref{time-evo-rho}. The expansion of the hot bubble into the ambient phase occurs along the paths of least resistance. The low-density ambient phase is swept up and pushed out, while the high-density gas suffers a much smaller positive radial acceleration and little (if any) gain in temperature. Some compression and ablation of the cold dense medium does occur, but most of it survives the bubble's passage intact. Fig. \ref{rad-dist} highlights the differences in the results of simulations H1 and T1 in a more compact way by presenting the distribution of gas in radial bins. 
The blue and red dashed curves show the initial ambient gas mass within concentric spherical shells of 10 pc width for the turbulent and the homogeneous (control) runs, respectively. The solid curves of the same colour show how these gas distributions have evolved by time $t\simeq 283$ kyr. Note that the bubble has swept up {\it all} of the ambient gas within a radius of $\sim 0.45$ kpc into a dense shell in the control run, but is obviously having great difficulty in removing the gas in the turbulent simulation. The density of the gas in the inner regions actually increases in the latter simulation, as some of the cold dense gas falls in while the hot bubble fizzles out through the pores in the ambient gas. These results illustrate clearly the main thesis of our paper: {\it the impact of a UFO on an inhomogeneous multiphase medium is much less efficient than expected based on spherically symmetric modelling}. \section{Dynamics of clumpy gas} \subsection{Gas dynamics as a function of its density}\label{sec:density} We shall now analyse the response of the ambient gas to the presence of the hot bubble in the turbulent simulation T1 in greater detail. This response is a strong function of the properties of the ambient gas. Fig. \ref{vel-dist} shows the distribution of gas over radial velocity at time $t\simeq 70.8$ kyr, for three different initial density ranges (i.e. particles are grouped based upon their density at $t=0$). To avoid confusion due to the dense gas phase shielding lower-density gas behind it (gas that is therefore not yet affected by the feedback flow), we include only the SPH particles that were within $0.3\leq R\leq 0.35$ kpc at $t=0$ kyr. The red and blue histograms show particles that originally have the highest and lowest densities, whilst the grey curve shows particles around the logarithmic mean density. Each of the histograms accounts for $\sim 10\%$ of the total number of particles within $0.3\leq R\leq 0.35$ kpc at $t=0$ kyr. Fig. \ref{vel-dist} demonstrates that the lowest density gas is accelerated to high radial velocities, with a mean of $\left<v_{\rm r}\right>\simeq 661$ km s$^{-1}$. In contrast, the highest density gas is, on average, continuing to infall, with a mean $\left<v_{\rm r}\right>\simeq -145$ km s$^{-1}$. The logarithmic mean density gas shows a variety of behaviours, from infall with velocities of a few hundred km s$^{-1}$ to outflow with a similar range in velocities. Also plotted are lines indicating the mean radial velocity of all of the gas originally in the $0.3\leq R\leq 0.35$ kpc region in the turbulent simulation ($\left<v_{\rm r}\right>\simeq 125$ km s$^{-1}$) and in the homogeneous control run. In the latter case the gas is accelerated to high velocities on average ($\left<v_{\rm r}\right>\simeq 563$ km s$^{-1}$), in a single spherical shell of swept-up material, whilst in the turbulent simulation the hot bubble can escape through the porous medium, and so much of the material does not get accelerated outwards. For the turbulent simulation, not only does the outflow fail to clear out the high-density material, but a large fraction of the low-density material is also left behind due to shielding by high-density material in front of it. \subsection{The column density perspective}\label{sec:column} Whilst Fig. \ref{vel-dist} highlights that gas of different densities is affected by the outflow differently, it also shows that there is an overlap in their radial velocities: some low-density gas is infalling whilst some high-density gas is outflowing. 
This behaviour may partially be due to gas at larger radii being shielded from the feedback by dense gas at smaller radii. To remove this self-shielding effect from our analysis somewhat, we consider the column density of the gas, calculated as the integral \begin{equation} \Sigma = \int_0^{R} dr\,\rho(r, \Theta, \phi)\;, \label{sigma_def} \end{equation} along the lines of sight (defined by the spherical coordinate angles $\Theta$ and $\phi$) from the centre of the galaxy. Fig. \ref{col-depth} shows the column density map as a function of the position on the sky as viewed from $R=0$. Only ambient gas located inside $R\leq 0.35$ kpc at $t=70.8$ kyr is taken into account in this analysis. The column density of the ambient gas, $\Sigma$, calculated in this way, varies by a factor of about 1000 in Fig. \ref{col-depth}. Fig. \ref{col-depth} also presents gas radial velocity information by showing contour lines for zero velocity gas (red). Material inside these contours has a negative radial velocity at this time. We can see that it is the gas with the highest $\Sigma$ that remains infalling, whilst gas with a low $\Sigma$ generally has positive radial velocities. The complex nature of gas dynamics in the turbulent simulation makes defining and analysing the exact dynamics of the gas difficult if not impossible, since the gas density changes during the simulation. Some of the gas may even switch phases when it cools or heats up. However, we can carry out an approximate analysis by considering the momentum equation for a clump, \begin{equation} \frac{d}{dt}\left(m_{cl}v_{cl}\right)=\pi r_{cl}^{2}P_{\rm ram} -\frac{Gm_{cl}M(R)}{R^{2}}, \label{mom-eq} \end{equation} where $r_{cl}$, $m_{cl}$ and $v_{cl}$ are the clump's radius, mass and radial velocity, respectively, $P_{\rm ram}$ is the hot bubble's ram pressure acting on the clump, $R$ is the radial position of the clump and $M(R)$ is the mass of material within $R$. Making the assumption that $m_{cl}$ and $r_{cl}$ remain approximately constant, we can divide through by $m_{cl}$ and re-write equation \ref{mom-eq} as \begin{equation} a_{cl}=\frac{P_{\rm ram}}{\Sigma_{cl}}-a_{grav}, \label{mom-acc} \end{equation} where $a_{cl}$ and $a_{grav}$ are the clump's acceleration and gravitational acceleration, respectively, and $\Sigma_{cl}=m_{cl}/\pi r_{cl}^{2}$ is the column density of the clump. The ram pressure of the hot gas cannot be predicted exactly by the analytical model, but we assume that hot gas streams out of its initial spherical configuration at approximately the sound speed of the hot gas through numerous ``holes'' in the cold ambient gas distribution. This argument suggests that, to order of magnitude, $P_{\rm ram}$ should be comparable to the initial isotropic pressure of the hot gas, $P$. When $\Sigma_{cl}\ll P/a_{grav}$, the driving force of the bubble dominates over gravity and we can neglect the $a_{grav}$ term in equation \ref{mom-acc}; integrating then gives an estimate for a clump's velocity at time $t$, \begin{equation} v(t)=\frac{P_{\rm ram}}{\Sigma_{cl}}t + v(0). \label{mom-acc2} \end{equation} Setting $v(t)=0$ we can define a critical column density, \begin{equation} \Sigma_{crit}(t) = \frac{P_{\rm ram}}{|v(0)|}t, \label{sig-crit} \end{equation} such that only material with $\Sigma > \Sigma_{\rm crit}$ should still be infalling at time $t$, whereas lines of sight with $\Sigma < \Sigma_{\rm crit}$ may be launched in an outflow. 
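As a minimal numerical illustration of equations \ref{mom-acc2} and \ref{sig-crit}, consider the following sketch; the values of $P_{\rm ram}$ and $v(0)$ used below are round-number assumptions of ours (of order the initial bubble pressure and a typical infall speed), not quantities measured from the simulation. \begin{verbatim}
KYR = 3.156e10                       # seconds per kyr

def sigma_crit(p_ram, v0, t):
    # Sigma_crit(t) = P_ram * t / |v(0)|   (cgs; g cm^-2)
    return p_ram * t / abs(v0)

def clump_velocity(p_ram, sigma_cl, v0, t):
    # v(t) = (P_ram / Sigma_cl) * t + v(0), gravity neglected
    return p_ram / sigma_cl * t + v0

# Assumed P_ram ~ 5e-6 erg cm^-3 and v0 ~ -300 km/s give
# Sigma_crit ~ 0.37 g cm^-2 at t = 70.8 kyr, close to the value
# quoted in the text below.
print(sigma_crit(5.0e-6, -3.0e7, 70.8 * KYR))
\end{verbatim}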
Using the mean radial velocity of gas particles at $t=0$ for $v_{0}$, we find $\Sigma_{\rm crit}\gtrsim 0.36$ g cm$^{-2}$ at $t\sim 70.8$ kyr. Black contours in Fig. \ref{col-depth} show the lines of sight where $\Sigma = \Sigma_{\rm crit}$. We see that there is a close agreement between the red (zero velocity) contours and the black contour lines, suggesting that the approximate analysis based on equation \ref{mom-acc} does have a certain merit to it. This could be expected from theoretical studies of how a single dense gas cloud is affected by a hot bubble \citep[e.g.,][]{McKeeCowie75} and from N14. The column density of a cloud, $\Sigma$, is roughly the product of the mean cloud density, $\rho_{\rm cl}$, and the physical size of the cloud, $r_{\rm cl}$. Therefore, a dense but physically small (small $r_{\rm cl}$) cloud may have a smallish $\Sigma$; it is then accelerated to a significant radial velocity by the UFO, and hence may be completely destroyed, despite being dense. A dense and large (large $r_{\rm cl}$ and $\Sigma$) cloud, on the other hand, may both withstand the onslaught from the hot bubble and continue to infall. There are a few caveats to this approach for comparing $\Sigma$ and the expected radial velocity. The high-$\Sigma$ regions shown in the plot can only be considered an estimate for the high-density material, as they are calculated based upon the entire contribution of material along a particular line of sight out to $R\leq 0.35$ kpc. This leads to potentially over(under)estimating $\Sigma$ if the clump extends to radii that are less (greater) than $0.35$ kpc. Further, we use an average value for $v_{0}$ and assume that the column density of the clump remains approximately constant over the time period considered. Therefore the estimate here should only be considered a rough illustration of the interaction of the high-density clumps with the expanding bubble, and not an exact solution, which would require a far more detailed analysis than is necessary for the purposes of this paper. \begin{figure} \vskip - 0.5cm \psfig{file=col_depth_cool.pdf,width=0.5\textwidth} \caption{Column density of ambient gas at $R\leq 0.35$ kpc at $t=70.8$ kyr, as viewed from the position of the sink particle. Also plotted are contour lines for zero velocity gas (red) and for $\Sigma = \Sigma_{\rm crit} = 0.36$ g cm$^{-2}$ (black), which is analytically predicted to have zero velocity at this time. Note that the two contour lines coincide over most of the plot.} \label{col-depth} \end{figure} \subsection{Time evolution of the outflow}\label{sec:phases} \begin{figure} \vskip - 0.5cm \psfig{file=bubble-time-flow-na.pdf,width=0.5\textwidth} \caption{Time evolution of the change in mean radial position (top) and change in mean radial velocity (bottom) of gas in the homogeneous (H1, red) and turbulent (T1, blue) runs. In the latter case the gas is further divided into low (dotted), intermediate (dashed) and high (dot-dashed) density material.} \label{btf} \end{figure} So far we have only shown properties of the system at specific moments in time; however, a consideration of the time evolution of the system is also important. Fig. 
\ref{btf} shows the time evolution of the change in mean radial position, $\Delta R=\overline{R}(t)-\overline{R}(0)$ (top), and the change in mean radial velocity, $\Delta v = \overline{v}(t) - \overline{v}(0)$ (bottom), for ambient gas particles initially at $R\le 0.35$ kpc (these particles are chosen to avoid other complicating factors such as the shielding of low-density gas). The solid red and blue lines in these figures are taken from the homogeneous simulation H1 and the turbulent simulation T1, respectively. Also shown in each of the panels of Fig. \ref{btf} are three blue lines calculated from the data of the turbulent simulation T1, showing the change in mean radial position (top) and mean radial velocity (bottom) for low (dotted), intermediate (dashed) and high (dot-dashed) density gas. We apply fixed density thresholds set at the values used in Fig. \ref{vel-dist} earlier; however, unlike in Fig. \ref{vel-dist}, where particles are grouped based upon their original density, here the particles are grouped based upon their density at time $t$. Both the change in mean radial position and the change in mean radial velocity demonstrate again that the low-density gas is affected by the hot bubble much more strongly than the high-density gas. Both panels of Fig. \ref{btf} show a certain reduction in the difference between the three density groups as time goes on, which is due to (a) mixing between the two phases with time, and (b) the fact that the bubble energy is not replenished in our simulation. \subsection{Decoupling of energy and mass flow}\label{sec:decoupling} In the homogeneous control simulation H1, both mass and energy are flowing outward as the bubble expands. The situation is bound to be far more interesting in the case of the turbulent simulation T1, since we saw in Section \ref{sec:density} that there is both an inflow and an outflow at the same time. Furthermore, since the different phases have widely different radial velocities and temperatures, the overall direction of the flow of mass and energy is not obvious. \begin{figure} \vskip - 0.5cm \psfig{file=in_out_act.pdf,width=0.5\textwidth} \caption{Radial flows of energy, $\dot{E}$ (top panel), and mass, $\dot{M}$ (bottom panel), for gas that is either inflowing (blue) or outflowing (red) at time $t=283$ kyr in the simulation T1.} \label{meflux} \end{figure} \begin{figure*} \vskip - 0.5cm \psfig{file=dE-rho0-035.pdf,width=0.8\textwidth} \psfig{file=dE-rho1-035.pdf,width=0.8\textwidth} \caption{Particle distribution plot of the absolute change in specific energy between $t=0$ and $189$ kyr ($\Delta{e}$) against original gas density (top) and current gas density (bottom). Contours indicate gas that has lost energy (blue) or gained energy (red). The density axis has been collapsed into a one-dimensional mass histogram above each panel, whilst the energy axis has been collapsed into a one-dimensional histogram weighted by $\Delta{e}$ to the right of each panel.} \label{dE-rho} \end{figure*} To analyse these flows we define the rates of mass and energy flow in a given radial bin of width $\Delta{r_{bin}}$, respectively, as \begin{eqnarray} \dot{M} = \displaystyle\sum{\frac{m_{sph}v_{r}}{\Delta{r_{bin}}}},\\ \dot{E} = \displaystyle\sum{\left[\frac{1}{2}v^{2}+\frac{3}{2}\frac{k_{B}T}{\mu m_{p}}\right]\frac{m_{sph}v_{r}}{\Delta{r_{bin}}}}\;. \end{eqnarray} The SPH particles in these sums are selected based on criteria placing them in one or the other phase or group (see below). 
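In code, these binned estimators amount to the following sketch (the array names are illustrative rather than those of our analysis scripts; in practice the sums are evaluated separately for inflowing, $v_{r}\le -\sigma/2$, and outflowing, $v_{r}\ge \sigma/2$, particles, which corresponds to masking the input arrays first): \begin{verbatim}
import numpy as np

def binned_flows(r, v_r, speed, T, m_sph, r_bins, mu=0.63):
    # Mdot = sum(m v_r)/dr and Edot = sum(e m v_r)/dr per radial bin,
    # with specific energy e = v^2/2 + (3/2) k_B T / (mu m_p); cgs units.
    k_B, m_p = 1.3807e-16, 1.6726e-24
    e = 0.5 * speed**2 + 1.5 * k_B * T / (mu * m_p)
    dr = np.diff(r_bins)
    idx = np.digitize(r, r_bins) - 1
    mdot = np.zeros(dr.size)
    edot = np.zeros(dr.size)
    for i in range(dr.size):
        sel = idx == i
        mdot[i] = m_sph * np.sum(v_r[sel]) / dr[i]
        edot[i] = m_sph * np.sum(e[sel] * v_r[sel]) / dr[i]
    return mdot, edot
\end{verbatim}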
In a steady-state, spherically symmetric flow, these definitions would include all of the SPH particles in a bin, and would then give the total mass and energy flux rates as a function of position in the flow. In the homogeneous control run, the energy and mass flows are dominated by outflowing material, but only within the radius of the swept-up shell; beyond this there is no outward $\dot{E}$ and $\dot{M}$, while the inward values are negligibly small. Fig. \ref{meflux} shows $\dot{E}$ (top) and $\dot{M}$ (bottom) for inflowing ($v_{r}\le -\sigma /2$, blue) and outflowing ($v_{r}\ge\sigma/2$, red) material in the turbulent simulation T1, binned radially at $t=283$ kyr. Both panels show that, unlike in the spherically symmetric situation (simulation H1), there are outflows and inflows of mass and energy at all radii in the clumpy simulation T1. Interestingly, the energy flow is dominated by the material streaming outward, which we identify with the hot low-density gas based on our earlier analysis, whereas the mass flow is mainly inward and is dominated by the high-density gas. This shows that {\it energy and mass flows separate from one another in turbulent flows.} Unlike in the spherically symmetric homogeneous case, energy does not necessarily flow where most of the mass does. To analyse this energy-mass decoupling further, we define the absolute change in the specific energy of SPH particles as \begin{equation} |\Delta{e}| = \left|\frac{1}{2}\left(v^{2}-v_{0}^{2}\right)+\frac{3}{2}\frac{k_{B}}{\mu m_{\rm p}} \left(T-T_{0}\right)+G\frac{M_{a}}{a}\ln\left({\frac{R}{R_{0}}}\right)\right|, \label{e_specific} \end{equation} where the terms on the right-hand side are the changes in specific kinetic, internal and gravitational potential energy, respectively (note that we only include the gravity due to the underlying potential; the logarithmic form of the last term follows from equation \ref{MRDM}, for which $\Phi(R)=(GM_{\rm a}/a)\ln R + {\rm const}$). $v$, $T$ and $R$ are the velocity, temperature and radial position of each particle, respectively, with the subscript $0$ indicating the initial value of each of these parameters. Fig. \ref{dE-rho} shows the absolute change in SPH particle specific energy ($|\Delta e|$) between $t=0$ and $189$ kyr versus the gas density at the initial time (top panel), and, alternatively, versus the gas density at $t=189$ kyr (bottom panel). Contours indicate gas that has lost energy (blue) or gained energy (red). The density axis has been collapsed into one-dimensional mass histograms, located at the top of each plot, whilst the energy axis has been collapsed into one-dimensional histograms weighted by $\Delta{e}$, located to the right of each plot. As before (e.g., Fig. \ref{vel-dist}), only particles within $R=0.35$ kpc at $t=0$ are selected for this analysis, to minimize complications due to gas self-shielding. Since gas in simulation T1 is initially infalling due to our initial conditions, so that the radial velocity $v_r<0$, particles that lose specific energy (blue colour in Fig. \ref{dE-rho}) correspond to particles that are, in general, only moderately affected by the hot bubble. The radial velocity of such particles is either still negative, but less so than initially, or has a small positive value. On the other hand, particles with a positive energy change (red) are, as a rule, particles that are now outflowing with a larger positive $v_r$. Focusing on the 1D mass distributions above the corresponding panels, we observe from the figure that most of the mass is in the blue gas, which is on average denser than the red (outflowing) gas. 
At the same time, the 1D energy distributions to the right of each panel show that most of the energy is in the red SPH particles, so that, consistent with Fig. \ref{meflux}, energy resides mainly in the low-density outflowing particles. The low-density tail of the distribution of the red particles in the bottom panel shows that the energy gained by the outflowing particles may be about two orders of magnitude higher than the energy change of the blue dense particles. Further, comparing the top and the bottom panels, we see that the low-density outflowing gas tail in the bottom panel had, on average, a higher density at time $t=0$. This gas is initially moderately dense but has been ablated from the surfaces of the clouds and launched in the outflow by the hot bubble. The SPH particles in the blue part of the distribution had their density increased by a factor of several. The hot bubble thus compresses most of the dense gas by a factor of at least a few. This is consistent with the earlier results of \citet{NZ12} (see also \citealt{SilkNorman09}) showing that AGN outflows may in fact trigger star formation in dense cold gas by compressing it to very high densities. \section{Discussion} \subsection{Feedback on a homogeneous versus a multiphase ISM} We have studied the impact of a thermalized UFO launched by a rapidly accreting SMBH (modelled as a hot bubble) on the ambient gas of the host galaxy in two contrasting limits. In the first, the ambient gas is initially homogeneous and spherically symmetric, whereas in the second limit it is highly inhomogeneous due to an initially imposed turbulent velocity field. In broad agreement with previous work (W12, W13, N14 and ZN14), we find marked differences in the outcome of this interaction. We find that the homogeneous spherically symmetric ambient gas is driven outward by the hot bubble much in the same way as described by the energy-conserving analytical models of AGN feedback \citep[e.g.,][]{King03,king05,King10b,ZK12a,FQ12a}. In such models the ambient gas is only driven away if the feedback is sufficiently strong and the weight of the medium sufficiently small. In stark contrast to this, the turbulent clumpy ISM cannot easily be described in 1D language. Because of the large density contrast between the different phases of the ISM, there is simultaneously inflowing and outflowing gas streaming throughout the host galaxy. The cold dense medium is affected by the UFO significantly less than the analytic models quoted above assume, because the medium is overtaken by the UFO rather than being pushed in front of it. We find that some high-density clumps continue to move inward while the hot bubble fizzles out through low-density `pores' and accelerates the low-density phase of the ISM to high outward velocities. Analysis of this behaviour shows that the cold dense phase gets an initial kick from the pressure of the bubble before it is overtaken, after which the driving force acting on the clump diminishes. Another important result found here is a divergence in the directions in which most of the mass and energy flow in a turbulent ISM. While most of the mass is flowing inward, carried by the cold dense clouds, which continue to infall despite AGN feedback, most of the UFO energy manages to percolate through the ambient ISM and flow outward through the bulge. 
\subsection{Pertinence to the $M-\sigma$ relation}\label{sec:speculation} Overall, our results suggest that the establishment of the $M-\sigma$ relation is a much more complicated process than in spherically symmetric models \citep[e.g.,][]{SilkRees98,Fabian99,King03}. In such models, the $M-\sigma$ mass divides two very different regimes. SMBHs below the $M-\sigma$ mass are unable to drive the gas outward beyond a small radius (tens to a few hundred pc, depending on the BH mass and the host velocity dispersion). It is only once the SMBH exceeds the $M-\sigma$ mass that the outflow is able to overcome the weight of the ambient gas in the galaxy and clear {\it all of the host} of its gas. This paints an all-or-nothing picture of AGN feedback (above or below the $M-\sigma$ mass, respectively). The picture of AGN feedback changes radically if the ISM in the host is multiphase. There are no longer two different regimes with a sharp boundary (the $M-\sigma$ mass) between them: at any SMBH mass there may be an inflow and an outflow of gas at the same location in the host and at the same time. This must dilute the meaning of the $M-\sigma$ mass because, on the one hand, ``underweight'' SMBHs, i.e., those below the $M-\sigma$ mass, do have an influence on the host galaxy even on large scales. Since the hot gas propagates outward by finding and following the paths of least resistance, the low-density phase at all radii in the host is vulnerable to AGN feedback. On the other hand, the high-density medium is more resilient to SMBH feedback than could be thought based on spherically symmetric models, because the medium is overtaken by the UFO rather than being pushed in front of it. N14 and ZN14 proposed that this unexpected resilience of the host gas to AGN feedback explains how SMBHs manage to grow to the momentum-limited $M_\sigma$ masses \citep{King03} rather than the energy-limited ($\sim 100$ times lower) masses. One speculation arising from these results is that a tight $M-\sigma$ relation could actually never be established in an ensemble of {\it isolated} galaxies, and that mergers of galaxies are crucial to the emergence of the observed relations. On the basis of the results presented here and in ZN14, we argue that there are simply too many factors determining the SMBH interaction with the host galaxy (the ISM structure, the angular momentum of the gas, etc.), and that therefore one should expect a very significant spread in any SMBH-host relation based on {\it a single episode} of galaxy and SMBH growth. It is likely that the averaging occurring during mergers of galaxies \citep[the central limit theorem applied to mergers, see e.g.,][]{JahnkeMaccio11} largely erases this significant spread, leading to a tight $M-\sigma$ relation at low redshift. This view is consistent with the facts that the observed SMBH-host scaling relations are only tight for classical bulges and ellipticals, that the scatter in such relations decreases towards higher masses, and that SMBH--host relations have larger scatter at high redshifts \citep{KormendyHo13}. \subsection{Induced star formation and stellar feedback} Comparing the top and the bottom panels of Fig. \ref{dE-rho}, or the blue curves in the horizontal histograms above these panels, we see that the mean density of the dense material increases with time in the simulation T1. 
Because of this density increase, a small number of star particles are formed in our simulation, in broad agreement with the earlier suggestions of AGN-induced star formation \citep{NZ12} (see also \citealt{SilkNorman09}). Such {\it positive} AGN feedback is likely unresolved in cosmological simulations. \subsection{Comparison with other work} Of the previous literature, our work is most similar in spirit to W12 and W13, with a number of similar conclusions. One difference, however, is that W13 find that the dense clouds are heated strongly and accelerated outwards as a result of the feedback (albeit more slowly than the hot phase). In our work, inflows occur {\it despite} the feedback. The response of the cold phase to the UFO is strongly dependent on the initial conditions of that phase and on the physics included in the simulation. In W13, radiative cooling below a temperature of $10^4$~K is turned off, which clearly limits the highest densities that could be reached by the cold phase under the external compression by the hot medium. In our simulations, the self-gravity of the clouds is an important factor in ensuring the integrity of the clouds when they are hit by the UFO. W12 and W13, on the other hand, do not include the self-gravity of the gas, and the initial densities of the clouds appear to be comparable to the tidal densities at the clouds' locations. In our opinion, the cold clouds in W12 and W13 are therefore left susceptible to shear from the gravitational potential and defenceless against the hydrodynamic forces exerted by the UFO. In any event, we believe that neither our study nor the previous work gives complete and {\it quantitatively} definitive answers on the interaction of the UFO with the clumpy turbulent medium of the host galaxy. Future simulations should focus on modelling the physical properties of the ISM with greater realism, in particular including star formation and its feedback (which we did not include here). \subsection{Implications for cosmological simulations}\label{sec:cosm} Cosmological simulations \citep[e.g.,][]{DiMatteo08, Schaye10, Dubois12} often invoke AGN feedback in order to reproduce observed relationships such as the galaxy luminosity function. In this sense AGN provide a source of negative feedback, and the sub-grid prescriptions employed therefore act to inhibit star formation and eject gas from a galaxy. This is normally achieved through heating or ``kicking'' gas local to the black hole. Such simulations, which by necessity balance on the edge of what is numerically achievable, are unable to resolve the multiphase ISM. It is likely that any feedback would be acting on a single-phase medium. The heterogeneous effects that feedback has on the different phases of a multiphase ISM, illustrated in our simulations, may then be lost due to numerical limitations. The extent to which this poses a problem depends upon the exact nature of the multiphase ISM \citep{wagner12} and upon the problem that one wishes to investigate with the cosmological simulations. With regard to reproducing large-scale observational trends, such as the galaxy luminosity function or the $M-\sigma$ relation, the subgrid models employed by cosmological simulations may be sufficient. However, as shown in this paper, the exact nature of the ISM does impact how AGN feedback couples to the ambient gas in a galaxy. In our simulations, the cold dense phase is mainly affected by the ram pressure (momentum) of the UFO, whereas the low-density phase bears the brunt of the UFO's energy content. 
In contrast, widely used AGN feedback models \citep[e.g.,][]{DiMatteo08, Dubois12} tend to neglect the physical state of the gas and instead focus on the proximity of the gas to the SMBH. Even though cosmological simulations are currently unable to resolve the ISM, there may still exist material with a range of physical properties close to the black hole. It is therefore likely that the robustness of cosmological simulations could be improved by a set of prescriptions that incorporate the physics highlighted by our simulations. Similarly, semi-analytical models \citep[e.g.,][]{BowerEtal06} may benefit from including an energy-leaking prescription for the hot bubble (see ZN14). \section{Conclusion} We have studied the impact of a thermalized UFO (modelled as a hot bubble) launched by a rapidly accreting SMBH on the ambient gas of the host galaxy in two contrasting limits. In the first, the ambient gas is initially homogeneous and spherically symmetric, whereas in the second limit it is highly inhomogeneous due to an initially imposed turbulent velocity field. In broad agreement with previous work (W12, W13, N14 and ZN14), we find marked differences in the outcome of this interaction. In particular, most of the UFO's energy escapes via low-density channels in the clumpy ISM, which drastically reduces the impact of the UFO on the dense cold phase that contains most of the ambient gas in the host galaxy. We conclude that the state of the ISM in a galaxy is just as important as the AGN feedback model invoked in determining how AGN feedback interacts with the ambient medium. Given the complexity of these processes, the meaning of the $M-\sigma$ mass becomes much less well defined than in spherically symmetric analytic models \citep[e.g.,][]{SilkRees98,Fabian99,King03}. In the latter, SMBHs below the $M-\sigma$ mass are unable to `clear' their host galaxies and hence continue to grow, whereas SMBHs above this mass terminate their own and their host's growth by expelling all the gas. In a turbulent ISM, there may be outflows -- of the low-density phase -- at $M_{\rm BH} \ll M_\sigma$, but there could also be inflows -- of the high-density phase -- at $M_{\rm BH} \gg M_\sigma$. We therefore concluded in \S \ref{sec:speculation} that it is hard to see how tight SMBH-host correlations could arise in an ensemble of {\it isolated} galaxies, and that mergers of galaxies must be crucial to the emergence of the observed relations. The interesting question arising from this, then, is to what extent the observed correlations can be attributed to AGN feedback physics, and to what extent they are due to the central limit theorem \citep{JahnkeMaccio11}. \section*{Acknowledgements} We acknowledge an STFC grant and STFC research studentship support. We thank Hossam Aly, Alex Dunhill and Kastytis Zubovas for useful discussions, and Justin Read for the use of SPHS. This research used the ALICE High Performance Computing Facility at the University of Leicester and the DiRAC Complexity system, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grant ST/K000373/1 and STFC DiRAC Operations grant ST/K0003259/1. DiRAC is part of the UK National E-Infrastructure. Figs. \ref{time-evo-uniform} and \ref{time-evo-rho} were produced using SPLASH \citep{Price07}. \bibliographystyle{mnras}
\section{Introduction} During the last decades, applications of foliations to theoretical physics have increased considerably \cite{Do}. In the sixties, J. M. Souriau introduced foliations associated to elementary particles to study their evolution in Minkowski space-time \cite{Soa}. Later, the use of foliated manifolds has provided very good results in relativity and quantum mechanics \cite{Mo}. For instance, the symplectic bundle structure allows us to enclose a space-time, a dynamical system and its evolution space in the same mathematical structure. In this way, a foliation describes the evolution of the dynamical system \cite{Gu,Lib}. These facts have motivated us to study some general properties of foliations. In this paper, we analyze distributions and foliations that remain invariable under parallel transport. If a foliation is conserved by parallel transports along the integral curves of its vector fields, then this foliation satisfies a {\it motion law}; in this case it can be proved that its leaves are totally geodesic \cite{Lia,Lic}. However, we can obtain more general properties using parallel transports along the integral curves of vector fields of another foliation. For example, if a foliation is conserved by parallel transports along the world lines of a congruence of observers, then they observe the leaves of the foliation as invariable along their evolution, a situation that is interesting to study. According to this idea, we define a new concept of {\it stability} between foliations in Section \ref{sec1}. A particular case of stability (called {\it regular stability}) is studied in Section \ref{sec2}, giving a useful characterization in Theorem \ref{T1}. This result allows us to prove that there are no {\it regularly self-stable} foliations of dimension greater than $1$ in Schwarzschild and Robertson-Walker space-times, but there exist foliations of this kind in other space-times. Finally, in Section \ref{sec3}, we study the existence of regularly self-stable foliations in $pp$-wave space-times. \section{Stability} \label{sec1} We work on an $n$-dimensional space-time manifold $\mathcal{M}$ (although all results and proofs can be generalized to any manifold with a torsion-free metric connection) and we denote the Levi-Civita connection by $\nabla $. We use the convention that $\hbox{\rm span}\left( X_1,\ldots ,X_p\right) $ denotes the subbundle spanned by the vector fields $X_1,\ldots ,X_p$, and it is called a \textit{distribution}. Usually, a distribution of dimension $p$ is called a $p$-distribution. All bases of distributions are local. A distribution that has an integral submanifold (leaf) through every point is a {\it foliation}. We say that a foliation is a \textit{flat foliation} if its leaves are flat submanifolds, and we say that a foliation is a \textit{totally geodesic foliation} if its leaves are totally geodesic submanifolds. In previous works \cite{Lib,Lia,Lic}, the concept of a {\it motion law} was introduced using foliations: let $\Omega $ be a foliation, $X$ a vector field of $\Omega $, $c$ a maximal integral curve of $X$ and \[ \tau _{t}^{c}:T_{c\left( 0\right) }\mathcal{M}\longrightarrow T_{c\left( t\right) }\mathcal{M} \] the parallel transport along $c\left( t\right) $, for all $t\in I$, where $I$ is the domain of $c$. Then, $\Omega $ satisfies a {\it motion law} if \[ \tau _{t}^{c}\Omega \left( c\left( 0\right) \right) =\Omega \left( c\left( t\right) \right) ,\quad t\in I. \] This motion law is equivalent to saying that $\Omega $ is a totally geodesic foliation. 
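A trivial illustration may help fix ideas (the example is ours, but the facts involved are standard): in Minkowski space-time with the flat connection, the foliation by parallel affine $p$-planes, $\Omega =\hbox{\rm span}\left( \partial _{1},\ldots ,\partial _{p}\right) $ in inertial coordinates, satisfies $\nabla \partial _{i}=0$, so parallel transport along any integral curve $c$ preserves the frame $\left\{ \partial _{i}\right\} $ and hence $\tau _{t}^{c}\Omega \left( c\left( 0\right) \right) =\Omega \left( c\left( t\right) \right) $; consistently, its leaves are totally geodesic (indeed flat) submanifolds. 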
Intuitively, the curvature of the leaves has to ``adapt'' to the curvature of the space-time. In Definition \ref{def1} we show how to generalize this intuitive idea. \begin{definition} \label{def1} Let $\Omega ,\Omega '$\ be two distributions. We will say that $\Omega $ is {\it stable with respect to} $\Omega '$, and we will denote it by $\nabla _{\Omega '}\Omega \subset \Omega $, if \[ \nabla _{Y}X\in \Omega \] for all vector fields $X\in \Omega $, $Y\in \Omega '$. In particular, if $\Omega =\Omega '$ we will say that $\Omega $ is {\it self-stable}. \end{definition} Clearly, a distribution $\Omega $ is self-stable if and only if it is a totally geodesic foliation. Note that if $\Omega $ is a self-stable distribution, then $\left[ X,Y\right] =\nabla _{X}Y-\nabla _{Y}X\in \Omega $ for all $X,Y\in \Omega $. So, $\Omega $ is involutive and hence, by Frobenius' Theorem, it is integrable. In consequence, a self-stable distribution is in fact a totally geodesic foliation. In order to know whether $\Omega $ is stable with respect to $\Omega '$ it is sufficient to check that, given $\left\{ X_{i}\right\} _{i=1}^{p},\left\{ Y_{j}\right\} _{j=1}^{q}$ some arbitrary bases of $\Omega $\ and $\Omega '$\ respectively, the following conditions hold: \begin{equation} \nabla _{Y_{j}}X_{i}\in \Omega ,\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j=1,\ldots ,q. \end{array} \right. \label{cbases} \end{equation} Moreover, conditions (\ref{cbases}) show that any span of vector fields of $\Omega $ is conserved by parallel transports along the integral curves of vector fields of $\Omega '$. Sometimes it is easier to deal with the orthogonal distribution of $\Omega $ (denoted $\Omega ^{\bot }$) instead of dealing with $\Omega $. In these cases, Proposition \ref{p100} is very useful. \begin{proposition} \label{p100} Let $\Omega ,\Omega ^{\prime}$ be two distributions. Then $\Omega $ is stable with respect to $\Omega ^{\prime}$ if and only if $\Omega ^{\bot }$ is stable with respect to $\Omega ^{\prime}$; {\it i.e.} \[ \nabla _{\Omega ^{\prime}}\Omega \subset \Omega \Longleftrightarrow \nabla _{\Omega ^{\prime}}\Omega ^{\bot }\subset \Omega ^{\bot }. \] \end{proposition} \begin{proof} It is known \cite{He} that for every triplet of vector fields $X,Y,Z$ in a pseudo-Riemannian manifold $\mathcal{M}$ with metric $g$ and connection $\nabla $, we have \begin{equation} Zg\left( X,Y\right) =g\left( \nabla _{Z}X,Y\right) +g\left( X,\nabla _{Z}Y\right) . \label{l2.22} \end{equation} Necessary condition: let $\Omega ,\Omega '$ be two distributions such that $\nabla _{\Omega '}\Omega \subset \Omega $. Given three arbitrary vector fields $X\in \Omega $, $Y\in \Omega ^{\bot }$, $Z\in \Omega '$, we have $g\left( X,Y\right) =0$ and $\nabla _{Z}X\in \Omega $, so by (\ref{l2.22}) we obtain $0=g\left( X,\nabla _{Z}Y\right) $. Since $X\in \Omega $ is arbitrary, $\nabla _{Z}Y\in \Omega ^{\bot }$, and then $\nabla _{\Omega '}\Omega ^{\bot }\subset \Omega ^{\bot }$. The proof of the sufficient condition is analogous. \end{proof} Proposition \ref{p100} says that $\Omega $ and $\Omega ^{\bot }$ have the same behaviour in relation to stability. So, given a distribution $\Omega $, we can study the stability of $\Omega $ through the stability of $\Omega ^{\bot }$. This is very useful when $\Omega $ is an $\left( n-1\right) $-distribution, since $\Omega ^{\bot }$ is then a $1$-distribution and the study of stability becomes easier. Moreover, if $\Omega $ is a lightlike $\left( n-1\right) $-distribution, then $\Omega ^{\bot }$ is the span of a lightlike vector field of $\Omega $. 
In this particular case, the leaves of $\Omega $ are interpreted as wave fronts and the integral curves of $\Omega ^{\bot }$ represent the world lines of a congruence of massless particles. Hence, Proposition \ref{p100} can be regarded as a ``wave-particle duality'' result. \section{Regular stability} \label{sec2} We are going to introduce a special kind of stability, called {\it regular stability}. \begin{definition} Let $\Omega ,\Omega '$\ be two distributions. We will say that $\Omega $ is {\it regularly stable with respect to} $\Omega '$, and we will denote it by $\nabla _{\Omega '}\Omega =0$, if there exists a basis $\left\{ X_{i}\right\} _{i=1}^{p}$\ of $\Omega $ such that \[ \nabla _{Y}X_{i}=0\qquad i=1,\ldots ,p, \] for every vector field $Y\in \Omega '$. In this case we will say that $\left\{ X_{i}\right\} _{i=1}^{p}$\ is a {\it regularly stable basis of} $ \Omega $ {\it with respect to} $\Omega '$. In particular, if $\Omega =\Omega ^{\prime}$ we will say that $\Omega $ is {\it regularly self-stable} and $\left\{ X_{i}\right\} _{i=1}^{p}$ is a {\it regularly self-stable basis of} $\Omega $. \end{definition} Given a regularly stable basis of $\Omega $ with respect to $\Omega '$, its vector fields are conserved by parallel transports along the integral curves of vector fields of $\Omega '$; note, however, that only some bases of $\Omega $ have this property. It is clear that any subset of vector fields of a regularly self-stable basis spans a regularly self-stable foliation and, obviously, it is a regularly self-stable basis of this foliation. In particular, in dimension $1$, we obtain that the vector fields of a regularly self-stable basis are geodesic. To illustrate the concept of regular stability, let us consider the following example. \begin{example} In spherical coordinates $\left( t,r,\theta ,\varphi \right) $ the Schwarzschild and Robert\-son-Walker metrics are given by \[ ds^{2}=\frac{1}{a_{\mathrm{S}}}dr^{2}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) -a_{\mathrm{S}}dt^{2}, \] \[ ds^{2}=\frac{F^{2}}{a_{\mathrm{RW}}^{2}}\left( dr^{2}+r^{2}\left( d\theta ^{2}+\sin ^{2}\theta d\varphi ^{2}\right) \right) -dt^{2}, \] respectively, where $a_{\mathrm{S}}:=1-\frac{2m}{r}$, $F=F\left( t\right) >0$, $a_{\mathrm{RW}} :=\left( 1+\frac{1}{4}kr^{2}\right) $ and $k=-1,0,1$. Let us consider the $2$-foliations \[ \Omega :=\hbox{\rm span}\left( \frac{\partial }{\partial \theta },\frac{\partial }{\partial \varphi }\right) ,\quad \Omega ':=\hbox{\rm span}\left( \frac{\partial }{\partial r},\frac{\partial }{\partial t}\right) . \] The leaves of $\Omega $ are surfaces with $r$ and $t$ constant ({\it i.e.} spatial 2-spheres centered at the origin), and the leaves of $\Omega '$ are surfaces with $\theta $ and $\varphi $ constant. It is easy to prove that $\nabla _{\Omega '}\Omega =0$ in both space-times. The following bases of $\Omega $ \[ \left\{ \frac{1}{r}\frac{\partial }{\partial \theta },\quad \frac{1}{r}\frac{\partial }{\partial \varphi }\right\} ,\quad \left\{ \frac{a_{\mathrm{RW}}}{Fr}\frac{\partial }{\partial \theta },\quad \frac{a_{\mathrm{RW}}}{Fr}\frac{\partial }{\partial \varphi }\right\} , \] are regularly stable with respect to $\Omega '$ in the Schwarzschild and Robertson-Walker space-times respectively. \end{example} Next, we study the relationships between two regularly stable bases of the same distribution $\Omega $.
\begin{proposition} \label{stat} Let $\Omega ,\Omega '$ be two distributions such that $\nabla _{\Omega '}\Omega =0$, and let $\left\{ X_{i}\right\} _{i=1}^{p}$ be a regularly stable basis of $\Omega $ with respect to $\Omega '$. Then, $\left\{ \overline{X}_{i}\right\} _{i=1}^{p}$ is another regularly stable basis of $\Omega $ with respect to $\Omega '$ if and only if there exists a family of functions $\left\{ \alpha _{i}^{j}\right\} _{i,j=1}^{p}$ such that \begin{itemize} \item $\det \alpha _{i}^{j}\neq 0$, \item $\overline{X}_{i}=\alpha _i^jX_j$ for all $i=1,\ldots ,p$, \item $Y\left( \alpha _{i}^{j}\right) =0$, for all $i,j=1,...,p$, and for all $Y\in \Omega '$ ({\it i.e.} $\left\{ \alpha _{i}^{j}\right\} _{i,j=1}^{p}$ is a family of constant functions for $\Omega '$). \end{itemize} \end{proposition} \begin{proof} Necessary condition: let us suppose that $\left\{ \overline{X}_{i}\right\} _{i=1}^{p}$ is a regularly stable basis of $\Omega $ with respect to $\Omega '$. Then, it is clear that there exists a family of functions $\left\{ \alpha _{i}^{j}\right\} _{i,j=1}^{p}$ such that $\det \alpha _{i}^{j}\neq 0$, and $\overline{X}_{i}=\alpha _i^jX_j$ for all $i=1,\ldots ,p$. Let $Y$ be an arbitrary vector field in $\Omega '$. Then \begin{equation} 0=\nabla _{Y}\overline{X}_{i}=\nabla _{Y}\left( \alpha _{i}^{j}X_{j}\right) =Y\left( \alpha _{i}^{j}\right) X_{j}+\alpha _{i}^{j}\nabla _{Y}X_{j},\qquad i=1,...,p. \label{fp1} \end{equation} Since $\nabla _{Y}X_{j}=0$ for all $j=1,...,p$, by (\ref{fp1}) we have $Y\left( \alpha _{i}^{j}\right) X_{j}=0$ for all $i=1,...,p$ and then $Y\left( \alpha _{i}^{j}\right) =0$ for all $i,j=1,...,p$. Sufficient condition: it is clear that $\left\{ \overline{X}_{i}\right\} _{i=1}^{p}$ is another basis of $\Omega $. Moreover, given $Y\in \Omega '$ we have \begin{equation} \nabla _{Y}\overline{X}_{i}=\nabla _{Y}\left( \alpha _{i}^{j}X_{j}\right) =Y\left( \alpha _{i}^{j}\right) X_{j}+\alpha _{i}^{j}\nabla _{Y}X_{j},\qquad i=1,...,p. \label{fp2} \end{equation} Since $\nabla _{Y}X_{j}=0$ for all $j=1,...,p$, and $Y\left( \alpha _{i}^{j}\right) =0$ for all $i,j=1,...,p$, by (\ref{fp2}) we have $\nabla _{Y}\overline{X}_{i}=0$ for all $i=1,...,p$, concluding the proof. \end{proof} Proposition \ref{stat} guarantees the uniqueness, up to constant functions for $\Omega '$, of regularly stable bases of $\Omega $ with respect to $\Omega '$. Moreover, using this result, given one regularly stable basis of $\Omega $ with respect to $\Omega '$, we can construct all the regularly stable bases of $\Omega $ with respect to $\Omega '$. The main result of this paper is given in the next theorem, which provides an operational condition for the equivalence between stability and regular stability in terms of the Riemann curvature tensor $R$. This condition is very useful because the study of regular stability is easier than the study of stability in general. \begin{theorem} \label{T1} Let $\Omega $ and $\Omega '$ be a $p$-distribution and a $q$-foliation respectively such that $\nabla _{\Omega '}\Omega \subset \Omega $. Then, $\nabla _{\Omega '}\Omega =0$ if and only if $R\left( Y,Z\right) X=0$ for all $X\in \Omega $ and for all $Y,Z\in \Omega '$. \end{theorem} \begin{proof} Let $\left\{ X_{i}\right\} _{i=1}^{p},\left\{ Y_{j}\right\} _{j=1}^{q}$ be two bases of $\Omega $ and $\Omega '$ respectively, where $p=\dim \Omega $ and $q=\dim \Omega '$.
Then, there exist some functions $h_{jk}^{i}$, where $i,k=1,\ldots ,p$ and $j=1,\ldots ,q$, such that \begin{equation} \nabla _{Y_{j}}X_{k}=h_{jk}^{i}X_{i},\qquad \left\{ \begin{array}{c} k=1,\ldots ,p, \\ j=1,\ldots ,q. \end{array} \right. \label{1} \end{equation} Since $\Omega '$ is a foliation, we can suppose that $Y_{j}=\frac{\partial }{\partial x^{j}}$ for $j=1,\ldots ,q$, where $\left( x^{1},\ldots ,x^{n}\right) $ is a flat chart for $\Omega '$. Let us consider the equations $\nabla _{j}\left( y^{i}X_{i}\right) =0$ for $j=1,\ldots ,q$, where $\nabla _{j}$ denotes $\nabla _{\frac{\partial }{\partial x^{j}}}$ and $y^{i}$ are unknown functions for $i=1,\ldots ,p$. By using (\ref{1}), we have \begin{equation} \left( \frac{\partial y^{i}}{\partial x^{j}}+y^{k}h_{jk}^{i}\right) X_{i}=0,\qquad j=1,\ldots ,q. \label{2} \end{equation} Since $\left\{ X_{i}\right\} _{i=1}^{p}$ is a linearly independent family of vector fields, expression (\ref{2}) becomes \begin{equation} \frac{\partial y^{i}}{\partial x^{j}}+y^{k}h_{jk}^{i}=0,\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j=1,\ldots ,q. \end{array} \right. \label{2b} \end{equation} The system (\ref{2b}) is formed by $q$ first order homogeneous linear sub-systems with $p$ differential equations and $p$ unknown functions each. In each sub-system only one differential operator $\frac{\partial }{\partial x^{j}}$, $j=1,\ldots ,q$, appears. By Frobenius' Theorem (see \cite{Hi}), we have that (\ref{2b}) has non-zero solutions if and only if some compatibility conditions (between the $q$ sub-systems that form (\ref{2b})) are satisfied. These conditions are known as the ``cross-derivative conditions'' and are obtained by imposing that the cross-derivatives of the functions $y^{i}$ coincide: \[ \left. \begin{array}{c} \frac{\partial }{\partial x^{l}}\left( \frac{\partial y^{i}}{\partial x^{j}}\right) =\frac{\partial }{\partial x^{l}}\left( -y^{k}h_{jk}^{i}\right) =-\frac{\partial y^{k}}{\partial x^{l}}h_{jk}^{i}-y^{k}\frac{\partial h_{jk}^{i}}{\partial x^{l}} \\ \frac{\partial }{\partial x^{j}}\left( \frac{\partial y^{i}}{\partial x^{l}}\right) =\frac{\partial }{\partial x^{j}}\left( -y^{k}h_{lk}^{i}\right) =-\frac{\partial y^{k}}{\partial x^{j}}h_{lk}^{i}-y^{k}\frac{\partial h_{lk}^{i}}{\partial x^{j}} \end{array} \right\} \Longrightarrow \] \begin{equation} \Longrightarrow \frac{\partial y^{k}}{\partial x^{l}}h_{jk}^{i}+y^{k}\frac{\partial h_{jk}^{i}}{\partial x^{l}}=\frac{\partial y^{k}}{\partial x^{j}}h_{lk}^{i}+y^{k}\frac{\partial h_{lk}^{i}}{\partial x^{j}},\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right. \label{condtemp} \end{equation} Taking into account (\ref{2b}) and changing indexes, from (\ref{condtemp}) we obtain \[ \left( h_{lk}^{m}h_{jm}^{i}-h_{jk}^{m}h_{lm}^{i}+\frac{\partial h_{lk}^{i}}{\partial x^{j}}-\frac{\partial h_{jk}^{i}}{\partial x^{l}}\right) y^{k}=0,\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right. \] So, a necessary and sufficient condition for the system (\ref{2b}) to have non-zero solutions is \begin{equation} h_{lk}^{m}h_{jm}^{i}-h_{jk}^{m}h_{lm}^{i}+\frac{\partial h_{lk}^{i}}{\partial x^{j}}-\frac{\partial h_{jk}^{i}}{\partial x^{l}}=0,\qquad \left\{ \begin{array}{c} i,k=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right.
\label{cond} \end{equation} Moreover, if (\ref{cond}) is satisfied, the set of solutions of (\ref{2b}) forms a vector space of dimension $p$ (see \cite{Hi}), {\it i.e.} there exists a family of differentiable functions $\left\{ f_{i}^{k}\right\} _{i,k=1}^{p}$ such that $\det f_{i}^{k}\neq 0$ and any solution of (\ref{2b}) has the form \[ y^{k}=C^{i}f_{i}^{k},\qquad k=1,\ldots ,p, \] where $\left\{ C^{i}\right\} _{i=1}^{p}$ are parameter functions ({\it i.e.} $\frac{\partial C^{i}}{\partial x^{j}}=0$ for $i=1,\ldots ,p$ and $j=1,\ldots ,q$). Hence, $\left\{ f_{i}^{k}X_{k}\right\} _{i=1}^{p}$ is a regularly stable basis of $\Omega $ with respect to $\Omega '$, and so $\nabla _{\Omega '}\Omega =0$ if and only if (\ref{cond}) is satisfied. So, we have to prove that (\ref{cond}) is equivalent to $R\left( Y,Z\right) X=0$ for all $X\in \Omega $ and for all $Y,Z\in \Omega '$. In fact, from the linearity of the Riemann curvature tensor, we have to prove that (\ref{cond}) is equivalent to $R\left( \frac{\partial }{\partial x^{j}},\frac{\partial }{\partial x^{l}}\right) X_{i}=0$ for all $i=1,\ldots ,p$ and $j,l=1,\ldots ,q$: \[ R\left( \frac{\partial }{\partial x^{j}},\frac{\partial }{\partial x^{l}}\right) X_{i}=0\Longleftrightarrow \nabla _{j}\nabla _{l}X_{i}-\nabla _{l}\nabla _{j}X_{i}=0,\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right. \] Applying (\ref{1}) we have \[ \Longleftrightarrow \nabla _{j}\left( h_{li}^{k}X_{k}\right) -\nabla _{l}\left( h_{ji}^{k}X_{k}\right) =0 \] \[ \Longleftrightarrow h_{li}^{k}\nabla _{j}X_{k}+\frac{\partial h_{li}^{k}}{\partial x^{j}}X_{k}-h_{ji}^{k}\nabla _{l}X_{k}-\frac{\partial h_{ji}^{k}}{\partial x^{l}}X_{k}=0 \] \[ \Longleftrightarrow h_{li}^{k}h_{jk}^{m}X_{m}+\frac{\partial h_{li}^{k}}{\partial x^{j}}X_{k}-h_{ji}^{k}h_{lk}^{m}X_{m}-\frac{\partial h_{ji}^{k}}{ \partial x^{l}}X_{k}=0,\qquad \left\{ \begin{array}{c} i=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right. \] Changing indexes, \[ \Longleftrightarrow \left( h_{lk}^{m}h_{jm}^{i}-h_{jk}^{m}h_{lm}^{i}+\frac{\partial h_{lk}^{i}}{\partial x^{j}}-\frac{\partial h_{jk}^{i}}{\partial x^{l}}\right) X_{i}=0 \] \begin{equation} \Longleftrightarrow h_{lk}^{m}h_{jm}^{i}-h_{jk}^{m}h_{lm}^{i}+\frac{\partial h_{lk}^{i}}{\partial x^{j}}-\frac{\partial h_{jk}^{i}}{\partial x^{l}}=0,\qquad \left\{ \begin{array}{c} i,k=1,\ldots ,p, \\ j,l=1,\ldots ,q. \end{array} \right. \label{cond2} \end{equation} Since expressions (\ref{cond}) and (\ref{cond2}) are the same, we conclude the proof. \end{proof} Next, we give some useful corollaries of Theorem \ref{T1} related to some interesting cases. \begin{corollary} \label{col1} Let $\Omega $ and $\Omega '$ be a $p$-distribution and a $q$-foliation respectively such that $\nabla _{\Omega '}\Omega \subset \Omega $. \begin{description} \item[(i)] In a flat space-time (Minkowski) we have that $\nabla _{\Omega '}\Omega \subset \Omega $ if and only if $\nabla _{\Omega '}\Omega =0$. \item[(ii)] If $q=1$ we have that $\nabla _{\Omega '}\Omega \subset \Omega $ if and only if $\nabla _{\Omega '}\Omega =0$. \end{description} \end{corollary} In these cases, the study of stability becomes the study of regular stability. This fact simplifies the problem remarkably.
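As a simple illustration of Corollary \ref{col1} (ii), consider the Minkowski space-time in cylindrical coordinates $\left( t,r,\theta ,z\right) $, with $ds^{2}=-dt^{2}+dr^{2}+r^{2}d\theta ^{2}+dz^{2}$, and take $\Omega :=\hbox{\rm span}\left( \frac{\partial }{\partial \theta }\right) $, $\Omega ':=\hbox{\rm span}\left( \frac{\partial }{\partial r}\right) $. Since $\nabla _{\frac{\partial }{\partial r}}\frac{\partial }{\partial \theta }=\frac{1}{r}\frac{\partial }{\partial \theta }\in \Omega $, we have $\nabla _{\Omega '}\Omega \subset \Omega $, and indeed $\left\{ \frac{1}{r}\frac{\partial }{\partial \theta }\right\} $ is a regularly stable basis, because \[ \nabla _{\frac{\partial }{\partial r}}\left( \frac{1}{r}\frac{\partial }{\partial \theta }\right) =-\frac{1}{r^{2}}\frac{\partial }{\partial \theta }+\frac{1}{r}\cdot \frac{1}{r}\frac{\partial }{\partial \theta }=0. \] This also explains the factors $\frac{1}{r}$ appearing in the regularly stable bases of the Example in Section \ref{sec2}.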
\begin{figure}[tbp] \begin{center} \includegraphics[width=0.4\textwidth]{fig1} \end{center} \caption{If $\Omega $ is an $(n-1)$-foliation and $\Omega '=\mathrm{span}(Y)$ is a 1-foliation such that $\nabla _{\Omega '}\Omega \subset \Omega $, then $\nabla _{\Omega '}\Omega =0$ by Corollary \ref{col1} (ii). Moreover, if $Y$ is not in $\Omega $, then it is possible to reconstruct the entire foliation $\Omega $ from only one leaf, by means of parallel transports of a regularly stable basis of $\Omega $ with respect to $\Omega '$ along the integral curves of $Y$.} \label{fig1} \end{figure} Let us suppose that $\Omega '=\mathrm{span}(Y)$. According to Corollary \ref{col1} (ii), there exist regularly stable bases of $\Omega $ with respect to $\Omega '$, {\it i.e.} bases of $\Omega $ whose vector fields are conserved by parallel transports along the integral curves of $Y$. If $\Omega $ is an $(n-1)$-foliation and $Y$ is a vector field which is not contained in $\Omega $, then it is possible to reconstruct the entire foliation from only one leaf of $\Omega $, by means of parallel transports of a regularly stable basis of $\Omega $ with respect to $\Omega '$ along the integral curves of $Y$ (see fig. \ref{fig1}). Moreover, if $Y$ is a future-pointing timelike vector field, then its integral curves represent observers and therefore each observer detects $\Omega $ as invariant along its evolution ({\it i.e.} along its world line), as we show in the next example. \begin{example} In the Schwarzschild space-time with spherical coordinates, $U:=\frac{\partial }{\partial t}$ is a future-pointing timelike vector field, whose integral curves represent stationary observers. We are going to find all the lightlike 3-foliations that are stable with respect to $\hbox{\rm span}\left( U\right) $, {\it i.e.} all the light waves that are observed as invariant by any stationary observer. Without Corollary \ref{col1} (ii) this would be a hard task; applying this result, we only have to find the lightlike 3-foliations that are regularly stable with respect to $\hbox{\rm span}\left( U\right) $. We obtain only two lightlike 3-foliations: \[ \hbox{\rm span}\left( \pm \frac{\partial }{\partial t}+a_{\mathrm{S}}\frac{\partial }{\partial r},\frac{\partial }{\partial \theta },\frac{\partial }{\partial \varphi }\right) . \] The leaves of these foliations are spheres expanding and contracting respectively at the speed of light. So, all the observers represented by the integral curves of $U$ detect these foliations as invariant along their evolution. But the most remarkable fact is that there are no other foliations with this property. \end{example} \begin{corollary} \label{col2} Let $\Omega $ be a self-stable $p$-foliation. Then $\Omega $ is regularly self-stable if and only if \begin{equation} \label{corii} R\left( Y,Z\right) X=0, \end{equation} for all $X,Y,Z\in \Omega $. \end{corollary} It is important to remark that condition (\ref{corii}) does not imply that $\Omega $ is a flat foliation. For example, in the Minkowski space-time, any foliation satisfies (\ref{corii}) but it is not necessarily flat.
However, if $\Omega $ is totally geodesic ({\it i.e.} self-stable), then $\overline{R}\equiv R$ in the sense that $\overline{R}\left( \overline{Y},\overline{Z}\right) \overline{X}=R\left( Y,Z\right) X$, where $\overline{R}$ is the Riemann curvature tensor of the metric $\overline{g}$ induced on the leaves of $\Omega $, $i:\mathcal{M}\left( \Omega \right) \longrightarrow \mathcal{M}$ is the canonical inclusion and $X=i_{\ast }\overline{X},Y=i_{\ast }\overline{Y},Z=i_{\ast }\overline{Z}$. So, if $\Omega $ is self-stable, then it is flat if and only if (\ref{corii}) is satisfied. A foliation is regularly self-stable if and only if it is totally geodesic and flat, and hence regular self-stability generalizes the concept of flat wave fronts, introduced by J.~M.~Souriau in the Minkowski space-time \cite{Sob}. We will discuss this fact in more depth in Section \ref{sec5}. \begin{example} In the Schwarzschild and Robertson-Walker space-times, it is easy to prove that there is no distribution $\Omega $ of dimension greater than 1 such that $R\left( Y,Z\right) X=0$ for all $X,Y,Z\in \Omega $. So, by Corollary \ref{col2}, there are no regularly self-stable foliations of dimension greater than 1. But, of course, there exist self-stable foliations: for example, in spherical coordinates, the timelike $2$-foliation \[ \hbox{\rm span}\left( \frac{\partial }{\partial r},\frac{\partial }{\partial t}\right) \] is a self-stable foliation whose leaves are surfaces with $\theta $ and $\varphi $ constant. \end{example} We will show, in Section \ref{sec3}, that there exist regularly self-stable foliations of dimension greater than 1 in $pp$-wave space-times. Moreover, we can find this kind of foliation in other space-times, as we show in the next example. \begin{example} If we consider the metric $ds^2=-\frac{1}{z}dt^2+dx^2+dy^2+zdz^2$ in the open set $\left\{ \left( t,x,y,z\right) : z>0\right\} $, then the Einstein tensor is positive definite. So it is a valid non-flat space-time. The spacelike 2-foliation given by \[ \hbox{\rm span}\left( \frac{\partial }{\partial x},\frac{\partial }{\partial y}\right) \] is self-stable and satisfies (\ref{corii}). So, by Corollary \ref{col2}, we obtain that this foliation is a regularly self-stable foliation. A regularly self-stable basis is given by $\left\{ \frac{\partial }{\partial x},\frac{\partial }{\partial y}\right\} $. \end{example} \section{Examples of regularly self-stable foliations in $pp$-wave space-times} \label{sec3} It is known \cite{Ma} that, in standard coordinates $\left( u,v,y,z\right) $, a $pp$-wave metric can be expressed by $ds^{2}=dy^{2}+dz^{2}-2Hdu^{2}-2dudv$, where $u,v$ are the retarded and the advanced time coordinates respectively, and $H=H\left( u,y,z\right)$. According to \cite{Ma}, in a $pp$-wave space-time, the lightlike hypersurfaces with $u$ constant are leaves of a lightlike $3$-foliation $\Omega $ given by \[ \Omega :=\hbox{\rm span}\left( \frac{\partial }{\partial v},\frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) , \] and its leaves are called {\it plane-fronted gravitational waves with parallel rays}. The foliation $\Omega $ is self-stable and flat, {\it i.e.} $R \left( Y,Z \right) X=0$ for all $X,Y,Z\in \Omega $. By applying Corollary \ref{col2}, we obtain that $\Omega $ is a regularly self-stable foliation. Then there exists a basis $\left\{ X_{i}\right\} _{i=1}^{3}$\ of $\Omega $\ such that $\nabla _{X}X_{i}=0$, $i=1,2,3$, for every vector field $X\in \Omega $.
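This can be verified directly from the metric: since the only non-constant metric component is $g_{uu}=-2H$ and $H$ does not depend on $v$, every term $\partial _{\mu }g_{\sigma \nu }+\partial _{\nu }g_{\sigma \mu }-\partial _{\sigma }g_{\mu \nu }$ with $\mu ,\nu \in \left\{ v,y,z\right\} $ vanishes, so \[ \Gamma _{\mu \nu }^{\lambda }=0\qquad \hbox{\rm for all }\mu ,\nu \in \left\{ v,y,z\right\} . \] Hence $\nabla _{X}X'=0$ for all coordinate fields $X,X'$ of $\Omega $, which gives at once the self-stability of $\Omega $ and the flatness condition $R\left( Y,Z\right) X=0$ for $X,Y,Z\in \Omega $.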
This fact gives us a new geometrical perspective on the plane-fronted gravitational waves with parallel rays, because it explicitly ensures the existence of bases of $\Omega $ that remain invariant under parallel transports along the integral curves of vector fields of $\Omega $. For example, a regularly self-stable basis of $\Omega $ is given by $\left\{ \frac{\partial }{\partial v},\frac{\partial }{\partial y}, \frac{\partial }{\partial z}\right\} $, and we can use Proposition \ref{stat} to find all the regularly self-stable bases of $\Omega $. On the other hand, the subfoliations $\hbox{\rm span}\left( \frac{\partial }{\partial y},\frac{\partial }{\partial z}\right) $, $\hbox{\rm span}\left( \frac{\partial }{\partial v},\frac{\partial }{\partial y}\right) $, and $\hbox{\rm span}\left( \frac{\partial }{\partial v},\frac{\partial }{\partial z}\right) $ are regularly self-stable $2$-foliations. The first one is spacelike (its leaves are called {\it wave surfaces} \cite{Ma}) and the others are lightlike. Regularly self-stable bases of these subfoliations are given by $\left\{ \frac{\partial }{\partial y},\frac{\partial }{\partial z}\right\} $, $\left\{ \frac{\partial }{\partial v},\frac{\partial }{\partial y}\right\} $ , and $\left\{ \frac{\partial }{\partial v},\frac{\partial }{\partial z}\right\} $ respectively. Moreover, the timelike $2$-foliation $\hbox{\rm span}\left( \frac{\partial }{\partial u},\frac{\partial }{\partial v}\right) $ is regularly self-stable too. A regularly self-stable basis is now given by $\left\{ \frac{\partial }{\partial u},\frac{\partial }{\partial v}\right\} $. \section{Discussion and comments} \label{sec5} We have introduced some new properties for foliations: stability and regular stability. Theorem \ref{T1} provides a relationship between both concepts in terms of the curvature. As particular cases, self-stability and regular self-stability are two interesting properties: a self-stable foliation is conserved by parallel transports along the integral curves of vector fields of the foliation, and a regularly self-stable foliation has a set of bases (characterized by Proposition \ref{stat}) whose vector fields are conserved by parallel transports along the integral curves of vector fields of the foliation, {\it i.e.} the curvature of the leaves is ``adapted'' to the curvature of the space-time. From Corollary \ref{col2} it follows that regular self-stability is a motion law for flat foliations, in contrast to self-stability, which is a motion law for foliations in general. Finally, we show a direct interpretation of the leaves of a regularly self-stable lightlike $p$-foliation $\Omega $ with $p=n-1$, extending some properties of flat wave fronts given in special relativity (see \cite{Soa,Sob}) to general relativity: let $\left\{ X_{1},\ldots,X_{p}\right\} $ be a regularly self-stable basis of $\Omega $, where $X_{1},\ldots,X_{p-1}$ are spacelike and $X_{p}$ is lightlike. Given a world line of a future-pointing timelike vector field $U$ ({\it i.e.} an observer), the wave fronts of $\Omega $ relative to $U$ are the leaves of the intersection of $\Omega $ and the Landau foliation $\mathcal{L}_{U}$ associated to $U$ (see \cite{Lib,Lia,Bo,Oli}). Let $U$ be a future-pointing timelike vector field such that the wave fronts of $\Omega $ relative to $U$ are the leaves of the foliation $\hbox{\rm span}\left( X_{1},\ldots,X_{p-1}\right) $, {\it i.e.} \begin{equation} \Omega \cap \mathcal{L}_{U}=\hbox{\rm span}\left( X_{1},\ldots,X_{p-1}\right) .
\end{equation} Since $\left\{ X_{1},\ldots ,X_{p-1}\right\} $ is a regularly self-stable basis, the leaves of $\hbox{\rm span}\left( X_{1},\ldots ,X_{p-1}\right) $ are totally geodesic and flat. So $U$ observes the wave fronts of $\Omega $ as spacelike, totally geodesic and flat $\left( n-2\right) $-planes moving in the relative direction of $X_{p}$ ({\it i.e.} $X_{p}$ projected onto the leaves of $\mathcal{L}_{U}$) at the speed of light. However, we cannot ensure that the wave fronts of $\Omega $ relative to an arbitrary observer are totally geodesic and flat $\left( n-2\right) $-planes.
\section{Acknowledgments} We thank the referees for their careful reading of our manuscript and for their constructive comments, in particular, for the suggestion of Ref. \citep{kim2002effects}, which made us aware of works on transmission zeros in channels with AB rings. This research was partially supported by the Fetzer Franklin Fund of the John E. Fetzer Memorial Trust. I.L.P. acknowledges financial support from the Science without Borders Program (CNPq, Brazil). Y.A. acknowledges support from the Israel Science Foundation (Grant 1311/14), Israeli Centers of Research Excellence (ICORE) Center ``Circle of Light,'' and the German-Israeli Project Cooperation (Deutsch-Israelische Projektkooperation, DIP).
\section{Introduction} In this paper, we consider elementary properties (i.\,e. properties expressible in first order logic) of the automorphism groups of reduced Abelian $p$-groups. The first to consider the connection between elementary properties of models and elementary properties of derived models was A.I.~Maltsev \cite{Maltsev} in 1961. He proved that the groups $G_n(K)$ and $G_m(L)$, where $G=\GL,\SL,\PGL,\PSL$ and $n,m\geqslant 3$, $K,L$ are fields of characteristic~$0$, are elementarily equivalent iff $m=n$ and the fields $K$ and $L$ are elementarily equivalent. In 1992, this theory was continued, with the help of the ultraproduct construction and the Keisler--Chang Isomorphism Theorem, by K.I.~Beidar and A.V.~Mikhalev in \cite{BeiMikh}, where they found a general approach to problems of elementary equivalence of different algebraic structures and generalized Maltsev's theorem to the case where $K$ and $L$ are skew fields or associative rings. This research was continued in papers by E.I.~Bunina (\cite{Bun1}--\cite{Bun4}, 1998--2009), in which the results of A.I.~Maltsev were extended to unitary linear groups over skew fields and associative rings with involutions, and also to Chevalley groups over fields and local rings. In 2000, V.~Tolstikh considered in \cite{Tolstyh} the connection between the second order properties of skew fields and the first order properties of automorphism groups of spaces of infinite dimension over these skew fields. In 2003, E.I.~Bunina and A.V.~Mikhalev considered the connection between the second order properties of associative rings and the first order properties of categories of modules, endomorphism rings, automorphism groups and projective spaces of modules of infinite rank over these rings (see~\cite{categories}). In~\cite{Abelian}, E.I.~Bunina and A.V.~Mikhalev established a connection between the second order properties of an Abelian $p$-group and the first order properties of its endomorphism ring (the analogue of the Baer--Kaplansky Theorem for elementary equivalence). In~\cite{My1}, E.I.~Bunina and M.A.~Roizner established a connection between the first order properties of the automorphism group of an Abelian $p$-group and the second order properties of the divisible part and the basic subgroup of the group. This paper continues the paper~\cite{My1}. We establish a connection between the first order properties of the automorphism group of an Abelian $p$-group and the second order properties of the group, bounded by its final rank, provided that the group is reduced and $p>2$. \section{Background} It is said that an~element~$a\in A$ is \emph{divisible by} a~positive integer~$n$ (denoted by $n\mid a$) if there is an element $x\in A$ such that $nx=a$. A~group~$D$ is called \emph{divisible} if $n\mid a$ for all $a\in D$ and all natural~$n$. The groups $\mathbb Q$ and $\mathbb Z(p^\infty)$ are examples of divisible groups. A~group~$A$ is called \emph{reduced} if it has no nonzero divisible subgroups. A~subgroup $G$ of a~group~$A$ is called \emph{pure} if the equation $nx=g\in G$ is solvable in~$G$ whenever it is solvable in the whole group~$A$. In other words, $G$~is pure if and only if $$ \forall n\in \mathbb Z\quad nG=G\cap nA. $$ A~subgroup~$B$ of a~group~$A$ is called a~$p$-\emph{basic subgroup} if it satisfies the following conditions: \begin{enumerate} \item $B$ is a~direct sum of cyclic $p$-groups and infinite cyclic groups; \item $B$ is pure in~$A$; \item $A/B$ is $p$-divisible. \end{enumerate} Every group, for every prime~$p$, contains $p$-basic subgroups~(\cite{Fuks}).
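For example, any direct summand of~$A$ is pure in~$A$, while the subgroup $p\mathbb Z(p^2)$ of the cyclic group $\mathbb Z(p^2)$ is not pure: the equation $px=g$, where $g$ generates $p\mathbb Z(p^2)$, is solvable in $\mathbb Z(p^2)$ but not in $p\mathbb Z(p^2)$. At the two extremes, a~divisible $p$-group has only the zero basic subgroup (a~pure subgroup of a~divisible group is divisible, while a~nonzero direct sum of cyclic groups is reduced), whereas a~direct sum of cyclic $p$-groups, such as $B=\bigoplus\limits_{n=1}^\infty \mathbb Z(p^n)$, is its own basic subgroup.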
We now focus on $p$-groups, where $p$-basic subgroups are particularly important. If $A$ is a~$p$-group and $q$ is a~prime different from~$p$ then, evidently, $A$ has only one $q$-basic subgroup, namely~$0$. Therefore, in $p$-groups we may refer to the $p$-basic subgroups simply as \emph{basic} subgroups without confusion. We need the following facts about basic subgroups. \begin{theorem}[\cite{Sele7}]\label{BasicMaxBounded} Assume that $B$ is a~subgroup of a~$p$-group~$A$, $B=\bigoplus\limits_{n=1}^\infty B_n$, and $B_n$~is a~direct sum of groups~$\mathbb Z(p^n)$. Then $B$~is a~basic subgroup of~$A$ if and only if for every integer $n> 0$, the subgroup $B_1\oplus \dots\oplus B_n$ is a~maximal $p^n$-bounded direct summand of~$A$. \end{theorem} \begin{theorem}[\cite{Fuks}]\label{BasicEndIm} A basic subgroup of a~$p$-group~$A$ is an endomorphic image of the group~$A$. \end{theorem} An infinite system $L=\{ a_i\}_{i\in I}$ of elements of the group~$A$ is called \emph{independent} if every finite subsystem of~$L$ is independent. An independent system~$M$ of~$A$ is \emph{maximal} if there is no independent system in~$A$ containing~$M$ properly. By the \emph{rank} $r(A)$ of a~group~$A$ we mean the cardinality of a~maximal independent system containing only elements of infinite and prime power orders. The \emph{final rank} of a~basic subgroup~$B$ of a~$p$-group~$A$ is the infimum of the cardinals $r(p^nB)$. In the paper~\cite{My1}, E.I.~Bunina and M.A.~Roizner introduced certain formulas for operating with involutions (i.\,e.~automorphisms of order~$2$). These formulas are presented below. An involution~$\eps$ corresponds to the decomposition of the group~$A$ into the direct sum $A = A_\eps^+ \oplus A_\eps^-$, where $A_\eps^+=\{ a\in A\mid \eps a=a\}$ and $A_\eps^-=\{ a\in A\mid \eps a=-a\}$. The formula~$Extreme(\eps)$ means that the automorphism~$\eps$ is an extreme involution (i.\,e. an involution for which one of the summands~$A_\eps^+$ and $A_\eps^-$ is indecomposable). The indecomposable summand for the~involution~$\eps$ is denoted by~$A_\eps$, while the other summand is denoted by~$A_\eps^\perp$. With only an~involution~$\xi$, one cannot distinguish the groups~$A_\xi^+$ and $A_\xi^-$ in the first order language. Therefore we consider pairs~$(\xi, \eps)$ with the condition~$Extreme(\eps)\land\xi\eps = \eps\xi$. For such pairs, either $A_\eps \subset A_\xi^+$ or $A_\eps \subset A_\xi^-$; thus $A_\eps$ singles out the required group among~$A_\xi^+$ and~$A_\xi^-$ (it is denoted by~$A_{(\xi, \eps)}$). The property of being a pair is expressed by the formula~$Pair(\xi, \eps) \define \xi^2 = 1 \land Extreme(\eps) \land \xi\eps = \eps\xi$. Instead of $\forall \xi \forall \eps (Pair(\xi, \eps) \then (\dots))$ and $\exists \xi \exists \eps (Pair(\xi, \eps) \land (\dots))$, we will write $\forall (\xi, \eps)$ and $\exists(\xi, \eps)$ respectively.
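For example, let $p>2$ and $A = \mathbb Z(p^2) \oplus \mathbb Z(p^3) \oplus \mathbb Z(p^3)$, and let $\eps$ act as the identity on the first summand and as $-1$ on the other two. Then $A_\eps^+ = \mathbb Z(p^2)$ is indecomposable while $A_\eps^-$ is not, so $\eps$ is an extreme involution with $A_\eps = \mathbb Z(p^2)$. Taking $\xi = \eps$ we obtain a pair~$(\xi, \eps)$ with $A_{(\xi, \eps)} = A_\xi^+ = \mathbb Z(p^2)$.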
The following formulas, dealing with involutions, extreme involutions and involution pairs, were defined in the paper~\cite{My1}~(p.\,7--8, 10, 24): 1) $\eps \in \eps_1 \oplus \eps_2$ iff $A_\eps \subset A_{\eps_1} \oplus A_{\eps_2}$ and $A_\eps^\perp\supset A_{\eps_1}^\perp \cap A_{\eps_2}^\perp$ for extreme involutions~$\eps$, $\eps_1$, $\eps_2$ such that $\eps_1 \eps_2 = \eps_2 \eps_1$; 2) $ \eps_2 \in (\xi_1, \eps_1)$ iff $A_{\eps_2} \subset A_{(\xi_1, \eps_1)}$ for an~extreme involution~$\eps_2$ and a~pair~$(\xi_1, \eps_1)$; 3) $(\xi_1, \eps_1) \subset (\xi_2, \eps_2)$ iff $A_{(\xi_1, \eps_1)} \subset A_{(\xi_2, \eps_2)}$; 4) $(\xi_1, \eps_1) = (\xi_2, \eps_2)$ iff $A_{(\xi_1, \eps_1)} = A_{(\xi_2, \eps_2)}$; 5) $(\xi_1, \eps_1) \cap (\xi_2, \eps_2) = (\xi_3, \eps_3)$ iff $A_{(\xi_3, \eps_3)} = A_{(\xi_1, \eps_1)} \cap A_{(\xi_2, \eps_2)}$; 6) $(\xi_1, \eps_1) \oplus (\xi_2, \eps_2) = (\xi_3, \eps_3)$ iff $A_{(\xi_3,\eps_3)} = A_{(\xi_1, \eps_1)} \oplus A_{(\xi_2, \eps_2)}$; 7) $\overline{(\xi_1, \eps_1)} = (\xi_2, \eps_2)$ iff $A_{(\xi_1, \eps_1)} \oplus A_{(\xi_2, \eps_2)} = A$; 8) the formula $f(\eps_1) = \eps_2$ for extreme involutions~$\eps_1, \eps_2$ and an automorphism~$f$ means that~$f(A_{\eps_1}) = A_{\eps_2}$. But since this situation is possible only if the summands~$A_{\eps_1}$ and~$A_{\eps_2}$ have equal orders, it is convenient to define another formula for matching summands of different orders: \begin{multline*} \eps_1 \mapson{f} \eps_2 \define Extreme(\eps_1) \land Extreme(\eps_2) \land f(\eps_1) \in \eps_1 \oplus \eps_2 \land f(\eps_1) \ne \eps_1 \land f(\eps_1) \ne \eps_2; \end{multline*} 9) the formula $ord (\eps_1) < ord (\eps_2)$ means that the order of an~involution~$\eps_1$ (i.\,e. the order of the corresponding summand~$A_{\eps_1}$) is less than the order of an~involution~$\eps_2$. Similarly, all the other order relations can be defined. \section{Specifying basic subgroup} Let $A$ be an~unbounded reduced Abelian $p$-group with cardinality~$\mu$ and final rank~$\mu_{fin}$ of the basic subgroup. There exists a decomposition~$A = A_1 \oplus A_2$ such that the order of any indecomposable subgroup of the group~$A_1$ is less than the order of any indecomposable subgroup of the group~$A_2$ and the basic subgroup of $A_2$ has rank~$\mu_{fin}$. The formula specifying this decomposition follows: \begin{lemma}\label{Lemma1} Define a~formula: \[ ByOrd\Pair \define \forall \eps' \Big( Extreme(\eps') \then \big( \eps' \in \Pair \iff ord(\eps') \geq ord(\eps) \big) \Big). \] The~formula \begin{multline*} Final\Pair[0] \define ByOrd\Pair[0] \land \forall \Pair[1] \subsetneq \Pair[0] \bigg( ByOrd\Pair[1] \then\\ \then \exists f \Big( \big( \forall \eps \in \overline{\Pair[1]}\quad f(\eps) = \eps \big) \land \big( \forall \eps \in \Pair[0] \cap \overline{\Pair[1]}\quad \exists \eps' \in \Pair[1]\quad \eps' \mapson{f} \eps \big) \Big) \bigg) \end{multline*} specifies the decomposition~$A = A_1 \oplus A_2$, where $A_1 = A_{\overline{\Pair[0]}}$, $A_2 = A_{\Pair[0]}$. \end{lemma} \begin{proof} The formula~$ByOrd$ specifies the decompositions~$A = A_{\overline \Pair} \oplus A_{\Pair}$ in which the order of any indecomposable subgroup of the group~$A_{\overline \Pair}$ is less than the order of any indecomposable subgroup of the group~$A_{\Pair}$. These decompositions will be referred to as order-decompositions.
The~formula~$Final$ states that, first, the decomposition~$A = A_{\overline{\Pair[0]}} \oplus A_{\Pair[0]}$ is an order-decomposition and, second, for any order-decomposition~$A_{\Pair[0]} = A_{\Pair[1]} \oplus A_{\Pair[0] \cap \overline{\Pair[1]}}$ the rank of the group~$A_{\Pair[0] \cap \overline{\Pair[1]}}$ is less than or equal to the rank of the basic subgroup of the group~$A_{\Pair[1]}$. The latter means that the rank of the basic subgroup of~$A_{\Pair[0]}$ equals the final rank~$\mu_{fin}$. \end{proof} Fix the pair~$\Pair[0]$, and let~$A_{low} = A_{\overline{\Pair[0]}}$, $A_{fin} = A_{\Pair[0]}$. \begin{lemma}\label{Lemma2} For an unbounded reduced Abelian $p$-group~$A$, there exists an automorphism~$\nu$ such that~$\nu\,\big|_{A_{low}} = \id$ and, for some basic subgroup~$B$, $\im \left(\nu - \id\,\big|_{A_{fin}}\right) = B$, i.\,e. for any~$b \in B$ there exists~$a \in A_{fin}$ such that~$\nu(a) = a + b$ and, conversely, for any~$a \in A_{fin}$, $\nu(a) = a + b$ for some~$b \in B$. \end{lemma} \begin{proof} There exists a basic subgroup~$B$ such that~$A_{low} \subset B$. By Theorem~\ref{BasicEndIm}, there exists an endomorphism~$\eps\colon\,A \to B$ such that~$\eps\,\big|_{A_{low}} = \id$ and~$\im \left(\eps\,\big|_{A_{fin}} \right) = B\cap A_{fin}$, and, for all~$a \in A_{fin}$, $\ord(\eps(a)) < \ord(a)$. We define the automorphism~$\nu$ on~$A_{low}$ and~$A_{fin}$ independently in the following way: $\nu\,\big|_{A_{low}} = \id$ and $\nu\,\big|_{A_{fin}} = \id + \eps$. Clearly, this is the required automorphism. \end{proof} We associate each automorphism~$\nu$ with a~subgroup~$B_\nu$ by means of the following formula. The formula indicates whether the indecomposable subgroup for an extreme involution~$\eps$ lies in~$B_\nu$: \begin{multline*} InBase(\eps, \nu) \define \exists\,\eps_{low}, \eps_{fin} \Big( Extreme(\eps_{low}) \land Extreme(\eps_{fin}) \land \eps \in \eps_{low} \oplus \eps_{fin} \land\\ \land \eps_{low} \subset A_{low} \land \eps_{fin} \subset A_{fin} \land \exists\,\eps'\subset A_{fin} \big( \eps' \mapson{\nu} \eps_{fin} \land \ord(\eps') > \ord(\eps_{fin}) \big) \Big) \end{multline*} \begin{lemma}\label{Lemma3} For the automorphism~$\nu$ defined in Lemma~\ref{Lemma2}, the corresponding subgroup~$B_\nu$ coincides with the original basic subgroup~$B$. \end{lemma} \begin{proof} Let $A_\eps$ lie in~$B$. Then~$A_\eps$ lies in the direct sum~$A_{\eps_{low}} \oplus A_{\eps_{fin}}$, where $A_{\eps_{low}} \subset B \cap A_{low} =: B_{low}$ and $A_{\eps_{fin}} \subset B \cap A_{fin} =: B_{fin}$. Let $A_{\eps_{fin}} = \langle b \rangle$. Then there exists an element~$a \in A_{fin}$ of order greater than that of~$b$ such that $\nu(a) = a + b$. Then the extreme involution corresponding to the indecomposable subgroup~$\langle a \rangle$ can be chosen as~$\eps'$ since $\langle a \rangle \oplus \nu(\langle a \rangle) = \langle a \rangle \oplus A_{\eps_{fin}}$. Conversely, let the formula~$InBase$ hold for an~extreme involution~$\eps$. Then $A_{\eps_{low}}$ clearly lies in~$B$. Consider~$A_{\eps_{fin}} = \langle b \rangle$. Let $A_{\eps'} = \langle a \rangle$, $\nu(a) = ka + lb$. By the construction of~$\nu$, $k = 1$. Since $\langle a \rangle \oplus \langle ka + lb \rangle = \langle a \rangle \oplus \langle b \rangle$, we have $p \nmid l$. Then for some~$m$, $\nu(ma) = ma + b$, i.\,e. $b \in B$. The statement is proved. \end{proof} Now we need to write the requirement on~$\nu$ stating that~$B_\nu$ is basic. We introduce some formulas. 1.
The formula \[ Rest_\eps(\xi_1, \eps_1) \define \forall \eps_2 \big( \eps_2 \in (\xi_1, \eps_1) \then ord(\eps_2) \leq ord(\eps) \big) \] selects involution pairs~$(\xi_1, \eps_1)$ which correspond to direct sums of cyclic groups of order at most~$ord(A_\eps)$. 2. The formula \begin{multline*} MaxRest_{\eps}(\xi_1, \eps_1) \define Rest_\eps(\xi_1, \eps_1)\land \forall (\xi_2, \eps_2) \big( Rest_\eps(\xi_2, \eps_2) \then (\xi_1, \eps_1) \not \subset (\xi_2, \eps_2) \big) \end{multline*} selects involution pairs~$(\xi_1, \eps_1)$ which correspond to maximal direct sums of cyclic groups of order at most~$ord(A_\eps)$. \begin{lemma}\label{Lemma4} For an automorphism~$\nu$, the formula \begin{multline*} IsBase(\nu) \define \forall \eps_0\ Extreme(\eps_0) \then \forall (\xi, \eps) \Big( \forall \eps' \big(\eps' \subset (\xi, \eps) \iff\\ \iff \ord(\eps') \leq \ord(\eps_0) \land InBase(\eps', \nu)\big) \then MaxRest_{\eps_0}(\xi, \eps)\Big) \end{multline*} is true if and only if the subgroup~$B_\nu$ is basic. \end{lemma} \begin{proof} The requirement~$IsBase(\nu)$ means that the part of the subgroup~$B_\nu$ bounded by the order~$\ord(\eps_0)$ is a maximal $\ord(\eps_0)$-bounded direct summand of the group~$A$. The statement of the lemma now follows from Theorem~\ref{BasicMaxBounded}. \end{proof} \section{Specifying definable sets in basic subgroup} In the paper~\cite{My1}, a variant of Shelah's theorem (\cite{Shelah}) was proved for the case when $\Omega$ is the set of automorphism tuples encoding endomorphisms of the group~$A=\bigoplus\limits_\mu \Z{p^l}\ (l \in \mathbb N)$. \begin{theorem}\label{ShelahOriginal} There exists a formula~$\widetilde \varphi(\dots)$ satisfying the following statement. Let $\{ f_i\}_{i\in \mu}$ be a set of elements from $\Omega$. Then there exists a vector $\overline g$ such that the formula~$\widetilde \varphi( f,\overline g)$ is true in $\Omega$ if and only if $f=f_i$ for some~$i\in \mu$. \end{theorem} We need to interpret mappings of a set of extreme involutions from the basic subgroup~$B$ into itself in order to use Shelah's theorem for the case of indecomposable direct summands of~$B$. For this purpose, according to the previous section, we construct for a mapping~$f$ two automorphisms $f_1$~and~$f_2$, which correspond to~$B$, and define $$ f(A_{\eps_1}) = A_{\eps_2} \define \exists \eps \big( Extreme(\eps) \land \eps \mapson{f_1} \eps_1 \land \eps \mapson{f_2} \eps_2 \big). $$ A composition of such mappings can be easily expressed with the latter formula. Hence we obtain Shelah's theorem in the following formulation. \begin{theorem}\label{Shelah} Let $\Omega$ be the set of extreme involutions corresponding to direct summands from~$B$. There exists a formula~$\widetilde \varphi(\dots)$ satisfying the following statement. Let $\{ f_i\}_{i\in \mu}$ be a~set of elements from $\Omega$. Then there exists a vector $\overline g$ such that the formula~$\widetilde \varphi( f,\overline g)$ is true in $\Omega$ if and only if $f=f_i$ for some~$i\in \mu$. \end{theorem} \section{Structuring basic subgroup} By Theorem~\ref{Shelah}, we define a set of extreme involutions that corresponds to a decomposition of the basic subgroup into indecomposable summands. This set must satisfy two conditions: first, the involutions in it must be independent of each other, and second, any proper superset of extreme involutions must contain dependent involutions. Denote this set by~$\F_B$. Let $B = \bigoplus\limits_{i} B_i$, where the $B_i$ are indecomposable summands.
We now define a set of automorphisms~$g_{ij}$, with $\ord(B_i) \geq \ord(B_j)$, which specify generators~$b_i$ of these indecomposable summands. Precisely, $B_i = \langle b_i \rangle$ for each~$i$, $g_{ij} \Big|_{\bigoplus\limits_{m \ne i} B_m} = \id$ and $g_{ij}(b_i) = b_i + b_j$ for each~$i, j$. We need the following technical lemma. \begin{lemma}\label{Lemma5} Let $g$ be such an automorphism that $g \Big|_{\bigoplus\limits_{m \ne i} B_m} = \id$ and $g(b_i) = k_1 b_i + k_2 b_j$ for some $i, j, k_1 \ne 0, k_2 \ne 0$, where $B_i = \langle b_i \rangle$, $B_j = \langle b_j \rangle$. Then $k_1 = 1$ if and only if the constraint $g_0^{-1} g g_0(B_k) \subset B_k \oplus B_j$ is true for some $k$ and an~automorphism~$g_0$ with $g_0 \Big|_{\bigoplus\limits_{m \ne k} B_m} = \id$ and $g_0(b_k) = l_1 b_k + l_2 b_i$ for some $l_1 \ne 0, l_2 \ne 0$, where $B_k = \langle b_k \rangle$. \end{lemma} \begin{proof} $$ g_0^{-1} g g_0(b_k) = g_0^{-1} g(l_1 b_k + l_2 b_i) = g_0^{-1}(l_1 b_k + l_2 k_1 b_i + l_2 k_2 b_j) = b_k + l_2 (k_1 - 1) b_i + l_2 k_2 b_j. $$ Hence, it is clear that $g_0^{-1} g g_0(B_k)$ lies in $B_k \oplus B_j$ if and only if $k_1 = 1$. \end{proof} Now we specify constraints for the set of automorphisms. \begin{enumerate} \item For each $i$ and $j$ with $\ord(B_i) \geq \ord(B_j)$, there is exactly one automorphism~$g_{ij}$ in the set, which is identical on~$\bigoplus\limits_{m \ne i} B_m$ and maps $B_i$ into a subgroup of~$B_i \oplus B_j$ that is equal neither to $B_i$ nor to $B_j$. There must be no other automorphisms in the set. \item For each automorphism~$g_{ij}$ from the set, $g_{ij}(b_i) = b_i + k_{ij} b_j$ for some~$k_{ij}$, where $B_i = \langle b_i \rangle$, $B_j = \langle b_j \rangle$. This constraint can be expressed by a formula due to Lemma~\ref{Lemma5}. \item For any three automorphisms~$g_{ij}, g_{jk}, g_{ik}$ from this set, $g_{jk}^{-1} g_{ij}^{-1} g_{jk} g_{ij} = g_{ik}$. This constraint adjusts the coefficients~$k_{ij}$ with each other. Hence, it can be assumed that the coefficients~$k_{ij}$ are equal to~$1$ (by choosing the corresponding generators~$b_i$), i.\,e. $g_{ij}(b_i) = b_i + b_j$. \end{enumerate} We denote this set, which is provided by Theorem~\ref{Shelah}, by~$\F_g$. \section{Interpretation of the first order logic of the group~$A$} In this section, we express the first order logic of the group~$A$ in terms of the first order language of its automorphism group. For this purpose, it is sufficient to interpret each element of the group by some automorphism and to define the formulas for equality and addition of two elements. Then any statement in the first order language of the Abelian group~$A$ can be translated into an equivalent statement in the first order language of the automorphism group~$\Aut A$ by replacing all the quantifiers over elements of the group and all the predicates of equality and addition with the corresponding quantifiers over the interpreting automorphisms and the formulas for equality and addition. (For the details of the translation, see the paper~\cite{My1}.) Notice that each element~$a$ of the group~$A$ has finite order. Thus, there exists an indecomposable direct summand~$B_i = \langle b_i \rangle$ of the subgroup~$B$ of greater order. Then there exists an automorphism~$f$ which is identical on the direct complement to~$B_i$ and which maps~$b_i$ to~$b_i + a$. This is the automorphism which encodes~$a$.
The formula for selecting such automorphisms~$f$ is the following: $$ \exists \eps_i \in \F_B\ \exists \eps_a \Big( Extreme(\eps_a) \land \eps_i \mapson{f} \eps_a \Big) \land \forall \eps_j \in \F_B \Big( \eps_i \ne \eps_j \then f(\eps_j) = \eps_j \Big). $$ Now here is the formula for equality of two such automorphisms~$f_1$ and~$f_2$: \begin{multline*} f_1 \doteq f_2 \define \exists \eps_1, \eps_2 \in \F_B\ \exists \eps_a \Big( Extreme(\eps_a) \land \eps_1 \mapson{f_1} \eps_a \land \eps_2 \mapson{f_2} \eps_a \land\\ \land \exists g_{ij} \in \F_g \big( \eps_1 \mapson{g_{ij}} \eps_2 \land g_{ij}^{-1} f_2 g_{ij} = f_1 \big) \Big). \end{multline*} Finally, here is the formula for addition of such automorphisms: \begin{multline*} f_1 \dotplus f_2 \doteq f_3 \define \exists f_1^\prime, f_2^\prime, f_3^\prime \Big( f_1 \doteq f_1^\prime \land f_2 \doteq f_2^\prime \land f_3 \doteq f_3^\prime \land\\ \land \exists \eps_i \in \F_B \big( f_1^\prime(\eps_i) \ne \eps_i \land f_2^\prime(\eps_i) \ne \eps_i \land f_3^\prime(\eps_i) \ne \eps_i \land f_3^\prime = f_1^\prime f_2^\prime \big) \Big). \end{multline*} These formulas provide the interpretation of the first order logic of the group~$A$. \section{Interpretation of the second order logic of the group~$A$} Recall that we are concerned with an unbounded reduced Abelian $p$-group~$A$ which has a decomposition~$A = A_1 \oplus A_2 = A_{low} \oplus A_{fin}$, where~$A_{fin}$ has the rank of its basic subgroup equal to the final rank~$\mu_{fin}$. There is also the decomposition of the basic subgroup~$B = B_{low} \oplus B_{fin}$, where $B_{low} \subset A_{low}$, $B_{fin} \subset A_{fin}$. In this section, we express the second order logic of the group~$A$ bounded by~$\mu_{fin}$. The idea of the interpretation is the same as in the paper~\cite{My1}. We need a set of independent involution pairs where each pair corresponds to a direct summand of the group~$B_{fin}$. Each such direct summand must contain indecomposable direct summands of arbitrarily large order. There must be a total of~$\mu_{fin}$ such pairs. This set can be defined by Theorem~\ref{Shelah} in the same way as in the paper~\cite{My1}. Denote this set by~$\F_{fin}$. On each direct summand corresponding to a pair~$\Pair$ from~$\F_{fin}$, we interpret an element of the group~$A$ by an automorphism in the same way as in the previous section, with the only difference that indecomposable summands from~$A_{\Pair}$ should be used instead of indecomposable summands from~$B$. Two such automorphisms are equivalent (i.\,e. encode the same group element) if they differ by an automorphism that is identical on~$A_{\Pair}$: $$ f_1 \sim_{\Pair} f_2 \define \exists f \big( \forall \eps \in \Pair\ f(\eps) = \eps \land f_1 = f_2 f \big). $$ The remaining part of the proof of expressibility of the second order logic is completely similar to that in the paper~\cite{My1}. Specifically, each sentence~$\phi$ of the bounded second order logic of the group~$A$ has a corresponding sentence~$\psi$ in the first order logic of the group~$\Aut A$ which is constructed by a certain algorithm. In this algorithm, all the object variables are replaced with the encoding automorphisms and all the $k$-ary predicates are replaced with $k$ automorphisms~$f_1, \dots, f_k$ which encode elements on each direct summand~$A_{\Pair}, \Pair \in \F_{fin}$. A~tuple~$(x_1, \dots, x_k)$ belongs to this predicate whenever there is a direct summand~$A_{\Pair}$ on which the automorphism~$f_i$ encodes the element~$x_i$, for each $i = 1, \dots, k$.
This algorithm is described in detail in the papers~\cite{Abelian} and~\cite{My1}.
\section{Introduction \label{sec1}} The ongoing search for new physics (NP) is mostly inspired by the shortcomings of the SM in addressing some of the fundamental questions in modern particle physics, such as the hierarchy problem, the flavor patterns in the fermionic sector and dark matter. Some of these unresolved issues may be closely related and may have TeV-scale origins, thus inspiring the search for TeV-scale NP, both theoretically and experimentally. Indeed, two seemingly unrelated and interesting measurements by the ATLAS \cite{ATLAS1,ATLAS2} and CMS \cite{CMS1,CMS2} collaborations at CERN have recently been reported: \begin{enumerate} \item A possible $(2-4) \sigma$ (local) excess in the diphoton invariant mass distribution around 750 GeV, corresponding to a signal cross-section roughly in the range $\sigma(pp \to \gamma \gamma) \sim 3-13$ fb $(1\sigma)$, see e.g., \cite{1512.05777,1601.04751,1605.09401}. The interpretation of this excess slightly prefers a spin-0 resonance, produced via gluon-fusion and having a total width ranging from sub-GeV to 45 GeV, with a more significant signal obtained in the ATLAS analysis for a scalar with a total width $\Gamma \sim 45~{\rm GeV}$ \cite{ATLAS1}. \item A possible $(1-2.5) \sigma$ excess in the measurement of the LFV decay $h \to \tau \mu$ of the 125 GeV light Higgs. In particular, the CMS collaboration finds $BR(h \to \tau \mu) = 0.84\%^{+0.39\%}_{-0.37\%}$ \cite{CMS2}, while the ATLAS collaboration finds $BR(h \to \tau \mu) = (0.53 \pm 0.51)\%$ \cite{ATLAS2}. \end{enumerate} Whether or not these two measurements are confirmed, they emphasize the importance of the current efforts in the search for NP, since they provide an interesting manifestation/example of the exciting possibility that the building blocks of new TeV-scale physics may have rather non-conventional properties, potentially with important repercussions for both the flavor and hierarchy problems. For example, the new heavy scalar particle, $S$, responsible for the 750 GeV $\gamma \gamma$ excess, should have a rather narrow width and suppressed decay rates into ``conventional'' channels such as $S \to WW,~ZZ,~t \bar t$, for which no excess signal has been observed within the currently available sensitivity of the corresponding LHC searches. In addition, such a heavy scalar $S$ is most likely related to the light 125 GeV scalar state and, therefore, might also be involved in flavor changing (FC) transitions in the fermionic sector. Such properties of the would-be new 750 GeV resonating particle are, therefore, very challenging to accommodate in models beyond the SM, in particular, in supersymmetric models or in models that involve extra space-time dimensions, which seem to have a more fundamental origin and are, therefore, likely linked to physics at higher energy scales. Nonetheless, we will show in this paper that a certain class of low-energy effective 2HDM frameworks with a 4th generation of heavy chiral fermions may be interesting candidates for such ``exotic'' TeV-scale NP. In particular, since no evidence for such fundamentally structured theories has yet been seen, a frequently adopted phenomenological approach in studies of NP is to construct TeV-scale models which require a UV completion and may, thus, be viewed as low energy effective frameworks for the underlying dynamics.
Such models are useful as a guide for the exploration and model building of more fundamental theories, and they often include new heavy fermionic and scalar states with sub-TeV masses. One of the simplest variants of an effective low-energy NP candidate, which dates back to the 1980's \cite{SM4proc}, is the SM with an additional 4th generation of fermions; the so-called SM4 (for useful reviews see e.g., \cite{sher0}). Indeed, since three generations of chiral fermions have been observed in nature, it is natural to ask: why not four generations of chiral fermions? It is quite interesting that this simple extension of the SM may address some of the theoretical challenges in particle physics, such as: electroweak symmetry breaking (EWSB) and the hierarchy problem \cite{DEWSB}, the CP violation and the strength of the first-order phase transition needed to explain the origin of the matter--antimatter asymmetry in the universe \cite{baryo-ref,fok}, and flavor physics \cite{flavor}. As is well known, the SM4 (i.e., with four generations of fermions and one Higgs doublet) is now excluded, since it cannot accommodate the measured SM-like properties of the 125 GeV scalar, see e.g., \cite{SM4-Higgs-bound,Lenz}, primarily due to an ${\cal O}(10)$ enhancement in the gluon-fusion light Higgs production mechanism from diagrams with $t^\prime$ and $b^\prime$ in the loops \cite{Kribs_EWPT}; see, however, W.-S. Hou in \cite{1606.03732}. This fact, along with the rather stringent direct limits on the masses of such heavy quarks (to be discussed later), has led to a common belief that generic extensions of the SM with heavy chiral 4th generation fermions $t^\prime,b^\prime,\nu^\prime,\tau^\prime$ are excluded. However, as was suggested by us a few years ago \cite{ourpaper1} and will be demonstrated again here, this is not the case when the heavy 4th generation chiral sector is embedded in frameworks with an extended Higgs sector (see also \cite{1312.4249}). Indeed, an extended Higgs sector in the context of 4th generation heavy fermions may come in handy for further addressing flavor problems \cite{ourpaper1} and the strength of the EW phase transition required for baryogenesis \cite{fok,1401.1827}. In particular, we will consider in this paper a version of a 2HDM introduced by us in \cite{ourpaper1} - the so-called 4G2HDM of type I, where a chiral 4th generation doublet of heavy fermions (quark and lepton) is added and is coupled {\it only} to one of the scalar doublets (the ``heavy'' doublet), while the SM 1st-3rd generation fermions are coupled only to the other doublet (the ``light'' doublet). We will show in this paper that this 4G2HDM is a well motivated and valid low-energy model, which is compatible with the 125 GeV signals (see also \cite{our125}), with precision electroweak data (PEWD) and with the existing direct bounds on the heavy fermions, and at the same time can also accommodate the recent indications for a new 750 GeV scalar resonance in the $\gamma \gamma$ channel. As was shown in \cite{ourpaper1}, the price to pay when adding another heavy SM-like chiral fermion doublet is that such constructions possess a nearby threshold/cutoff at the several-TeV scale, which is manifest (as Landau poles) in the evolution of the Yukawa and Higgs potential couplings \cite{ourpaper1,ourhybrid}.
Indeed, the large Yukawa couplings of the heavy chiral fermions can be thought of as a reflection of an underlying TeV-scale strong dynamics, so that the 4G2HDM framework should be viewed as a low energy (i.e., sub-TeV) effective model of an underlying strongly interacting sector. In particular, if the new heavy chiral fermions are viewed as the agents of EWSB (and are, therefore, linked to strong dynamics at the nearby TeV-scale, see e.g., \cite{DEWSB-heavyF,luty}), then more Higgs particles, which may be composites of these 4th generation fermions, are expected in the sub-TeV regime.$^{[1]}$\footnotetext[1]{Early attempts in this direction investigated the possibility of using the top-quark as the agent of dynamical EWSB via top-condensation \cite{top-con}. These models, however, fail to reproduce the observed value of the top-quark mass. Moreover, as opposed to the case of condensates of the heavy 4th generation fermions, where the typical cutoff for the new strong interactions is of ${\cal O}(1)$ TeV, the top-condensate models require a corresponding cutoff many orders of magnitude larger than $m_t$, i.e., of ${\cal O}(10^{17})$ GeV, thus resulting in a severely fine-tuned picture of dynamical EWSB.} In such scenarios the resulting low-energy effective theory may contain more than a single composite Higgs field \cite{ourpaper1,luty,DEWSB-multiH} and may thus resemble a two (or more) Higgs doublet framework (for other related studies of the phenomenology of multi-Higgs 4th generation models see e.g., \cite{4G2HDM-others}). The purpose of this work is to revisit the 4G2HDM of \cite{ourpaper1}, studying its compatibility with the updated measurements of the 125 GeV light Higgs signals and with PEWD. We will also confront our model with the 750 GeV $\gamma \gamma$ excess and study its compatibility with a sub-percent branching ratio of the light Higgs in the FC decay channel $h \to \tau \mu$. Indeed, many interesting and exotic constructions beyond the SM have been suggested as possible explanations of the 750 GeV $\gamma \gamma$ excess (too many to be cited here), in most cases involving new degrees of freedom beyond just the 750 GeV resonating particle. In particular, the relevance of 2HDM frameworks to the 750 GeV $\gamma \gamma$ excess has been intensively studied in the past several months, where it was shown that the simplest 2HDM extension of the SM, in which no additional heavy degrees of freedom are added (i.e., beyond the extended scalar sector), cannot accommodate the necessary enhancement in $\sigma(pp \to H(750) \to \gamma \gamma)$, see e.g., \cite{gilad}. Consequently, extended 2HDM models with TeV-scale vector-like (VL) fermions have been suggested for addressing the 750 GeV resonance signal \cite{2HDMVLQ}. The upshot of these studies is that the needed enhancement in the 1-loop production and decay channels $gg \to H(750)$ and $H(750) \to \gamma \gamma$ requires several copies of VL fermions and/or VL fermions with charges appreciably larger than those of the SM fermions, unless their Yukawa couplings are much larger than one. The 4G2HDM considered in this work is, therefore, conceptually simpler, relying on new heavy fermionic degrees of freedom with properties similar to the SM fermions in a model that already exists in the literature. The paper is organized as follows: in section 2 we describe the type I 4G2HDM and we lay out the physical parameters that are used in the numerical analysis.
In section 3 we present the results of our parameter scans against the 125 GeV Higgs signals and PEWD, in section 4 we confront the model with the 750 GeV $\gamma \gamma$ resonance, in section 5 we discuss the phenomenological consequences and in section 6 we summarize. \section{The 4G2HDM: a 2HDM with 4th generation fermions \label{sec2}} Motivated by the idea that TeV-scale scalar degrees of freedom may emerge as composites associated with heavy fermions, we assume that the low-energy (sub-TeV) effective framework is parameterized by a 2HDM with a chiral SM-like 4th generation of heavy fermions. Specifically, the model is constructed following \cite{ourpaper1}, such that one of the Higgs fields ($\phi_h$ - the ``heavier" field) couples only to the new heavy 4th generation fermionic fields, while the second Higgs field ($\phi_\ell$ - the ``lighter" field) is responsible for the mass generation of all other (lighter) fermions (i.e., the 1st-3rd generation SM fermions). In this model, named in \cite{ourpaper1} the 4G2HDM of type I (here we will refer to it simply as the 4G2HDM), the Yukawa interaction Lagrangian can be realized in terms of a $Z_2$-symmetry under which the fields transform as follows: \begin{eqnarray} \Phi_{\ell}\to-\Phi_{\ell},~ \Phi_{h}\to+\Phi_{h},~ F_{L}\to+F_{L}~, f_{R}\to-f_{R}\;(f={\rm SM~fermions}),~f^\prime_{R}\to +f^\prime_{R}\;(f^\prime={\rm 4th~gen.~fermions})~, \label{eq:z2} \end{eqnarray} where $F_L$ and $f_{R},f_R^\prime$ are the SU(2) fermion (quark or lepton) doublets and singlets, respectively, and $\Phi_{\ell,h}$ are the two Higgs doublets $\Phi_i =\left( \phi^{+}_i,\frac{v_i+\phi^{0}_i}{\sqrt{2}} \right)$, $i=\ell,h$. The Yukawa Lagrangian that respects the above $Z_2$-symmetry is: \begin{eqnarray} \mathcal{L}_{Y}= -\bar{F}_{L} \left( \Phi_{\ell} Y_d^f \cdot \left( I-{\cal I} \right) + \Phi_{h}Y_d^f \cdot {\cal I} \right) f_{d,R} -\bar{F}_{L} \left( \tilde\Phi_{\ell} Y_u^f \cdot \left( I - {\cal I} \right) + \Phi_{h} Y_u^f \cdot {\cal I} \right) f_{u,R} + h.c.\mbox{ ,} \label{eq:LY4G} \end{eqnarray} where $f_{u,R}$ and $f_{d,R}$ are the up and down-type SU(2) fermion singlets (quark or lepton of all four generations), $I$ is the identity matrix and ${\cal I}$ is the diagonal $4\times4$ matrix ${\cal I} \equiv {\rm diag}\left(0,0,0,1\right)$. The scalar sector contains five massive states: a charged scalar $H^+$, a CP-odd state $A$ and two CP-even scalars $h,H$, so that $h$ is the lighter one, corresponding to the observed 125 GeV Higgs boson. These physical states are related to the components of the two SU(2) scalar doublets via: \begin{eqnarray} H= s_\alpha {\rm Re} \left( \phi_h^0 \right) + c_\alpha {\rm Re}\left( \phi_\ell^0 \right) ~&,&~ A= s_\beta {\rm Im} \left( \phi_\ell^0 \right) - c_\beta {\rm Im}\left( \phi_h^0 \right) ~, \nonumber \\ h= c_\alpha {\rm Re}\left(\phi_h^0\right) - s_\alpha {\rm Re}\left(\phi_\ell^0\right) ~&,&~ H^+= s_\beta \phi_\ell^+ - c_\beta \phi_h^+ ~, \label{Higgsangles} \end{eqnarray} where $s_\alpha(c_\alpha)=\sin\alpha(\cos\alpha)$, $\alpha$ being the Higgs mixing angle in the CP-even sector and $s_\beta(c_\beta)=\sin\beta(\cos\beta)$, where $\tan\beta \equiv v_h/v_\ell$ is the ratio between the VEVs of the heavy and light Higgs fields.
The Yukawa Higgs-quark-quark interactions in the 4G2HDM are (similar terms can be written for the leptons) \cite{ourpaper1}: \begin{eqnarray} {\cal L}(h q_i q_j) &=& \frac{g}{m_W \sin 2\beta} \bar q_i \left\{ m_{q_i} s_\alpha s_\beta \delta_{ij} - \cos(\beta -\alpha) \cdot \left[ m_{q_i} \Sigma_{ij}^q R + m_{q_j} \Sigma_{ji}^{q \star} L \right] \right\} q_j h \label{Sff1}~, \\ {\cal L}(H q_i q_j) &=& \frac{g}{m_W \sin 2\beta} \bar q_i \left\{ -m_{q_i} c_\alpha s_\beta \delta_{ij} + \sin(\beta -\alpha)\cdot \left[ m_{q_i} \Sigma_{ij}^q R + m_{q_j} \Sigma_{ji}^{q \star} L \right] \right\} q_j H ~, \\ {\cal L}(A q_i q_j) &=& - i I_q \frac{g}{m_W \sin 2\beta} \bar q_i \left\{ m_{q_i} s_\beta^2 \gamma_5 \delta_{ij} - \left[ m_{q_i} \Sigma_{ij}^q R - m_{q_j} \Sigma_{ji}^{q \star} L \right] \right\} q_j A ~, \end{eqnarray} \begin{eqnarray} {\cal L}(H^+ u_i d_j) = \sqrt{2} \frac{g}{ m_W \sin 2\beta} \bar u_i \left\{ \left[ m_{d_j} s_\beta^2 \cdot V_{u_id_j} - m_{d_k} V_{ik} \Sigma^{d}_{kj} \right] R + \left[ -m_{u_i} s_\beta^2 \cdot V_{u_id_j} + m_{u_k} \Sigma^{u \star}_{ki} V_{kj} \right] L \right\} d_j H^+ \label{Sff2}~, \end{eqnarray} where $V$ is the $4 \times 4$ CKM matrix, $q=d$ or $u$ for down or up-quarks with $I_d=-1$ and $I_u=+1$, respectively, and $R(L)=\frac{1}{2}\left(1+(-)\gamma_5\right)$. Also, $\Sigma^d$ and $\Sigma^u$ are new mixing matrices in which all FCNC effects of the 4G2HDM are encoded. They are obtained after diagonalizing the quark mass matrices and, therefore, depend on the rotation (unitary) matrices of the right-handed down and up-quarks $D_R$ and $U_R$, respectively. In particular, for ${\cal I} \equiv {\rm diag}\left(0,0,0,1\right)$ in Eq.~\ref{eq:LY4G}, we have (see \cite{ourpaper1}):$^{[2]}$\footnotetext[2]{Note that this is in contrast to ``standard" frameworks such as the SM and the 2HDM's of types I and II, where the right-handed mixing matrices $U_R$ and $D_R$ are non-physical, being ``rotated away" in the diagonalization procedure of the quark masses.} \begin{eqnarray} \Sigma_{ij}^d = D_{R,4i}^\star D_{R,4j} ~,~ \Sigma_{ij}^u =U_{R,4i}^\star U_{R,4j} ~. \label{sigma} \end{eqnarray} The Yukawa structure and couplings defined by Eqs.~\ref{eq:LY4G}-\ref{sigma} are assumed to be replicated in the leptonic sector, see \cite{ourg-2}. In the following sections \ref{sec3} and \ref{sec4}, for illustrative purposes (and without loss of generality), we will set $\Sigma^{d,u} \to {\rm diag}\left(0,0,0,1\right)$ in both the quark and lepton sectors, so that FCNC effects (in particular, between the 4th generation fermions and the SM fermions) are ``turned off". In fact, from the phenomenological point of view, it is sufficient to assume that $\Sigma^{u}_{34,43} \to 0$ (i.e., forbidding the decay $t^\prime \to t h$) and $V_{i4,4i} \to 0$ ($i=1,2,3$, thus forbidding the decays $t^\prime \to d_i W$ and $b^\prime \to u_i W$ with $d_i=d,s,b$ and $u_i=u,c,t$) in order to accommodate relatively light $t^\prime$ and $b^\prime$ with masses as low as 350 GeV, since the existing stringent exclusion limits of $m_{t^\prime},m_{b^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle>}{\sim}$}\: 700$ GeV are based on searches that assume 100\% branching ratios of the 4th generation quarks into one of the channels: $t^\prime \to th,tZ,d_i W$ and $b^\prime \to Zb,u_iW$ \cite{pdg,1509.04261}.
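As a concrete numerical illustration of the flavor structure in Eq.~\ref{sigma} (a minimal sketch for illustration only; the $3-4$ right-handed mixing angle $\theta$ below is an assumed input, not a model prediction), one can construct $\Sigma^d$ from a unitary rotation $D_R$ and verify that it is hermitian, has unit trace and reduces to ${\rm diag}(0,0,0,1)$ when the mixing is switched off: \begin{verbatim}
# Sketch: build Sigma^d_{ij} = D*_{R,4i} D_{R,4j} (Eq. (sigma)) from an
# assumed 4x4 unitary rotation D_R with a small b-b' right-handed mixing.
import numpy as np

theta = 0.05                          # assumed (illustrative) mixing angle
DR = np.eye(4, dtype=complex)         # rotation of (d, s, b, b')_R
DR[2, 2] = DR[3, 3] = np.cos(theta)
DR[2, 3], DR[3, 2] = np.sin(theta), -np.sin(theta)

Sigma_d = np.outer(DR[3, :].conj(), DR[3, :])    # Sigma^d_{ij}

assert np.allclose(Sigma_d, Sigma_d.conj().T)    # hermitian
assert np.isclose(np.trace(Sigma_d).real, 1.0)   # rank-1 projector (unitarity)
print(np.round(Sigma_d.real, 4))   # theta -> 0 gives diag(0, 0, 0, 1)
\end{verbatim} The small off-diagonal entries $\Sigma^d_{34}=\Sigma^{d\star}_{43}\sim\theta$ generated in this way are precisely the FCNC couplings that drive the FC heavy-quark decays discussed next.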
We will, therefore, assume that the dominant $t^\prime$ and $b^\prime$ decays are into one of the FC channels $t^\prime \to u_ih$ and $b^\prime \to d_i h$ ($u_i=u,c$ and $d_i=d,s,b$), due to small FCNC entries in $\Sigma^{u,d}$ (which have no effect on the results presented in sections \ref{sec3} and \ref{sec4}), in which case small off-diagonal CKM entries $V_{14,41}$ and/or $V_{24,42}$ are also allowed as long as $BR(t^\prime \to d_i W),~BR(b^\prime \to u_iW) \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 0.5$ \cite{1509.04261}. Such flavor structures may have interesting phenomenological implications, as will be discussed in section \ref{sec5}. The 2HDM scalar sector is parameterized by seven free parameters (after minimization of the potential), which, in the so-called ``physical basis", can be chosen as the four physical Higgs masses ($m_h,~m_H,~m_A,~m_{H^+}$), the two angles $\beta$ and $\alpha$ and one parameter from the scalar potential, which is needed in order to specify the scalar couplings, in particular, $hH^+H^-$ (which enters the 1-loop $h \to \gamma \gamma$ decay), $HH^+H^-$ (which enters the 1-loop $H \to \gamma \gamma$ decay) and $Hhh$ (required for the decay $H \to hh$). In the physical basis, these scalar couplings can be written at tree-level as (see e.g., \cite{0408364}): \begin{eqnarray} \lambda_{Hhh}=-\frac{\cos(\alpha-\beta)}{2 v \sin 2\beta} \left[ \sin 2 \alpha \left(m_h^2+2m_H^2 \right) - \left( 3 \sin 2 \alpha - \sin 2 \beta \right) \frac{m_{\ell h}^2}{s_\beta c_\beta} \right] \label{lam1} ~, \end{eqnarray} \begin{eqnarray} \lambda_{hH^+H^-}=-\frac{1}{2 v \sin 2\beta} \left[ \left( \cos(\alpha-3\beta)+3 \cos(\alpha+\beta) \right) m_h^2 - 4 \sin 2 \beta \sin(\alpha-\beta) m_{H^\pm}^2 - 4 \cos(\alpha+\beta) \frac{m_{\ell h}^2}{s_\beta c_\beta} \right] \label{lam2} ~, \end{eqnarray} \begin{eqnarray} \lambda_{HH^+H^-}=-\frac{1}{2 v \sin 2\beta} \left[ \left( \sin(\alpha-3\beta)+3 \sin(\alpha+\beta) \right) m_H^2 + 4 \sin 2 \beta \cos(\alpha-\beta) m_{H^\pm}^2 - 4 \sin(\alpha+\beta) \frac{m_{\ell h}^2}{s_\beta c_\beta} \right] \label{lam3} ~, \end{eqnarray} where $m_{\ell h}^2$ is a mass-like term, $m_{\ell h}^2 \Phi_\ell^\dagger \Phi_h + h.c.$, which softly breaks the above $Z_2$-symmetry (i.e., $\Phi_{\ell}\to-\Phi_{\ell},~ \Phi_{h}\to+\Phi_{h}$), and which can be used to specify the above tree-level scalar couplings. However, since the working assumption of the 4G2HDM is that the scalar sector may be strongly interacting at the nearby few TeV scale, the scalar potential is expected to be subject to significant renormalization and threshold effects. Thus, the above scalar couplings are expected to deviate from their tree-level values, depending on the details of the UV completion and on the masses of the heavy degrees of freedom of this model, see e.g., \cite{0408364,loop-cor}. As an example, consider the 1-loop corrections to the $Hhh$ coupling $\lambda_{Hhh}$, for $|\alpha| \to \pi/2$, in which case there is no mixing between the light and heavy Higgs fields (see Eq.~\ref{Higgsangles}), as required in order to accommodate the 750 GeV $\gamma \gamma$ excess in the 4G2HDM (see section \ref{sec4}).
In this limit, the Yukawa couplings of the 4th generation fermions to the light Higgs state $h$ (i.e., $t^\prime t^\prime h$) vanish (see Eq.~\ref{Sff1} and Table \ref{tab3}) and we find that the dominant effect arises from the 1-loop triangle diagram with the charged Higgs exchange in the loop, giving a ``renormalized" $Hhh$ coupling $\bar\lambda_{Hhh} \equiv a_{Hhh} \lambda_{Hhh}$, with: \begin{eqnarray} a_{Hhh} \approx 1+ \frac{m_{\ell h}^4}{m_H^2 v^2} \frac{\left( 1-2c_\beta^2 \frac{m_{H^+}^2}{m_{\ell h}^2} \right) \left(1+c_\beta^2 \frac{m_{H}^2}{m_{\ell h}^2} - 2s_\beta^2 \frac{m_{H^+}^2}{m_{\ell h}^2} \right)}{2 \pi^2 (\sin2\beta)^2} I\left(m_h,m_H,m_{H^+} \right)~, \end{eqnarray} where $I\left(m_h,m_H,m_{H^+} \right)$ is the charged Higgs triangle loop integral, given by: \begin{eqnarray} I\left(m_h,m_H,m_{H^+} \right) = - \int_0^1 dx \int_0^{1-x} dy ~ \frac{1}{ (x+y) (x+y-1) m_h^2 - xy m_H^2 + m_{H^+}^2 } ~. \end{eqnarray} In particular, one roughly finds $|a_{Hhh}| \in \left[0, 2\right]$ when $m_{H^+} \in \left[500~{\rm GeV}, 1~{\rm TeV} \right]$ and with $m_H =750$ GeV, $m_h =125$ GeV and $m_{\ell h} \sim {\cal O}(1~{\rm TeV})$. For example, $a_{Hhh} \sim -0.15$ for $m_{H^+} = m_H = 750$ GeV and $m_{\ell h} =1.2$ TeV. In what follows we will, therefore, define the ``renormalized" scalar couplings as: $\bar\lambda_i \equiv a_i \lambda_i$, where $\lambda_i$ ($i=Hhh,~ hH^+H^-,~HH^+H^-$) are the corresponding tree-level couplings in Eqs.~\ref{lam1}-\ref{lam3}, and $a_i$ will be treated as free parameters in the fit, varied in the range $|a_i| \in \left[0, 2\right]$. \section{The 125 GeV Higgs signals and PEWD \label{sec3}} The measured signals of the 125 GeV Higgs particle, which in the 4G2HDM is the light Higgs $h$, and PEWD impose stringent constraints on the free parameter space of the 4G2HDM. For the 125 GeV Higgs signals we use the measured values of the ``signal strength" parameters, which are defined as the ratio between the measured rates and their SM expectation. In particular, for a specific production and decay channel $i \to h \to f$, the signal strength is defined as: \begin{eqnarray} \mu_{i}^f \equiv \mu_i \cdot \mu^f ~, \end{eqnarray} with \begin{eqnarray} \mu_i = \frac{\sigma(i \to h)}{\sigma(i \to h)_{SM}} = k_i^2~,~ \mu^f = \frac{BR(h \to f)}{BR(h \to f)_{SM}} = \frac{k_f^2}{R^T} ~, \end{eqnarray} where $k_j$ is the 4G2HDM coupling involved in the $j \to h$ or $h \to j$ production or decay processes, normalized by its SM value, and $R^T$ is the ratio between the total width of $h$ in the 4G2HDM and the total width of the SM 125 GeV Higgs. In particular, \begin{eqnarray} k_j \equiv \frac{g_j^{\rm 4G2HDM}}{g_j^{\rm SM}} ~,~ R^T \equiv \frac{\Gamma_{h_{4G2HDM}}^{Total}}{\Gamma_{h_{SM}}^{Total}}~, \end{eqnarray} where $g_j$ denotes the relevant coupling, so that $\mu_i^f = k_i^2 k_f^2 / R^T$. In Table \ref{tab1} we list the latest combined ATLAS and CMS six-parameter fit from Run 1 \cite{ATLAS-CMS-125res} of the measured values for $\mu_{gg}^{\gamma \gamma},~\mu_{gg}^{WW^\star}, ~\mu_{gg}^{ZZ^\star},~\mu_{gg}^{bb},~\mu_{gg}^{\tau \tau}$ and $\mu_V/\mu_{gg}$, where $\mu_V$ stands for Higgs production via vector-boson fusion (VBF) or in association with a vector-boson (VH).$^{[3]}$\footnotetext[3]{We neglect Higgs production via $pp \to tth$ which, although included in the fit, is 2-3 orders of magnitude smaller than the gluon-fusion channel}. We also write in Table \ref{tab1} the model predictions for the various signal strengths in terms of the normalized couplings defined above.
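As an aside to the renormalized coupling $a_{Hhh}$ introduced at the end of section \ref{sec2}, the Feynman-parameter integral and the resulting factor can be evaluated numerically as in the following minimal sketch (our own illustration, not part of the published analysis: $\tan\beta=0.5$ is an assumed input consistent with the allowed region found below, and the loop term is rendered dimensionless with an assumed normalization factor $m_{H^+}^2$, since the expression above is schematic; with these assumptions the quoted $a_{Hhh}\sim -0.15$ is reproduced): \begin{verbatim}
# Sketch: numerical evaluation of I(m_h, m_H, m_H+) and of a_Hhh.
import numpy as np
from scipy.integrate import dblquad

def I_triangle(mh, mH, mHp):
    # I = -int_0^1 dx int_0^{1-x} dy 1/[(x+y)(x+y-1)mh^2 - xy mH^2 + mHp^2]
    f = lambda y, x: -1.0 / ((x + y) * (x + y - 1.0) * mh**2
                             - x * y * mH**2 + mHp**2)
    val, _ = dblquad(f, 0.0, 1.0, lambda x: 0.0, lambda x: 1.0 - x)
    return val

def a_Hhh(mh, mH, mHp, mlh, tanb, v=246.0):
    cb2 = 1.0 / (1.0 + tanb**2)            # cos^2(beta)
    sb2 = tanb**2 * cb2                    # sin^2(beta)
    s2b2 = 4.0 * sb2 * cb2                 # sin^2(2 beta)
    num = (1.0 - 2.0 * cb2 * mHp**2 / mlh**2) * \
          (1.0 + cb2 * mH**2 / mlh**2 - 2.0 * sb2 * mHp**2 / mlh**2)
    # mHp**2 below is an ASSUMED normalization making the loop term
    # dimensionless (the formula in the text is schematic).
    return 1.0 + (mlh**4 / (mH**2 * v**2)) * num / (2.0 * np.pi**2 * s2b2) \
               * mHp**2 * I_triangle(mh, mH, mHp)

# Benchmark quoted in the text: m_H+ = m_H = 750 GeV, m_lh = 1.2 TeV
print(a_Hhh(mh=125.0, mH=750.0, mHp=750.0, mlh=1200.0, tanb=0.5))  # ~ -0.15
\end{verbatim}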
\begin{table}[htb] \begin{center} \begin{tabular}{c||c|c|} & measured value & model prediction / couplings \\ \hline \hline $\mu_{gg}^{\gamma \gamma} $ & $1.13^{+0.24}_{- 0.21}$ & $k_g^2 k_\gamma^2 /R^T $ \\ \hline $\mu_{gg}^{ZZ^\star} $ & $1.29^{+0.29}_{- 0.25}$ & $k_g^2 k_V^2 /R^T$ \\ \hline $\mu_{gg}^{WW^\star} $ & $1.08^{+0.22}_{- 0.19}$ & $k_g^2 k_V^2 /R^T$ \\ \hline $\mu_{gg}^{bb} $ & $0.65^{+0.37}_{- 0.28}$ & $k_g^2 k_b^2/R^T$ \\ \hline $\mu_{gg}^{\tau \tau} $ & $1.07^{+0.35}_{- 0.28}$ & $k_g^2 k_\tau^2 /R^T$ \\ \hline $\mu_V/\mu_{gg} $ & $1.06^{+0.35}_{- 0.27}$ & $k_V^2 / k_g^2$ \\ \hline \end{tabular} \caption{Measured values \cite{ATLAS-CMS-125res} and model predictions in terms of normalized couplings (see text) of the various production and decay channels for the 125 GeV Higgs, using the signal strength prescription. Note that while $k_V,k_b$ and $k_\tau$ are ratios of tree-level couplings, $k_g$ and $k_\gamma$ are the normalized (with respect to the SM) 1-loop 4G2HDM couplings $hgg$ and $h\gamma \gamma$, respectively, calculated using the formulae in \cite{HHG}. Also, in our 4G2HDM $k_W=k_Z=k_V$.} \label{tab1} \end{center} \end{table} For the PEWD constraints on the 4G2HDM, we update our study in \cite{ourpaper1}. In particular, the effects of any new physics can be divided into those which do and which do not couple directly to the ordinary SM fermions. For the former, the leading effect in the 4G2HDM comes from the decay $Z \to b \bar b$, which is mainly sensitive to the $H^+ t^\prime b$ and $W^+ t^\prime b$ couplings through one-loop exchanges of $H^+$ and $W^+$, as was analyzed in detail in \cite{ourpaper1}. These contributions to $Z \to b \bar b$ are, however, absent in the currently studied versions of the 4G2HDM, since our working assumption here is that $V_{t^\prime b} \to 0$ and $\Sigma^{d,u} \to {\rm diag}\left(0,0,0,1\right)$, so that the $H^+ t^\prime b$ and $W^+ t^\prime b$ vertices vanish or are negligibly small (see previous section). The effects which do not involve direct couplings to the ordinary fermions can be analyzed in the formalism of the oblique parameters S, T and U \cite{peskin}. The contributions of a 2HDM with a 4th generation of chiral fermions to the oblique parameters were studied in \cite{ourpaper1}. This includes the pure 1-loop Higgs exchanges in the gauge-boson 2-point functions and the 1-loop exchanges of $t^\prime$ and $b^\prime$ which shift the T parameter and which involve the new SM4-like diagonal coupling $W t^\prime b^\prime$ (here also the contributions involving the off-diagonal couplings $W t^\prime b$ and $W t b^\prime$ are absent since we assume $V_{t^\prime b},~ V_{t b^\prime} \to 0$, see also \cite{deltaT}). These are calculated with respect to the SM values and are bounded by a global fit to PEWD \cite{gfitter}: \begin{eqnarray} \Delta S &=& S - S_{SM} = 0.06 \pm 0.09 ~, \nonumber \\ \Delta T &=& T - T_{SM} = 0.1 \pm 0.07 \label{SandT}~, \end{eqnarray} with a correlation coefficient of $\rho = +0.91$. These values are obtained for $\Delta U=0$ (the $U$ parameter is often set to zero since it can be neglected in most new physics models and, in particular, in our 4G2HDM) and with the SM reference values $M_{H,{\rm ref}}=125$ GeV and $m_{t,{\rm ref}}=173$ GeV.
We, thus, consider below the constraints from the 2-dimensional ellipse in the $S-T$ plane which, for a given confidence level (CL), is defined by: \begin{widetext} \begin{eqnarray} \left( \begin{array}{c} S - S_{exp} \\ T - T_{exp} \end{array} \right)^T \left( \begin{array}{cc} \sigma_S^2 & \sigma_S \sigma_T \rho \\ \sigma_S \sigma_T \rho & \sigma_T^2 \end{array} \right)^{-1} \left( \begin{array}{c} S - S_{exp} \\ T - T_{exp} \end{array} \right) = - 2 \ln\left( 1 - CL \right) \label{ST2}~, \end{eqnarray} \end{widetext} where $S_{exp} = 0.06$ and $T_{exp} = 0.1$ are the best-fit (central) values, $\sigma_S = 0.09, \sigma_T = 0.07$ are the corresponding standard deviations, $\rho=0.91$ is the (strong) correlation factor between S and T, and the inverted matrix is the corresponding covariance matrix. We thus perform a random (``blind") scan of the relevant parameter space, imposing compatibility at 95\% CL of the 4G2HDM with the measured 125 GeV Higgs signals listed above and with the best-fit values of $S$ and $T$ using Eqs.~\ref{SandT} and \ref{ST2}. In particular, we fix $m_H=750$ GeV (for compatibility with the recent 750 GeV $\gamma \gamma$ signal, see next section) and scan the rest of the parameters over the following ranges: \begin{eqnarray} \alpha \in \left[ - \frac{\pi}{2},\frac{\pi}{2} \right] ~,~ \tan\beta \in \left[ 0.4 , 10 \right]~,~ a_i\in \left[ -2,2 \right] ~ (i=hH^+H^-,HH^+H^-,Hhh) ~, \nonumber \end{eqnarray} \begin{eqnarray} m_{\ell h}^2 \in \left[ - \left(2~ {\rm TeV} \right)^2,\left(2~ {\rm TeV} \right)^2 \right] ~,~ m_{A,H^+} \in \left[ 300~ {\rm GeV} ,1.5~ {\rm TeV} \right] ~, \nonumber \end{eqnarray} \begin{eqnarray} m_{t^\prime,b^\prime} \in \left[ 350~{\rm GeV} , 500~{\rm GeV} \right] ~,~ m_{\nu^\prime,\tau^\prime} \in \left[ 200~{\rm GeV} , 1200~{\rm GeV} \right] ~. \end{eqnarray} \begin{figure} \begin{center} \includegraphics[scale=0.27]{SandT_case1.eps} \includegraphics[scale=0.27]{alpha_vs_tb_case1.eps} \includegraphics[scale=0.27]{deltam_case1.eps} \includegraphics[scale=0.27]{signal_strength_case1.eps} \includegraphics[scale=0.27]{SandT_case3.eps} \includegraphics[scale=0.27]{alpha_vs_tb_case3.eps} \includegraphics[scale=0.27]{deltam_case3.eps} \includegraphics[scale=0.27]{signal_strength_case3.eps} \end{center} \caption{The distribution of the 4G2HDM parameter space that is compatible with the 125 GeV signals and PEWD. From left to right: in the $S -T$ plane (yellow, pink and green ellipses correspond to the 68\%, 95\% and 99\% CL allowed contours, respectively), in the $\tan\beta - \sin\alpha$ plane, in the $\Delta m_{\ell^\prime} - \Delta m_{q^\prime}$ plane, where $\Delta m_{\ell^\prime} \equiv m_{\nu^\prime} - m_{\tau^\prime}$ and $\Delta m_{q^\prime} \equiv m_{t^\prime} - m_{b^\prime}$, and the corresponding 125 GeV signal strengths (on the right). Case 1 in the upper row and case 2 in the lower row, see text. \label{fig1}} \end{figure} We find two types of possible 4G2HDM ``solutions": \begin{description} \item[case 1:] $\tan\beta \leq 0.5$, $\sin\alpha \to -1$ and $m_{\ell h}^2 > 0$. \item[case 2:] $\tan\beta \geq 2$, $\sin\alpha \sim 0.1 - 0.45$ and any $m_{\ell h}^2$ in the entire range scanned. \end{description} In both cases above, $m_A,~m_{H^+}$ and the 4th generation fermion masses can have values spanning the entire scan ranges.
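For completeness, the ellipse condition of Eq.~\ref{ST2} used as a filter in the above scan amounts to a simple $\chi^2$ test with two degrees of freedom ($-2\ln(1-{\rm CL}) \simeq 5.99$ at 95\% CL). A minimal numerical sketch of this test (our own illustration with the quoted central values, errors and correlation; the tested point is a hypothetical example) is: \begin{verbatim}
# Sketch: test whether a model point (S, T) passes the CL ellipse of
# Eq. (ST2), using the quoted best-fit values, errors and correlation.
import numpy as np

S_exp, T_exp = 0.06, 0.10
sig_S, sig_T, rho = 0.09, 0.07, 0.91

cov = np.array([[sig_S**2, rho * sig_S * sig_T],
                [rho * sig_S * sig_T, sig_T**2]])
cov_inv = np.linalg.inv(cov)

def passes_ST(S, T, CL=0.95):
    d = np.array([S - S_exp, T - T_exp])
    chi2 = d @ cov_inv @ d                    # 2 degrees of freedom
    return chi2 <= -2.0 * np.log(1.0 - CL)    # 5.99 at 95% CL

print(passes_ST(0.10, 0.15))   # hypothetical model point -> True
\end{verbatim}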
In Fig.~\ref{fig1} we plot the resulting distributions of the relevant parameter space in the $S -T$, $\tan\beta - \sin\alpha$ and $\Delta m_{\ell^\prime} - \Delta m_{q^\prime}$ planes, where $\Delta m_{\ell^\prime} \equiv m_{\nu^\prime} - m_{\tau^\prime}$ and $\Delta m_{q^\prime} \equiv m_{t^\prime} - m_{b^\prime}$. We also show in Fig.~\ref{fig1} the resulting predicted 125 GeV Higgs signal strengths for the two cases above, which, as can be seen, have different characteristics. We next discuss the compatibility of the above two 4G2HDM solutions with the recently observed 750 GeV $\gamma \gamma$ excess. \section{The 4G2HDM and the 750 GeV $\gamma \gamma$ resonance \label{sec4}} We search here for the portion of parameter space of the two 4G2HDM cases found in the previous section that survives once the 4G2HDM is also required to accommodate the 750 GeV $\gamma \gamma$ excess, which is interpreted here as the decay of one or both of the heavy neutral Higgs states (assumed to have masses $\sim 750$ GeV), i.e., $H \to \gamma \gamma$ and/or $A \to \gamma \gamma$. Given the exploratory nature of our study, we will simplify our analysis at this point, assuming that the scalar spectrum has the characteristics of the so-called decoupling limit (see e.g., \cite{decouplinglimit}). In particular, we assume that it is split into two typical scales: $m_{light} \sim 125$ GeV, corresponding to the observed light Higgs, and $m_{heavy} \sim 750$ GeV around which the three heavy Higgs masses lie, i.e., $m_H,m_A,m_{H^+} \sim 750$ GeV. Even though we find a wider range of allowed masses for the non-resonant heavy scalar states (i.e., for $m_A$ and $m_{H^+}$, see below) that can accommodate the 750 GeV signal, the choice $m_H,~m_A,m_{H^+} \sim 750$ GeV will suffice for conveying our point: that the 750 GeV resonance in the $\gamma\gamma$ channel can be accommodated by one of the heavy scalars of the 4G2HDM without any conflict with other existing relevant data. Indeed, if this measurement is eventually confirmed, it will be instructive to study the 4G2HDM within a wider range of the relevant parameter space. We, thus, re-scan the 4G2HDM parameter space corresponding to the two 4G2HDM cases found in the previous section, where now $m_H$, $m_A$ and $m_{H^+}$ are varied within a 30 GeV mass range around 750 GeV, i.e., $m_{H,A,H^+} \in 750 \pm 30$ GeV. The scan is performed with the following additional ``filters"/requirements (i.e., in addition to the requirement of compatibility with PEWD and with the measured 125 GeV Higgs signals, as outlined in the previous section): \begin{itemize} \item Reproducing the 750 GeV $\gamma \gamma$ excess within the range $3 ~{\rm fb} < \sigma(pp \to H/A \to \gamma \gamma) < 13~{\rm fb}$. We find that the (by far) dominant $H$ and/or $A$ production mechanism is the gluon-fusion one $gg \to H/A$, so that all the relevant cross-sections $\sigma(pp \to H/A \to f)$ are calculated in the narrow width approximation via: \begin{eqnarray} \sigma(pp \to H/A \to f) = \frac{C_{gg}}{s m_{H/A}} \Gamma(H/A \to gg) BR(H/A\to f) ~, \end{eqnarray} where $\sqrt{s} =8$ or $13$ TeV and $C_{gg}$ is the gluon luminosity: \begin{eqnarray} C_{gg} = \frac{\pi^2}{8} \int_{m_{H/A}^2/s}^1 \frac{dx}{x} g(x) g\left( \frac{m_{H/A}^2}{sx} \right) ~, \end{eqnarray} giving $C_{gg} \sim 2140(175)$ at $\sqrt{s} =13(8)$ TeV, see \cite{gluonPDF}. \item The resonating scalar which produces the 750 GeV $\gamma \gamma$ excess is required to have a width smaller than 45 GeV, i.e., $\Gamma_{H/A} < 45$ GeV.
\item We impose the existing experimental bounds on the production and decays of the heavy neutral scalars $H$ and $A$, as obtained at the 8 and 13 TeV LHC runs (in particular when applied to $m_H,m_A \sim 750$ GeV), in all other channels which are relevant to our study: $pp \to W^+W^-,~ZZ,~t \bar t,~\tau \tau,~b \bar b,~hh,~hZ$. In particular, we use the 95\% CL bounds in Table \ref{tab2} quoted in \cite{1605.09401}. \end{itemize} \begin{table}[htb] \begin{center} \begin{tabular}{c||c|c|} final state & $\sigma$ at $\sqrt{s}=8$ TeV & $\sigma$ at $\sqrt{s}=13$ TeV \\ \hline \hline $pp \to H \to W^+ W^- $ & $< 40 $ fb & $< 300$ fb\\ \hline $pp \to H \to ZZ $ & $< 12 $ fb & $< 200$ fb\\ \hline $pp \to H \to hh $ & $< 39 $ fb & $< 120$ fb\\ \hline $pp \to A \to hZ $ & $< 19 $ fb & $< 116$ fb\\ \hline $pp \to H/A \to t \bar t $ & $< 450 $ fb & \\ \hline $pp \to H/A \to b \bar b $ & $< 1 $ pb & \\ \hline $pp \to H/A \to j j $ & $< 2.5 $ pb & \\ \hline $pp \to H/A \to \tau \tau $ & $< 12 $ fb & $< 60$ fb \\ \hline \end{tabular} \caption{Upper bounds at 95\% CL on $\sigma(pp \to S \to f)$ for various final states $f$, produced through a narrow resonance with $m_S \sim 750$ GeV and $\Gamma_S/m_S \sim {\cal O}(10^{-2})$, as applied to our scan with $S = H,A$. The bound on $\sigma( pp \to H/A \to jj)$ is relevant for $j = {\rm gluon}$. Table taken from \cite{1605.09401}.} \label{tab2} \end{center} \end{table} Applying the above filters, we find that: \begin{enumerate} \item Only the CP-even scalar state $H$ (with $m_H =750$ GeV) can accommodate the 750 GeV $\gamma \gamma$ resonance, since $\sigma(pp \to A \to \gamma \gamma) \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: {\cal O}(0.01)$ fb, which is 2-3 orders of magnitude smaller than the measured $\gamma \gamma$ excess, see also Table \ref{tab4}. \item Only a ``shrunken" version of the 4G2HDM case 1 survives out of the two cases that were found to be compatible with PEWD and the 125 GeV light Higgs signals. In particular, the surviving 4G2HDM models have (see Fig.~\ref{fig2}): $\tan\beta \leq 0.5$, $\alpha \to -\pi/2$ and $m_{\ell h} > 600$ GeV, having some correlation with the renormalization factors of the scalar couplings $a_i = \bar\lambda_i/\lambda_i$, $i=Hhh,~ hH^+H^-,~HH^+H^-$. \item The resulting heavy fermion mass ranges are narrowed to: $350 ~ {\rm GeV} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: m_{t^\prime},m_{b^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 390 ~ {\rm GeV}$, where the lower limit is from direct searches (see section \ref{sec2}), and $900 ~ {\rm GeV} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: m_{\nu^\prime},m_{\tau^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 1200 ~ {\rm GeV}$, where the upper limit is a rough estimate of the perturbativity bound on heavy chiral leptons. \end{enumerate} In Fig.~\ref{fig2} we show three scatter plots of the resulting 4G2HDM parameter space, corresponding to the mass spectrum of the heavy fermions, the correlation between the soft breaking mass parameter $m_{\ell h}$ and the renormalization factors of the scalar couplings $a_i = \bar\lambda_i/\lambda_i$, $i=Hhh,~ hH^+H^-,~HH^+H^-$, and the resulting allowed ranges of the 125 GeV light Higgs signal strengths in all the measured channels.
We see that, while $|m_{t^\prime} - m_{b^\prime}| \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 30$ GeV, the mass splitting of the heavy leptons is typically $|m_{\nu^\prime} - m_{\tau^\prime}| \:\raisebox{-0.5ex}{$\stackrel{\textstyle>}{\sim}$}\: m_W$. We also see that smaller values of $m_{\ell h}$ typically require larger values of the renormalization factors of the scalar vertices $a_i$, e.g., $a_{Hhh} \sim 1$ for $m_{\ell h} \sim 700$ GeV. \begin{figure} \begin{center} \includegraphics[scale=0.37]{Fig2_deltam.eps} \includegraphics[scale=0.37]{fig2_ai_vs_mlh.eps} \includegraphics[scale=0.37]{fig2_125_signals.eps} \end{center} \caption{Scatter plots of the 4G2HDM parameter space that is compatible with the 125 GeV signals, with PEWD, with $\sigma(pp \to H\to \gamma \gamma) = 3-13$ fb, with $\Gamma_H \leq 45$ GeV and with all 8 and 13 TeV LHC bounds on the cross-section $\sigma(pp \to H/A \to f)$ in all final states $f$ relevant to the $H$ and $A$ decays, see Table \ref{tab2}. The scatter plots are given for the mass splitting spectrum of the heavy fermions (left), the correlation between the soft breaking mass parameter $m_{\ell h}$ and the renormalization factor of the scalar couplings $a_i = \bar\lambda_i/\lambda_i$, $i=Hhh,~ hH^+H^-,~HH^+H^-$ (middle), and the resulting allowed ranges of the 125 GeV light Higgs signal strengths in all the measured channels (right). \label{fig2}} \end{figure} It is interesting to note that the resulting mass spectrum of the heavy chiral quarks, which is required to accommodate the 750 GeV $\gamma \gamma$ resonance, is rather narrow and roughly centered around $m_H /2$, i.e., $m_{t^\prime},m_{b^\prime} \sim 350 - 390 ~ {\rm GeV}$. This may hint back to the possibility that the heavy scalars are composites primarily of the heavy chiral quarks, in which case the 4G2HDM might indeed be interpreted as a low energy effective framework for some TeV-scale strongly interacting theory. Such an effective low energy 2HDM, with features similar to the 4G2HDM discussed here, was introduced in \cite{ourhybrid}, where it was shown that, using the Nambu-Jona-Lasinio (NJL) mechanism \cite{Nambu}, it is possible to construct an effective sub-TeV 2HDM hybrid framework, in which the 125 GeV light Higgs is mostly a fundamental scalar, while the heavy Higgs states are components of a composite field of the form $\Phi_h \sim g_{t^\prime}^\star \langle \bar Q_L^{\prime c} (i \tau_2) t^{\prime c}_R \rangle + g_{b^\prime} \langle \bar Q_L^\prime b^\prime_R \rangle$, which is responsible for EW symmetry breaking and for the dynamical mass generation of the heavy quarks.$^{[4]}$\footnotetext[4]{Another interesting framework which entertains the idea that heavy chiral quarks may form the 750 GeV composite was recently suggested in \cite{1602.05539}.} \section{Phenomenology of the 4G2HDM \label{sec5}} Inspired by the indications of the 750 GeV $\gamma \gamma$ resonance and following the analysis of the previous section, we briefly consider here some of the distinct phenomenological consequences of the 4G2HDM with characteristics similar to those required to accommodate such a heavy scalar resonance. In particular, we will assume below that $\tan\beta \sim 0.5$ and $\sin\alpha \sim -1$, in which case the light 125 GeV Higgs of the 4G2HDM, $h$, does not couple to $f^\prime f^\prime$, while the heavy CP-even Higgs, $H$, does not couple to a pair of SM fermions (see Eqs.~\ref{Sff1}-\ref{Sff2} and Table \ref{tab3}).
Also, the 4th generation heavy fermions are assumed to have masses in the ranges $350 ~ {\rm GeV} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: m_{t^\prime},m_{b^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 400 ~ {\rm GeV}$ and $900 ~ {\rm GeV} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: m_{\nu^\prime},m_{\tau^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 1200 ~ {\rm GeV}$, and the dominant decay channels of the heavy quarks are $t^\prime \to uh$ ($u=u,c$) and $b^\prime \to dh$ ($d=d,s,b$), with corresponding branching ratios $ \:\raisebox{-0.5ex}{$\stackrel{\textstyle>}{\sim}$}\: 0.5$, due to small off-diagonal entries $\Sigma^u_{4i}$ ($i=1,2$) and/or $\Sigma^d_{4i}$ ($i=1,2,3$) (see Table \ref{tab3} and discussion in section \ref{sec2}). \begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|} \hline & \multicolumn{3}{c|}{Yukawa couplings in the 4G2HDM with $\sin\alpha \sim -1$} \\ \hline & $v \cdot y(\bar f f)$ & $v \cdot y(\bar f^\prime f^\prime)$ & $v \cdot y(\bar f_i f_{j})$ ($i, j = 1-4, i\neq j$) \\ \hline \hline $h$ & $-\frac{m_f}{\cos\beta}$ & 0 & $\frac{\Sigma^f_{ij}}{\cos\beta} (m_{f_i} R + m_{f_j} L)$ \\ \hline $H$ & 0 & $\frac{m_f}{\sin\beta}$ & $ \frac{\Sigma^f_{ij} }{\sin\beta} (m_{f_i} R + m_{f_j} L)$ \\ \hline $A$ & $-i I_f m_f \tan\beta$ & $i I_f m_{f^\prime} \cot\beta$ & $i I_f \frac{\Sigma^f_{ij}}{\sin\beta \cos\beta} (m_{f_i} R - m_{f_j} L)$ \\ \hline \end{tabular} \caption{Yukawa couplings of the neutral Higgs particles in the 4G2HDM with $\sin\alpha \to -1$ and assuming $\Sigma^f_{ij} \ll \Sigma^f_{44} =1$ for $ij \neq 44$, see section \ref{sec2}. In the first column $f$ is a SM fermion of the 1st-3rd generations, while in the second column $f^\prime$ stands for a 4th generation fermion. In the 3rd column $f_i$ corresponds to any fermion of the $i$th generation. Also, $I_f = 1(-1)$ for up(down) type fermions.} \label{tab3} \end{center} \end{table} In Table \ref{tab4} we list three benchmark points (BMP1, BMP2, BMP3) which have some distinct characteristics and which are compatible with PEWD, with the 125 GeV Higgs signals, with the 750 GeV $\gamma \gamma$ signal and with the LHC bounds on all relevant 750 GeV Higgs resonance channels $pp \to H/A \to f$ given in Table \ref{tab2}. For definiteness, we have generated the benchmark points for the case of $m_H = 750 $ GeV and $m_A,m_{H^+} \sim m_H \pm 50$ GeV, but the discussion below has a more general scope, i.e., with regard to some of the possible phenomenological signatures of the 4G2HDM associated with the TeV-scale heavy scalars of the model and independent of whether the 750 GeV $\gamma \gamma$ resonance is confirmed or not. The three benchmark points include cases where the 750 GeV Higgs total width ranges from a few GeV to $\sim 45$ GeV, with a resonance cross-section into $\gamma \gamma$ of 4-12 fb. They also correspond to cases where $BR(H/A \to \bar q^\prime q^\prime) \sim 1$ and $BR(H^+ \to t^\prime \bar b^\prime) \sim 1$.
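To give a feel for the partial widths implied by the $\gamma\gamma$ window of the previous section, the narrow-width formula can be inverted numerically as in the following minimal sketch (our own illustration; $C_{gg}$ and the unit conversion are the only inputs taken from the text, while the partial width and branching ratio are assumed test values, chosen with a gluonic width close to that of BMP3 below): \begin{verbatim}
# Sketch: sigma(pp -> H -> gamma gamma) in the narrow-width approximation,
# sigma = C_gg/(s*m_H) * Gamma(H->gg) * BR(H->gamma gamma), sqrt(s)=13 TeV.
GEV2_TO_FB = 3.894e11                        # 1 GeV^-2 = 3.894e11 fb
C_gg, sqrt_s, mH = 2140.0, 13000.0, 750.0    # values quoted in the text

def sigma_gamgam_fb(Gamma_gg, BR_gamgam):
    return C_gg / (sqrt_s**2 * mH) * Gamma_gg * BR_gamgam * GEV2_TO_FB

# Assumed test values: Gamma(H->gg) = 2.4 GeV, BR(gamma gamma) = 5e-4
print(sigma_gamgam_fb(2.4, 5.0e-4))   # ~ 8 fb, inside the 3-13 fb window
\end{verbatim} In other words, the measured excess requires $\Gamma(H \to gg) \cdot BR(H \to \gamma\gamma) \sim {\cal O}(1)$ MeV, which the benchmarks below comfortably provide.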
\begin{table}[htb] \begin{center} \begin{tabular}{|c||c|c|c|} \hline & BMP1 & BMP2 & BMP3 \\ \hline \hline $m_{t^\prime},m_{b^\prime}, m_{H^+}, m_A$ [GeV] & $352,382,709,780$ & $384,373,795,778$ & $368,369,691,731$ \\ \hline $H$ total width, $\Gamma_H$ [GeV] & 43 & 4 & 17 \\ \hline $\sigma(pp \to H/A \to \gamma \gamma)$ [fb] & $4/0.004$ & $12/0.006$ & $8/0.005$ \\ \hline $BR(H \to \bar t^\prime t^\prime, \bar b^\prime b^\prime, \bar t t, hh,gg)$ & $0.94,0,{\cal O}(10^{-6}),{\cal O}(10^{-4}),0.06$ & $0,0.36,0.04,0.007,0.64$& $0.47,0.38,{\cal O}(10^{-6}),{\cal O}(10^{-4}),0.14$ \\ \hline $BR(A \to \bar t^\prime t^\prime, \bar b^\prime b^\prime, \bar t t, hZ,gg)$ & $0.61,0.33,0.015,0.04,{\cal O}(10^{-4})$ & $0.33,0.61,0.016,0.05,0.004$ & $0,0,0.29,0.71,{\cal O}(10^{-4})$ \\ \hline $\sigma(pp \to H \to \bar t^\prime t^\prime, \bar b^\prime b^\prime, \bar t t, hh)$ [fb] & $15000,0,0.015,5$ & $0,4000,0.65,109$ & $7500,6000,0.02,9$ \\ \hline $\sigma(pp \to A \to \bar t^\prime t^\prime, \bar b^\prime b^\prime, \bar t t, hZ)$ [fb] & $160,87,19,52$ & $90,164,21,57$ & $0,0,38,91$ \\ \hline $BR(H^+ \to t^\prime \bar b^\prime, t \bar b ,W^+h)$ & $0,0.31,0.69$ & $0.9,0.03,0.07$ & $0,0.32,0.68$ \\ \hline \end{tabular} \caption{Benchmark points with some distinct characteristics, which are consistent with PEWD, with the 125 GeV Higgs signals, with the 750 GeV $\gamma \gamma$ signal and with the LHC bounds on all relevant 750 GeV Higgs resonant channels $pp \to H/A \to f$ given in Table \ref{tab2}.} \label{tab4} \end{center} \end{table} In particular, if $m_H,m_A > 2 m_{q^\prime}$, then $H/A \to \bar q^\prime q^\prime$ is open and typically dominates, having a branching ratio of ${\cal O}(1)$ (see Table \ref{tab4}). In that case, we find that within the 4G2HDM parameter space discussed here, the corresponding resonance cross-sections for $\bar q^\prime q^\prime$ production at the 13 TeV LHC are typically $\sigma(pp \to H \to q^\prime q^\prime) \sim {\cal O}(10)$ [pb] and $\sigma(pp \to A \to q^\prime q^\prime) \sim {\cal O}(0.1)$ [pb] (both $H$ and $A$ produced through gluon-fusion $gg \to H/A$), so that in the case of $H \to q^\prime q^\prime$ (see Table \ref{tab4}), this is about an order of magnitude larger than the QCD (continuum) $\bar q^\prime q^\prime$ production rate. Therefore, if the 750 GeV $\gamma \gamma$ resonance persists, one should also expect an observable resonance signal at least in the $H \to \bar q^\prime q^\prime$ channel. Let us, therefore, briefly investigate the signal $H \to \bar q^\prime q^\prime$ on more general grounds, i.e., when $m_H > 2 m_{q^\prime}$ but not necessarily $m_H \sim 750$ GeV. For example, in the case of $H \to \bar t^\prime t^\prime$, the $t^\prime$ will further decay either via the FC channels $t^\prime \to uh$ ($u=u$ or $c$) or via the 3-body decay $t^\prime \to b^\prime W \to dhW$ ($d=d,s,b$), where the intermediate $b^\prime W$ pair is either off-shell or on-shell (i.e., when $m_{t^\prime} > m_{b^\prime}+m_W$, see Fig.~\ref{fig1}). If the former case (i.e., $t^\prime \to uh$) dominates, then the resulting resonance signal should be searched for in $pp \to \bar t^\prime t^\prime \to (jh)_{t^\prime}(jh)_{t^\prime}$ ($j$ is a light jet), while if the 3-body $t^\prime$ decay dominates then $pp \to \bar t^\prime t^\prime \to (jhW^+)_{t^\prime}(jhW^-)_{t^\prime}$.
In either case, the SM-like light Higgs ($h$) further decays into $b \bar b$ or $WW$ with SM rates, giving rise to resonance signatures of the form $pp \to (nj+mb+\ell W)_H$, with $(n,m,\ell)=(2,4,0),(2,0,4),(2,2,2),(2,4,2),(2,2,4),(2,0,6),(0,2,6),(0,4,4),(0,6,2)$ and with unique kinematic features that distinguish them from more conventional signatures. Similar signals are also expected for $H \to \bar b^\prime b^\prime$. We recognize that these types of signals are very challenging and may require new strategies, in particular, for reconstructing the parent $q^\prime$'s in such a high jet-multiplicity environment. The decay pattern of the charged Higgs may also change in the 4G2HDM, in particular for the case when $m_{H^+} > m_{t^\prime} + m_{b^\prime}$, for which the decay of $H^+$ into a pair of heavy 4th generation fermions can dominate (see BMP2 in Table \ref{tab4}). In particular, taking $m_{t^\prime} \sim m_{b^\prime} \equiv m_{q^\prime}$ and assuming that $H^+$ is sufficiently heavier than $2m_{q^\prime}$, so that we can ignore corrections of ${\cal O}(4m_{q^\prime}^2/m_{H^+}^2)$ in the phase-space factors, we have in the 4G2HDM: \begin{eqnarray} R_{t^\prime b^\prime/tb} &\equiv& \frac{\Gamma(H^+ \to t^\prime b^\prime)}{\Gamma(H^+ \to t b)} \sim 2 \frac{m_{q^\prime}^2}{m_t^2} \cot^4\beta ~, \\ R_{t^\prime b^\prime/Wh} &\equiv& \frac{\Gamma(H^+ \to t^\prime b^\prime)}{\Gamma(H^+ \to Wh)} \sim 12 \frac{m_{q^\prime}^2}{m_{H^+}^2} \left(\frac{\cot\beta}{\cos(\beta -\alpha)}\right)^2 ~. \end{eqnarray} Thus, for $\alpha \sim - \pi/2$, $\tan\beta \sim 0.5$ (i.e., $\cos(\beta - \alpha) \sim -0.45$), $m_{q^\prime} \sim 350$ GeV (i.e., values of the 4G2HDM parameter space that can accommodate the 750 GeV $\gamma \gamma$ signal) and taking $m_{H^+} \sim {\cal O}(1)$ TeV, we obtain: $R_{t^\prime b^\prime/tb} \sim {\cal O}(100)$ and $R_{t^\prime b^\prime/Wh} \sim {\cal O}(10)$, in which case $BR(H^+ \to t^\prime b^\prime) \sim 1$ (e.g., as in the case of BMP2), leading to some interesting signatures of the heavy charged Higgs at the LHC. In particular, the dominant production channels of $H^+$ at the LHC are $gg \to H^+ b \bar t$, $gg \to H^+ W^-$ and $gb \to H^+ \bar t$, with a typical cross-section of $\sim 100$ fb when $\tan\beta \sim 1$ \cite{Hplusprod}. The subsequent $H^+$ decay to a pair of 4th generation heavy fermions with $BR(H^+ \to t^\prime \bar b^\prime) \sim 1$ will, thus, lead to new $H^+$ signals, e.g., $pp \to t (t^\prime b^\prime)_{H^+} \to (b W)_t (jh)_{t^\prime} (jh)_{b^\prime}$, again with the typical 4G2HDM heavy fermion high jet-multiplicity signatures of the form $pp \to nj+mb+\ell W$. This is in contrast to ``standard" 2HDM frameworks where the heavy charged Higgs will dominantly decay to $Wh$ and/or $tb$ (see BMP1 and BMP3), leading to a lower multiplicity of jets in the final state. As noted earlier, a wider range of solutions, not discussed here, exists which satisfies all the data and filters mentioned above (i.e., including the 750 GeV $\gamma \gamma$ resonance), in which a lighter pseudoscalar $A$ and charged Higgs $H^{+}$ are allowed, with masses as low as $300$ GeV. In such 4G2HDM scenarios, the heavy 4th generation quarks (and leptons) can have substantial decay rates in channels involving also the heavy Higgs species, i.e., $t^\prime \to H^+ d, Au$ ($d=d,s,b$ and $u=u,c)$ and $b^\prime \to H^+ u, Ad$ ($d=d,s,b$ and $u=u,c)$, followed by $H^+ \to W^+ h, t \bar b$ and $A \to hZ, t \bar t$.
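Returning to the charged-Higgs ratios $R_{t^\prime b^\prime/tb}$ and $R_{t^\prime b^\prime/Wh}$ above, the quoted orders of magnitude can be verified directly, as in the following minimal numerical sketch (our own illustration; the inputs are the benchmark-like values used in the text): \begin{verbatim}
# Sketch: check R_t'b'/tb ~ O(100) and R_t'b'/Wh ~ O(10) for
# alpha ~ -pi/2, tan(beta) ~ 0.5, m_q' ~ 350 GeV, m_H+ ~ 1 TeV.
import numpy as np

mt, mqp, mHp, tanb = 173.0, 350.0, 1000.0, 0.5
beta, alpha = np.arctan(tanb), -np.pi / 2.0
cotb = 1.0 / tanb

R_tb = 2.0 * (mqp / mt)**2 * cotb**4
R_Wh = 12.0 * (mqp / mHp)**2 * (cotb / np.cos(beta - alpha))**2

print(f"cos(beta - alpha) = {np.cos(beta - alpha):+.2f}")   # ~ -0.45
print(f"R_t'b'/tb ~ {R_tb:.0f},  R_t'b'/Wh ~ {R_Wh:.0f}")   # ~ 131, ~ 29
\end{verbatim}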
Indeed, such decay patterns can also lead to some unexplored collider signatures of the 4G2HDM. We leave the discussion of the phenomenology of such a wider range of 4G2HDM scenarios to a later work. Finally, we wish to comment on the flavor-violating structure of the 4G2HDM and its compatibility with the recently reported indications of the LFV decay of the 125 GeV light Higgs $h \to \tau \mu$ \cite{ATLAS2,CMS2}. Writing the LFV couplings of $h$ in a general form: \begin{eqnarray} {\cal L}(h f_i f_j) = \bar f_i \left( {\cal S}_{ij} + {\cal P}_{ij} \gamma_5 \right) f_j \, h + h.c. ~, \end{eqnarray} one obtains: \begin{eqnarray} \Gamma(h \to \bar f_i f_j + \bar f_j f_i) = \frac{m_h}{4 \pi} \left( |{\cal S}_{ij}|^2 + |{\cal P}_{ij}|^2 \right) ~. \end{eqnarray} In our 4G2HDM we have for the case of the LFV decay $h \to \tau \mu$ (neglecting terms of ${\cal O}(m_\mu/m_\tau)$, see Eq.~\ref{Sff1}): \begin{eqnarray} |{\cal S}_{\tau \mu}| = |{\cal P}_{\tau \mu}| \sim \frac{g}{4} \frac{m_\tau}{m_W} f(\beta,\alpha) \xi_{\tau \mu}~, \end{eqnarray} where we have defined $\Sigma^\ell_{32} = \Sigma^\ell_{23} \equiv \xi_{\tau \mu}$ (see Eq.~\ref{sigma}) and: \begin{eqnarray} f(\beta,\alpha) = \frac{\cos(\beta - \alpha)}{s_\beta c_\beta} ~. \end{eqnarray} Requiring now that $BR(h \to \tau \mu) \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 1\%$ we find: \begin{eqnarray} | f(\beta,\alpha) \xi_{\tau \mu}| \sim {\cal O}(0.1)~. \end{eqnarray} Thus, for the values of $\tan\beta$ and $\alpha$ that were found to be compatible with all the data considered in the previous sections, we find $|f(\beta,\alpha)| \sim 1-5$, and specifically $f(\beta,\alpha) \sim 1$ for $\alpha \to -\pi/2$ and $\tan\beta \sim 0.5$, as required in order to accommodate the 750 GeV $\gamma \gamma$ resonance (see previous section). The 4G2HDM with $|\xi_{\tau \mu}| \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 0.1$ can, therefore, address the measured $BR(h \to \tau \mu) \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 1\%$ if it persists. \section{Summary \label{sec6}} We have revisited a class of models beyond the SM, suggested by us a few years ago in \cite{ourpaper1}, which combine an additional Higgs doublet with a heavy chiral 4th generation quark and lepton doublet and which have several important and attractive theoretical features. In particular, we focused on the so-called 4G2HDM of type I (in \cite{ourpaper1}), in which a discrete $Z_2$ symmetry couples the ``heavy" scalar doublet only to the heavy 4th generation fermions and the ``light" one to the lighter SM fermions. We have confronted this model with PEWD, with the measured 125 GeV light Higgs signals and also studied its compatibility with the recent indication of a 750 GeV $\gamma \gamma$ resonance and with the current LHC bounds on heavy scalar resonances in other relevant channels. We found that the CP-even heavy Higgs state of the 4G2HDM with a mass $\sim 750$ GeV can accommodate the measured $750$ GeV excess for a rather unique choice of the parameter space: $\tan\beta \sim 0.5$, $\alpha \sim -\pi/2$ (the Higgs mixing angle) and with heavy chiral fermion masses $m_{t^\prime,b^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle<}{\sim}$}\: 400$ GeV and $m_{\nu^\prime,\tau^\prime} \:\raisebox{-0.5ex}{$\stackrel{\textstyle>}{\sim}$}\: 900$ GeV.
We have shown that the heavy chiral quarks (and leptons) of the 4G2HDM may have FCNC decays into the light 125 GeV Higgs plus a light-quark jet, $q^\prime \to j h$, with branching ratios of ${\cal O}(1)$, thus leading to some unexplored signatures of $q^\prime \bar q^\prime$ production at the LHC and, therefore, being consistent with the current direct bounds on the masses of new heavy fermions. Indeed, new and rich phenomenology in $q^\prime$ - heavy Higgs systems is expected, including possible resonance production of $q^\prime q^\prime$ pairs via either the heavy neutral or heavy charged Higgs particles of the 4G2HDM, which leads to high jet-multiplicity signatures, with or without charged leptons, of the form $\bar q^\prime q^\prime \to nj + mb + \ell W$, with $n+m+\ell=6-8$ and unique kinematic features which are related to the resonating heavy scalar and the decay pattern of the heavy quarks. The reconstruction of the $q^\prime q^\prime$ pairs in such high jet-multiplicity signals is very challenging and requires more thought and possibly new search strategies. We have also shown that the recent indication of a percent-level branching ratio in the LFV decay of the 125 GeV Higgs $h \to \tau \mu$, if it persists, can be readily addressed within the distinct flavor structure of the 4G2HDM. \bigskip \bigskip \bigskip \bigskip \bigskip \bigskip {\bf Acknowledgments:} We thank Pier Paolo Giardino for useful conversations. The work of AS was supported in part by the US DOE contract \#DE-SC0012704. \pagebreak
\section{1 - Introduction} This is the first paper of a two-part investigation dealing with the Hamiltonian theory of the gravitational field, and more precisely, the one which is associated with the so-called Standard Formulation of General Relativity (SF-GR) \cite{ein1,LL,gravi,wald}, i.e., the Einstein field equations. In the second paper the corresponding quantum formulation will be presented. For this purpose, in the two papers new manifestly-covariant variational approaches are developed. The two formulations will be referred to as theories of Covariant Classical and, respectively, Quantum Gravity (CCG/CQG), or briefly \emph{CCG-} and \emph{CQG-theory}. As shown below, \emph{CCG-theory} is built upon the results presented in Refs.~\cite{noi1,noi2} about the variational formulation of GR achieved in the context of a \emph{DeDonder-Weyl}-type approach and the corresponding possible realization of a super-dimensional and manifestly-covariant Hamiltonian theory. In particular, based on a suitable identification of the effective kinetic energy and the related Hamiltonian density $4$-scalars adopted in Ref.~\cite{noi2}, the aim here is to identify a reduced-dimensional continuum Hamiltonian structure of SF-GR, to be referred to here as the \emph{Classical Hamiltonian Structure} (CHS). The crucial goal of the paper is to show that the CHS can be associated with arbitrary possible solutions of the Einstein field equations corresponding either to vacuum or non-vacuum conditions. In other words, this means that the same Hamiltonian structure should occur for arbitrary external source terms which may appear in the variational potential density. Despite being intimately related to the one earlier considered in Ref.~\cite{noi2}, the new Hamiltonian structure is achieved in fact by means of the parametrization of the corresponding canonical state in terms of the proper-time determined along arbitrary geodesics of the background metric field tensor. This feature turns out to be of paramount importance for the establishment of the corresponding canonical transformation and covariant Hamilton-Jacobi theories. The CHS determined in this way is shown to be realized by the ensemble $\left\{ x_{R},H_{R}\right\}$, represented by an appropriate variational canonical state $x_{R}=\left\{ g,\pi \right\}$, with $g$ and $\pi$ being suitably-identified tensor fields representing appropriate continuum Lagrangian coordinates and conjugate momenta, and $H_{R}$ a corresponding variational Hamiltonian density. Its basic feature is that of being based, in analogy with Ref.~\cite{noi2}, on the adoption of the \emph{synchronous Hamiltonian variational principle} for the variational formulation of GR. However, in contrast to the same reference, new features are added which, as we intend to show, are mandatory for the construction of the corresponding canonical transformation and Hamilton-Jacobi theories. In particular for this purpose, first, a reduced-dimensional representation is introduced for the canonical momenta, which however leaves formally unchanged the corresponding variational Hamiltonian density. Second, an appropriate parametrization by means of a suitably-defined proper time $s$ is introduced, so that the resulting Euler-Lagrange equations are now realized by means of Hamilton equations in evolution form.
Accordingly, variational and prescribed tensor fields are introduced, with the prescribed ones being left invariant by the synchronous variations, unlike the variational fields. In particular, the same Hamilton equations may reduce identically to the Einstein field equations, which are fulfilled by the prescribed fields, if suitable initial conditions are set. This occurs provided the Poisson bracket of the Hamiltonian density is a local function, i.e., it does not depend explicitly on proper time. In the realm of the classical theory the physical behavior of the variational fields provides the mathematical background for the establishment of a manifestly-covariant Hamiltonian theory of GR, and in particular the CCG-theory realized here. When passing to the corresponding covariant quantum theory (i.e., in the present case the CQG-theory to be developed in the subsequent paper) the variational fields become quantum fields and inherit the corresponding tensor transformation laws of the classical fields. Thanks to its intrinsic consistency with the principles of covariance and manifest covariance, the synchronous variational setting developed in Refs.~\cite{noi1,noi2} provides at the same time:\newline - the natural framework for a Hamiltonian theory of classical gravity which is consistent with SF-GR;\newline - the prerequisite for the establishment of a covariant quantum theory of the gravitational field which is in turn consistent with classical theory and SF-GR (see Ref.~\cite{part2}, hereon Part 2). According to such an approach the $4$-scalar, \textit{i.e.,} invariant, $4$-volume element of the space-time ($d\Omega $) entering the action functional is considered independent of the functional class of variations, so that it must be defined in terms of a \emph{prescribed metric tensor field} $\widehat{g}\left( r\right)$, represented equivalently either in terms of its covariant or contravariant components, \textit{i.e.,} either $\widehat{g}\left( r\right) \equiv \left\{ \widehat{g}_{\mu \nu }\left( r\right) \right\}$ or $\widehat{g}\left( r\right) \equiv \left\{ \widehat{g}^{\mu \nu }\left( r\right) \right\}$. Here $r\equiv \left\{ r^{\mu }\right\}$ and $\widehat{g}\left( r\right)$ denote respectively an arbitrary GR-frame parametrization and an arbitrary particular solution of the Einstein field equations. This is obtained therefore upon identifying in the action functional $d\Omega \equiv d^{4}r\sqrt{-\left\vert \widehat{g}\right\vert }$, with $d^{4}r$ being the corresponding canonical measure expressed in terms of the said parametrization and $\left\vert \widehat{g}\right\vert$ denoting as usual the determinant of the metric tensor $\widehat{g}\left( r\right)$. In the context of the synchronous variational principle for GR a further requirement is actually included, which demands that the prescribed field $\widehat{g}_{\mu \nu }(r)$ must determine, besides $d\Omega$, also the \emph{geometric properties} of space-time. This means that $\widehat{g}_{\mu \nu }(r)$ should uniquely prescribe the tensor transformation laws of arbitrary tensor fields, which may depend in principle, besides $\widehat{g}_{\mu \nu }(r)$, both on the variational state $x_{R}$ and the $4$-position $r\equiv \left\{ r^{\mu }\right\}$. This requires in particular that $\widehat{g}_{\mu \nu }(r)$ and $\widehat{g}^{\mu \nu }(r)$ respectively lower and raise tensor indexes of the same tensor fields.
In a similar way $\widehat{g}_{\mu \nu }(r)$ uniquely determines also the standard Christoffel connections which enter both the Ricci tensor $\widehat{R}_{\mu \nu }$ and the covariant derivatives of arbitrary variational tensor fields. Therefore, in the context of the synchronous variational principle for GR the approach known in the literature as the ``background space-time picture" \cite{Drummond2000,Moffayt2012,Hossenfelder2009} is adopted, whereby the background space-time $\left( \mathbf{Q}^{4},\widehat{g}\left( r\right) \right)$ is considered defined ``a priori" in terms of $\widehat{g}_{\mu \nu }(r)$, while leaving unconstrained all the variational fields $x_{R}=\left\{ g,\pi \right\}$ and in particular the Lagrangian coordinates $g\left( r\right) \equiv \left\{ g_{\mu \nu }\left( r\right) \right\}$. Indeed, consistent with Ref.~\cite{noi2}, the physical interpretation which arises from CCG-theory exhibits a connection also with the so-called \textit{induced gravity} (or \textit{emergent gravity}) \cite{emerg2,emerg1}, namely the conjecture that the geometrical properties of space-time should reveal themselves as a mean field description of microscopic stochastic or quantum degrees of freedom underlying the classical solution. In the present approach this is achieved by introducing the prescribed metric tensor $\widehat{g}_{\mu \nu }\left( r\right)$ in the Lagrangian and Hamiltonian action functionals, which is held constant in the variational principles when performing synchronous variations and has to be distinguished from the variational field $g_{\mu \nu }\left( r\right)$. In this picture, $\widehat{g}_{\mu \nu }\left( r\right)$ should arise as a macroscopic prescribed mean field emerging from a background of variational fields $g_{\mu \nu }\left( r\right)$, all belonging to a suitable functional class. This makes it possible to introduce a new representation for the action functional in superabundant variables, depending both on $g_{\mu \nu }\left( r\right)$ and $\widehat{g}_{\mu \nu }\left( r\right)$. Such a feature, as explained above, is found to be instrumental for the identification of the covariant Hamiltonian structure associated with the classical gravitational field and provides a promising physical scenario in which to develop a covariant quantum treatment of GR. In this regard, one has to acknowledge the fact that the Hamiltonian description of classical systems is a mandatory conceptual prerequisite for achieving a corresponding quantum description \cite{FoP1,FoP2}, \textit{i.e.,} in the case of continuum systems, the related relativistic quantum field theory. This task involves the identification of the appropriate Hamiltonian representation of the continuum field, to be realized by means of the following steps:\newline \emph{Step \#1:} Establishment of underlying Lagrangian and Hamiltonian variational action principles.\newline \emph{Step \#2:} Construction of the corresponding Euler-Lagrange equations, realized respectively in terms of appropriate continuum Lagrangian and Hamiltonian equations.\newline \emph{Step \#3:} Determination of the corresponding set of continuum canonical transformations and formulation of the related Hamilton-Jacobi theory. The proper realization of these steps remains crucial.
In actual fact, the last target appears as a prerequisite of foremost importance for reaching a consistent formulation of relativistic quantum field theory for General Relativity, \textit{i.e.,} the so-called Quantum Gravity. The conclusion follows by analogy with Electrodynamics. In fact, as it emerges from the recent investigation concerning the axiomatic foundations of relativistic quantum field theory for the radiation-reaction problem associated with classical relativistic extended charged particles (see Refs.~\cite{EPJ1,EPJ2,EPJ3,EPJ5,EPJ7,EPJ8}), it is the Hamilton-Jacobi theory which naturally provides the formal axiomatic connection between classical and quantum theory, to be established by means of a suitable realization of the quantum correspondence principle. A prerequisite for reaching such goals in the context of relativistic quantum field theory is the establishment of a theory fulfilling at all levels both the \emph{Einstein general covariance principle} and the \emph{principle of manifest covariance}. Such a viewpoint is mandatory in order that the axiomatic construction method of SF-GR makes any sense at all \cite{haw}. Indeed, in order that physical laws have an objective physical character they cannot depend on the choice of the GR reference frame. This requisite can only be met provided all classical physical observables and the corresponding mathematical relationships holding among them, \textit{i.e.,} the physical laws, can actually be expressed in tensorial form with respect to the group of transformations indicated above. In the context of SF-GR the adoption of the same strategy requires therefore the realization of \emph{Steps \#1-\#3} in manifestly covariant form. As far as the actual identification of \emph{Steps \#1} and \emph{\#2} for SF-GR is concerned, the candidate is represented by the variational theory reported in Refs.~\cite{noi1,noi2}. The distinctive features of such a variational theory, which set it apart from previous Hamiltonian formulations in the literature \cite{h1,h3,h4,h5,h6,h7}, lie in its consistency with the criteria indicated above and with the DeDonder-Weyl classical field theory approach \cite{donder,weyl,sym3,sym4,sym5,sym6,sym7,sym8,sym9}. Nevertheless, well-known alternative formulations exist in the literature which are based on non-manifestly-covariant approaches. For the purpose of formal comparison let us briefly mention some of them, a detailed analysis being left to future developments. For definiteness, approaches can be considered which are built upon space-time foliations, namely which are based on so-called 3+1 and/or 2+2 splitting schemes. In fact, GR can be formulated in any GR-frame (i.e., coordinate system) by introducing a suitable local point transformation $r^{\mu }\rightarrow r^{\prime \mu }=f^{\mu }(r)$ leading to a decomposition of this type. In particular, the 3+1 approach is convenient for purposes related, for example, to the definition of conventional energy-momentum tensors, thermodynamic and kinetic quantities, and to provide corresponding methods of quantization \cite{zzz1,zzz2,zzz3}. The latter are exemplified by the well-known approach developed by Arnowitt, Deser and Misner (1959-1962 \cite{ADM}), usually referred to as ADM theory in the literature.
The same theory is based on the introduction of the so-called 3+1 decomposition of space-time, which by construction is foliation dependent, in the sense that it relies on a peculiar choice of a family of GR frames for which time and space transform separately, so that space-time is effectively split into the direct product of a 1-dimensional time subset and a 3-dimensional space subset (ADM foliation) \cite{alcu}. Instead, different types of 2+2 splitting (or with double 3+1 and 2+2 splitting) are considered, for instance, to find new classes of GR exact solutions \cite{Vaca2,Vaca1,Vaca3}, to develop the theory of geometric flows related to classical and quantum gravity and to geometric thermodynamics \cite{Cli,Vaca6}, or to elaborate approaches based on deformation quantization of GR and modified gravity theories \cite{Vaca4,Vaca5}. In comparison with these approaches, the manifestly-covariant Lagrangian and Hamiltonian formulations of GR reported in Refs.\cite{noi1,noi2} and developed below mainly differ because, first, no foliation of space-time is introduced, so that the $4-$tensor formalism is preserved at all stages of the investigation. Second, in contrast to the Hamiltonian theory of GR obtained from the ADM decomposition \cite{wald}, both the Lagrangian and Hamiltonian dynamical variables and the canonical state are expressed in $4-$tensor notation and satisfy as well the manifest covariance principle. Third, in the context of CCG-theory the Hamiltonian flow associated with the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ (see Eq.(\ref{flow}) below) is defined with respect to an invariant proper-time $s$, and not a coordinate time as in ADM theory. Finally, it must be stressed that, in such a context, for the proper implementation of the DeDonder-Weyl formalism, besides the customary $4-$scalar curvature term of the Einstein-Hilbert Lagrangian, $4-$tensor (i.e., manifestly-covariant) momenta must be adopted in the action functional. This property, which can be fulfilled only by adopting a synchronous variational principle, is missing in the ADM Hamiltonian theory, where field variables and conjugate momenta are identified only after performing the 3+1 foliation on the Einstein-Hilbert Lagrangian of the associated asynchronous variational principle. Despite this difference, the two approaches are complementary, in the sense that they exhibit distinctive physical properties associated with the two canonical Hamiltonian structures underlying SF-GR (see again Ref.\cite{noi2}).

\subsection{Goals of the paper}

Going beyond the considerations discussed above, the construction of CCG-theory involves a number of questions, closely related to the continuum Hamiltonian theory reported in Ref.\cite{noi2}, which remain to be addressed. This involves posing the following distinct goals:

\begin{itemize}
\item \emph{GOAL \#1: Reduced continuum Hamiltonian theory for SF-GR -} The search for a reduced-dimensional realization of the continuum Hamiltonian theory for the Einstein field equations, which still satisfies the principle of manifest covariance. In fact, as a characteristic feature of the DeDonder-Weyl approach, in the Hamiltonian theory given in Ref.\cite{noi2} the canonical variables defining the canonical state have different tensorial orders, with the momenta being realized by third-order $4-$tensors.
In contrast, the new approach to be envisaged here should provide a realization of the canonical state $x_{R}\equiv \left\{ g_{\mu \nu },\pi _{\mu \nu }\right\} $ in which both the generalized coordinates and the corresponding momenta have the same tensorial dimension and are represented by second-order $4-$tensor fields.

\item \emph{GOAL \#2: Evolution form of the reduced continuum Hamilton equations - }A further problem is whether the same reduced continuum Hamilton equations can be given a causal evolution form, namely whether they can be cast as \emph{canonical evolution equations}. Since originally the continuum Hamilton equations are realized by PDEs, this means that some sort of Lagrangian representation should be determined. Hence, by introducing a suitable Lagrangian Path (LP) parametrization of the canonical state $x_{R}\equiv \left\{ g_{\mu \nu },\pi _{\mu \nu }\right\} $ in terms of the proper-time associated with the prescribed tensor field $\widehat{g}_{\mu \nu }(r)$ indicated above, the corresponding continuum canonical equations are found to be realized by means of evolution equations advancing the canonical state in proper time. These will be referred to as the \emph{GR-Hamilton equations} of CCG-theory: they generate the evolution of the corresponding canonical fields by means of a suitable canonical flow.

\item \emph{GOAL \#3: Realization of a manifestly-covariant continuum Hamilton-Jacobi theory -} A related question which arises involves, in particular, the determination of the canonical transformation which generates the flow corresponding to the continuum canonical evolution equations. This concerns, more precisely, the development of a corresponding Hamilton-Jacobi theory applicable in the context of CCG-theory and the investigation of the canonical transformation generated by the corresponding Hamilton principal function.

\item \emph{GOAL \#4: Global prescription and regularity properties of the corresponding GR-Lagrangian and Hamiltonian densities}. The Lagrangian and Hamiltonian formulations should be globally prescribed in the appropriate phase-spaces. The global prescription should include also the validity of suitable \emph{regularity properties} of the corresponding Hamiltonian density $H_{R}$.

\item \emph{GOAL \#5: Identification of the gauge properties of the classical GR-Lagrangian and Hamiltonian densities. }The related issue concerns the identification of the possible gauge indeterminacies, in terms of suitable \emph{gauge functions}, characterizing the Lagrangian and Hamiltonian densities.

\item \emph{GOAL \#6: Dimensionally-normalized form of CHS.} In particular, the goal here is to show that a suitable dimensional normalization of the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ can be reached so that the canonical momenta acquire the physical dimensions of an action, a feature required for the establishment of a quantum theory of GR in terms of Hamilton-Jacobi theory. More precisely, this involves the construction of a non-symplectic canonical transformation for the GR-Hamiltonian density $H_{R}$. The issue is to show that this can be taken of the form
\begin{equation}
\left\{
\begin{array}{c}
g^{\mu \nu }\rightarrow \overline{g}^{\mu \nu }=g^{\mu \nu }, \\
\pi ^{\mu \nu }\rightarrow \overline{\pi }^{\mu \nu }=\frac{\alpha L}{\kappa }\pi ^{\mu \nu }, \\
H_{R}\rightarrow \overline{H}_{R}\equiv \overline{T}_{R}+\overline{V}=\frac{\alpha L}{\kappa }H_{R}
\end{array}
\right.
,  \label{CANONICAL-0}
\end{equation}
where $\kappa $ is the dimensional constant $\kappa =\frac{c^{3}}{16\pi G}$, $L$ is a $4-$scalar scale length to be defined, $\alpha $ is a suitable dimensional $4-$scalar, while $\overline{T}_{R}$ and $\overline{V}$ denote the corresponding transformed effective kinetic and potential densities defining the transformed Hamiltonian density $\overline{H}_{R}$. Then the question arises whether $\alpha $ can be prescribed in such a way that the transformed canonical momentum $\overline{\pi }^{\mu \nu }$ has the dimensions of an action.

\item \emph{GOAL \#7: Structural stability of the GR-Hamilton equations of CCG-theory -} The final issue concerns the study of the structural stability which, in the framework of CCG-theory, the canonical equations exhibit with respect to their stationary solutions, \textit{i.e., }the solutions of the Einstein equations. In fact, depending on the specific realization of CHS considered here, infinitesimal perturbations whose dynamics is governed by the said canonical evolution equations may exhibit different stability behaviors, \textit{i.e., }be stable/unstable or marginally stable, with respect to arbitrary solutions of the Einstein field equations. For definiteness, the case of vacuum solutions with a non-vanishing cosmological constant $\Lambda $ is treated. It is shown that the stability analysis provides a prescription for the gauge functions indicated above which characterize the GR-Hamiltonian density.
\end{itemize}

In view of these considerations and of the results already achieved in Refs.\cite{noi1,noi2}, in this paper the attention will be focused on the investigation of \emph{GOALS \#1-\#7}. These topics, together with the continuum Lagrangian and Hamiltonian theories proposed in Refs.\cite{noi1,noi2}, have potential impact in the context of both classical and quantum theories of General Relativity.

\section{2 - Evolution form of Hamilton equations for SF-GR}

In this section the problem of the determination of a \emph{reduced continuum Hamiltonian theory} for GR is addressed for a prescribed Hamiltonian system. This is represented by the CHS $\left\{ x_{R},H_{R}\right\} $, which is formed by an appropriate $4-$tensor canonical state $x_{R}$ and a suitable $4-$scalar Hamiltonian density $H_{R}\left( x_{R},\widehat{x}_{R}(r),r,s\right) $. In particular, the target is to find a realization of the variational canonical momentum such that, in the corresponding reduced canonical state, fields and reduced momenta form a couple of second-rank conjugate $4-$tensors. The requisite is that such a Hamiltonian theory should warrant the validity of the non-vacuum Einstein field equations, to be achieved as realizations of suitable reduced continuum Hamilton equations set in evolution form and referred to as the \emph{GR-Hamilton equations} of CCG-theory. More precisely, these are realized by the initial-value problem represented by the canonical equations
\begin{equation}
\left\{
\begin{array}{c}
\frac{Dg_{\mu \nu }(s)}{Ds}=\frac{\partial H_{R}\left( x_{R},\widehat{x}_{R}(r),r,s\right) }{\partial \pi ^{\mu \nu }(s)}, \\
\frac{D\pi _{\mu \nu }(s)}{Ds}=-\frac{\partial H_{R}\left( x_{R},\widehat{x}_{R}(r),r,s\right) }{\partial g^{\mu \nu }(s)}
\end{array}
\right.
\label{canonical evolution equations -2}
\end{equation}
and the initial conditions of the type
\begin{equation}
\left\{
\begin{array}{c}
g_{\mu \nu }(s_{o})\equiv g_{\mu \nu }^{(o)}(s_{o}), \\
\pi _{\mu \nu }(s_{o})\equiv \pi _{\mu \nu }^{(o)}(s_{o})
\end{array}
\right.  \label{initial conditions}
\end{equation}
Then the solution of the initial-value problem (\ref{canonical evolution equations -2})-(\ref{initial conditions}) generates the Hamiltonian flow
\begin{equation}
x_{R}(s_{o})\rightarrow x_{R}(s),  \label{flow}
\end{equation}
which is associated with the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $. Here the notation is as follows. First, $s$ denotes the proper time prescribed along an arbitrary geodesic curve $r(s)\equiv \left\{ r^{\mu }(s)\right\} $. This is associated with the prescribed metric tensor $\widehat{g}_{\mu \nu }(r)$ of the background space-time. Second,
\begin{equation}
x_{R}(s)\equiv \left\{ g_{\mu \nu }(r(s)),\pi _{\mu \nu }(r(s))\right\}  \label{REDUCED STATE}
\end{equation}
identifies the $s-$parametrized \emph{reduced-dimensional variational canonical state}, with $g_{\mu \nu }(r)$ and $\pi _{\mu \nu }(r)$ being the corresponding continuum Lagrangian coordinates and conjugate momenta, $\widehat{x}_{R}(s)\equiv \left\{ \widehat{g}_{\mu \nu }(r(s)),\widehat{\pi }_{\mu \nu }(r(s))\equiv 0\right\} $ being the corresponding prescribed state and $H_{R}(x_{R},\widehat{x}_{R}(r),r,s)$ the variational Hamiltonian $4-$scalar density to be suitably determined. Finally, $\frac{D}{Ds}$ is the covariant $s-$derivative
\begin{equation}
\frac{D}{Ds}=\frac{\partial }{\partial s}+t^{\alpha }(s)\widehat{\nabla }_{\alpha },  \label{covariant s-derivative}
\end{equation}
while $t^{\alpha }(s)$ and $\widehat{\nabla }_{\alpha }$ are respectively the tangent $4-$vector to the geodesics $r(s)\equiv \left\{ r^{\mu }(s)\right\} $ and the covariant derivative evaluated at the same position in terms of the prescribed metric tensor $\widehat{g}_{\mu \nu }(r)$. The GR-Hamilton equations are covariant with respect to arbitrary canonical transformations. This property implies that the same equations are covariant with respect to an arbitrary local point transformation (LPT) which leaves invariant a given space-time represented by the differential manifold of the type $\left( \mathbf{Q}^{4},\widehat{g}(r)\right) $, so that the General Covariance Principle and the Principle of Manifest Covariance are necessarily both fulfilled by construction. The realization of the evolution form of the GR-Hamilton equations represents a requirement for the construction of a corresponding manifestly-covariant Hamilton-Jacobi theory of GR. The construction of these equations is based on the following steps.

\subsection{2A - Step \#1: Prescription of the reduced-dimensional Hamiltonian density}

In the first step the Hamiltonian density $H_{R}(x_{R},\widehat{x}_{R}(r),r,s)$ is identified, extending the treatment given in Ref.\cite{noi2}.
In terms of the reduced canonical variables this yields
\begin{equation}
H_{R}\left( x,\widehat{x},r\right) \equiv T_{R}\left( x_{R},\widehat{x}_{R}\right) +V(g,\widehat{x},r),  \label{reduced HAMILTONIAN}
\end{equation}
where the effective kinetic and potential densities $T_{R}\left( x_{R},\widehat{x}_{R}\right) $ and $V(g,\widehat{x},r,s)$ can be taken respectively of the general form
\begin{equation}
\left\{
\begin{array}{c}
T_{R}\left( x_{R},\widehat{x}_{R}\right) \equiv \frac{1}{2\kappa f(h)}\pi _{\mu \nu }\pi ^{\mu \nu }, \\
V\left( g,\widehat{x},r,s\right) \equiv \sigma V_{o}\left( g,\widehat{x}\right) +\sigma V_{F}\left( g,\widehat{x},r,s\right)
\end{array}
\right.  \label{KINETIC ENERGY DENSITY}
\end{equation}
Here $f(h)$ and $\sigma $ denote suitable \emph{multiplicative gauge functions} which remain in principle still arbitrary at this point. More precisely, $f(h)$ identifies an \textquotedblleft a priori\textquotedblright\ arbitrary non-vanishing and smoothly-differentiable real gauge function depending on the variational weight-factor $h=\left( 2-\frac{1}{4}g^{\alpha \beta }g_{\alpha \beta }\right) $ introduced in Ref.\cite{noi1} and prescribed in such a way that
\begin{equation}
f(\widehat{g}^{\mu \nu }(r))=1.  \label{COINSTRAIUNT ON f(h)}
\end{equation}
We anticipate here that the function $f(h)$ will be shown in Part 2 to be identically $f(h)=1$, as required by the quantum theory of GR. Furthermore, $\sigma $ denotes the additional constant gauge function $\sigma =\pm 1$. Finally, in Eq.(\ref{KINETIC ENERGY DENSITY}) the two $4-$scalars
\begin{equation}
\begin{array}{c}
V_{o}\left( g,\widehat{x}\right) \equiv \kappa h\left[ g^{\mu \nu }\widehat{R}_{\mu \nu }-2\Lambda \right] , \\
V_{F}\left( g,\widehat{x},r\right) \equiv hL_{F}\left( g,\widehat{x},r\right)
\end{array}
\label{POT-ENERGY-SOURCES-2}
\end{equation}
identify respectively the gravitational and external-field source contributions defined in Ref.\cite{noi1}, with $L_{F}$ being associated with a non-vanishing stress-energy tensor.

\subsection{2B - Step \#2: Lagrangian path parametrization}

In the second step we introduce the notion of Lagrangian path (LP) \cite{FoP1,FoP2}. For this purpose, preliminarily the real $4-$tensor $t^{\gamma }(\widehat{g}(r),r)$ is introduced such that identically
\begin{equation}
\left\{
\begin{array}{c}
t^{\alpha }(\widehat{g}(r),r)\widehat{\nabla }_{\alpha }t^{\gamma }(\widehat{g}(r),r)=0, \\
\widehat{g}_{\gamma \delta }(r)t^{\gamma }(\widehat{g}(r),r)t^{\delta }(\widehat{g}(r),r)=1,
\end{array}
\right.  \label{REQUIREMENT-A}
\end{equation}
so that by construction $t^{\gamma }(\widehat{g}(r),r)$ is tangent to an arbitrary geodetic curve crossing the arbitrary $4-$position $r\equiv \left\{ r^{\mu }\right\} $ of the space-time $\left( \mathbf{Q}^{4},\widehat{g}(r)\right) $ \cite{LL}. Then, the LP is identified with the geodetic curve
\begin{equation}
\left\{ r^{\mu }(s)\right\} \equiv \left\{ \left. r^{\mu }(s)\right\vert \text{ }\forall s\in \mathbb{R},\text{ }r^{\mu }(s_{o})=r_{o}^{\mu }\right\} ,
\end{equation}
which is the solution of the initial-value problem
\begin{equation}
\left\{
\begin{array}{c}
\frac{dr^{\mu }(s)}{ds}=t^{\mu }(s), \\
r^{\mu }(s_{o})=r_{o}^{\mu }.
\end{array}
\right.  \label{LP equation}
\end{equation}
Here the $4-$scalar proper-time $s$ is defined along the same curve $\left\{ r^{\mu }(s)\right\} $, so that $ds^{2}=\widehat{g}_{\mu \nu }(r)dr^{\mu }(s)dr^{\nu }(s)$.
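As an elementary illustration, consider the flat space-time limit $\widehat{g}_{\mu \nu }=\eta _{\mu \nu }$: in this case the conditions (\ref{REQUIREMENT-A}) are satisfied by any constant unit $4-$vector $t^{\mu }$, and the initial-value problem (\ref{LP equation}) is solved by the straight line
\begin{equation}
r^{\mu }(s)=r_{o}^{\mu }+t^{\mu }\left( s-s_{o}\right) ,
\end{equation}
in which case the covariant $s-$derivative (\ref{covariant s-derivative}) reduces to $\frac{D}{Ds}=\frac{\partial }{\partial s}+t^{\alpha }\partial _{\alpha }$.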
Furthermore, $t^{\mu }(s)$ identifies the LP-parametrized $4-$vector $t^{\mu }(s)\equiv t^{\mu }(\widehat{g}(r(s)),r(s))$. In Eq.(\ref{LP equation}) $\frac{d}{ds}\equiv \frac{\partial }{\partial s}$ identifies the ordinary derivative with respect to $s$. In the following we shall denote as \emph{implicit} $s-$\emph{dependences} the dependences on the proper time $s$ appearing in the variational fields through the LP parametrization of the fields. In contrast, we shall denote as \emph{explicit} $s-$\emph{dependences} the proper-time dependences which enter either explicitly through $s$ itself or through the dependence on $r(s)\equiv \left\{ r^{\mu }(s)\right\} $. Let us now introduce the parametrization obtained by replacing everywhere, in all the relevant tensor fields, $r\equiv \left\{ r^{\mu }\right\} $ with $r(s)\equiv \left\{ r^{\mu }(s)\right\} $, namely by identifying
\begin{equation}
\left\{
\begin{array}{c}
g^{\mu \nu }(s)\equiv g^{\mu \nu }(r(s)), \\
\pi ^{\mu \nu }(s)\equiv \pi ^{\mu \nu }(r(s)), \\
\widehat{x}(r)\equiv \widehat{x}(r(s)).
\end{array}
\right.
\end{equation}
This yields for the Hamiltonian density $H_{R}$ the so-called LP-parametrization, in terms of which the reduced Hamilton equations (\ref{canonical evolution equations -2}) can in turn be represented. In the remainder, for greater generality, such a representation shall be taken of the form
\begin{equation}
H_{R}(s)\equiv H_{R}\left( x_{R}(s),\widehat{x}_{R}(r),r(s),s\right) ,  \label{LP-PARAMETRIZATION OF H_R}
\end{equation}
\textit{i.e., }including also a possible explicit dependence on the proper time $s$. Specific examples in which explicit $s-$dependences may occur in the theory include:

1) Continuum canonical transformations, and in particular canonical transformations generating local or non-local point transformations (see Ref.\cite{noi4}). In this case explicit $s-$dependences may arise in the transformed Hamiltonian density due to explicitly $s-$dependent generating functions.

2) Hamilton-Jacobi theory (see Section 3), where in a similar way the explicit $s-$dependence in the Hamiltonian density may be generated by the canonical flow.

3) Stability theory for wave-like perturbations, where explicit $s-$dependences may appear in the variational fields $x_{R}=x_{R}(s)$ (see Section 5).

\subsection{2C - Step \#3: The reduced Hamiltonian variational principle}

Given these premises, in the context of CCG-theory the explicit construction of the GR-Hamilton equations (\ref{canonical evolution equations -2}) follows in analogy with the extended Hamiltonian theory achieved in Ref.\cite{noi2}. The goal also in the present case is in fact the development of a manifestly-covariant variational approach, \textit{i.e.}, one in which at all levels all variational fields, including the canonical variables, the Hamiltonian density, as well as their synchronous variations and the related Euler-Lagrange equations, are expressed in $4-$tensor form.
To this end, in the framework of the synchronous variational principle developed there, and in agreement with the DeDonder-Weyl approach, the variational functional is identified with a real $4-$scalar
\begin{equation}
S_{R}\left( x,\widehat{x}\right) \equiv \int d\Omega L_{R}\left( x,\widehat{x},r,s\right) ,  \label{FUNCTIONAL}
\end{equation}
with $L_{R}\left( x,\widehat{x},r,s\right) $ being the variational Lagrangian density
\begin{equation}
L_{R}\left( x,\widehat{x},r,s\right) \equiv \pi _{\mu \nu }\frac{D}{Ds}g^{\mu \nu }-H_{R}\left( x,\widehat{x},r,s\right) .  \label{legendre-hh}
\end{equation}
Thus, $L_{R}\left( x,\widehat{x},r,s\right) $ is identified with the Legendre transform of the corresponding variational Hamiltonian density $H_{R}\left( x,\widehat{x},r,s\right) $ defined above, with $\pi _{\mu \nu }\frac{D}{Ds}g^{\mu \nu }$ denoting the so-called exchange term. Then the variational principle associated with the functional $S_{R}\left( x,\widehat{x}\right) $ is prescribed in terms of the synchronous-variation operator $\delta $ (i.e., identified with the Fr\'{e}chet derivative according to Ref.\cite{noi1}), namely by means of the synchronous variational principle
\begin{equation}
\delta S_{R}\left( x,\widehat{x}\right) =0,  \label{SYNCR-VARIATIONAL-PRINCIPLE}
\end{equation}
obtained keeping constant both the prescribed state $\widehat{x}$ and the $4-$scalar volume element $d\Omega $. This delivers the $4-$tensor Euler-Lagrange equations cast in the symbolic form
\begin{equation}
\left\{
\begin{array}{c}
\frac{\delta S_{R}\left( x,\widehat{x}\right) }{\delta g^{\mu \nu }}=0, \\
\frac{\delta S_{R}\left( x,\widehat{x}\right) }{\delta \pi _{\mu \nu }}=0,
\end{array}
\right.
\end{equation}
which are manifestly equivalent to the Hamilton equations (\ref{canonical evolution equations -2}). These equations can be written in the equivalent Poisson-bracket representation
\begin{equation}
\frac{D}{Ds}x_{R}(s)=\left[ x_{R},H_{R}\left( x_{R},\widehat{x}_{R}(r),r,s\right) \right] _{(x_{R})},  \label{LP-PARAMETRIZED REDUCED HAM-EQ}
\end{equation}
with $\left[ ,\right] _{(x_{R})}$ denoting the Poisson bracket evaluated with respect to the canonical variables $x_{R}$, namely
\begin{equation}
\left[ x_{R},H_{R}\left( x_{R},\widehat{x}_{R}(r),r,s\right) \right] _{(x_{R})}=\frac{\partial x_{R}}{\partial g^{\mu \nu }}\frac{\partial H_{R}(s)}{\partial \pi _{\mu \nu }}-\frac{\partial x_{R}}{\partial \pi _{\mu \nu }}\frac{\partial H_{R}(s)}{\partial g^{\mu \nu }}.
\end{equation}
Then, after elementary algebra, the PDEs (\ref{LP-PARAMETRIZED REDUCED HAM-EQ}) yield the GR-Hamilton equations in the evolution form given above by Eqs.(\ref{canonical evolution equations -2}). In particular, invoking Eqs.(\ref{KINETIC ENERGY DENSITY})-(\ref{POT-ENERGY-SOURCES-2}), it follows that
\begin{equation}
\frac{\partial V\left( g,\widehat{x}_{R}(r),r,s\right) }{\partial g^{\mu \nu }(s)}=\sigma \kappa h(s)\widehat{R}_{\mu \nu }-\sigma \kappa g_{\mu \nu }(s)\frac{1}{2}\left( g^{\alpha \beta }(s)\widehat{R}_{\alpha \beta }-2\Lambda \right) -\sigma \kappa \frac{8\pi G}{c^{2}}T_{\mu \nu },  \label{GRADIENT-V}
\end{equation}
where $\widehat{R}_{\mu \nu }\equiv \widehat{R}_{\mu \nu }(s)$ and $T_{\mu \nu }\equiv T_{\mu \nu }(s)$ denote the LP-parametrizations of the Ricci and stress-energy tensors.
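For verification, one may notice that Eq.(\ref{GRADIENT-V}) follows directly upon differentiation of Eqs.(\ref{KINETIC ENERGY DENSITY})-(\ref{POT-ENERGY-SOURCES-2}), once account is taken of the gradient of the variational weight-factor, namely
\begin{equation}
\frac{\partial h}{\partial g^{\mu \nu }}=-\frac{1}{2}g_{\mu \nu },
\end{equation}
where it is assumed, consistent with Refs.\cite{noi1,noi2}, that the tensor indices of the variational fields are raised and lowered by means of the prescribed metric tensor $\widehat{g}_{\mu \nu }(r)$, so that $g^{\alpha \beta }g_{\alpha \beta }$ is quadratic in the variational field.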
Hence, in the case in which the gauge function $f(h)$ is prescribed as $f(h)=1$, the canonical equations (\ref{canonical evolution equations -2}) reduce to the single equivalent \emph{Lagrangian evolution equation} for the variational field $g_{\mu \nu }(s)$ in the LP-parametrization
\begin{equation}
\frac{D}{Ds}\left[ \frac{D}{Ds}g_{\mu \nu }(s)\right] +\sigma h(s)\widehat{R}_{\mu \nu }-\sigma g_{\mu \nu }(s)\frac{1}{2}\left[ g^{\alpha \beta }(s)\widehat{R}_{\alpha \beta }-2\Lambda \right] -\sigma \frac{8\pi G}{c^{2}}T_{\mu \nu }=0.  \label{Lagrangian evolution equation}
\end{equation}
This concludes the proof that the GR-Hamilton equations (\ref{canonical evolution equations -2}), as well as the equivalent Lagrangian equation (\ref{Lagrangian evolution equation}), are, as expected, both variational.

\subsection{2D - Step \#4: Connection with the Einstein field equations}

The connection of the canonical equations (\ref{canonical evolution equations -2}) with the Einstein theory of GR can be obtained under the assumption that the Hamiltonian density does not depend explicitly on the proper time $s$, \textit{i.e.}, that it is actually of the form
\begin{equation}
H_{R}=H_{R}\left( x_{R},\widehat{x}_{R}(r),r\right) .  \label{AUTONOMY}
\end{equation}
In this case, one furthermore notices that the identities $\widehat{g}_{\mu \nu }(s)\widehat{g}^{\mu \nu }(s)=\delta _{\mu }^{\mu }$ and $\frac{D}{Ds}\widehat{g}_{\mu \nu }(s)\equiv 0$ hold, so that by construction $\widehat{\pi }_{\mu \nu }(s)\equiv 0$ and hence the canonical equation for $\widehat{\pi }_{\mu \nu }(s)$ (or equivalently Eq.(\ref{Lagrangian evolution equation})) delivers for the prescribed field
\begin{equation}
\widehat{R}_{\mu \nu }-\widehat{g}_{\mu \nu }(s)\frac{1}{2}\left[ \widehat{g}^{\alpha \beta }(s)\widehat{R}_{\alpha \beta }-2\Lambda \right] =\frac{8\pi G}{c^{2}}\widehat{T}_{\mu \nu },  \label{Einstein field equations}
\end{equation}
which coincides with the Einstein field equations. Therefore, in this framework the latter are obtained by looking for a stationary solution of the GR-Hamilton equations (\ref{canonical evolution equations -2}), \textit{i.e.}, requiring the initial conditions
\begin{equation}
\left\{
\begin{array}{c}
g_{\mu \nu }(s_{o})\equiv \widehat{g}_{\mu \nu }(s_{o}), \\
\pi _{\mu \nu }(s_{o})\equiv \widehat{\pi }_{\mu \nu }(s_{o})=0,
\end{array}
\right.  \label{INITIAL CONDOITIONS}
\end{equation}
while requiring furthermore for all $s\in I$
\begin{equation}
\widehat{\pi }_{\mu \nu }(s)=0.  \label{CONSTRAINT CONDITION}
\end{equation}
Notice that, in principle, additional extrema may exist for the effective potential, i.e., such that $\frac{\partial V\left( g,\widehat{x}_{R}(r),r,s\right) }{\partial g^{\mu \nu }(s)}=0$. One can show that this indeed happens, for example, in the case of vacuum, namely letting $\widehat{T}_{\mu \nu }\equiv 0$. Thus, besides $g_{\mu \nu }(s)\equiv \widehat{g}_{\mu \nu }$, additional extrema include $g_{\mu \nu }(s)\equiv -\frac{2}{3}\widehat{g}_{\mu \nu }$ and the case in which $g_{\mu \nu }(s)$ satisfies identically the constraint equations $h(s)=0$ and $1-\frac{1}{2}g_{\mu \nu }(s)\widehat{g}^{\mu \nu }=0$. However, once the initial conditions (\ref{INITIAL CONDOITIONS}) are set, the stationary solution is unique. The prerequisite for the existence of such a particular solution is, however, the validity of the constraint condition (\ref{AUTONOMY}), i.e., the requirement that the GR-Hamilton equations (\ref{canonical evolution equations -2}) are autonomous. Such a property is non-trivial.
In fact, it might in principle be violated if non-local effects are taken into account (see for example Refs.\cite{EPJ2,EPJ8}). An analogous circumstance might arise due to possible quantum effects. The issue will be further discussed in Part 2.

Finally, for completeness, we mention also the connection between the reduced Hamiltonian system $\left\{ x_{R},H_{R}\right\} $ defined according to Eqs.(\ref{REDUCED STATE}) and (\ref{reduced HAMILTONIAN}) and the representation given in Ref.\cite{noi2} in terms of the \textquotedblleft extended\textquotedblright\ Hamiltonian system $\left\{ x,H\right\} $, based on the adoption of the \textquotedblleft extended\textquotedblright\ canonical state $x\equiv \left\{ g^{\mu \nu },\Pi _{\mu \nu }^{\alpha }\right\} $. More precisely, the connection is obtained, first, by the prescription $H=H_{R}$ and, second, upon identifying $\Pi _{\mu \nu }^{\alpha }=t^{\alpha }\pi _{\mu \nu }$. In fact it then follows that $\pi _{\mu \nu }=t_{\alpha }\Pi _{\mu \nu }^{\alpha }$, so that $\pi _{\mu \nu }\left( r\right) $ represents the projection of $\Pi _{\mu \nu }^{\alpha }\left( r\right) $ along the tangent vector $t_{\alpha }(s)$ to the background geodesic curve.

\subsection{2E - Step \#5: Alternative Hamiltonian structures}

As indicated above, Eq.(\ref{GRADIENT-V}) together with the GR-Hamilton equations (\ref{canonical evolution equations -2}) provides the required connection with the Einstein field equations. In practice this means that any suitably-smooth $4-$scalar function such that
\begin{equation}
\left. \frac{\partial V\left( g,\widehat{x}_{R}(r),r,s\right) }{\partial g^{\mu \nu }(s)}\right\vert _{g^{\mu \nu }(s)=\widehat{g}^{\mu \nu }(s)}=\sigma \kappa \widehat{R}_{\mu \nu }-\sigma \kappa \widehat{g}_{\mu \nu }(s)\frac{1}{2}\left( \widehat{g}^{\alpha \beta }(s)\widehat{R}_{\alpha \beta }-2\Lambda \right) -\sigma \kappa \frac{8\pi G}{c^{2}}T_{\mu \nu }=0
\end{equation}
realizes an admissible Hamiltonian structure of GR. The choice corresponding to Eqs.(\ref{KINETIC ENERGY DENSITY}), with the functions $V_{o}\left( g,\widehat{x}\right) $ and $V_{F}\left( g,\widehat{x},r\right) $ prescribed according to Eqs.(\ref{POT-ENERGY-SOURCES-2}), corresponds to the lowest-order polynomial representation (but still non-linear, and thus non-trivial) of the variational Hamiltonian in terms of the variational field $g_{\mu \nu }(s)$. However, alternative possible realizations of the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ can be readily identified. In fact, once the initial conditions (\ref{INITIAL CONDOITIONS}) are set, alternative possible realizations of the GR-Hamilton equations (\ref{canonical evolution equations -2}), leading to the correct realization of the Einstein field equations, can be achieved. These are obtained introducing a transformation of the type
\begin{equation}
\left\{
\begin{array}{c}
g_{\mu \nu }(s)\rightarrow g_{\mu \nu }(s), \\
\pi ^{\mu \nu }(s)\rightarrow \pi ^{\mu \nu }(s)-(s-s_{o})P^{\mu \nu }(\widehat{x}_{R}), \\
V\left( g,\widehat{x},r,s\right) \rightarrow V_{1}\left( g,\widehat{x},r,s\right) +U_{o}\left( \widehat{g},\widehat{x},s\right) .
\end{array}
\right.  \label{TRANSFORMATIONS}
\end{equation}
Notice that here the function $U_{o}\left( \widehat{g},\widehat{x},s\right) $ remains in principle arbitrary, so that it can always be determined in such a way that the extremal value of the potential density is preserved, namely $V\left( \widehat{g},\widehat{x},r,s\right) =V_{1}\left( \widehat{g},\widehat{x},r,s\right) +U_{o}\left( \widehat{g},\widehat{x},s\right) $.
However, $\widehat{\pi }^{\mu \nu }(s)$, $P_{\mu \nu }(\widehat{x}_{R})$ and $V_{1}\left( g,\widehat{x},r,s\right) $ can always be determined so that:

1) the extremal momentum $\widehat{\pi }^{\mu \nu }(s)$ is prescribed so that
\begin{equation}
\widehat{\pi }^{\mu \nu }(s)=(s-s_{o})P^{\mu \nu }(\widehat{x}_{R});  \label{NON-VANISHING MOMENTUM}
\end{equation}

2) $P^{\mu \nu }(\widehat{x}_{R})$ and $V_{1}\left( g,\widehat{x},r,s\right) $ are such that
\begin{eqnarray}
&&\left. \frac{\partial V_{1}\left( g,\widehat{x}_{R}(r),r,s\right) }{\partial g^{\mu \nu }(s)}\right\vert _{g^{\mu \nu }(s)=\widehat{g}^{\mu \nu }(s)}-P_{\mu \nu }(\widehat{x}_{R})=  \notag \\
&&\sigma \kappa \widehat{R}_{\mu \nu }-\sigma \kappa \widehat{g}_{\mu \nu }(s)\frac{1}{2}\left( \widehat{R}-2\Lambda \right) -\sigma \kappa \frac{8\pi G}{c^{2}}T_{\mu \nu }.
\end{eqnarray}
Hence, a particular possible realization, which leads to a functionally-different prescription of the potential density, and hence of the same Hamiltonian structure, is provided for example by the setting
\begin{equation}
\left\{
\begin{array}{c}
V_{1}\left( g,\widehat{x}_{R}(r),r,s\right) \equiv \kappa hg^{\mu \nu }\widehat{R}_{\mu \nu }+V_{F}\left( g,\widehat{x},r\right) , \\
U_{o}\left( \widehat{g},\widehat{x},s\right) =-2\kappa \Lambda , \\
P_{\mu \nu }(\widehat{x}_{R})=-\sigma \kappa \widehat{g}_{\mu \nu }(s)\Lambda ,
\end{array}
\right.
\end{equation}
with $V_{F}\left( g,\widehat{x},r\right) $ being given by Eq.(\ref{POT-ENERGY-SOURCES-2}). The present example shows that the contribution of the cosmological constant in the Einstein field equations can also be interpreted as arising due to a non-vanishing canonical momentum of the form given by Eq.(\ref{NON-VANISHING MOMENTUM}). Alternatively, a realization of the Einstein field equations with vanishing cosmological constant ($\Lambda \equiv 0$) can be achieved in terms of the potential density $V\left( g,\widehat{x}_{R}(r),r,s\right) $ of the form given above by Eq.(\ref{POT-ENERGY-SOURCES-2}), while setting at the same time
\begin{equation}
P_{\mu \nu }(\widehat{x}_{R})=\sigma \kappa \widehat{g}_{\mu \nu }(s)\Lambda ,
\end{equation}
where now $\Lambda $ can be interpreted as an arbitrary real $4-$scalar. From the previous considerations it follows, however, that if a solution of the type (\ref{NON-VANISHING MOMENTUM}) is permitted, the actual identification of the variational potential density remains essentially undetermined. It is however obvious that once the requirement $P_{\mu \nu }(\widehat{x}_{R})\equiv 0$, or equivalently the constraint condition (\ref{CONSTRAINT CONDITION}) introduced above, is set, the transformations (\ref{TRANSFORMATIONS}) reduce necessarily to the trivial one, namely
\begin{equation}
\left\{
\begin{array}{c}
g_{\mu \nu }(s)\rightarrow g_{\mu \nu }(s), \\
\pi ^{\mu \nu }(s)\rightarrow \pi ^{\mu \nu }(s), \\
V\left( g,\widehat{x},r,s\right) \rightarrow V_{1}\left( g,\widehat{x},r,s\right) +U_{o}\left( \widehat{g},\widehat{x},s\right) ,
\end{array}
\right.
\end{equation}
which leaves the CHS unaffected. As a consequence, when the constraint (\ref{CONSTRAINT CONDITION}) holds, the same Hamiltonian structure remains uniquely determined.

\section{3 - Manifestly-covariant Hamilton-Jacobi theory}

From the results established in the previous section it follows that, thanks to the realization introduced here for the GR-Hamilton equations of CCG-theory (i.e.,
Eqs.(\ref{canonical evolution equations -2})), the same take the form of dynamical evolution equations. This follows as a consequence of the parametrization in terms of the proper-time $s$ adopted for all geodetics belonging to the background space-time. This feature permits one to develop in a standard way, in close analogy with classical Hamiltonian mechanics, the theory of canonical transformations. Given these premises, in this section the problem of constructing a Hamilton-Jacobi theory of GR is addressed. Such a theory should describe a dynamical flow connecting a generic phase-space state with a suitable initial phase-space state characterized by coordinate fields and momenta which are stationary with respect to $s$ (\textit{i.e., }with identically-vanishing proper-time derivatives). In view of the similarity of the LP formalism for GR with classical mechanics, it is expected that also in the present context the Hamilton-Jacobi theory follows from constructing a symplectic canonical transformation associated with a mixed-variable generating function of the type $S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) $. Accordingly, the transformed canonical state $X_{R}\equiv \left\{ G_{\mu \nu },P_{\mu \nu }\right\} $ must satisfy the constraint equations
\begin{eqnarray}
\frac{D}{Ds}P_{\mu \nu }(s_{o}) &=&0, \\
\frac{D}{Ds}G^{\mu \nu }(s_{o}) &=&0,
\end{eqnarray}
which imply the Hamilton equations
\begin{eqnarray}
0 &=&\left[ P_{\mu \nu },K_{R}\left( X_{R},\widehat{X}_{R},r,s\right) \right] _{(X_{R})},  \label{PB-11} \\
0 &=&\left[ G^{\mu \nu },K_{R}\left( X_{R},\widehat{X}_{R},r,s\right) \right] _{(X_{R})},  \label{PB-127}
\end{eqnarray}
where $K_{R}\left( X_{R},\widehat{X}_{R},r,s\right) $ is the transformed Hamiltonian given by
\begin{equation}
K_{R}\left( X_{R},\widehat{X}_{R},r\right) =H_{R}\left( x_{R},\widehat{x}_{R},r,s\right) +\frac{\partial }{\partial s}S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) .  \label{qqq}
\end{equation}
Thanks to Eqs.(\ref{PB-11}) and (\ref{PB-127}), the transformed Hamiltonian is necessarily independent of $X_{R}$. As a consequence, $K_{R}$ identifies an arbitrary gauge function, \textit{i.e., }in actual fact $K_{R}=K_{R}\left( \widehat{x}_{R},r\right) $, which can always be set equal to zero ($K_{R}=0$). On the other hand, canonical transformation theory requires that
\begin{eqnarray}
\pi _{\iota \xi } &=&\frac{\partial S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) }{\partial g^{\iota \xi }},  \label{R1-1} \\
G^{\iota \xi } &=&\frac{\partial S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) }{\partial P_{\iota \xi }}.  \label{R2-2}
\end{eqnarray}
Then, introducing the $s-$parametrization, it follows that Eq.(\ref{qqq}) delivers
\begin{equation}
H_{R}\left( g^{\beta \gamma },\frac{\partial S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},s\right) }{\partial g^{\iota \xi }},\widehat{x}_{R},r,s\right) +\frac{\partial }{\partial s}S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) =0,  \label{AA-REDUCED-HJ}
\end{equation}
which realizes the GR-Hamilton-Jacobi equation for the mixed-variable generating function $S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) $. Due to its similarity with the customary Hamilton-Jacobi equation well-known in Hamiltonian classical dynamics, in the following $S$ will be referred to as the (\emph{classical}) \emph{GR-Hamilton principal function}.
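For illustration, upon invoking the realization (\ref{reduced HAMILTONIAN})-(\ref{POT-ENERGY-SOURCES-2}) with $f(h)=1$, and assuming, as above, that the indices of the canonical momenta are raised by means of the prescribed metric tensor, Eq.(\ref{AA-REDUCED-HJ}) takes the explicit form
\begin{equation}
\frac{\partial S}{\partial s}+\frac{1}{2\kappa }\widehat{g}^{\mu \alpha }\widehat{g}^{\nu \beta }\frac{\partial S}{\partial g^{\mu \nu }}\frac{\partial S}{\partial g^{\alpha \beta }}+\sigma \kappa h\left[ g^{\mu \nu }\widehat{R}_{\mu \nu }-2\Lambda \right] +\sigma hL_{F}=0,
\end{equation}
which displays the characteristic quadratic dependence on the gradients of the GR-Hamilton principal function to be invoked below.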
The canonical transformations generated by $S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},s\right) $ are then obtained from the set of equations (\ref{R1-1}), (\ref{R2-2}) and (\ref{AA-REDUCED-HJ}). Now we notice, in view of the discussion given above, that the inverse canonical transformation $X_{R}\rightarrow x_{R}$ locally exists provided the invertibility condition on the Hessian determinant $\det \left\vert \left[ \frac{\partial ^{2}S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) }{\partial g^{\rho \sigma }\partial P_{\iota \xi }}\right] _{X_{R}=\widehat{x}_{R}}\right\vert \neq 0$ is met. Under such a condition the direct canonical equation (\ref{R2-2}) determines $g^{\beta \gamma }$ as an implicit function of the form $g^{\beta \gamma }=g^{\beta \gamma }\left( G^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) $. The following statement holds on the relationship between the GR-Hamilton-Jacobi and the GR-Hamilton equations.

\bigskip

\textbf{THM.1 - Equivalence of GR-Hamilton and GR-Hamilton-Jacobi equations}

\emph{The GR-Hamilton-Jacobi equation (\ref{AA-REDUCED-HJ}), subject to the constraint (\ref{R1-1}), is equivalent to the set of GR-Hamilton equations expressed in terms of the initial canonical variables, as given by Eqs.(\ref{canonical evolution equations -2}).}

\emph{Proof - }Without loss of generality, and to avoid possible misunderstandings, the compact notation $S\left( g,P,\widehat{x}_{R},r,s\right) $ will be used in the following proof to denote the GR-Hamilton principal function. To start with, we evaluate first the partial derivative of Eq.(\ref{AA-REDUCED-HJ}) with respect to $g^{ik}$, keeping both $\frac{\partial S\left( g,P,\widehat{x}_{R},r,s\right) }{\partial g^{\iota \xi }}$ and $r^{\mu }$ constant. This gives
\begin{equation}
\frac{\partial }{\partial g^{ik}}H_{R}\left( g^{\beta \gamma },\frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{\iota \xi }},\widehat{x}_{R},r,s\right) +\frac{\partial }{\partial s}\left[ \frac{\partial }{\partial g^{ik}}S\left( g,P,\widehat{x}_{R},r,s\right) \right] _{\left( g,P\right) }=0.  \label{first-hjh}
\end{equation}
Then, let us evaluate in a similar manner the partial derivative with respect to $\frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{ik}}$, keeping $g^{\mu \nu }$ and $r^{\mu }$ constant. This gives
\begin{equation}
\frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},r,s\right) }{\partial g^{ik}}}H_{R}\left( g^{\beta \gamma },\frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{\iota \xi }},\widehat{x}_{R},r,s\right) +\left[ \frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},r,s\right) }{\partial g^{ik}}}\frac{\partial }{\partial s}S\left( g,P,\widehat{x}_{R},r,s\right) \right] _{\left( g,P\right) }=0.  \label{second-hjh}
\end{equation}
With the identification $\pi _{\iota \xi }=\frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{\iota \xi }}$ provided by Eq.(\ref{R1-1}), it follows that Eq.(\ref{first-hjh}) becomes
\begin{equation}
\frac{\partial }{\partial g^{ik}}H_{R}\left( g^{\beta \gamma },\pi _{\iota \xi },\widehat{x}_{R},r\right) +\frac{D}{Ds}\pi _{ik}=0,  \label{HAM-EQ-2a}
\end{equation}
which coincides with the second Hamilton equation in (\ref{canonical evolution equations -2}).
To prove also the validity of the Hamilton equation for $g^{\beta \gamma }$, we first invoke the following identity
\begin{gather}
\left[ \frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{ik}}}\frac{\partial }{\partial s}S\left( g,P,\widehat{x}_{R},s\right) \right] _{\left( g,P\right) }=\frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{ik}}}\frac{\partial }{\partial s}S\left( g,P,\widehat{x}_{R},s\right)  \notag \\
-\frac{D}{Ds}g^{\beta \gamma }\frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{ik}}}\frac{\partial S\left( g,P,\widehat{x}_{R},s\right) }{\partial g^{\beta \gamma }},  \label{ts}
\end{gather}
where
\begin{equation}
\frac{\partial }{\partial \frac{\partial S\left( g,P,\widehat{x}_{R},r,s\right) }{\partial g^{ik}}}\frac{\partial S\left( g,P,\widehat{x}_{R},r,s\right) }{\partial g^{\beta \gamma }}=\delta _{\beta }^{i}\delta _{\gamma }^{k}.
\end{equation}
The first term on the rhs of Eq.(\ref{ts}) vanishes identically because $\frac{\partial }{\partial s}S\left( g,P,\widehat{x}_{R},r,s\right) $ must be considered as independent of $\pi _{ik}$. Therefore, Eq.(\ref{second-hjh}) gives
\begin{equation}
\frac{\partial }{\partial \pi _{ik}}H_{R}\left( g^{\beta \gamma },\pi _{\iota \xi },\widehat{x}_{R},r,s\right) -\frac{D}{Ds}g^{ik}=0,  \label{HAM-EQ-1}
\end{equation}
which coincides with the Hamilton equation for $g^{ik}$ and also gives the relationship of the generalized velocity $\frac{D}{Ds}g^{ik}$ with the canonical momentum, since here no explicit $s-$dependence appears. This proves the equivalence between the GR-Hamilton-Jacobi and GR-Hamilton equations, both expressed in manifestly-covariant form. \textbf{Q.E.D.}

\bigskip

This conclusion recovers the relationship between the Hamilton and Hamilton-Jacobi equations holding in Hamiltonian Classical Mechanics for discrete dynamical systems. The connection is established also in the present case, for the continuum gravitational field, thanks to the manifestly-covariant LP parametrization of the theory and to the representation of the Hamiltonian and Hamilton-Jacobi equations as dynamical evolution equations with respect to the proper time $s$ characterizing the background geodetics. The physical interpretation which follows from the validity of THM.1 is remarkable. This concerns the meaning of the Hamilton-Jacobi theory in providing a wave-mechanics description of the continuum Hamiltonian dynamics. This follows also in the present context by comparing the mathematical structure of the Hamilton-Jacobi equation (\ref{AA-REDUCED-HJ}) with the well-known eikonal equation of geometrical optics. In fact, Eq.(\ref{AA-REDUCED-HJ}) contains the square of the derivative $\frac{\partial S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) }{\partial g^{\iota \xi }}$, so that the Hamilton principal function $S\left( g^{\beta \gamma },P_{\mu \nu },\widehat{x}_{R},r,s\right) $ is associated with the eikonal (\textit{i.e., }the phase of the wave), while the remaining contributions, due to the geometrical and physical properties of the curved space-time, formally play the role of a non-uniform index of refraction in geometrical optics \cite{Goldstein}.
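For comparison, we recall that in geometrical optics the eikonal $\psi (r)$ of a wave field obeys the equation
\begin{equation}
\left\vert \nabla \psi \right\vert ^{2}=n^{2}(r),
\end{equation}
with $n(r)$ denoting the refractive index \cite{Goldstein}. The formal analogy is then realized by letting the gradients $\frac{\partial S}{\partial g^{\iota \xi }}$ play the role of $\nabla \psi $, with the effective potential density contributing the analogue of $n^{2}(r)$.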
The outcome pointed out here proves that the dynamics of the field $g_{\mu \nu }(s)$, in the virtual domain of variational fields where the Hamiltonian structure is defined and the Hamilton-Jacobi theory (\ref{AA-REDUCED-HJ}) applies, must be characterized by a wave-like behavior and can therefore be given a geometrical-optics interpretation. This feature is expected to be crucial for the establishment of the corresponding manifestly-covariant quantum theory of the gravitational field.

An important qualitative feature must be pointed out regarding the Hamilton-Jacobi theory developed here. This refers to a formal difference arising between the Hamilton-Jacobi theory for continuum fields built on the DeDonder-Weyl covariant approach and the Hamilton-Jacobi theory holding in classical mechanics for particle dynamics. This concerns, more precisely, the dimensional units to be adopted for the Hamilton principal function $S$ and hence also for the canonical momentum $\pi ^{\mu \nu }$. Indeed, as is well-known, in particle dynamics $S$ retains the dimension of an action (and therefore of the action functional), so that $\left[ S\right] =\left[ \hbar \right] $. In the present case instead (see Eq.(\ref{reduced HAMILTONIAN})) one has that $\left[ S\right] =\left[ \hbar L^{-3}\right] $, namely the dimension of $S$ differs from that of an action by an inverse cubic length. This arises because for continuum fields the action functional is an integral of the corresponding density over the $4-$volume element, while in particle mechanics it is expressed as a line integral over the proper-time length. One has to notice, however, that, first, the dimensions of $S$ may be changed by the introduction of a non-symplectic canonical transformation. This means that, by a suitable choice of the same transformation, $S$ can actually recover the dimension of an action (an explicit dimensional check is given at the end of this section). Second, the relationship between the Hamilton principal function $S$ and the Hamiltonian function itself remains in all cases the same, with the two functions differing by the dimension of a length (see also Section 4 below).

Before concluding, the following additional remarks are in order:

1) The GR-Hamilton-Jacobi description permits the explicit construction of canonical transformations mapping the physical and virtual domains into each other. The generating function determined by the GR-Hamilton-Jacobi equation is a real $4-$scalar field.

2) The generating function obtained in this way realizes the particular subset of canonical transformations which map the physically-observable state $\widehat{x}_{R}$ into a neighboring admissible virtual canonical state $x_{R}$.

3) The virtue of the approach is that it preserves the validity of the Einstein equations in the physical domain. In other words, the canonical transformations do not affect the physical behavior.

4) A further issue concerns the connection between the same prescribed metric tensor $\widehat{g}(r)\equiv \left\{ \widehat{g}_{\mu \nu }(r)\right\} $ and the variational/extremal state $x_{R}(s)=\left\{ g(s),\pi (s)\right\} $. This can be obtained by establishing a proper statistical theory, achieved by considering the initial state
\begin{equation}
x_{R}(s_{o})=\widehat{x}(s_{o})+\delta x_{R}(s_{o})
\end{equation}
as a stochastic tensor and thus endowed with a suitable phase-space probability density. The topic can be developed in the framework of a statistical description of classical gravity to be discussed elsewhere in detail.
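As a check of the dimensional argument indicated above, one may anticipate the identification $\alpha =m_{o}cL$ to be introduced in Section 4 (see Eq.(\ref{ALFA-1})). Since $\kappa =\frac{c^{3}}{16\pi G}$ carries the dimensions of $\frac{c^{3}}{G}$, it follows that
\begin{equation}
\left[ \frac{\alpha L}{\kappa }\right] =\left[ \frac{m_{o}L^{2}G}{c^{2}}\right] =\left[ L^{3}\right] ,
\end{equation}
so that, under the non-symplectic canonical transformation (\ref{CANONICAL-0}), the Hamilton principal function, which by Eq.(\ref{R1-1}) transforms consistently as $S\rightarrow \overline{S}=\frac{\alpha L}{\kappa }S$, indeed acquires the dimensions $\left[ \overline{S}\right] =\left[ \hbar L^{-3}\right] \left[ L^{3}\right] =\left[ \hbar \right] $ of an action.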
\section{4 - Properties of CHS}

In this section some properties are discussed which characterize the manifestly-covariant Hamiltonian theory of GR.

\subsection{4A - Global prescription and regularity}

This refers, first of all, to the global prescription and regularity of the GR-Lagrangian and GR-Hamiltonian densities $L_{R}\equiv L_{R}(y,\widehat{g},r,s)$ and $H_{R}\equiv H_{R}(x,\widehat{g},r,s)$ defined according to Eqs.(\ref{legendre-hh}) and (\ref{reduced HAMILTONIAN}), which are associated with the corresponding Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ indicated above. For this purpose we notice that in Eq.(\ref{legendre-hh}) the effective kinetic and potential densities, expressed in terms of the Lagrangian state $y=\left\{ g_{\mu \nu },\frac{D}{Ds}g^{\mu \nu }\right\} $ and the LP-parametrization (\ref{LP-PARAMETRIZED REDUCED HAM-EQ}), can be taken of the general type
\begin{equation}
\left\{
\begin{array}{c}
T_{R}\left( y,\widehat{g}\right) =\frac{f(h)}{2\kappa }\frac{D}{Ds}g_{\mu \nu }\frac{D}{Ds}g^{\mu \nu }, \\
V\left( g,\widehat{g},r,s\right) \equiv \sigma V_{o}\left( g,\widehat{g}\right) +\sigma V_{F}\left( g,\widehat{g},r,s\right) .
\end{array}
\right.  \label{kin-energy-density}
\end{equation}
Here, $f(h)$ and $\sigma $ identify the two distinct multiplicative gauge functions introduced above (see Section 2), $\kappa $ is the dimensional constant $\kappa =\frac{c^{3}}{16\pi G}$, while the remaining notation is standard, following Refs.\cite{noi1,noi2}. More precisely, in the second equation $V_{o}\left( g,\widehat{g}\right) $ and $V_{F}\left( g,\widehat{g},r,s\right) \equiv hL_{F}\left( g,\widehat{g},r,s\right) $ are defined as in Eq.(\ref{POT-ENERGY-SOURCES-2}) and must be expressed here in Lagrangian variables, with the field Lagrangian $L_{F}\left( g,\widehat{g},r,s\right) $ being prescribed according to Ref.\cite{noi1}. Furthermore, $T_{R}\left( y,\widehat{g}\right) $ identifies the \emph{generic form} of the effective kinetic density. It follows that a sufficient condition for the global prescription of the canonical state, \textit{i.e., }the existence of a smooth bijection connecting the Lagrangian and Hamiltonian states, is the so-called \emph{regularity condition} of the GR-Hamiltonian (and corresponding GR-Lagrangian) density. This requires, more precisely, that in the whole Hamiltonian phase-space
\begin{equation}
\left\vert \frac{\partial ^{2}H_{R}}{\partial \pi _{\mu \nu }\partial \pi ^{\alpha \beta }}\right\vert \equiv \left\vert \frac{\partial ^{2}T_{R}}{\partial \pi _{\mu \nu }\partial \pi ^{\alpha \beta }}\right\vert =\frac{1}{\kappa f(h)}\neq 0.  \label{Hessian-1}
\end{equation}

\subsection{4B - Gauge indeterminacies of CHS}

As shown in Ref.\cite{noi2}, at the classical level the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ of SF-GR remains intrinsically non-unique, with the Hamiltonian density $H_{R}$ being characterized by suitable gauge indeterminacies. Leaving aside the treatment of the additive gauge functions earlier discussed in Refs.\cite{noi1,noi2}, these refer more precisely to the following properties:

\begin{itemize}
\item \emph{A) The first one is the so-called multiplicative gauge transformation of the effective kinetic density.} To identify it we notice that the scalar factor $f(h)$ appearing in the prescription of the effective kinetic density (see the first equation in (\ref{kin-energy-density})) remains in principle essentially indeterminate.
In fact, the regularity condition (\ref{Hessian-1}) requires only that
\begin{equation}
\left\vert \frac{\partial ^{2}T_{R}}{\partial \pi _{\mu \nu }\partial \pi ^{\alpha \beta }}\right\vert =\frac{1}{\kappa f(h)}\neq 0,
\end{equation}
implying that the function $f(h)$ can be realized by an arbitrary non-vanishing (for example, strictly positive) and suitably smooth dimensionless real function. In addition, in order that both $T_{R}\left( y,\widehat{g}\right) $ and $V\left( g,\widehat{g},r,s\right) $ (see again Eqs.(\ref{kin-energy-density})) are realized by means of integrable functions in the configuration space $U_{g}$, the functions $f(h)$ and $1/f(h)$ should be summable too. As a consequence, the prescription of $f(h)$ remains in principle still free within the classical theory of SF-GR developed in Section 2. This means that $f(h)$ should be intended in such a context as a gauge indeterminacy affecting the Hamiltonian density $H_{R}$, \textit{i.e., }with respect to the \emph{multiplicative gauge transformation}
\begin{equation}
\left\{
\begin{array}{c}
T_{R}\left( x_{R},\widehat{g}\right) \equiv \frac{1}{2\kappa }\pi _{\mu \nu }\pi ^{\mu \nu }\rightarrow T_{R}^{\prime }\left( x_{R},\widehat{g}\right) \equiv \frac{1}{2\kappa f(h)}\pi _{\mu \nu }^{\prime }\pi ^{\prime \mu \nu }=\frac{1}{f(h)}T_{R}\left( x_{R},\widehat{g}\right) , \\
\pi _{\mu \nu }\rightarrow \pi _{\mu \nu }^{\prime }=f(h)\pi _{\mu \nu }.
\end{array}
\right.  \label{multiplicative-gauge-1}
\end{equation}

\item \emph{B) The second indeterminacy is related to the so-called multiplicative gauge transformation of the effective potential density }$V\left( g,\widehat{g},r,s\right) $ (see again Section 2). More precisely, the indeterminacy is related to the constant gauge factor $\sigma =\pm 1$.

\item \emph{C) The third indeterminacy is related to the so-called additive gauge transformation.} Indeed, one can readily show (see Ref.\cite{noi2}) that $L_{R}(y,\widehat{g},s)$ is prescribed up to an arbitrary additive gauge transformation of the type
\begin{equation}
L_{R}(y,\widehat{g},r,s)\rightarrow L_{R}(y,\widehat{g},r,s)+\Delta V,  \label{classical-gauge-2}
\end{equation}
where $\Delta V$ is a gauge scalar field of the form $\Delta V=\frac{DF(g,\widehat{g},r,s)}{Ds}$, with $F(g,\widehat{g},s)$ being an arbitrary, suitably-smooth real gauge function of class $C^{(2)}$ with respect to the variables $(g,s)$ (see also the related discussion in Ref.\cite{noi1}).
\end{itemize}

\subsection{4C - Dimensional normalization of CHS}

In this section it is shown that the Hamiltonian structure $\left\{ x_{R},H_{R}\right\} $ can be equivalently realized in such a way that $x_{R}$, and consequently also $H_{R}$, can be suitably normalized, \textit{i.e., }so as to achieve prescribed physical dimensions. Granted the non-symplectic canonical nature of the transformation indicated above (\textit{i.e., }Eq.(\ref{CANONICAL-0})), one can always identify the $4-$scalar $\alpha $ with a classical invariant parameter, \textit{i.e., }one which is both frame-independent and space-time independent.
In particular, in the framework of a classical treatment it should be identified with the classical parameter $\alpha \equiv \alpha _{\text{Classical}}$, namely
\begin{equation}
\alpha _{\text{Classical}}=m_{o}cL>0,  \label{ALFA-1}
\end{equation}
with $c$ being the speed of light in vacuum, $m_{o}$ a suitable rest-mass (to be later identified with the non-vanishing graviton mass in the framework of the quantum theory of GR) and $L$ a characteristic scale length to be considered as an invariant non-null $4-$scalar. Without loss of generality this can always be assumed to be of the form
\begin{equation}
L=L(m_{o}),  \label{charact-lewngth}
\end{equation}
with $m_{o}$ itself being regarded as an invariant $4-$scalar. Here $L$ is regarded as a classical invariant parameter, so that it should remain independent of all quantum parameters, \textit{i.e., }in particular of $\hslash $. In addition, in view of the covariance property of the theory, whereby the choice of the background space-time $\left( \mathbf{Q}^{4},\widehat{g}(r)\right) $ is in principle arbitrary, the $4-$scalars $m_{o}$ and $L$ should be \emph{universal constants}, namely also invariant with respect to the action of local and non-local point transformations \cite{noi4}. As an example, a possible consistent choice for the invariant function $L(m_{o})$ is realized by means of the so-called Schwarzschild radius, \textit{i.e.,}
\begin{equation}
L(m_{o})=\frac{2m_{o}G}{c^{2}}.  \label{Schwartzchild radius}
\end{equation}
The invariant rest-mass $m_{o}$ remains however still arbitrary at this level, its prescription being left to the quantum theory. It follows that the transformed GR-Hamilton equations (\ref{canonical evolution equations -2}) can always be cast in the \emph{dimensionally-normalized form}
\begin{equation}
\left\{
\begin{array}{c}
\frac{Dg_{\mu \nu }}{Ds}=\frac{\partial \overline{H}_{R}}{\partial \overline{\pi }^{\mu \nu }}\equiv \frac{\partial H_{R}}{\partial \pi ^{\mu \nu }}, \\
\frac{D\overline{\pi }_{\mu \nu }}{Ds}=-\frac{\partial \overline{H}_{R}}{\partial g^{\mu \nu }}\equiv -\frac{\alpha L}{\kappa }\frac{\partial H_{R}}{\partial g^{\mu \nu }},
\end{array}
\right.  \label{can-2-a}
\end{equation}
where the transformed Hamiltonian $\overline{H}_{R}$ identifies the \emph{normalized GR-Hamiltonian density}
\begin{equation}
\overline{H}_{R}(\overline{x}_{R},\widehat{g},r,s)=\frac{1}{f(h)}\overline{T}_{R}(\overline{x}_{R},\widehat{g},r,s)+\overline{V}\left( \overline{g},\widehat{g},r,s\right) .  \label{HAmiltonian-N}
\end{equation}
Here the notation is as follows. First, for an arbitrary curved space-time $(\mathbf{Q}^{4},\widehat{g}(r))$ the functions $\overline{T}_{R}$ and $\overline{V}$ are now identified with
\begin{equation}
\left\{
\begin{array}{c}
\overline{T}_{R}(\overline{x}_{R},\widehat{g},r,s)=\frac{\overline{\pi }^{\mu \nu }\overline{\pi }_{\mu \nu }}{2\alpha L}, \\
\overline{V}\left( g,\widehat{g},r,s\right) \equiv \sigma \overline{V}_{o}\left( g,\widehat{g},r,s\right) +\sigma \overline{V}_{F}\left( g,\widehat{g},r,s\right)
\end{array}
\right.
\label{V_R-N}
\end{equation}
so that $\overline{T}_{R}(\overline{x}_{R},\widehat{g},r,s)$ identifies the normalized effective kinetic density and $\overline{V}$ by analogy is the corresponding normalized effective potential density, with $\overline{V}_{o}\left( g,\widehat{g}\right) $ and $\overline{V}_{F}\left( g,\widehat{g},r,s\right) $ now being prescribed respectively in terms of $V_{o}\left( g,\widehat{g}\right) $ and $V_{F}\left( g,\widehat{g},r,s\right) $ as
\begin{equation}
\left\{
\begin{array}{c}
\overline{V}_{o}\left( g,\widehat{g}\right) \equiv h\alpha L\left[ g^{\mu \nu }\widehat{R}_{\mu \nu }-2\Lambda \right] , \\
\overline{V}_{F}\left( g,\widehat{g},r,s\right) \equiv \frac{h\alpha L}{2\kappa }L_{F}\left( g,\widehat{g},r,s\right) .
\end{array}
\right.  \label{V_F-N}
\end{equation}
From the canonical equations (\ref{can-2-a}) it is obvious that by construction the transformed canonical momentum $\overline{\pi }^{\mu \nu }$ takes the dimensions of an action, \textit{i.e., }$\left[ \overline{\pi }^{\mu \nu }\right] =\left[ \hslash \right] $. The set $\left\{ \overline{x}_{R},\overline{H}_{R}\right\} $ thus provides an admissible representation of CHS. In particular, it follows that the GR-Hamilton equations in normalized form become respectively
\begin{equation}
\left\{
\begin{array}{c}
\frac{Dg_{\mu \nu }}{Ds}=\frac{\overline{\pi }_{\mu \nu }}{\alpha L}, \\
\frac{D\overline{\pi }_{\mu \nu }}{Ds}=-\frac{\partial \overline{V}\left( g,\widehat{g},r,s\right) }{\partial g^{\mu \nu }},
\end{array}
\right.  \label{RENORM-HAM;}
\end{equation}
with the operator $D/Ds$ being prescribed again in terms of the corresponding prescribed metric tensor $\widehat{g}_{\mu \nu }(r)$. For completeness, we mention here also the normalized Hamilton-Jacobi equation corresponding to the canonical equations (\ref{RENORM-HAM;}). This is reached by introducing the corresponding normalized Hamilton principal function $S(g,P,\widehat{g},r,s)$, \textit{i.e., }the mixed-variable generating function for the canonical transformation
\begin{equation}
x_{R}(s_{o})\equiv (\mathbf{G}_{\mu \nu },\mathbf{P}^{\mu \nu })\Leftrightarrow x(s)\equiv (g_{\mu \nu }(s),\pi ^{\mu \nu }(s)),
\end{equation}
with $x_{R}(s_{o})\equiv (\mathbf{G}_{\mu \nu },\mathbf{P}^{\mu \nu })$ denoting the initial canonical GR-state. Then, $S(g,\mathbf{P},\widehat{g},r,s)$ is prescribed in such a way that the normalized canonical momentum $\overline{\pi }_{\mu \nu }(s)$ is given by $\overline{\pi }_{\mu \nu }=\frac{\partial S(g,\mathbf{P},\widehat{g},r,s)}{\partial g^{\mu \nu }}$, while the initial canonical coordinate $\mathbf{G}_{\mu \nu }$ is determined by the inverse canonical transformation $\mathbf{G}_{\mu \nu }=\frac{\partial S(g,\mathbf{P},\widehat{g},r,s)}{\partial \mathbf{P}^{\mu \nu }}$. It follows that the corresponding dimensionally-normalized Hamilton-Jacobi equation, which is equivalent to Eqs.(\ref{RENORM-HAM;}), is provided by
\begin{equation}
\frac{\partial S(g,\mathbf{P},\widehat{g},r,s)}{\partial s}+\overline{H}_{R}\left( g,\overline{\pi }\equiv \frac{\partial S(g,\mathbf{P},\widehat{g},r,s)}{\partial g},\widehat{g},r,s\right) =0,  \label{GR-HAM-JACOBI}
\end{equation}
with $\overline{H}_{R}$ being prescribed by Eq.(\ref{HAmiltonian-N}).

\section{5 - Structural stability of the GR-Hamilton equations}

In this section we present an application of the GR-Hamiltonian theory for the Einstein field equations developed in this paper, which is represented by Eqs.(\ref{canonical evolution equations -2}) or equivalently by the Hamilton-Jacobi equation (\ref{AA-REDUCED-HJ}).
This refers to the stability of the GR-Hamilton equations with respect to their stationary solution. As shown above, the latter realizes by construction a solution of the Einstein field equations (\ref{Einstein field equations}) in its most general form, i.e., in the presence of arbitrary external sources. Therefore the task to be addressed concerns the so-called \emph{structural stability} of the GR-Hamilton equations (with respect to the Einstein field equations), namely the stability of stationary solutions of the GR-Hamilton equations assuming that the perturbed fields realize particular solutions of the same GR-Hamilton equations. As a first illustration of the problem, here we consider the case of arbitrary vacuum solutions realized by setting a vanishing stress-energy tensor ($\widehat{T}_{\mu \nu }=0$) and possibly retaining also a non-vanishing cosmological constant, as corresponds to de Sitter ($\Lambda >0$) or anti-de Sitter ($\Lambda <0$) space-times.

Let us address the problem in the context of the reduced continuum manifestly-covariant Hamiltonian theory. The study is supported by the conclusions concerning the wave mechanics interpretation of the reduced continuum Hamiltonian dynamics of the gravitational field determined by the Hamilton-Jacobi theory. For this purpose we shall consider perturbations of the reduced canonical state $x_{R}(s)$ which are suitably close to $\widehat{x}_{R}(s)$, namely of the form
\begin{eqnarray}
g^{\mu \nu }(s) &=&\widehat{g}^{\mu \nu }(s)+\varepsilon \delta g^{\mu \nu }(s),  \label{SOL-1} \\
\pi _{\mu \nu }(s) &=&\varepsilon \delta \pi _{\mu \nu }(s),  \label{SOL-2}
\end{eqnarray}
where $\varepsilon \ll 1$ is an infinitesimal dimensionless parameter identifying the perturbations of the canonical fields. In particular, consistent with the existence of a Hamilton-Jacobi theory and its physical interpretation pointed out above, we are authorized to consider a wave-like form of the perturbations. These are assumed to propagate along field geodetics, namely the same perturbations of the canonical fields $\left( \delta g^{\mu \nu }(s),\delta \pi _{\mu \nu }(s)\right) $ are taken of the form
\begin{eqnarray}
\delta g^{\mu \nu }(s) &=&\delta \widehat{g}^{\mu \nu }(\widehat{g}(s))\exp \left\{ G(s)\right\} ,  \label{PERT-1} \\
\delta \pi _{\mu \nu }(s) &=&\delta \widehat{\pi }_{\mu \nu }(\widehat{g}(s))\exp \left\{ G(s)\right\} .  \label{PERT-2}
\end{eqnarray}
Here $G(s)$ denotes the eikonal
\begin{equation}
G(s)=i\frac{\omega }{c}s-iKr^{\mu }(s)t_{\mu }(s),  \label{eiko}
\end{equation}
with $\omega $ and $K$ being $4-$scalar parameters which, by construction, have respectively the dimensions of a frequency and that of a wave number (\textit{i.e.}, the inverse of a length). Therefore, denoting $K\equiv 1/\lambda $, according to the representation (\ref{eiko}), $\omega $ and $\lambda $ identify the invariant frequency and wave-length of the wave-like perturbations of the canonical fields. The invariant character of $\omega $ and $\lambda $ is a characteristic feature of the manifestly-covariant Hamiltonian theory.
It is then immediate to show that, in terms of the canonical evolution equations (\ref{LP-PARAMETRIZED REDUCED HAM-EQ}), the following set of linear differential equations, advancing in proper time the perturbed fields $\delta g^{\mu \nu }(s)$ and $\delta \pi ^{\mu \nu }(s)$ and accurate through $O\left( \varepsilon \right) $, is obtained thanks also to the requirement (\ref{COINSTRAIUNT ON f(h)}):
\begin{eqnarray}
\frac{D}{Ds}\delta g^{\mu \nu }(s) &=&\frac{1}{\kappa }\delta \pi ^{\mu \nu }(s),  \label{prima} \\
\frac{D}{Ds}\delta \pi _{\mu \nu }\left( s\right) &=&\frac{\sigma }{2}\kappa \left( \widehat{R}+2\Lambda \right) \delta g_{\mu \nu }\left( s\right) +\frac{\sigma }{2}\kappa \left( \widehat{g}_{\mu \nu }(s)\widehat{R}_{\alpha \beta }+\widehat{R}_{\mu \nu }\widehat{g}_{\alpha \beta }(s)\right) \delta g^{\alpha \beta }.  \label{seconda}
\end{eqnarray}
In particular, introducing the representation (\ref{eiko}) and recalling the definition of the differential operator $\frac{D}{Ds}$, the first equation yields a unique relationship between $\delta \pi ^{\mu \nu }(s)$ and $\delta g^{\mu \nu }(s)$, namely $\delta \pi ^{\mu \nu }(s)=i\kappa \left[ \frac{\omega }{c}-K\right] \delta g^{\mu \nu }(s)$. Then, Eq.(\ref{seconda}) determines the algebraic linear equation for $\delta g_{\mu \nu }\left( s\right) $
\begin{equation}
\left( -\kappa \left[ \frac{\omega }{c}-K\right] ^{2}-\frac{\sigma }{2}\kappa \left[ \widehat{R}+2\Lambda \right] \right) \delta g_{\mu \nu }(s)=\frac{\sigma \kappa }{2}\left( \widehat{g}_{\mu \nu }(s)\widehat{R}_{\alpha \beta }+\widehat{R}_{\mu \nu }\widehat{g}_{\alpha \beta }(s)\right) \delta g^{\alpha \beta }\left( s\right) .  \label{single equation for the perturbation}
\end{equation}
To solve it explicitly one needs to determine the corresponding algebraic equations holding for the independent tensor products $\widehat{g}_{\mu \nu }(s)\delta g^{\alpha \beta }\widehat{R}_{\alpha \beta }$ and $\widehat{g}_{\alpha \beta }(s)\delta g^{\alpha \beta }\widehat{R}_{\mu \nu }$ appearing on the rhs of the same equation. Thus, by first multiplying tensorially term by term Eq.(\ref{single equation for the perturbation}) by $\widehat{R}^{\mu \nu }$ it follows
\begin{equation}
\left( -\left[ \frac{\omega }{c}-K\right] ^{2}-\sigma \left[ \widehat{R}+\Lambda \right] \right) \widehat{R}^{\mu \nu }\delta g_{\mu \nu }\left( s\right) =\frac{\sigma }{2}\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }\widehat{g}_{\alpha \beta }(s)\delta g^{\alpha \beta }\left( s\right) .  \label{second single equationm}
\end{equation}
Next, multiplying tensorially term by term Eq.(\ref{single equation for the perturbation}) by $\widehat{g}^{\mu \nu }\left( s\right) $ one obtains instead
\begin{equation}
\left( -\left[ \frac{\omega }{c}-K\right] ^{2}-\sigma \left[ \widehat{R}+\Lambda \right] \right) \widehat{g}^{\mu \nu }\left( s\right) \delta g_{\mu \nu }\left( s\right) =2\sigma \widehat{R}_{\alpha \beta }\delta g^{\alpha \beta }\left( s\right) .  \label{third single equation}
\end{equation}
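For clarity, we note (as an added intermediate remark, with the shorthand $A\equiv -\left[ \frac{\omega }{c}-K\right] ^{2}-\sigma \left[ \widehat{R}+\Lambda \right] $) that the two contracted equations read $A\,\widehat{R}^{\mu \nu }\delta g_{\mu \nu }=\frac{\sigma }{2}\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }\,\widehat{g}_{\alpha \beta }\delta g^{\alpha \beta }$ and $A\,\widehat{g}^{\mu \nu }\delta g_{\mu \nu }=2\sigma \widehat{R}_{\alpha \beta }\delta g^{\alpha \beta }$. Eliminating the two independent contractions between them gives
\begin{equation*}
A^{2}\,\widehat{g}_{\alpha \beta }\delta g^{\alpha \beta }=\sigma ^{2}\,\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }\,\widehat{g}_{\alpha \beta }\delta g^{\alpha \beta },
\end{equation*}
so that, for the gauge values $\sigma =\pm 1$ (for which $\sigma ^{2}=1$), non-trivial perturbations require $A^{2}-\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }=0$, which is the dispersion relation obtained below.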
Combining together equations (\ref{second single equationm}) and (\ref{third single equation}) one is finally left with the equation
\begin{equation}
\left( -\left[ \frac{\omega }{c}-K\right] ^{2}-\sigma \left[ \widehat{R}+\Lambda \right] \right) ^{2}-\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }=0,  \label{DISPERSION RELATION}
\end{equation}
which identifies the dispersion relation between $\omega $ and $K$, \textit{i.e.}, the condition under which, in the context of the reduced continuum Hamiltonian theory, the infinitesimal perturbations (\ref{PERT-1}) and (\ref{PERT-2}) can occur.

To analyze in terms of Eq.(\ref{DISPERSION RELATION}) the conditions for the existence of stable, marginally stable or unstable oscillatory solutions for the canonical fields $\left( \delta g^{\mu \nu }(s),\delta \pi _{\mu \nu }(s)\right) $, it is convenient to classify the possible complex roots for $\omega $ which can locally occur. Such a classification has therefore necessarily a local character. More precisely, single roots of this equation such that locally a) $Im(\omega )<0$, b) $Im(\omega )=0$ or c) $Im(\omega )>0$ will be referred to as (locally) stable, marginally stable and unstable respectively. Correspondingly, the perturbation $\left( \delta g^{\mu \nu }(s),\delta \pi _{\mu \nu }(s)\right) $ will be classified as locally a) decaying, b) oscillatory, c) growing. Therefore, the equilibrium solution $\widehat{g}(r)$ will be denoted as a) locally stable, b) locally marginally stable and c) locally unstable respectively if: a) all roots of $\omega $ are locally stable, namely for them $Im(\omega )<0$; b) there is at least one root of $\omega $ which is locally marginally stable, namely such that $Im(\omega )=0$; c) there is at least one locally unstable root of $\omega $, namely such that $Im(\omega )>0$.

To investigate the stability problem we consider the vacuum configuration in which $T_{\mu \nu }\equiv 0$, but still $\Lambda \neq 0$, so that $\widehat{R}_{\mu \nu }=\Lambda \widehat{g}_{\mu \nu }$ and $\widehat{R}=4\Lambda $ is the constant Ricci scalar. In this case $\widehat{R}+\Lambda =5\Lambda $ and $\widehat{R}_{\mu \nu }\widehat{R}^{\mu \nu }=4\Lambda ^{2}$, so that Eq.(\ref{DISPERSION RELATION}) yields the dispersion relation in the explicit form
\begin{equation}
\left( -\left[ \frac{\omega }{c}-K\right] ^{2}-5\sigma \Lambda \right) ^{2}-4\left( -\sigma \Lambda \right) ^{2}=0,  \label{DISPERSION RELATION-2}
\end{equation}
which delivers the roots
\begin{equation}
\frac{\omega }{c}-K=\left\{
\begin{array}{c}
\pm \sqrt{-7\sigma \Lambda }, \\
\pm \sqrt{-3\sigma \Lambda }.
\end{array}
\right.
\end{equation}
Therefore, two possible alternatives can be distinguished in which respectively:

A) \emph{First case:} $-\sigma \Lambda \geq 0$. Then the equilibrium solution $\widehat{g}(r)$ is marginally stable since all roots of the dispersion relation (\ref{DISPERSION RELATION-2}) have vanishing imaginary part.

B) \emph{Second case:} $-\sigma \Lambda <0$. Then $\widehat{g}(r)$ is necessarily unstable (there always exists an unstable root of the same equation (\ref{DISPERSION RELATION-2})).

Hence, both for the case of de Sitter and anti-de Sitter space-times ($\Lambda $ respectively $>0$ or $<0$), the possible occurrence of stability or instability depends on the multiplicative gauge parameter $\sigma $ appearing in the definition of the effective potential density $V$ (see the second equation given above in (\ref{V_R-N})). However, a physically-admissible Hamiltonian theory of GR should predict stable solutions, i.e., solutions which are structurally stable in the sense indicated above.
This should occur in principle not just for special realizations of the background space-time but - at least in the case of vacuum - \emph{for arbitrary vacuum background space-times} $\left( \mathbf{Q}^{4},\widehat{g}(r)\right) $. In particular, if $\Lambda >0$ - as most frequently invoked in the literature (see for example \cite{Winberg2000,Carroll2004}) - this happens provided the gauge factor $\sigma $ is uniquely identified with $\sigma =-1$. Although the rigorous validity of such a choice still remains a mere conjecture at this point, its full justification should ultimately emerge from quantum theory. Nevertheless, the stability property pointed out here can be viewed as a prerequisite for the consistent development of a covariant quantum gravity theory. For this reason the issue will be further discussed in Part 2.

\section{6 - Conclusions}

A common fundamental theoretical aspect lying at the foundation of both General Relativity (GR) and classical field theory is the variational character of the fundamental dynamical laws which identify these disciplines. This concerns both the representation of the Einstein field equations and the covariant dynamics of classical fields, as well as of discrete (e.g., test particles) or continuum systems in curved space-time. Issues related to the variational formulation of the Einstein equations have been treated in Refs.\cite{noi1,noi2}, where the existence of a new type of Lagrangian and Hamiltonian variational approach has been identified in terms of synchronous variational principles realized in the framework of the DeDonder-Weyl formalism. As shown in Ref.\cite{noi2}, this leads to the realization of a manifestly covariant Hamiltonian theory for the Einstein equations.

In this paper new aspects of the Hamiltonian structure of GR, which is referred to here as CCG-theory, have been displayed. In particular, we have shown that a reduced-dimensional realization of the continuum Hamiltonian theory for the Einstein field equations, denoted here as CHS (Classical Hamiltonian Structure of GR), actually exists, in which both generalized coordinates and corresponding conjugate momenta are realized by means of 2nd-order $4-$tensors. The virtue of such an approach lies precisely in its general validity. This means in fact that the same Hamiltonian structure holds for arbitrary particular solutions of the Einstein field equations and arbitrary realizations of the external source terms appearing in the variational potential density. As a result, a causal form has been obtained for the corresponding continuum Hamilton equations by introducing a suitable Lagrangian parametrization prescribed in terms of the proper time $s$ defined along field geodetics of the curved space-time. In this way, the same equations are cast in the equivalent form of an initial-value problem for suitable canonical evolution equations, referred to here as GR-Hamilton equations. This provides a physical interpretation for the reduced Hamiltonian theory, according to which an arbitrary initial canonical state is dynamically advanced by means of the canonical flow generated by the same Hamilton equations. Given the validity of such a Hamiltonian theory, the case of canonical transformations which generate the flow corresponding to the continuum canonical equations has been considered.
This has been obtained by introducing the appropriate mixed-variable generating function - the so-called Hamilton principal function - and by developing the corresponding Hamilton-Jacobi theory in manifestly-covariant form. As a result, the same generating function has been shown to obey a $4-$scalar continuum Hamilton-Jacobi equation which has been proved to be equivalent to the corresponding canonical evolution equations of the Hamiltonian theory. The global prescription and regularity of the Hamiltonian structure have been analyzed and the gauge transformation properties of the reduced Hamiltonian density $H_{R}$ have been pointed out.

Finally, as an application of the Hamiltonian formulation developed here, the structural stability of the Hamiltonian theory has been investigated, a feature which is required for a consistent development of a corresponding quantum theory of GR based on the same canonical representation. In this paper we have studied the stability of perturbed fields which realize particular solutions of the GR-Hamilton equations with respect to stationary solutions, \textit{i.e.}, metric tensor solutions of the Einstein field equations. The case of background vacuum solutions having vanishing stress-energy tensor and non-null cosmological constant has been analyzed, determining the conditions for the occurrence of stable and unstable roots, adopting an eikonal representation for the perturbed fields.

These conclusions highlight the major key features of the reduced Hamiltonian theory and corresponding Hamilton-Jacobi equation determined here. The new theory, besides being a mandatory prerequisite for the covariant theory of quantum gravity to be established in Part 2, is believed to be amenable to applications to physics and astrophysics-related problems and to provide at the same time new insight into the axiomatic foundations of General Relativity.

\section{Acknowledgments}

Work developed within the research projects of the Czech Science Foundation GA\v{C}R grant No. 14-07753P (C.C.) and the Albert Einstein Center for Gravitation and Astrophysics, Czech Science Foundation No. 14-37086G (M.T.).
\section{Lanes and Crowds}
\label{sec:background}

Although many definitions exist, a crowd can generally be described as ``a large group of individuals gathered together in the same physical area for some duration of time''. Crowds often appear at busy public locations, such as train stations, airport terminals, football stadiums, theaters, city squares, or shopping malls. The behavior of the crowd is the result of the interactions between individuals. According to Helbing and Molnar~\cite{helbing1995social}, these interactions are local: each individual influences only the people nearby. Describing a crowd using only local information is thus a natural representation.

Martella et al.~\cite{martella2014crowd} proposed the idea of representing the \emph{texture} of crowds using \emph{proximity graphs}. Formally, a proximity graph is a form of spatio-temporal graph where nodes represent individuals. Time is discretized into fixed-sized timesteps and two nodes are connected by an edge at a timestep if these two individuals have been within physical proximity of each other during that time, i.e., their distance has been less than some predetermined distance. Note that proximity graphs do not store any absolute localization data; they only describe the local ``view'' of each individual. Furthermore, we assume edges do not store any information on the physical distance between nodes. Previous work~\cite{martella2014proximity} focused on methods for extracting proximity graphs from real-world noisy data obtained using proximity sensors.

\begin{figure} \centering \includegraphics[width=0.45\textwidth]{img/lane} \caption{Example of lane in a crowd from top-down view.} \label{fig:example_of_lane} \end{figure}

\reffig{example_of_lane} shows an artificial example of the proximity graph for a crowd. Points represent individuals and arrows indicate their direction and speed of movement. This particular example shows non-random behavior: a lane has emerged, with nodes flowing from the bottom-left corner to the right-hand edge. According to Helbing et al.~\cite{helbing2001self}, the formation of lanes in crowds is a natural occurrence. Individuals moving towards a target navigate the environment according to their own personal preferences. However, while moving through a dense crowd, they often need to step aside to prevent collisions with others. To minimize these interactions, it is beneficial for the walkers to follow behind someone moving in the same direction. The result of this local behavior is stable ``highway''-like lanes through the crowd.

One quick glance is sufficient to recognize that the highlighted nodes in \reffig{example_of_lane} have formed a lane. However, this observation is informal and relies on the intuition of the observer. Based on this intuition, we can define three criteria for a lane.

\begin{itemize}
\item[\textbf{(R1)}] Members of a lane move in a \textbf{similar direction} and have a \textbf{similar speed}. However, since each individual only influences its local neighborhood, each lane member should have a movement vector similar only to surrounding members. The lane as a whole can have curves and its movement speed is not uniform, though these changes are gradual and do not happen abruptly. A lane moves similarly to how a river flows through a landscape.
\item[\textbf{(R2)}] A lane must be \textbf{connected}. In other words, if one were to create a link between each pair of lane members that are in proximity of each other, the result must be one connected unit.
A lane never consists of multiple disjoint segments.
\item[\textbf{(R3)}] A lane is defined by its \textbf{border}, not by its contents. We identify lanes due to the abrupt transition between the movement inside and outside the lane. However, this border might not always be well-defined and can be ambiguous. This happens, for example, when someone leaves or joins the lane, thus blurring the line between the lane and the crowd.
\end{itemize}

\section{Related Work}
\label{sec:related_work}

Utilizing proximity graphs to analyze the behavior of people has proven to be a promising area of research. Martella et al. showed how proximity graphs can be used to mine the behavior of museum visitors~\cite{martella2016visualizing}, track people in a six-story building using only a handful of anchor points~\cite{martella2017exploiting}, and capture the social interactions at an IT conference~\cite{martella2014proximity}. However, further research on proximity graphs has been scarce.

In computer vision, the analysis of crowd behavior is an active field of research. Most work focuses on automated analysis of surveillance camera footage. We discuss some of the recent major contributions in this section. We refer to the survey by Li et al.~\cite{li2015crowded} for a comprehensive overview of research on crowd analysis from the area of computer vision.

One particular topic from computer vision which is related to lane detection is \emph{crowd behavior analysis}~\cite{li2015crowded}. These algorithms classify the behavior of people in crowds. For example, Rodriguez et al.~\cite{rodriguez2011data} proposed a data-driven crowd analysis approach. The algorithm works by first learning common crowd motion patterns from a large database containing crowd videos. To analyze a new video, the frames are split up into blocks which are matched to learned patches from the database. By labeling the learned patches, one can classify the behavior in different regions of the video. The authors argue that, while the number of all possible videos is infinite, the space of recognizable crowd patterns might not be all that large.

Benabbas et al.~\cite{benabbas2010motion} presented a method that can detect six types of crowd-related events in videos, including walking, running, splitting, dispersion, and evacuation. The method works by tracking objects of interest using optical flow techniques. Next, the camera view is divided into fixed-sized blocks. For each block, the $K$ most dominant movement vectors are determined, where $K$ is a user-defined parameter. Blocks are clustered using a region-based segmentation algorithm. Finally, each cluster is classified into one of the six events based on the average movement vector within the cluster.

Solmaz et al.~\cite{solmaz2012} showed how five types of behavior can be extracted from video: bottlenecks, fountainheads, lanes, arches, and blocking. Their method moves particles according to the optical flow of the video. Each region is then classified using the Jacobian matrix based on the linear approximation of the trajectories within the region. The eigenvalues of this matrix determine which of the five types the behavior belongs to.

Another topic related to lane detection which has received much attention in computer vision is \emph{crowd motion segmentation}~\cite{li2015crowded}. These algorithms segment the video into \emph{motion patterns}, i.e., spatial regions that have a high degree of similarity in terms of speed and direction.
For instance, Ali et al.~\cite{ali2007lagrangian} used techniques from computational fluid dynamics for motion segmentation. A flow field is generated from frames of a moving crowd. From the flow field, a finite-time Lyapunov exponent field is constructed, which shows the Lagrangian Coherent Structures (LCS) in the underlying flow. The LCS highlight the boundaries of flow segments and are used for segmentation.

Kang \& Wang~\cite{kang2014fully} demonstrated how neural networks can be used for crowd segmentation. First, they show how to use fully convolutional neural networks to segment individuals from single static frames from videos of crowds. Next, they extend this method by integrating motion cues to capture movement, helping to separate stationary and moving crowds, and structure cues, such as walls and floors. The results show tight segmentation contours around individuals.

Zhao \& Medioni~\cite{gong2012robust} presented a method based on manifold learning and \emph{tracklets}. A tracklet is a short fragment of an object's trajectory obtained by tracking the object for a short amount of time. The tracklet points are mapped to points in $(x, y, \theta)$ space, where $(x,y)$ corresponds to the image space and $\theta$ represents the motion direction in degrees between $0$ and $360$. In this 3D space, points form manifold structures, each corresponding to a motion pattern. The authors propose a robust manifold grouping algorithm based on Tensor Voting to extract the manifolds.

The use of proximity graphs shows two clear advantages over utilizing cameras. First, proximity graphs can provide a holistic view over large areas. They can be used to monitor the behavior of crowds within one single building, a small neighborhood, or even an entire city. Cameras are inherently limited to one perspective and there seems to be little research on how to ``join'' the image analysis from multiple cameras. Second, many computer vision techniques take a coarse-grained approach and classify regions within the image, meaning any information about individuals is lost. Our approach classifies nodes of the proximity graph, thus retaining this fine-grained information. Overall, we believe our method is the first lane detection algorithm designed for proximity graphs.

\section{Algorithm for Lane Detection}
\label{sec:algorithm}

Our goal is to design an algorithm that extracts lanes from proximity graphs. First, we discuss the challenges of designing such an algorithm. Second, we present our lane-detection solution.

\subsection{Challenges}

The input of our lane detection algorithm is a proximity graph with nodes $\{v_1, \ldots, v_n\}$ and edges $E$, where $(v_i, v_j, t) \in E$ indicates that nodes $v_i$ and $v_j$ were close to each other at timestep $t$. The output should be, for each timestep $t$, the lanes detected at that moment in time. An important decision is how to deal with nodes that are not part of a lane, such as isolated nodes or stationary crowds. We have chosen to assign each of these groups to their own cluster. This simplifies the problem of lane detection into an unsupervised classification problem where the goal is to partition the nodes into ``coherent'' clusters for every moment in time. Each cluster consists of people showing similar behavior.
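To make this input format concrete, the following is a minimal sketch (in Python; our own illustration, not part of the system described in this paper) of how such a proximity graph could be represented and built from ground-truth positions. The function and variable names, as well as the fixed detection radius, are our own illustrative assumptions.

\begin{verbatim}
from collections import defaultdict
from itertools import combinations
import math

def build_proximity_graph(positions, radius):
    """Build a proximity graph from per-timestep positions.

    positions: dict mapping timestep t -> {node_id: (x, y)}
    radius:    detection radius; nodes closer than this get an edge.
    Returns a dict mapping t -> set of edges (i, j) with i < j.
    """
    graph = defaultdict(set)
    for t, pos in positions.items():
        for i, j in combinations(sorted(pos), 2):
            if math.dist(pos[i], pos[j]) < radius:
                graph[t].add((i, j))
    return graph
\end{verbatim}

Note that, consistent with the definition above, only the edge set per timestep is stored: no absolute coordinates and no inter-node distances survive in the output.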
\subsubsection{Analysis of Proximity Graph}

Our initial attempts at lane detection were built on the following premise: choose a time window $W$, aggregate the data for every $W$ consecutive timesteps into a single dataset, and partition the resulting graph. However, this showed poor results since graph partitioning algorithms rely heavily on the presence of high density within each cluster. Proximity graphs have a spatial nature in their topology, resulting in low intra-cluster density. In our experience, off-the-shelf graph partitioning algorithms and community detection algorithms tend to split long lanes into several separate clusters.

The fundamental problem is that the definition of lanes is rooted deeply in the notions of ``distance'' and ``velocity'', which are difficult to formalize for proximity graphs. To be able to define these terms more explicitly, we embed the nodes into a two-dimensional space. An important observation is that the location determined by such an embedding does not need to be highly accurate. For lane detection, only the local neighborhood of each node is relevant, so it is sufficient if the position of each node is accurate relative only to the nodes that surround it. We found that techniques from graph drawing are suitable to calculate the embedding. Since proximity graphs change over time, the embedding is repeated for each timestep to adapt the previous embedding to the new topology. This adaptation produces movements of the nodes over time. The nodes can then be clustered based on their position and velocity.

\subsubsection{Selection of Clustering Algorithm}

Choosing the right clustering method is non-trivial since lane detection presents a trade-off between two problems: \emph{transitivity} and \emph{ambiguity}. On the one hand, lanes can be of any arbitrary shape and they are often elongated. This means many nodes within a lane are only indirectly connected to each other. If there is a strong relation between nodes $a$ and $b$ and between $b$ and $c$, then the nodes $a$, $b$, and $c$ all belong to the same cluster, even if the relation between $a$ and $c$ is weak. Transitivity must be taken into account. On the other hand, real-world crowds often act chaotically and clustering based on individual links between nodes is sensitive to noise. For example, consider the scenario where a person leaves the lane and joins the stationary crowd. During this transition, this person will have a strong relation with both the lane and the crowd. The clustering method should correctly interpret these ambiguous links: a single ``bridge'' between two clusters should not be sufficient evidence that the clusters should be merged.

Centroid-based clustering methods, such as $k$-means~\cite{hartigan1979algorithm}, Mean shift~\cite{comaniciu2002mean}, or EM~\cite{moon1996expectation}, are not suitable for lane detection due to transitivity. Lanes lack a ``center'', making detection of elongated lanes impossible. Hierarchical methods, such as SLINK~\cite{sibson1973slink}, are unfit due to ambiguity since a single ``noisy'' link can cause a lane to be undetectable. The class of clustering methods that respects both aspects of lane detection is \emph{density-based clustering}. This class of algorithms is built on the idea that clusters correspond to dense groups of points that are separated by sparse regions. These algorithms detect clusters of any arbitrary shape, thus incorporating transitivity. They also deal well with noise, since a few outliers do not yield sufficiently high density.
High-quality results were obtained using DBSCAN~\cite{ester1996density}.

\vspace{-0.1cm}
\subsection{Algorithm Description}

Our lane-detection method consists of two stages: {\it graph embedding} and {\it density-based clustering}.

{\bf Graph embedding.} For the first stage, we embed the nodes into two-dimensional space using the traditional force-directed algorithm by Fruchterman and Reingold~\cite{fruchterman1991graph}. Force-directed graph embedding is a well-studied topic and many algorithms exist~\cite{kobourov2012spring}, but all follow a similar approach. Forces are assigned among pairs of vertices: an attractive force between pairs connected by an edge and a repulsive force between the remaining pairs. The behavior of the system is simulated until an equilibrium state is reached. In our method, nodes are initially randomly placed and forces are simulated until equilibrium is reached. For subsequent timesteps, we use the resulting positions from the previous run as initial positions for the next run. This allows for incremental updates of the nodes' positions and results in movement of the nodes over time. Computational cost is low since few iterations are needed per timestep to reach equilibrium.

{\bf Density-based clustering.} Next, we cluster the nodes using DBSCAN~\cite{ester1996density}, since it has proven to provide high-quality results~\cite{ester1998clustering} and scales to large datasets~\cite{zhou2000approaches}. DBSCAN takes two parameters: a radius value $\varepsilon$ and the minimum number of points $MinPts$ that should lie within this radius. More specifically, let $d_{ij}(t)$ measure the ``similarity'' between nodes $v_i$ and $v_j$ at time $t$. Clearly, $d_{ij}(t)$ can be defined in many different ways. We discuss several options for $d_{ij}(t)$ in \refsec{evaluation}. The $\varepsilon$-\emph{neighborhood} of a node $v_i$ at time $t$ is the set of all nodes $v_j$ such that $d_{ij}(t) < \varepsilon$. A node is referred to as a \emph{core node} if the size of its $\varepsilon$-neighborhood is at least $MinPts$. Intuitively, core nodes are all data points ``near the core'' of a cluster since they have many neighbors in their proximity. Non-core nodes are found at the ``border'' of a cluster.

DBSCAN starts at an arbitrary node $v$. If $v$ is not a core node, it is labeled as noise and the procedure repeats at the next unlabeled node. If $v$ is a core node, a new cluster $C$ is created containing node $v$. The cluster is now iteratively expanded by repeatedly adding every unlabeled node which is within $\varepsilon$ distance of any core node already in $C$. Once the cluster is complete, the entire procedure is repeated for the next unlabeled node. There are different ways to handle noise points afterwards. We have chosen to assign each noise node to its own singleton cluster. Note that DBSCAN is not deterministic: non-core nodes can be assigned to different clusters depending on the order in which nodes are processed. We randomize the processing order for each run.
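As an illustration only, the following is a minimal sketch of the two-stage pipeline in Python, built on networkx and scikit-learn. This is our own sketch, not the implementation used in this paper; the parameter values and the plain Euclidean distance in the embedded space are simplifying assumptions (a custom similarity such as those of \refsec{evaluation} could instead be supplied to DBSCAN as a precomputed distance matrix).

\begin{verbatim}
import networkx as nx
import numpy as np
from sklearn.cluster import DBSCAN

def detect_lanes(edges_per_timestep, eps=0.1, min_pts=15):
    """edges_per_timestep: list of edge lists, one per timestep.
    Yields, per timestep, a dict mapping node -> cluster label."""
    pos = None  # previous embedding, reused as a warm start
    for edges in edges_per_timestep:
        G = nx.Graph(edges)
        # Stage 1: incremental force-directed (Fruchterman-Reingold)
        # embedding; few iterations suffice thanks to the warm start.
        pos = nx.spring_layout(G, pos=pos, iterations=5)
        nodes = list(G.nodes)
        X = np.array([pos[v] for v in nodes])
        # Stage 2: density-based clustering of the embedded positions.
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
        # Assign each noise point (label -1) to its own singleton cluster.
        next_id = labels.max() + 1
        out = {}
        for v, lbl in zip(nodes, labels):
            if lbl == -1:
                lbl, next_id = next_id, next_id + 1
            out[v] = lbl
        yield out
\end{verbatim}

Passing the previous layout as the initial positions is what produces the node movement over time that the clustering stage relies on.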
\section{Conclusions \& Future Work}
\label{sec:conclusion}

In this work, we present a method to detect lanes in proximity graphs. Our method combines graph embedding with density-based clustering. For evaluation, we have explored three different score functions to measure similarity between nodes. The best performance was obtained by measuring similarity as the maximum over two terms: difference in position (distance) and difference in velocity. The results show that our method can detect different types of lanes (thick lanes, parallel lanes, and curved lanes). Graph embedding performs excellently, although its computational cost is high. For DBSCAN, exact tuning of the parameters is important. Most notably, DBSCAN shows sensitivity to the value of $\varepsilon$.

For future work, we are exploring methods to automatically determine the best parameters for DBSCAN. Furthermore, we are looking into more complex scenarios, for example, opposing lanes, lanes crossing at an intersection, and lanes moving through a narrow doorway. We are also extending our simulation model to support more situations, such as people joining/leaving the lane or a lane dissolving into the crowd. Finally, we are working on obtaining real-world measurements to evaluate our method on non-synthetic datasets. Overall, we view our work as a first step towards rich pattern recognition in proximity graphs. One can think of many types of crowd behavior identification, such as detection of congestion, social cliques, evacuations, and anomalies. Our goal is to utilize proximity graphs as a tool to enable these types of analysis.

\section{Simulation Model}
\label{sec:model}

To evaluate the quality of our lane detection algorithm, we require a simulation model which accurately models lanes in a crowd. Many models for crowd simulation have been proposed, most notably the social force model~\cite{helbing1995social} and its many variations (see the survey by Castellano et al.~\cite{castellano2009statistical}). However, while these models simulate realistic crowd dynamics, the behavior that emerges is not controlled. For example, the social force model~\cite{helbing1995social} shows lane formation, but these lanes form organically and are not planned. To evaluate the accuracy of our lane detection method, our simulations need lanes to form according to some given ground truth. To the best of our knowledge, no such model currently exists.

\begin{figure} \centering \includegraphics[width=0.35\textwidth]{img/regions} \caption{Example of lane going through the crowd.} \label{fig:regions} \end{figure}

We propose a simple probabilistic model that exhibits controlled lane formation. Our model is based on random walks on the two-dimensional grid, i.e., each walker has integer coordinates. Initially, walkers are randomly placed in certain areas. Time passes in discrete steps. During each timestep, each walker can take one step in one of the four cardinal directions (north, east, south, west) according to predefined behavior. There are two types of behavior: \emph{random walkers} and \emph{lane walkers}.

\textbf{Random walkers} model a nearly stationary crowd. We define a rectangular region in which random walkers are initially placed at random locations (see \reffig{regions}). This region can be seen as a top-down view of a public area (e.g., city square, train station, airport terminal). During each timestep, each random walker behaves according to the following rules:
\begin{itemize}
\item If outside the region, take one step back.
\item Otherwise:
\begin{itemize}
\item With probability $p$: stay at the current location.
\item With probability $1-p$: take one random step.
\end{itemize}
\end{itemize}

\textbf{Lane walkers} model the lane going through the crowd. For each lane, we define a path consisting of a series of line segments (see \reffig{regions} for an example). Lane walkers are initially placed at the start of the path and they follow the path until they reach the end.
During each timestep, every lane walker adheres to the following rules:
\begin{itemize}
\item With probability $q$, follow the lane. Find the point $a$ on the path closest to the position $b$ of the walker.
\begin{itemize}
\item If $\|a-b\| = w > w_\text{max}$, take one step in the direction of $a$.
\item Otherwise, take one step in the direction tangent to the line segment $ab$, i.e., follow the direction of the lane.
\end{itemize}
\item With probability $1-q$:
\begin{itemize}
\item With probability $p$: stay at the current location.
\item With probability $1-p$: take one random step.
\end{itemize}
\end{itemize}

\reffig{scenarios} illustrates the walker following the lane. The parameter $w_\text{max}$ controls the maximum width of the lane. If $w > w_\text{max}$, the lane walker has wandered too far off from the lane and must move closer, thus limiting the maximum width to $2w_\text{max}$. If $w < w_\text{max}$, the lane walker must follow the direction of the lane. To keep walkers aligned on the grid, they take one horizontal step with probability $\frac{|dx|}{|dx|+|dy|}$ or one vertical step with probability $\frac{|dy|}{|dx|+|dy|}$. The average movement vector is thus $[dx~dy]^T$.

If walker $A$ wants to move to a new location which is already occupied by another walker $B$, then $A$ is allowed to ``push'' $B$ by forcing it to move to one of the three remaining locations adjacent to $B$. This pushing mechanism models people stepping aside for others and is necessary to prevent bottlenecks where lane walkers are blocked by stationary random walkers. If all three adjacent locations are already occupied, the move by $A$ fails and it remains at its current location. In other words, ``pushing'' is not transmissible: walkers which get pushed cannot also push other walkers.

\begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/scenarios} \caption{Two scenarios of a lane walker following a path.} \label{fig:scenarios} \end{figure}

The parameters $p$ and $q$ control the difficulty of detecting the lane. For $p=0$, the random walkers are completely stationary and only the lane walkers move, while for $p=1$ the random walkers act chaotically. For $q=1$, the lane walkers move at maximum speed, while for $q=0$ the lane walkers show the same behavior as random walkers. Changing the value of $p$ or $q$ thus has a direct impact on the difficulty of lane detection.

\ignore{ \begin{figure} \centering \includegraphics[width=0.4\textwidth]{img/result} \caption{Example of lane simulation at different moments in time. Black and white dots correspond to random and lane walkers, respectively.} \label{fig:model_example} \end{figure} \reffig{model_example} shows an example of the simulation of a straight lane at different moments in time. }
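For concreteness, the following is a minimal sketch (in Python; our own illustration, not the simulator used for the experiments) of two of the update rules above: the random-walker step and the grid-alignment rule for lane walkers. The approximation of ``take one step back'' by clamping towards the region boundary is our own simplifying assumption.

\begin{verbatim}
import random

def random_walker_step(x, y, region, p):
    """One timestep of a random walker; region = (xmin, ymin, xmax, ymax).
    Stepping 'back' towards the region is approximated here by
    clamping the position onto the region boundary."""
    xmin, ymin, xmax, ymax = region
    if not (xmin <= x <= xmax and ymin <= y <= ymax):
        return min(max(x, xmin), xmax), min(max(y, ymin), ymax)
    if random.random() < p:
        return x, y  # stay at the current location
    dx, dy = random.choice([(0, 1), (1, 0), (0, -1), (-1, 0)])
    return x + dx, y + dy

def grid_step(dx, dy):
    """Take one axis-aligned step whose expectation is (dx, dy):
    horizontal with probability |dx|/(|dx|+|dy|), else vertical."""
    if dx == 0 and dy == 0:
        return (0, 0)
    if random.random() < abs(dx) / (abs(dx) + abs(dy)):
        return (1 if dx > 0 else -1, 0)
    return (0, 1 if dy > 0 else -1)
\end{verbatim}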
\section{Experimental Setup}
\label{sec:evaluation}

We evaluated our lane detection algorithm as described in \refsec{algorithm} using data generated with the model from \refsec{model}. We describe the three similarity functions and the three scenarios we consider.

\subsection{Similarity Scores}

As discussed in \refsec{algorithm}, we are required to define a function $d_{ij}(t)$ that measures the similarity between two nodes. Low scores indicate a strong relation (i.e., nodes belonging to the same lane), while high scores indicate a weak relation. We explore three possible options for this function. The parameter $W$ is the size of the window; it determines how far we look ``back in time''.

\textbf{Score function A}: Calculate the maximum physical distance between two nodes over the last $W$ timesteps. The intuition is that two nodes belong to the same lane if they are physically close to each other for a long period of time. Let $p_i(t)$ be the position of node $v_i$ at time $t$. The score function is defined as follows:
\begin{equation*}
d_{ij}^A(t) = \max_{0 \leq dt \leq {W}} \| p_i(t-dt) - p_j(t-dt) \|.
\end{equation*}

\textbf{Score function B}: One issue with option A is that one might need a very large window size to detect the lane, since two nodes can be physically close for a long duration of time while not belonging to the same lane (for example, with a horseshoe-shaped lane). Alternatively, define the velocity vector $s_i$ of node $v_i$ as the average distance traveled per timestep over the last $W$ timesteps:
\begin{equation*}
s_i(t) = \frac{p_i(t) - p_i(t - W)}{W}.
\end{equation*}
Given the current position and velocity of a node, we can predict its expected position $T$ timesteps into the future.
\begin{equation*}
d_{ij}^B(t) = \max \left[ \| p_i(t) - p_j(t) \|,\| (p_i(t) + Ts_i(t)) - (p_j(t) + Ts_j(t)) \| \right].
\end{equation*}

\textbf{Score function C}: Instead of comparing the expected future positions of two nodes, we can also compare only the expected \emph{displacement}. The intuition is that if two nodes are close to each other and show similar displacement, they most likely belong to the same lane. A simple way to formalize this is as follows:
\begin{equation}
d_{ij}^C(t) = \max \left[ \| p_i(t) - p_j(t) \|, T~\| s_i(t) - s_j(t) \| \right].
\end{equation}
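The three score functions translate directly into code. The following is a minimal sketch in Python (our own illustration, with hypothetical helper names), assuming \verb|pos[t][i]| returns the position of node $v_i$ at time $t$ as a numpy array:

\begin{verbatim}
import numpy as np

def score_a(pos, i, j, t, W):
    # Maximum physical distance over the last W timesteps.
    return max(np.linalg.norm(pos[t - dt][i] - pos[t - dt][j])
               for dt in range(W + 1))

def velocity(pos, i, t, W):
    # Average displacement per timestep over the last W timesteps.
    return (pos[t][i] - pos[t - W][i]) / W

def score_b(pos, i, j, t, W, T):
    # Current distance vs. distance between predicted future positions.
    si, sj = velocity(pos, i, t, W), velocity(pos, j, t, W)
    now = np.linalg.norm(pos[t][i] - pos[t][j])
    future = np.linalg.norm((pos[t][i] + T * si) - (pos[t][j] + T * sj))
    return max(now, future)

def score_c(pos, i, j, t, W, T):
    # Current distance vs. difference in expected displacement.
    si, sj = velocity(pos, i, t, W), velocity(pos, j, t, W)
    return max(np.linalg.norm(pos[t][i] - pos[t][j]),
               T * np.linalg.norm(si - sj))
\end{verbatim}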
\begin{figure} \includegraphics[width=0.4\textwidth]{img/diagrams} \caption{Three different lane scenarios used: (1) one straight lane, (2) curved lane, (3) two parallel straight lanes.} \label{fig:diagrams} \end{figure}

\subsection{Scenarios}

For evaluation, we consider a scenario where random walkers are placed in a square region of $100 \times 100$ units (see \reffig{diagrams}). Lane walkers are placed in regions north of this square and walk south. The lane regions have width $w$ and height $100 \times 100 / w$. The density of both regions is equal to ensure the number of random and lane walkers is equal. Unless noted otherwise, we set $w=10$, $p=0.2$, $q=0.5$ and the density is $0.3$. We further experiment with these parameters in \refsec{results_sec}. Our algorithm is performed during each timestep, starting at time $W$ and ending either once the last walker exits the region or until $1000$ timesteps have passed.

For every timestep $t$ of the simulation, our algorithm yields a partition $X(t)=\{X_1(t), \dots, X_n(t)\}$ of the population into cohesive clusters. The ground-truth clusters of the model are $\{R, L\}$, where $R$ is the set of random walkers and $L$ is the set of lane walkers. For the scenario with two lanes, there are three ground-truth clusters. We use the normalized mutual information~\cite{vinh2010information} (NMI) score to measure the correlation between the two partitions. The range is between $0$ (no correlation) and $1$ (perfect clustering). The reported NMI is the average over the entire simulation.

\section{Empirical Evaluation}
\label{sec:results_sec}

In this section we present the preliminary results of our method. In \refsec{results_tuning}, we evaluate the three proposed similarity functions and tune the parameters of the algorithm for a simple scenario. In \refsec{results_scenarios}, we consider a variety of scenarios with lanes of different widths and shapes. In \refsec{results_resilience}, we evaluate the resilience of our method by varying the parameters of the simulation model. In \refsec{results_tuning}, \ref{sec:results_scenarios}, and \ref{sec:results_resilience}, the graph embedding phase of the algorithm is omitted, i.e., coordinates from the simulation are directly used for clustering. This allows for evaluation of DBSCAN in isolation. Finally, in \refsec{results_embedding}, we revisit the problem of graph embedding.

\subsection{Method Tuning}
\label{sec:results_tuning}

\begin{figure} \centering \includegraphics[width=\columnwidth]{graphs/window_size} \caption{One straight lane, different window sizes.} \label{fig:results_window_size} \quad \centering \includegraphics[width=0.7\columnwidth]{graphs/baseline} \caption{One straight lane, different values of $T$.} \label{fig:results_baseline} \quad \centering \includegraphics[width=\columnwidth]{graphs/min_pts} \caption{One straight lane, different values of $MinPts$.} \label{fig:results_min_pts} \end{figure}

In this section, we focus on tuning the parameters of DBSCAN. Four parameters are of interest: $\varepsilon$, $T$, $W$, and $MinPts$. Unless noted otherwise, we use the values $T=100$, $W=100$, and $MinPts=15$. We only consider the simple scenario of one straight lane. The crucial parameter is $\varepsilon$. For all three similarity functions, this parameter can be interpreted as the maximal physical distance that is allowed between two nodes over some period of time. If $\varepsilon$ is too small, then DBSCAN is too rigid and many tiny clusters appear. If $\varepsilon$ is too large, then DBSCAN is too tolerant and all nodes collapse into a single cluster.

First, we consider how the window size $W$ affects the results. \reffig{results_window_size} shows the results for different window sizes. We see that the lower bound of $\varepsilon$ is approximately 10, regardless of the chosen similarity function. This can be explained based on the width of the lane. If $\varepsilon$ is less than the lane width, the random walkers on opposite sides of the lane are no longer connected since they are too far apart, causing the random walkers to be split into two groups. The upper bound of $\varepsilon$ depends heavily on the chosen similarity function. For function A, the upper bound scales linearly with $W$. This is expected since a larger $W$ implies that we look further back in time and thus the maximal distance between lane and non-lane walkers increases. For functions B and C, the upper bound also scales with $W$ but is limited to approximately $15$ for B and $25$ for C. This happens since the window size is used to calculate the velocity of nodes. For large values of $W$, the velocity converges to $(0,0)$ for random walkers and $(0,q)$ for lane walkers.

Next, we evaluate how $T$ affects the results for functions B and C, see \reffig{results_baseline}. In both cases, the lower and upper bound of $\varepsilon$ scale linearly with the value of $T$. This can be explained since for large values of $T$, the similarity score is dominated by the term $T~s_i(t)$. For larger $T$, the similarity score between neighboring lane walkers increases, which results in a wider valid range of $\varepsilon$.

Now we turn our attention to how $MinPts$ affects quality. \reffig{results_min_pts} shows results for different values of $MinPts$ and $\varepsilon$. The figure shows that the algorithm is not sensitive to the value of $MinPts$: the upper and lower bound of $\varepsilon$ are minimally affected by its value. This parameter determines the robustness against outliers, but one straight lane contains little ambiguity.
From Figures~\ref{fig:results_window_size}, \ref{fig:results_baseline}, and \ref{fig:results_min_pts}, we conclude that function C performs best. The valid range of $\varepsilon$ for this function is approximately between $10$ and $25$. This range scales linearly when increasing $T$, but is barely affected when varying $W$ or $MinPts$. The minimum value of $W$ should be $50$, but larger values make the algorithm less sensitive to the exact choice of $\varepsilon$. For the remainder of this work, we focus solely on function C.

\begin{figure} \centering \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/width_20} \caption{Lane width of 20} \end{subfigure}\quad % \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/width_30} \caption{Lane width of 30} \end{subfigure} \caption{Lanes which are too wide cause a split in the stationary crowd. Each point corresponds to a walker. Different colors indicate different clusters.} \label{fig:vis_width_lane} \end{figure}

\subsection{Different Types of Lanes}
\label{sec:results_scenarios}

In this section, we experiment with various types of lanes to test how well our algorithm performs in different scenarios. \reffig{diagrams} shows the three cases which we discuss. First, we consider the scenario where the width of the lane varies between $5$ and $50$ units (i.e., the lane covers 5-50\% of the region of interest). \reffig{results_lane_width} shows the results. For $T=100$ and $W=100$, the lane can only be detected if its width is below approximately 20 units and $\varepsilon$ is roughly between $5$ and $15$. For wider lanes, the walkers in the stationary crowd on opposite sides of the lane are no longer considered to belong to the same cluster since they are too far apart. \reffig{vis_width_lane} demonstrates this problem. By increasing the value of $T$ or $W$, wider lanes can be detected. \reffig{results_lane_width} shows that both for $T=200$ and for $W=200$, lanes having a width up to 30 units can be discovered. In both cases, increasing the width of the lane also increases the lower bound of $\varepsilon$ for which the lane is detected.

\begin{figure} \centering \includegraphics[width=\columnwidth]{graphs/lane_width} \caption{Baseline scenario for different lane widths.} \label{fig:results_lane_width} \quad \centering \includegraphics[width=\columnwidth]{graphs/double_lane} \caption{Two parallel lanes for variable interlane distance.} \label{fig:results_double_lane} \quad \centering \includegraphics[width=\columnwidth]{graphs/sinus_lane} \caption{Sinusoidal lane for different amplitudes.} \label{fig:results_sinus_lane} \end{figure}

Next, we consider the scenario of two parallel lanes, moving in the same direction, both of width 10, and having a fixed interlane distance. \reffig{results_double_lane} shows the results for this scenario for different interlane distances. We see that for $T=100$ and $W=100$, quality deteriorates once the lanes are less than $15$ units apart. This happens because the lanes are too close and can no longer be separated into two distinct clusters. The valid range of $\varepsilon$ is approximately $10$-$15$. The figure shows that doubling the value of $T$ increases the minimum separation distance to $20$. By increasing $T$, more weight is put on velocity when measuring similarity, thus making it more difficult to distinguish the two lanes.
Doubling the value of $W$ does not increase the minimum separation distance and increases the upper bound of $\varepsilon$ to $20$.

Finally, to test how well our method deals with curved lanes, we consider a scenario with a single sinusoidal lane. \reffig{results_sinus_lane} shows the results for sinusoidal lanes having an amplitude up to $30$ units. Note that an amplitude of $30$ units is an extreme case, considering the height of our region is just $100$ units.

\begin{figure} \centering \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/sinus_10} \caption{Amplitude of 10.} \end{subfigure}\quad % \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/sinus_20} \caption{Amplitude of 20.} \end{subfigure} \caption{Visualization of sinusoidal lane for $T=100$, $W=100$. Each point corresponds to a walker. Different colors indicate different clusters.} \label{fig:vis_sinus_lane} \end{figure}

The results show that lanes having an amplitude up to approximately $5$ units can be detected. For larger amplitudes, the algorithm tends to split the lane into multiple straight segments. \reffig{vis_sinus_lane} shows an example of this phenomenon. Increasing the value of $T$ does not change the accuracy of our method. Increasing the window size to $W=200$ significantly increases the accuracy and allows detecting lanes having an amplitude up to 30 units. The explanation is that a larger window size smooths out the sharp turning angles of the wave.

\subsection{Resilience}
\label{sec:results_resilience}

\begin{figure} \centering \includegraphics[width=\columnwidth]{graphs/randomness} \caption{One straight lane for varying $p$ or $q$.} \label{fig:results_randomness} \end{figure}

To test the resilience of our method, we vary $p$ and $q$, see \reffig{results_randomness}. The value of $p$ determines the probability that a random walker takes a random step during a timestep. If the value of $p$ is high, detecting the lane becomes more difficult since random walkers behave erratically. \reffig{results_randomness} confirms this intuition. The lane can be detected for $p < 0.6$. The value of $q$ determines the probability that a lane walker follows the lane during a timestep. If the value of $q$ is too low, detecting the lane becomes difficult because the velocity of the lane is too low. The results show that the lane can only be detected if $q > 0.2$. Increasing the value of $q$ does not change the lower bound of $\varepsilon$ but its upper bound scales linearly. For both scenarios, the detectability of the lane can be improved by using a larger window size $W$. A larger window implies that the velocity is determined over a longer period of time, thus containing less noise.

\subsection{Graph Embedding}
\label{sec:results_embedding}

\begin{figure} \centering \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/embedding_0} \caption{At timestep 0.} \end{subfigure}\quad % \begin{subfigure}[b]{0.45\columnwidth} \includegraphics[width=\columnwidth]{img/embedding_300} \caption{At timestep 300.} \end{subfigure} \caption{Results of proximity graph embedding.} \label{fig:vis_embedding} \end{figure}

\begin{figure} \centering \includegraphics[width=\columnwidth]{graphs/embedding} \caption{Accuracy with/without embedding at timestep 200.
Note the different scales on the horizontal axes.} \label{fig:results_embedding} \end{figure}

Up until this point, the graph embedding phase has been omitted, i.e., the absolute coordinates of the nodes are directly passed to the clustering phase. To evaluate the complete algorithm, we first generate a proximity graph and embed the nodes into two-dimensional space using force-directed embedding (see \refsec{algorithm}) before clustering the nodes. In practice, we find that the absolute coordinates and the coordinates found by embedding are approximately equivalent, up to scale and rotation. For example, \reffig{vis_embedding} shows an embedding of one straight lane at different moments. The proximity graph was created using a detection radius of 25 units. Force-directed embedding works well for our method since the data is spatial by nature.

Since graph embedding shows excellent results, the effect on the accuracy of DBSCAN is minor. \reffig{results_embedding} shows a comparison of the obtained NMI with and without embedding for one straight lane at timestep 200. Both curves are nearly identical. Note that the difference in the range of $\varepsilon$ is the result of graph embedding not preserving scale.

\section{Introduction}
\label{sec:introduction}

\begin{figure} \centering \includegraphics[width=0.5\textwidth]{img/real_crowd3} \caption{Long exposure shot at busy train station reveals lane formation. Photo by David Iliff, CC-BY-SA 3.0 license.} \label{fig:example_of_real_lane} \end{figure}

Different studies (see the survey by Castellano et al.~\cite{castellano2009statistical}) have shown that, while the behavior of individuals in public areas is often erratic and unpredictable, the behavior of large crowds as a whole is predictable and can be modeled. Crowd simulation models are plentiful; examples are models based on fluid dynamics~\cite{helbing1998fluid,guo2008mobile}, cellular automata~\cite{sarmady2011cellular}, or dynamic systems~\cite{helbing1995social}.

Helbing et al. observed that crowds have a tendency to form \emph{collective movement patterns}~\cite{helbing2001self}. The patterns are not globally planned or externally organized, but emerge naturally from the local interactions between individuals. Examples are circulation of flow at intersections, clogging at bottlenecks, and formation of lanes in crowded areas. Automated detection of these patterns is crucial for understanding, analyzing, and predicting the behavior of crowds in large open areas. One can think of a large number of applications~\cite{zhan2008crowd}: for example, improving safety at sports matches, music concerts, or public demonstrations, providing guidelines for urban planners to improve the design of public spaces, or automating the detection of anomalies.

Previous attempts at automated detection of these patterns utilize surveillance cameras combined with image processing techniques (see references in \refsec{related_work}). These techniques are, however, inherently limited to the perspective of one camera. In our work, instead of employing cameras, we assume each person is wearing a device that acts as a local proximity sensor: each sensor can detect other sensors nearby. These devices can be implemented using readily available hardware such as smartphones or electronic badges~\cite{martella2014proximity}. Each detection between two devices corresponds to an edge in a graph which changes over time, a so-called \emph{proximity graph}~\cite{martella2014crowd}.
A proximity graph characterizes the \emph{texture} of a crowd and describes how individuals navigate through the space. While a single camera can only cover a small area, proximity graphs provide a holistic view of large areas. Extracting global movement patterns from proximity graphs is challenging since each device provides only local information. The fundamental problem that we tackle is how to uncover global patterns based on local detections. In this work, we focus on one particular type of pattern: the formation of \emph{lanes}. Lanes often appear in crowds when groups of people traverse a densely crowded space, for example, in a narrow shopping street or at a busy train station (\reffig{example_of_real_lane}). We propose an automated lane-detection method based on proximity graphs. Our method combines techniques from graph embedding with a density-based clustering algorithm to identify lanes. To evaluate our method, we present a model that simulates lanes moving through a stationary crowd. Preliminary results show that our method is able to detect lanes of different sizes and shapes. Overall, our work can be seen as the first step towards rich motion pattern recognition using proximity graphs. The remaining sections are structured as follows: \refsec{background} presents background information, \refsec{algorithm} describes our lane-detection method, \refsec{model} proposes the simulation model, \refsec{evaluation} \& \ref{sec:results_sec} present results, \refsec{related_work} describes related work, and \refsec{conclusion} is dedicated to conclusions and future work.
\section{Introduction} \label{sec:intro} Future planetary missions will require long-distance autonomous traverse on challenging, obstacle-rich terrains. For example, the landing site for the NASA/JPL \ac{M2020} mission will be the Jezero crater, a 49\,[km]-wide crater considered to be an ancient Martian lake produced by past water-related activity \cite{Goudge2015}. Autonomous driving in this crater is expected to be challenging due to its high rock abundance. The state-of-the-art on-board path planner for Mars rovers called GESTALT \cite{Maimone2006}, which has successfully driven the Spirit, Opportunity, and Curiosity rovers (\autoref{fig:curiosity_selfie}), is known to suffer from high rock density due to its highly conservative design. More specifically, GESTALT frequently fails to find a feasible path through a terrain with 10\% \ac{CFA} (cumulative fraction of area covered by rocks), where CFA is a commonly used measure of rock abundance on Mars \cite{Golombek2008_CFA}. The Jezero crater has significantly higher rock density than any landing site of previous Mars rover missions, with CFA up to 15--20\% based on orbital reconnaissance \cite{Golombek2015}. For this reason, a significant improvement in the autonomous driving capability was demanded by the Mars 2020 rover mission. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{graphics/ROB-18-0189_fig1.jpg} \caption{Self-portrait of the MSL Curiosity rover, taken on Sol 1065. Sufficient clearance between the vehicle and the ground is necessary to safely traverse on rough terrain with outcrops. (Credit: NASA/JPL-Caltech/MSSS)} \label{fig:curiosity_selfie} \end{figure} Conservatism is both a virtue and a limitation for spacecraft software. In general, any on-board algorithms must be conservative by design because no one can go to Mars to fix rovers if something goes wrong. In the case of collision checking in path planning, for example, false positives (a safe path is incorrectly assessed to be unsafe) are acceptable but false negatives (an unsafe path is incorrectly assessed to be safe) are unacceptable. However, excessive conservatism (i.e., too frequent false positives) results in reduced efficiency (e.g., unnecessarily winding paths) or inability to find solutions. Therefore, we have two adversarial objectives: guaranteeing safety and reducing the algorithmic conservatism\footnote{Conceptually, it is analogous to solving an inequality-constrained optimization problem such as $\min_x x \ \mathrm{s.t.} \ x \ge 0.$ }. We found that a source of excessive conservatism in GESTALT is its collision checking. Most tree- or graph-based path planners, including GESTALT, need to check if each arc (edge) of the path tree or graph is safe by running a collision check at a certain interval (typically tens of cm). The collision check algorithm estimates multiple safety metrics, such as the ground clearance, tilt, and suspension angles, and checks whether all of them are within pre-specified safety ranges. In GESTALT, the rover state is simply represented by a point in the 2D space, which represents the geometric center of the 2D footprint of the rover. 
Roughly speaking, GESTALT expands potentially colliding obstacles by the radius of the rover such that any part of the rover is guaranteed to be safe as long as the center point is outside of the expanded obstacles, as in \autoref{fig:collision_check}(a)\footnote{In reality, the terrain assessment of GESTALT is not binary; the terrain assessment map (called the \textit{goodness map}) is pre-processed by taking the worst value within the diameter of the rover from each grid cell of the map (i.e., dilation in image processing). This is equivalent to obstacle expansion in the case of a binary goodness value.}. Densely populated rocks may block a significant portion of the state space, resulting in a failure to find a feasible path. In particular, this approach does not allow the rover to straddle over a rock even if the rock would not hit the belly pan. This approach is safe and computationally simple, but often overly conservative, particularly on a terrain with high rock density or undulation. The main idea of the proposed approach in this paper, called \textit{\ac{ACE}}, is to check collisions \textit{without} expanding obstacles. Instead of representing the rover state just by a 2D point, ACE considers the wheel footprint and resulting suspension angles to evaluate safety metrics such as ground clearance and tilt, as illustrated in \autoref{fig:collision_check}(b). This approach can significantly mitigate the level of conservatism while still guaranteeing safety. For example, path planning with ACE allows straddling over rocks as long as there is sufficient clearance to the belly pan. However, precisely evaluating these metrics requires solving an iterative kinematics problem, which is not tractable given the very limited computational resources of planetary rovers. Besides the nonlinear kinematics equations associated with the suspension mechanisms, a rough terrain profile makes it difficult to precisely predict wheel-terrain contact \cite{Sohl2005}. There are no known analytic solutions in general, and the problem is typically approached by iterative numerical methods at the cost of computational efficiency. Therefore, we turn to a conservative approximation. That is, instead of running an iterative kinematic computation, ACE computes the worst-case bounds on the metrics. This approach is a practical compromise for our Mars rovers as it has guaranteed conservatism with an acceptable computational requirement. This claim will be empirically supported by simulations (Section \ref{sec:simulation}) and hardware experiments (Section \ref{sec:experiments}). Furthermore, as we will empirically show in Section \ref{sec:result_conservatism}, ACE-based path planning is significantly less conservative than GESTALT. \begin{figure} \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[height=5.2cm]{graphics/ROB-18-0189_fig2a.png} \caption{} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[height=5.2cm]{graphics/ROB-18-0189_fig2b.png} \caption{} \end{subfigure} \caption{Conceptual illustration of the collision checking approach in (a) GESTALT, the state-of-the-art Mars rover autonomous navigation algorithm used in the Spirit, Opportunity, and Curiosity rovers, and (b) ACE, the proposed approach used in the Mars 2020 rover. The red triangles represent obstacles. 
GESTALT conservatively expands obstacles by the radius of the rover, while ACE assesses collisions in consideration of the wheel footprints.} \label{fig:collision_check} \end{figure} There is a significant body of work in the literature, but none of it was sufficient for our application in terms of speed, path efficiency, and safety guarantee. Most of the motion planning methods for generic ground vehicles do not explicitly consider suspension articulation. In non-planar terrain, it is very common to model the terrain as a 2.5D or 3D map and fit a robot-sized planar patch to the terrain model to obtain geometric traversability \cite{Gennery1999,Chilian2009,Ishigami2013,Wermelinger2016,Krusi2017}. It is also common to add other criteria such as roughness and step hazards to capture obstacles and undulations. Similar to GESTALT, those planners suffer from severely limited path options in a highly cluttered environment such as the surface of Mars. Without reasonable vehicle state prediction, it is difficult to utilize the greater body-ground clearance of off-road vehicles. To enable more aggressive yet safe planning, pose estimation on uneven terrains has been used together with path planners. Kinematics- and dynamics-based methods are the two major categories of approaches that account for the state of articulated vehicles on uneven terrain. With the kinematics-based approach, the problem is to find contact points between the wheels and the terrain under the kinematic constraints of the vehicle. Generic kinematic modeling has been introduced for articulated rovers such as NASA/JPL's rocker-bogie rovers \cite{Tarokh2005,Chang2006}. These kinematic models are used to compute vehicle settling on uneven terrain by minimizing the wheel-ground contact errors \cite{Tarokh2005,Howard2007,Ma2019}. The terrain settling technique is used in the current ground operation of Mars rovers to check safety before sending mobility commands \cite{Yen2004,Wright2005rsvp}. Kinematic settling is also effective for other types of vehicles, such as tracked vehicles \cite{Jun2016}. The kinematics approach is generally faster than the dynamics-based approach, but still computationally demanding for onboard execution on spacecraft computers. Dynamics simulation is typically performed with general-purpose physics engines. Due to its popularity, many works use \ac{ODE} for simulating robot motion during planning \cite{Papadakis2012}. \ac{ROAMS} \cite{Jain2003,Jain2004} is simulation software developed at \ac{JPL}, which models the full dynamics of flight systems including Mars rovers. \ac{ROAMS} was used to predict the high-fidelity rover behavior in rough terrain \cite{Huntsberger2008,Helmick2009}. Another, faster dynamics simulator is proposed in \cite{Seegmiller2016}, which runs simulations over 1000 times faster than real time in a decent computing environment. Although these methods can provide high-fidelity estimates of clearance, vehicle attitude, and suspension angles, they cannot be directly deployed onto the rovers due to their intractable computational cost. Moreover, for on-board path planning in planetary missions, conservatism in safety is more important than accuracy: a single collision can terminate a mission, as it is not possible to repair a damaged vehicle on another planet, at least for the foreseeable future. The main contribution of this paper is to introduce a novel kinematics solution named \ac{ACE} and its empirical validation based on simulations and hardware experiments. 
The key concept of \ac{ACE} is to quickly compute the vehicle configuration bounds, instead of solving the full kinematic rover-terrain settling. Knowing the bounds of certain key states, \ac{ACE} can effectively produce a conservative estimate of the rover-terrain clearance, rover attitude, and suspension angles in a closed form. \ac{ACE} is being developed as part of the autonomous surface navigation software of NASA/JPL's \ac{M2020} mission. The initial idea of \ac{ACE} appears in \cite{Otsu2016phd}, and a probabilistic extension of this work is reported in \cite{Ghosh2018}. This paper introduces an improved mathematical formulation and extensive Verification and Validation (V\&V) work. The remainder of this paper is structured as follows: \autoref{sec:model} formulates the kinematics models of articulated suspension systems, \autoref{sec:algorithm} describes the \ac{ACE} algorithm, \autoref{sec:evaluation} provides experimental results including benchmarking, and \autoref{sec:conclusion} concludes the paper. \section{Suspension Models} \label{sec:model} Our approach is to use kinematic equations to propagate the bounds on the wheel heights to bounds on the vehicle configuration. While this approach is applicable to many vehicles with articulated suspension systems used in the planetary rover domain, this section particularly focuses on the rocker and rocker-bogie suspensions. The latter is the suspension system of choice for NASA/JPL's successful Mars rover missions \cite{Harrington2004}. \subsection{Frames} We first introduce the reference frames used in the paper, which are illustrated in \autoref{fig:suspension_models} and \ref{fig:rockerbogie_model_top}. Following the aerospace convention, the forward-right-down coordinate system is employed for the body frame of the rover. The origin is set to the center of the middle wheels at the height of the ground contact point when the rover is stationary on flat ground. In this frame, the wheel heights are described along the $z$-axis pointing downward (i.e., a greater wheel ``height'' indicates that the wheel is moved downward). A global reference frame is defined as a north-east-down coordinate system. The terrain geometry, which can be specified in any format such as a point cloud or a \ac{DEM}, is expressed in an arbitrary frame. The rover path planning is conventionally performed in 2D or 2.5D space based on the nature of the rover's mobility system. A path is typically represented as a collection of poses containing 2D position and heading angle $ (x, y, \psi) $. A path is regarded as safe if all poses along the path satisfy the safety constraints. \begin{figure}[t] \centering \begin{subfigure}{0.48\textwidth} \centering \includegraphics[height=4.2cm]{graphics/ROB-18-0189_fig3a.eps} \caption{Rocker suspension} \label{fig:rocker_model} \end{subfigure} \begin{subfigure}{0.48\textwidth} \centering \includegraphics[height=4.2cm]{graphics/ROB-18-0189_fig3b.eps} \caption{Rocker-bogie suspension} \label{fig:rockerbogie_model} \end{subfigure} \caption{Articulated suspension systems on flat ground, viewed from the left side.} \label{fig:suspension_models} \end{figure} \begin{figure}[t] \centering \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig4.eps} \caption{Rocker-bogie suspension model, viewed from top. 
For visibility, the wheels are not drawn at their actual positions.} \label{fig:rockerbogie_model_top} \end{minipage}\hspace{5mm} \begin{minipage}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig5.eps} \caption{Simple triangular geometry that models rocker/bogie systems.} \label{fig:triangular_geometry} \end{minipage} \end{figure} \subsection{Rocker Suspension} The rocker suspension in \autoref{fig:rocker_model} is a simpler variant of the rocker-bogie suspension, which will be discussed in the next subsection. The rocker suspension usually consists of four wheels, where the two wheels on the same side are connected with a rigid rocker link. The left and right suspensions are related through a passive differential mechanism, which transfers a positive angle change on one side as a negative change to the other side. The kinematic relation of the rocker suspension is represented by a simple triangular geometry in \autoref{fig:triangular_geometry}. Consider a triangle ABC with a known shape parameterized by two side lengths and the angle between them $ (l_{ca}, l_{ab}, \varphi_{a}) $. Given the heights of A and B (i.e., $ z_{a}, z_{b} $), there are up to four solutions for $ z_{c} $, but other constraints such as vehicle orientation uniquely specify a single solution, given by: \begin{align} z_{c} = z_{a} - l_{ca}\sin \kappa(z_{a}, z_{b}) \label{eq:tri_z} \end{align} where $ \kappa(\cdot) $ denotes the angle of link AC with respect to the reference line (i.e., $\kappa_c$) and is defined as \begin{align} \kappa(z_a, z_b) = \varphi_a + \sin^{-1}\left(\cfrac{z_{a}-z_{b}}{l_{ab}}\right) \:. \label{eq:tri_kappa} \end{align} The solution only exists if $ | z_{a}-z_{b} | \leq l_{ab} $. The rocker suspension model is formulated using the triangular geometry in \eqref{eq:tri_z} and \eqref{eq:tri_kappa}. Given two wheel heights $ z_{f} $ and $ z_{r} $, the rocker joint height is given as \begin{align} z_{d} = z_{f} - l_{df}\sin \kappa_d(z_{f}, z_{r}) \end{align} where $ l_{df} $ is the length of the front rocker link and $ \kappa_d(\cdot) $ is an instance of \eqref{eq:tri_kappa} with the rocker suspension parameters $ (l_{df}, l_{fr}, \varphi_{f}) $. Due to the differential mechanism, the left and right rocker angles relative to the body, $\delta_{l}$ and $\delta_{r}$, have the same absolute value with opposite signs. They can be computed from the link angles as: \begin{align} \delta_{l} = -\delta_{r} = \cfrac{\kappa_d(z_{f_r}, z_{r_r}) - \kappa_d(z_{f_l}, z_{r_l}) }{2} \:. \end{align} The body attitude is a function of the left and right rocker joint states. The roll angle of the body is computed from the difference of the joint heights: \begin{align} \phi &= \sin^{-1}\left( \cfrac{z_{d_r} - z_{d_l}}{2 y_{od}} \right), \end{align} where $ y_{od} $ is the lateral offset from the center of the body to a differential joint. The pitch angle is computed as \begin{align} \theta &= \kappa_{d0} - \cfrac{\kappa_d(z_{f_l}, z_{r_l})+\kappa_d(z_{f_r}, z_{r_r})}{2} \end{align} where the first term represents the angle offset of the front link when the rover is on flat ground ($ \kappa_{d0}=\varphi_f $ in this example). Finally, the body frame height in the global frame can be obtained as \begin{align} z_o = \cfrac{z_{d_{l}} + z_{d_{r}}}{2} + x_{od} \sin\theta\cos\phi - z_{od} \cos\theta \cos\phi \end{align} where $ x_{od} $ and $ z_{od} $ are offsets from the body frame origin to a differential joint. 
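To make the chain of equations concrete, the following minimal Python sketch implements the rocker forward kinematics above. It is an illustration only: the link lengths, angle offsets, and body-frame offsets are placeholder values of our choosing, not parameters of any actual rover.

\begin{verbatim}
import math

# Illustrative rocker geometry (placeholders, not actual rover parameters).
L_DF, L_FR = 0.8, 1.6              # front rocker link, front-to-rear link [m]
PHI_F = math.radians(35.0)         # interior angle at the front wheel
Y_OD, X_OD, Z_OD = 0.5, 0.1, 0.6   # body-frame offsets to the differential joint
KAPPA_D0 = PHI_F                   # link angle on flat ground

def kappa(z_a, z_b, l_ab=L_FR, phi_a=PHI_F):
    """Angle of link AC w.r.t. the reference line, Eq. (tri_kappa).
    Only defined when |z_a - z_b| <= l_ab."""
    return phi_a + math.asin((z_a - z_b) / l_ab)

def rocker_state(z_fl, z_rl, z_fr, z_rr):
    """Joint heights, rocker angle, attitude, and body height from the
    four wheel heights (z positive downward, as in the body frame)."""
    k_l, k_r = kappa(z_fl, z_rl), kappa(z_fr, z_rr)
    z_dl = z_fl - L_DF * math.sin(k_l)   # left rocker joint height
    z_dr = z_fr - L_DF * math.sin(k_r)   # right rocker joint height
    delta_l = (k_r - k_l) / 2.0          # differential: delta_r = -delta_l
    roll = math.asin((z_dr - z_dl) / (2.0 * Y_OD))
    pitch = KAPPA_D0 - (k_l + k_r) / 2.0
    z_o = ((z_dl + z_dr) / 2.0
           + X_OD * math.sin(pitch) * math.cos(roll)
           - Z_OD * math.cos(pitch) * math.cos(roll))
    return z_dl, z_dr, delta_l, roll, pitch, z_o

# Flat ground (all wheel heights zero) gives zero roll and pitch.
print(rocker_state(0.0, 0.0, 0.0, 0.0))
\end{verbatim}

On flat ground the sketch returns zero roll and pitch and the nominal body height, consistent with the frame definitions above.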
Since the belly pan is rigidly attached to the body frame, the rover-terrain clearance can be derived from this height and attitude information. \subsection{Rocker-bogie Suspension} The rocker-bogie suspension (\autoref{fig:rockerbogie_model}) is a rocker suspension with an additional free joint on each side. Following the previous Mars rover conventions, we assume that the front wheels of the six-wheeled rover are connected directly to the rocker suspension while the middle and rear wheels are attached to the bogie suspension. The inverse kinematics of the rocker-bogie suspension can be derived by extending that of the rocker suspension, described in the previous subsection. We first determine the state of the bogie link. The bogie joint height can be estimated from the middle and rear wheel heights $ (z_{m}, z_{r}) $: \begin{align} z_{b} = z_{m} - l_{bm}\sin \kappa_b(z_{m}, z_{r}) \end{align} where $l_{bm}$ is the length of the bogie front link and $ \kappa_b(\cdot) $ denotes the triangular geometry for the bogie triangle. Using the height of the bogie joint $ z_{b} $, the rocker joint height can be computed as \begin{align} z_{d} = z_{f} - l_{df}\sin \kappa_d(z_{f}, z_{b}) \:. \end{align} Given the heights of the wheels and joints, the rocker and bogie angle changes are computed as \begin{align} \delta_{l} &= -\delta_{r} = \cfrac{\kappa_d(z_{f_{r}}, z_{b_{r}}) - \kappa_d(z_{f_{l}}, z_{b_{l}})}{2} \\ \beta_{l} &= \kappa_d(z_{f_{l}}, z_{b_{l}}) - \kappa_b(z_{m_{l}}, z_{r_{l}}) - \kappa_{d0} + \kappa_{b0} \\ \beta_{r} &= \kappa_d(z_{f_{r}}, z_{b_{r}}) - \kappa_b(z_{m_{r}}, z_{r_{r}}) - \kappa_{d0} + \kappa_{b0} \end{align} where $ \kappa_{d0} $ and $ \kappa_{b0} $ denote the initial angles of the rocker and bogie joints. The attitude and height of the body are derived as: \begin{align} \phi &= \sin^{-1}\left( \cfrac{z_{d_r} - z_{d_l}}{2y_{od}} \right) \label{eq:roll} \\ \theta &= \kappa_{d0} - \cfrac{\kappa_d(z_{f_l}, z_{b_l})+\kappa_d(z_{f_r}, z_{b_r})}{2} \label{eq:pitch} \\ z_o &= \cfrac{z_{d_{l}} + z_{d_{r}}}{2} + x_{od} \sin\theta \cos\phi - z_{od} \cos\theta \cos\phi \label{eq:z_o} \:. \end{align} \section{Algorithm} \label{sec:algorithm} Recall that \ac{ACE} is designed to quickly compute conservative bounds on vehicle states. Unlike the full kinematics settling that relies on iterative numerical methods, our approach computes the bounds in a closed form. The ACE algorithm is summarized as follows: \begin{enumerate} \item For a given target rover pose $ (x, y, \psi) $, find a rectangular wheel box in the x-y plane in the body frame that conservatively includes the footprint of each wheel over any possible rover attitude and suspension angles. \item Find the minimum and maximum terrain heights in each of the wheel boxes (see \autoref{fig:wheel_intervals}). \item Propagate the bounds on wheel heights to the vehicle configuration with the kinematic formulas derived in the previous section. \item Assess vehicle safety based on the worst-case states. \end{enumerate} In step 3), all possible combinations are considered to obtain the worst-case bounds. Due to the monotonic nature of the suspension kinematics, the bounds can be obtained via the evaluation of extreme configurations. For example, the bounds on the rocker/bogie states are obtained by finding the worst cases among the eight extreme combinations of the min/max heights of three wheels, as illustrated in \autoref{fig:extreme_cases}. This propagation process is visually presented in the supplemental video. 
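As a hedged illustration of steps 2)--4), the sketch below brute-forces the bounds for a four-wheeled rocker vehicle by evaluating all extreme combinations of the wheel-height intervals; by the monotonicity argument above, these extremes yield the same bounds that the closed-form interval formulas derived next give without enumeration. All geometry values are placeholders, not actual rover parameters.

\begin{verbatim}
import math
from itertools import product

L_DF, L_FR = 0.8, 1.6              # placeholder link lengths [m]
PHI_F = math.radians(35.0)
Y_OD = 0.5
KAPPA_D0 = PHI_F

def kappa(z_a, z_b):
    # Same triangular-geometry helper as in the previous section's sketch.
    return PHI_F + math.asin((z_a - z_b) / L_FR)

def roll_pitch(z_fl, z_rl, z_fr, z_rr):
    k_l, k_r = kappa(z_fl, z_rl), kappa(z_fr, z_rr)
    z_dl = z_fl - L_DF * math.sin(k_l)
    z_dr = z_fr - L_DF * math.sin(k_r)
    roll = math.asin((z_dr - z_dl) / (2 * Y_OD))
    pitch = KAPPA_D0 - (k_l + k_r) / 2
    return roll, pitch

def state_bounds(boxes):
    """boxes: per-wheel (z_min, z_max) intervals, order (fl, rl, fr, rr).
    Step 3: by monotonicity, the worst case lies at an extreme combination."""
    rolls, pitches = [], []
    for z in product(*boxes):          # 2^4 = 16 extreme configurations
        r, p = roll_pitch(*z)
        rolls.append(r)
        pitches.append(p)
    return (min(rolls), max(rolls)), (min(pitches), max(pitches))

# Step 2 stand-in: min/max terrain height in each wheel box (z down, [m]).
boxes = [(-0.05, 0.10), (-0.02, 0.04), (0.00, 0.12), (-0.03, 0.05)]
(roll_lo, roll_hi), (pitch_lo, pitch_hi) = state_bounds(boxes)
print(math.degrees(roll_lo), math.degrees(roll_hi))
\end{verbatim}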
To precisely describe the algorithm, we first introduce interval arithmetic as the mathematical framework of our method. We then describe how we apply it to solve our problem with a case study using the \ac{M2020} rover. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{graphics/ROB-18-0189_fig6.jpg} \caption{Ranges of possible wheel configurations computed from terrain geometry and mechanical constraints.} \label{fig:wheel_intervals} \end{figure} \subsection{Notation} In interval arithmetic~\cite{Hickey2001}, an \textit{interval} is defined as follows \begin{align} \interval{x} = \{ x\in\mathbb{R}^{*} \mid x^{-} \leq x \leq x^{+} \} \end{align} where the pair $ \interval{x} $ represents all reals between the two bounds. The symbol $ \mathbb{R}^{*} $ denotes an extended real defined as $ \mathbb{R}^{*} = \mathbb{R} {\cup} \{-\infty, \infty\} $. Elementary arithmetic operations on reals can be extended to intervals, such as \begin{align} \interval{x} + \interval{y} &= [x^{-}+y^{-},\ x^{+}+y^{+}] \\ \interval{x} - \interval{y} &= [x^{-}-y^{+},\ x^{+}-y^{-}] . \end{align} For a continuous function $ f(x) $, we can extend its input and output space to intervals \begin{align} f(\interval{x}) = \left[ \min_{x\in\interval{x}}f(x),\ \max_{x\in\interval{x}}f(x) \right] \:. \end{align} Computing the minimum and maximum is trivial if the function $ f $ is monotonic or belongs to a class of special non-monotonic functions such as the trigonometric functions. In the rest of the paper, we use the following abbreviation to represent an interval unless explicitly stated otherwise \begin{align} [x] \equiv \interval{x} \:. \end{align} \subsection{Wheel Height Intervals} \begin{figure} \centering \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7a.jpg} \caption{$(z_{f}^{+}, z_{m}^{+}, z_{r}^{+})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7b.jpg} \caption{$(z_{f}^{-}, z_{m}^{+}, z_{r}^{+})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7c.jpg} \caption{$(z_{f}^{+}, z_{m}^{-}, z_{r}^{+})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7d.jpg} \caption{$(z_{f}^{-}, z_{m}^{-}, z_{r}^{+})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7e.jpg} \caption{$(z_{f}^{+}, z_{m}^{+}, z_{r}^{-})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7f.jpg} \caption{$(z_{f}^{-}, z_{m}^{+}, z_{r}^{-})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7g.jpg} \caption{$(z_{f}^{+}, z_{m}^{-}, z_{r}^{-})$} \end{subfigure} \begin{subfigure}{0.38\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig7h.jpg} \caption{$(z_{f}^{-}, z_{m}^{-}, z_{r}^{-})$} \end{subfigure} \caption{Extreme configurations for state bound computation. The three elements in a tuple represent the front, middle, and rear wheel heights, respectively. The superscripts $+$ and $-$ represent the maximum and minimum for the state variable.} \label{fig:extreme_cases} \end{figure} First, we estimate the wheel height intervals based on terrain measurements from sensors (e.g., stereo vision). 
The span of wheel heights can be computed from the highest and lowest terrain points within the wheel boxes (see green boxes in \autoref{fig:wheel_intervals}). The $x$ and $y$ dimensions of the wheel boxes are derived from the vehicle's mechanical properties such as wheel size, suspension constraints, and vehicle tip-over stability. We can determine a conservative range of wheel contact locations for all possible suspension angles and stable attitudes. In the rest of this paper, we represent the bound for the $i$-th wheel by $ [z_i] $. It is important to estimate these bounds conservatively to make the final state bounds complete, since the uncertainty in wheel heights is directly propagated to the other states. For the conservative estimate, we may need to include dynamic effects such as terrain deformation, wheel slip, and sinkage, depending on the environment to be explored. In addition, it is important to consider the effect of perception error, as detailed in the experiment section. \subsection{Suspension Intervals} Since $\sin^{-1}(x)$ is monotonically increasing in $ x \in [-1, 1] $, we can extend the concept of intervals to the function $ \kappa(\cdot) $ in \eqref{eq:tri_kappa} \begin{align}\label{eq:21} \kappa([z_{a}], [z_{b}]) = [ \kappa(z_{a}^{-}, z_{b}^{+}), \kappa(z_{a}^{+}, z_{b}^{-}) ] . \end{align} On the other hand, \eqref{eq:tri_z} is a convex function which has a global minimum if $ z_{b} = z_{b}^{-} $ and the rear link CB is aligned with the $z$-axis. In the case of the \ac{M2020} rover, the minimum is located outside of the mechanical limits. Therefore, in practice, we can assume the monotonicity and use the following interval for the height \begin{align} [z_{c}] &= [ z_{a}^{-} - l_{ca}\sin \kappa(z_{a}^{-}, z_{b}^{-}),\ z_{a}^{+} - l_{ca}\sin \kappa(z_{a}^{+}, z_{b}^{+}) ] \:. \end{align} Based on the above intervals, the suspension state intervals are computed for the joint heights \begin{align} [z_{b}] &= [ z_{m}^{-} - l_{bm}\sin \kappa_b(z_{m}^{-}, z_{r}^{-}),\ z_{m}^{+} - l_{bm}\sin \kappa_b(z_{m}^{+}, z_{r}^{+}) ] \\ [z_{d}] &= [ z_{f}^{-} - l_{df}\sin \kappa_d(z_{f}^{-}, z_{b}^{-}),\ z_{f}^{+} - l_{df}\sin \kappa_d(z_{f}^{+}, z_{b}^{+}) ] \end{align} and for the joint angles \begin{align} [\delta_l] &= -[\delta_r] = \left[ \cfrac{\kappa_d(z_{f_{r}}^{-}, z_{b_{r}}^{+}) - \kappa_d(z_{f_{l}}^{+}, z_{b_{l}}^{-})}{2},\ \cfrac{\kappa_d(z_{f_{r}}^{+}, z_{b_{r}}^{-}) - \kappa_d(z_{f_{l}}^{-}, z_{b_{l}}^{+})}{2} \right] \\ [\beta_{l}] &\subseteq \left[ \kappa_d(z_{f_{l}}^{-}, z_{b_{l}}^{+}) - \kappa_b(z_{m_{l}}^{+}, z_{r_{l}}^{-}) - \kappa_{d0} + \kappa_{b0},\ \kappa_d(z_{f_{l}}^{+}, z_{b_{l}}^{-}) - \kappa_b(z_{m_{l}}^{-}, z_{r_{l}}^{+}) - \kappa_{d0} + \kappa_{b0} \right] \\ [\beta_{r}] &\subseteq \left[ \kappa_d(z_{f_{r}}^{-}, z_{b_{r}}^{+}) - \kappa_b(z_{m_{r}}^{+}, z_{r_{r}}^{-}) - \kappa_{d0} + \kappa_{b0},\ \kappa_d(z_{f_{r}}^{+}, z_{b_{r}}^{-}) - \kappa_b(z_{m_{r}}^{-}, z_{r_{r}}^{+}) - \kappa_{d0} + \kappa_{b0} \right] . \end{align} For the sake of simplicity, we use loose bounds for the bogie angles; the boundary configurations may be impossible in reality. In this example, the lower bound of $\beta$ requires $(z_{f}^{-}, z_{m}^{+}, z_{r}^{-}, z_{b}^{+})$ but $z_{b}^{+}$ requires $(z_{m}^{+}, z_{r}^{+})$, which is inconsistent in $z_{r}$ (except in the case where $z_{r}^{-}$ and $z_{r}^{+}$ are identical). \subsection{Attitude Intervals} Similarly, the intervals for body attitude can be derived from the wheel height intervals. 
Using the kinematics equations \eqref{eq:roll} and \eqref{eq:pitch} yields \begin{align} [\phi] &= \left[ \sin^{-1}\left(\cfrac{z_{d_r}^{-} - z_{d_l}^{+}}{2y_{od}} \right),\ \sin^{-1}\left(\cfrac{z_{d_r}^{+} - z_{d_l}^{-}}{2y_{od}} \right) \right] \\ [\theta] &= \left[ \kappa_{d0} - \cfrac{\kappa_d(z_{f_l}^{+}, z_{b_l}^{-}) + \kappa_d(z_{f_r}^{+}, z_{b_r}^{-})}{2},\ \kappa_{d0} - \cfrac{\kappa_d(z_{f_l}^{-}, z_{b_l}^{+}) + \kappa_d(z_{f_r}^{-}, z_{b_r}^{+})}{2} \right] . \end{align} \subsection{Clearance Intervals} Since the vehicle body is connected to the terrain only through its suspension and wheel systems, its configuration is fully determined by the suspension state. The body height bound in the world frame is computed with \eqref{eq:z_o}: \begin{align} [z_o] \subseteq &\left[ \cfrac{z_{d_l}^{-} + z_{d_r}^{-}}{2} - z_{od} \cos|\theta|^{+} \cos|\phi|^{+} + x_{od} \min(\sin\theta^{-} \cos|\phi|^{-},\ \sin\theta^{-} \cos|\phi|^{+}) , \right. \nonumber\\ & \ \ \left. \cfrac{z_{d_l}^{+} + z_{d_r}^{+}}{2} - z_{od} \cos|\theta|^{-} \cos|\phi|^{-} + x_{od} \max(\sin\theta^{+} \cos|\phi|^{-},\ \sin\theta^{+} \cos|\phi|^{+}) \right] \end{align} using the intervals of absolute roll/pitch angles $ [|\phi|], [|\theta|] $. Note that the trigonometric functions in the equations are monotonic since we can assume $ |\phi|,|\theta| \in \left[ 0, \tfrac{\pi}{2} \right] $ for typical rovers. Assuming the belly pan is represented as a plane with width $w_{p}$ and length $l_{p}$ at nominal ground clearance $ c_0 $, a loose bound for the maximum (lowest) height point of the belly pan is computed as \begin{align} [z_{p}] \subseteq [ & z_o^{-} - c_{0} \cos|\theta|^{-} \cos|\phi|^{-} + \cfrac{l_{p}}{2} \sin|\theta|^{-} \cos|\phi|^{+} + \cfrac{w_{p}}{2} \sin|\phi|^{-}, \nonumber\\ & z_o^{+} - c_{0} \cos|\theta|^{+} \cos|\phi|^{+} + \cfrac{l_{p}}{2} \sin|\theta|^{+} \cos|\phi|^{-} + \cfrac{w_{p}}{2} \sin|\phi|^{+} ] \end{align} Let us define the rover-ground clearance as the height gap between the lowest point of the belly pan and the highest point of the ground. This is a conservative definition of clearance. Given the interval of ground point heights under the belly pan, $ [z_{m}] $ (reusing the symbol, not to be confused with the middle wheel height), the clearance is computed as \begin{align}\label{eq:32} [c] &\equiv [z_{m}^{-} - z_{p}^{+},\ z_{m}^{-} - z_{p}^{-}] \:. \end{align} \subsection{Safety Metrics} We use the above state intervals to test whether a given pose has a chance of violating the safety conditions. Different safety conditions can be applied to different rovers. For example, the following metrics are considered for the \ac{M2020} rover; a short illustrative sketch of this pose-level check is given below. \begin{itemize} \item \textit{Ground clearance} must be greater than the threshold. \item \textit{Body tilt} (computed from roll and pitch angles) must be smaller than the threshold. \item \textit{Suspension angles} must stay within the predefined safety ranges. \item \textit{Wheel drop} (defined as a span of wheel height uncertainty) must be smaller than the threshold. \end{itemize} \section{Experiments} \label{sec:evaluation} Recall that ACE is designed to be conservative and, at the same time, to reduce the conservatism compared to the state-of-the-art. In Sections \ref{sec:simulation} and \ref{sec:experiments}, we will show that ACE is conservative, hence safe, through simulation and hardware experiments. Then, Section \ref{sec:result_conservatism} compares the algorithmic conservatism with the state-of-the-art. Our tests involve rovers of different sizes to observe the performance differences due to mechanical system configurations. 
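As a hedged illustration of the pose-level check referenced above, the sketch below consumes the interval bounds and passes a pose only if the worst-case end of every interval satisfies its constraint. The thresholds are placeholders of our choosing, not flight values, and the tilt formula $\cos^{-1}(\cos\phi\cos\theta)$ is one common way to combine roll and pitch.

\begin{verbatim}
import math

# Illustrative thresholds (placeholders, not flight values).
MIN_CLEARANCE  = 0.30                  # [m]
MAX_TILT       = math.radians(30.0)
MAX_SUSPENSION = math.radians(15.0)
MAX_WHEEL_DROP = 0.25                  # [m]

def pose_is_safe(clearance, roll, pitch, bogie_l, bogie_r, wheel_boxes):
    """Every argument except wheel_boxes is an interval (lo, hi); a pose
    passes only if the worst-case end of each interval is within limits."""
    r = max(abs(roll[0]), abs(roll[1]))
    p = max(abs(pitch[0]), abs(pitch[1]))
    tilt_hi = math.acos(math.cos(r) * math.cos(p))    # worst-case tilt
    drop_hi = max(hi - lo for lo, hi in wheel_boxes)  # worst-case wheel drop
    susp_hi = max(abs(a) for iv in (bogie_l, bogie_r) for a in iv)
    return (clearance[0] >= MIN_CLEARANCE and tilt_hi <= MAX_TILT
            and susp_hi <= MAX_SUSPENSION and drop_hi <= MAX_WHEEL_DROP)

# Example: a pose whose worst-case clearance bound is 0.34 [m] passes.
print(pose_is_safe((0.34, 0.45), (-0.05, 0.08), (-0.02, 0.10),
                   (-0.04, 0.06), (-0.03, 0.07),
                   [(-0.05, 0.10), (0.0, 0.12), (-0.02, 0.04)]))
\end{verbatim}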
\subsection{Simulation Study} \label{sec:simulation} We first tested ACE with simulation to validate the algorithm in noise-free scenarios. In these tests, terrain models are directly loaded from the simulator into ACE. Therefore, the presented results are not contaminated by measurement noise from perception systems. The algorithm is tested on different terrain configurations, from artificial quadratic functions to a realistic Martian terrain model. We use the simulator-reported rover state as ground truth, which is computed with iterative numeric methods based on the rover and terrain models. \subsubsection{Simulation with a single ACE run} \begin{figure} \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig8a.eps} \caption{Ground-truth and predicted states} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig8b.eps} \caption{Side view (concave and convex)} \end{subfigure} \caption{Analysis on artificial terrains with varying undulation levels} \label{fig:state_undulation} \end{figure} We placed a simulated rover on simple geometric terrains and ran ACE to compute bounds on the belly pan clearance, the attitude, and the suspension angles. Their ground-truth values were also obtained from the simulation to check whether they are conservatively confined by the ACE bounds. In addition, these results were compared against a simple, planefit-based estimation approach. More specifically, we fitted a plane to a given topography over a window with a 1.25 m radius and approximately estimated the rover's pose by assuming that the rover is placed on this plane. The ground clearance was estimated by computing the difference between the highest point of the terrain in the window and the belly pan height based on the estimated rover pose. The planefit-based approximation gives the exact ground clearance when the terrain is flat. We chose plane fitting as the point of comparison because, as we shall see shortly, it provides insight into the cause of the conservatism of GESTALT, the state-of-the-art autonomous rover navigation algorithm used for the three existing Mars rovers. We used simple terrains represented by $z=ax^2$ in the body frame with varying $a$ for this test. Tests with more complex, realistic terrains will follow. The ground-truth settling was obtained via a numeric optimization method. \autoref{fig:state_undulation} shows the results. Note that the $z$-axis points downwards, meaning that the terrain is convex with a negative $a$ and concave with a positive $a$. The brown solid lines represent ground-truth states, with ACE bounds denoted by orange shaded areas. As expected, the ACE bounds always provided conservative estimates. Compared to attitude and suspension angles, the clearance estimation resulted in greater conservatism in general. This is because the clearance is the last estimated property propagated from terrain heights and hence accumulates uncertainty. In contrast, the planefit-based approach consistently gave an optimistic estimate of the ground clearance and the pitch angle. In addition, since the rover is always placed on a plane, the bogie angle is always estimated to be zero. 
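For reference, a minimal stand-in for this planefit baseline is sketched below; the window radius and the quadratic test terrain follow the description above, while everything else is our simplification (the actual GESTALT goodness computation combines more factors and tuned weights).

\begin{verbatim}
import numpy as np

def planefit_estimate(pts, nominal_clearance):
    """Least-squares plane z = a*x + b*y + c over a terrain window.
    Attitude comes from the plane slopes; clearance from the residual of
    the highest point (z is positive downward, as in the body frame)."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    a, b, _ = coef
    roll, pitch = np.arctan(b), np.arctan(a)
    resid = pts[:, 2] - A @ coef
    # A point above the plane has a negative residual and eats into clearance.
    clearance = nominal_clearance + resid.min()
    return roll, pitch, clearance

# Quadratic terrain z = 0.1 * x**2 sampled inside a 1.25 m window.
x, y = np.meshgrid(np.linspace(-1.25, 1.25, 25), np.linspace(-1.25, 1.25, 25))
pts = np.c_[x.ravel(), y.ravel(), 0.1 * x.ravel() ** 2]
print(planefit_estimate(pts, nominal_clearance=0.6))
\end{verbatim}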
GESTALT does not explicitly compute the ground clearance; instead, it computes the ``goodness'' of each cell on the terrain from multiple factors including roughness (i.e., residual from the planefit) and step obstacles (i.e., maximum height difference between adjacent cells), where the weights on each factor are manually tuned such that the conservatism is guaranteed for the worst cases. The fact that the planefit-based clearance estimation is optimistic for non-zero $a$ implies that the weights on roughness and the step obstacle must be sufficiently large to cover the worst-case $a$. This in turn makes the algorithm overly conservative when the terrain is nearly flat (i.e., smaller $a$), which is the case for most of the driving on Mars. In contrast, ACE gives tighter bounds for a smaller $a$. This illustrates a desirable behavior of ACE; that is, it adjusts the level of conservatism depending on the terrain undulation. It results in an exact estimate on perfectly flat terrain and increases the conservatism on undulating terrains. Overall, the ground truths are always conservatively bounded. We also note that ACE becomes overly conservative on a highly undulating terrain. We believe the impact of this issue in practical Mars rover operation is relatively limited because we typically avoid such terrains when choosing a route. Having said that, even though ACE enables the rover to drive on significantly more difficult terrains than GESTALT, this conservatism is one of the remaining limitations. Mitigating the conservatism of ACE on a highly undulating terrain is our future work. \subsubsection{Simulation with multiple ACE runs} We then drove a rover along pre-specified paths on various terrains in simulation, calling ACE multiple times at a fixed interval to check collisions. \paragraph{Flat Terrain with a Bump} \begin{figure} \centering \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig9a.png} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig9b.png} \caption{} \end{subfigure} \begin{subfigure}{0.32\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig9c.png} \caption{} \end{subfigure} \caption{Flat terrain with a smooth bump. The six wheel trajectories are shown in different colors. The initial positions of the wheels are marked with circles. (a) Linear approach to the bump. (b) Curved approach to the bump. (c) Climbing over the bump with only the right wheels.} \label{fig:wheel_tracks} \vspace{14px} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig10a.eps} \caption{Linear motion} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig10b.eps} \caption{Curved motion} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig10c.eps} \caption{Linear motion with the right wheels on the bump} \end{subfigure} \caption{ACE estimation results for body roll $ \phi $, body pitch $ \theta $, left rocker angle $ \delta_l $, left and right bogie angles $ \beta_l $ and $ \beta_r $, and body height $ z_o $. The solid lines represent the ground-truth state computed by a numeric method. The shaded regions represent the ranges between the \ac{ACE} upper/lower bounds. 
} \label{fig:bump_result} \end{figure} The test environment is a simple flat terrain with a 0.2\,[m]-high bump. A Curiosity-sized rover is driven over the bump along three different trajectories shown in \autoref{fig:wheel_tracks}. The rover drives at the nominal speed of Curiosity on Mars ($\sim$0.04\,[m/s]). We collected data at 8\,[Hz], including the ground-truth pose from a numeric method. The ranges of the six wheel heights are extracted directly from the base map using the ground-truth pose reported by the simulator. \autoref{fig:bump_result} shows the time-series profiles of the suspension and body states for the three trajectories. The solid lines denote the ground-truth states computed by the numeric method, and the shaded regions represent the state bounds computed by \ac{ACE}. All the ground-truth states are always within the bounds, meaning the ACE bounds are conservative as expected. It is interesting to observe how the algorithm evaluates rover states for its worst-case configurations. With trajectory (a), the rover approaches the bump perpendicularly. The ground-truth roll angle stays zero for the entire trajectory since the left and right wheels interact with the ground exactly at the same time in this noise-free simulation. However, this is unlikely in real-world settings, where small differences in contact time or wheel friction can disturb the symmetry and cause rolling motion. \ac{ACE} computes the state bounds based on the worst-case configurations. Therefore, the algorithm captures such potential perturbations and conservatively estimates the state bounds, as presented in the top left figure of \autoref{fig:bump_result}. \paragraph{Martian Terrain Simulation} \begin{figure} \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=0.8\textwidth]{graphics/ROB-18-0189_fig11a.jpg} \caption{Simulator view} \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig11b.eps} \caption{Path on \ac{DEM}} \end{subfigure} \begin{subfigure}[b]{0.58\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig11c.eps} \caption{State estimation result} \end{subfigure} \caption{ACE estimation result in a Mars-like environment. The Rocky 8 rover was deployed on a synthetic Martian terrain. The base terrain is an orbit-based \ac{DEM} of the Jezero crater. The rocks are randomly populated according to the Martian size-frequency distribution model.} \label{fig:roams_result} \end{figure} Next, we tested the \ac{ACE} algorithm in a Mars-like environment. We imported a \ac{DEM} of the Jezero crater into the \ac{ROAMS} simulator \cite{Jain2003}. Since no spacecraft has landed on Jezero, we only have a limited-resolution terrain model from satellite measurements. To create an environment closer to the actual surface, we populated rocks based on the empirical Martian rock size-frequency distribution model \cite{Golombek2008_CFA}. We populated rocks assuming 10\% \ac{CFA}. For the hardware platform, we used the Rocky 8 rover, a mid-sized rover similar to the \acp{MER}. We drove the rover at a speed of 0.15\,[m/s] with an autonomous hazard avoidance mode. The data were taken at 10\,[Hz], including the ground-truth pose reported by the simulator. The state estimation result is shown in \autoref{fig:roams_result}. The figure only reports the body states including roll, pitch, and minimum clearance, but similar results are obtained for the other suspension states. 
Again, all the ground-truth states are always within the \ac{ACE} bounds, successfully confirming the algorithmic conservatism. The level of conservatism varies from time to time. For most of the time, the span between the upper and lower bounds was within a few degrees. However, at 55\,[s] in \autoref{fig:roams_result}, for example, the upper bound on the pitch angle was about 10\,[deg] while the actual angle was around 1\,[deg]. Such false alarms typically occur when a large rock is in one of the wheel boxes but the rover does not actually step on it. This behavior is actually beneficial because it helps the path planner choose less risky paths if the planner uses the bounds as a part of its cost function. Of course, such conservatism may result in a failure to find a feasible path. However, we reiterate that conservatism is an objective of ACE because safety is of supreme importance for Mars rovers. Furthermore, as we will demonstrate in \autoref{sec:result_conservatism}, ACE significantly reduces the conservatism compared to the state-of-the-art. An additional idea that can further mitigate the conservatism is to introduce a probabilistic assessment, as proposed in \cite{Ghosh2018}. \subsection{Hardware Experiments} \label{sec:experiments} We deployed ACE on actual hardware systems and validated it through a field test campaign in the JPL Mars Yard. ACE was deployed on two rover testbeds: the Athena rover, whose size is comparable to the MER rovers, and the Scarecrow rover, a mobility testbed for MSL. In both systems, terrain height measurements are obtained by the stereo camera attached to the mast. Therefore, the heightmap that ACE receives involves noise from the camera and stereo processing. As we will report shortly, the stereo noise results in occasional bound violations. Adding an adequate margin to account for the noise restores the conservatism of ACE. \subsubsection{Athena Rover} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{graphics/ROB-18-0189_fig12.jpg} \caption{Athena rover driving on a slope of the JPL Mars Yard.} \label{fig:athena} \end{figure} \begin{figure} \centering \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig13a.eps} \caption{} \end{subfigure} \begin{subfigure}[b]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig13b.eps} \caption{} \end{subfigure} \caption{ACE estimation results for two of Athena's drives. The ground-truth (GT) state is within the ACE bounds except for occasional violations.} \label{fig:athena_result} \end{figure} The first experiment was performed with the Athena rover developed at \ac{JPL} (see \autoref{fig:athena}). The platform is designed for testing Mars rover capabilities on Earth and is comparable in size to \ac{MER}. The navigation system is primarily vision-based, using a stereo-camera pair consisting of two PointGrey Flea2 cameras rigidly mounted on a movable mast. The mast is at a height of 1.4\,[m], and the baseline of the stereo-camera pair is 0.30\,[m]. The images are captured at a resolution of $640 \times 480$ from wide field-of-view lenses. The ground-truth pose is obtained from an OxTS xNAV system, which reports the 6DoF pose from integrated GPS and inertial measurements. The suspension angles are not measured on this platform. We manually drove Athena on a slope of 6 to 12\,[deg] in the Mars Yard. The slope consists of multiple terrain materials including cohesive/cohesionless sands and bedrock. 
We evaluated \ac{ACE} by comparing the estimated state bounds from the algorithm against the ground-truth state. \ac{ACE} was applied to the image sets acquired shortly before each drive. Unlike the rover autonomous drive software, which prevents the placement of wheels in unknown terrain, our dataset collected by manual commanding contains samples in which the point clouds do not cover the terrain under all six wheels. We do not report state estimation results for such incomplete data. \autoref{fig:athena_result} shows the ground truth, as well as the upper and lower bounds from ACE, of the roll and pitch angles and the ground clearance for two drives. Each run consists of about a 40\,[m] traverse including level, uphill, downhill, and cross-slope drives. As expected, the ground truth is within the bounds for most of the time. However, unlike the simulation results reported in the previous subsection, occasional bound violations were observed, as shown by the red crosses on the plots. This was due to perception errors, such as stereo matching error and calibration error. The positional error in point clouds from the stereo camera is propagated to the rover states through the kinematic equations, causing errors in the state bounds. A practical approach to restore the conservatism is to add a small margin $\epsilon$ to the perceived height of the ground. More specifically, the maximum and minimum height of each wheel box, $z^+_w$ and $z^-_w$ ($w \in \{ f, m, r\}$) in (\ref{eq:21})-(\ref{eq:32}), are replaced with $z^+_w + \epsilon$ and $z^-_w - \epsilon$, where $\epsilon$ is the estimated upper bound of the height error. A downside of this approach is an increased conservatism. \autoref{tab:athenta_stats} shows statistical results from a cumulative 130\,[m] drive by Athena. The total success rate is computed by counting the samples in which all state variables are within the \ac{ACE} bounds. The success rate was 98.74\% without the perception error margin, with a $\sim 3$\,[deg] maximum attitude error and 0.012\,[m] clearance error. Although such small estimation errors rarely contribute to a mission-critical missed hazard detection, extra conservatism is preferred for planetary applications. The conservatism is fully restored (i.e., 100\% success rate) with $\epsilon=15$\,[mm], which roughly corresponds to the worst-case height perception error. \begin{table}[t] \centering \caption{Error statistics from a cumulative 130\,[m] drive by Athena.} \label{tab:athenta_stats} \begin{tabular}{lrrrr} \toprule & & \multicolumn{3}{c}{Max State Violation} \\ \cmidrule(lr){3-5} Method & Success Rate [\%] & Roll [deg] & Pitch [deg] & Clearance [m] \\ \midrule ACE & 98.74 & 2.6 & 3.2 & 0.012 \\ ACE ($\epsilon$=5\,[mm]) & 99.70 & 1.7 & 2.0 & 0 \\ ACE ($\epsilon$=10\,[mm]) & 99.95 & 0.7 & 0.9 & 0 \\ ACE ($\epsilon$=15\,[mm]) & 100 & 0 & 0 & 0 \\ \bottomrule \end{tabular} \end{table} \subsubsection{Scarecrow Rover} We deployed ACE on JPL's mobility testbed called Scarecrow and performed a series of experiments in JPL's Mars Yard. The purpose of the experiments is to test ACE with hardware and software that are close to the Mars 2020 rover. The mobility hardware of Scarecrow, including the rocker-bogie suspension system and wheels, is designed to be nearly identical to that of the Curiosity and Mars 2020 rovers. The vehicle's mass is about one third of that of the Curiosity and Mars 2020 rovers, simulating their weight under Martian gravity. 
In terms of software, ACE was re-implemented in C and integrated with the Mars 2020 flight software. Since Scarecrow was originally designed for mobility experiments, it does not have the same processor as the real Mars rovers. Instead, we compiled the software for Linux and ran it on a laptop computer placed on the top deck of the vehicle. Therefore, this experiment does not replicate the run time of the software. We evaluated the run time of ACE in a hardware-in-the-loop simulation using a RAD750, as described in Section \ref{sec:runtime}. The original design of Scarecrow also lacks cameras. Therefore, we retrofitted a pair of Baumer cameras, from which a height map is created on the fly via on-board stereo processing. The Mars Yard is configured to represent some of the most difficult conditions in the Mars 2020 landing site, including 30\,[deg] slopes and a rock density of 15\% \ac{CFA} \cite{Golombek2008_CFA}. \autoref{fig:marsyard_for_scarecrow} shows a typical setup of the Mars Yard. Our extensive test campaign consisted of 42 days of experiments in the Mars Yard using Scarecrow. The analysis of the test results was largely qualitative rather than quantitative or statistical, for a few reasons. First, we were unable to keep exactly the same setup of the Mars Yard, as it is shared by many teams. It is also slightly altered by precipitation and wind. Second, the driving speed of Scarecrow is only 0.04\,[m/s] (the same as the Curiosity and Mars 2020 rovers), so it typically takes 20 to 30 minutes to complete a single Mars Yard run. Repeating a statistically significant number of runs with the same setup is difficult. Third, the software implementation was continuously improved throughout the test campaign. Fourth, the ground truth of the belly pan clearance is difficult to measure directly. Fifth and finally, the tests were performed as a part of the software development for the Mars 2020 rover mission, where the main purpose of the tests was the verification and validation of the integrated software capabilities rather than the quantitative assessment of the performance of ACE alone. Qualitatively, through the test campaign, the algorithm and implementation were matured to the point where the vehicle can drive confidently over $ \sim40 $\,[m] through high-rock-density (15\% CFA) terrain. Since the path planner relies solely on ACE for collision checking, the fact that the rover reliably avoids obstacles without hitting the belly pan is indirect, qualitative evidence that ACE works properly. For example, \autoref{fig:caspian} shows the 3D reconstruction of the terrain and the vehicle configuration from the Scarecrow test data. \begin{figure} \centering \includegraphics[width=0.7\textwidth]{graphics/ROB-18-0189_fig14.jpg} \caption{Scarecrow test in JPL's Mars Yard on July 17, 2018, showing a typical setup of the Yard for the experiments.} \label{fig:marsyard_for_scarecrow} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\textwidth]{graphics/ROB-18-0189_fig15.jpg} \caption{Visualization of a path planned in JPL's Mars Yard on July 23, 2018 with the \textit{Caspian} visualizer.} \label{fig:caspian} \end{figure} A limited quantitative assessment is possible because a few intermediate and derivative variables in ACE were directly measured and recorded. These variables include the rocker angle, the left and right bogie angles, and the vehicle's tilt angle. 
Figure \ref{fig:scarecrow_results} shows the ground-truth measurements of the rocker and right bogie angles as well as the bounds computed by ACE on three long Scarecrow drives in the Mars Yard. There are a few observations from the results. Firstly, the bounds successfully captured the ground-truth trends. For example, the negative spike in the rocker angle at $\sim 1100$\,[s] in Figure \ref{fig:scarecrow_results}(a) is correctly predicted by ACE, indicated by the reduced lower bound around that time. Secondly, the bounds were almost always respected. Thirdly, however, we observed occasional violations of the bounds, as shown by the red crosses on the plots. Our investigation concluded that the main causes of bound violations are errors in the encoders and in the perceived height map. The height map error is a result of two factors: 1) error in stereo processing (i.e., feature extraction and matching, error in the camera model, noise in images, etc.) and 2) a ``smoothing effect'' due to re-sampling (the 3D point cloud from stereo processing is binned and averaged over a 2D grid). This conclusion was derived by using simulations in the following steps: 1) we verified that the ACE bounds are always respected when running ACE on a ground-truth height map, 2) we reproduced the stereo error by using simulated camera images, and 3) we confirmed that ACE bound violations occur with comparable frequency and magnitude under the simulated stereo error. As in the Athena rover experiment, adding an adequate margin $\epsilon$ on the perceived height can restore the conservatism. \begin{figure} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig16a.eps} \caption{} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig16b.eps} \caption{} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=\textwidth]{graphics/ROB-18-0189_fig16c.eps} \caption{} \end{subfigure} \caption{Recorded ground-truth (GT) rocker and right bogie angles as well as the bounds predicted by ACE on three Scarecrow tests performed on Sept 12, 2018, in JPL's Mars Yard.} \label{fig:scarecrow_results} \end{figure} \subsubsection{Run-time Performance}\label{sec:runtime} The run-time performance is important for space applications where the computational resources are severely limited. \ac{ACE} has a significant advantage in this regard, compared to alternatives that depend on iterative numeric methods. In the following analysis, we chose plane fitting as the point of comparison because it is a lightweight approximation for estimating the rover state on rough terrain and is the basis of GESTALT, the state-of-the-art autonomous navigation algorithm being used for the existing Mars rovers. The computation of \ac{ACE} is very fast due to its closed-form formulation. On the NVIDIA Jetson TK1 board on the Athena rover, \ac{ACE} takes 11.2\,[$\mu$s] for a single pose evaluation while plane fitting takes 26.1\,[$\mu$s] over $\sim$100 points and 68.2\,[$\mu$s] over $\sim$200 points. \ac{ACE} runs faster than the naive plane-fit approach using least squares, while providing richer information about the vehicle state. For reference, the average run time of \ac{ACE} on a 2.8GHz Intel Core i7 machine is 2\,[$\mu$s], which enables a robot to evaluate 500k poses per second, whereas the plane fit is 5 times slower with 200 points. Next, perhaps more importantly for spacecraft applications, the computational time of \ac{ACE} is constant. 
Thanks to the analytic formulation of \ac{ACE}, the computational time is always the same regardless of terrain patterns. This is not the case for numeric methods, which require more iterations to converge on complex terrain. We also evaluated the performance of ACE on the RAD750 CPU, which is used for the Curiosity and Mars 2020 rovers. While precise timing is difficult due to the specialized configuration of the flight software, the typical run time was 10--15\,[ms] with a 10\,[cm] resolution \ac{DEM}. This run time is sufficient for a collision checker to support the ambitious traversal plans of the \ac{M2020} mission. \subsection{Comparison with State-of-the-Art} \label{sec:result_conservatism} \begin{figure} \centering \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17a-1.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17a-2.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17a-3.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17a-4.eps} \caption{State-of-the-art path planner} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17b-1.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17b-2.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17b-3.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17b-4.eps} \caption{ACE-based path planner} \end{subfigure} \begin{subfigure}{\textwidth} \centering \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17c-1.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17c-2.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17c-3.eps} \includegraphics[width=0.24\textwidth]{graphics/ROB-18-0189_fig17c-4.eps} \caption{Ideal path planner} \end{subfigure} \caption{Comparison of safety assessment methods in 20\,[m] path planning with varying CFA levels. a) Conventional method that checks slope and step hazards with rover-sized inflation; b) Assessment with worst-case states from ACE bounds; c) Assessment with ground-truth states.} \label{fig:path_planning} \end{figure} \begin{figure} \centering \begin{subfigure}{0.48\textwidth} \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig18a.eps} \caption{Probability of success} \end{subfigure} \begin{subfigure}{0.48\textwidth} \includegraphics[width=0.9\textwidth]{graphics/ROB-18-0189_fig18b.eps} \caption{Average path inefficiency with standard error} \end{subfigure} \caption{Statistical result of path planning over 20 different maps at each CFA level. } \label{fig:path_planning_stats} \end{figure} Finally, we directly compared the performance of ACE-based path planning with the state-of-the-art in simulation. The point of comparison was a variant of GESTALT implemented in MATLAB\footnote{We did not use the flight implementation of GESTALT because porting a part of spacecraft flight software is difficult due to technical and security reasons.}, which computes slope, roughness, and step hazards from plane fitting, and creates a goodness map by inflating hazards by the rover radius. In addition, we also compared against the ``ideal'' path planner that uses the ground-truth collision check (no conservatism). Such a planner is computationally unacceptable for practical Mars rovers, but the comparison gives us insight into how close the ACE-based paths are to the strictly optimal paths. 
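All three planners share the same search skeleton and differ only in the collision checker that is plugged in. A minimal sketch of such a search is given below; it is our illustrative simplification of the setup detailed in the next paragraph (depth, edge length, check spacing, and the heading fan are parameters of that description, while the function names are ours).

\begin{verbatim}
import math

def plan(start, goal, is_safe, depth=5, edge=1.5, step=0.25,
         headings=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Depth-limited tree search: expand fixed-length edges in a few
    relative headings, reject any edge whose intermediate poses fail
    the pluggable collision check, and keep the leaf closest to goal."""
    best = (math.dist(start[:2], goal), [start])
    frontier = [[start]]
    for _ in range(depth):
        nxt = []
        for path in frontier:
            x, y, psi = path[-1]
            for dpsi in headings:
                h = psi + dpsi
                # Collision check at fixed intervals along the edge.
                poses = [(x + d * math.cos(h), y + d * math.sin(h), h)
                         for d in [step * i
                                   for i in range(1, int(edge / step) + 1)]]
                if all(is_safe(p) for p in poses):
                    new = path + [poses[-1]]
                    nxt.append(new)
                    d2g = math.dist(poses[-1][:2], goal)
                    if d2g < best[0]:
                        best = (d2g, new)
        frontier = nxt
    return best[1]

# Example: an always-safe checker yields a straight segment towards the goal;
# the real planner would replan repeatedly with an ACE-backed is_safe().
print(plan((0.0, 0.0, 0.0), (20.0, 0.0), lambda p: True)[-1])
\end{verbatim}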
The path planning algorithm is the same for all three planners: a depth-five tree search with a 1.5\,[m] edge at each depth was used for path selection, with collision checks every 0.25\,[m]. The only difference is the collision-check method. The test terrains are flat, 30-by-40 meters in size, and randomly populated with rocks at four different CFA levels (5, 10, 15, and 20\%). We created 20 terrains for each CFA level (80 terrains in total) and ran the three planners on each of them. A Curiosity-sized rover was commanded to go to a point 20\,[m] away. Two quantitative metrics were used for the comparison. The first is the path inefficiency, defined as the excess of the generated path length over the straight-line distance, expressed as a fraction of the straight-line distance. Intuitively, over-conservative collision checking should increase path inefficiency because paths heading straight towards the goal are more likely to be incorrectly judged unsafe, resulting in a highly winding path. The second metric is the success rate, defined as the number of runs in which the planner reached the goal divided by the total number of runs. Excessive conservatism may result in a failure to reach the goal because no feasible path forward is found. \autoref{fig:path_planning} shows representative examples of paths generated by the three methods; the top, middle, and bottom rows correspond to the state-of-the-art, ACE-based, and ideal path planners, respectively. As expected, the state-of-the-art paths were the most winding (greatest path inefficiency) while the ideal paths were the straightest. Notably, the state-of-the-art approach failed to find a path to the goal at 15 and 20\% \ac{CFA}, while the ACE-based planner was able to find a way to the goal. The ACE-based planner was more capable of finding paths through cluttered environments mainly because it allows straddling over rocks when sufficient clearance is available. However, the ACE-based paths are less efficient than the ideal ones. This result is again expected, because ACE conservatively approximates the rover state for the sake of significantly reduced computation (as reported in Section \ref{sec:runtime}) compared to the exact kinematic solution. \autoref{fig:path_planning_stats} shows the statistical comparison over the 20 randomly generated maps at each CFA level in terms of the two quantitative metrics. According to \autoref{fig:path_planning_stats}(a), the ACE-based planner was capable of driving reliably ($\ge 95$\% success rate) up to 15\% CFA, but its success rate dropped significantly at 20\% CFA. In comparison, the state-of-the-art planner achieved only a 40\% success rate even on rather benign 10\% CFA terrains, while the ideal planner always found a path to the goal at all tested CFA levels. Next, the path inefficiency results in \autoref{fig:path_planning_stats}(b) clearly show the difference in algorithmic conservatism. For example, at 10\% CFA the state-of-the-art planner resulted in 33\% path inefficiency, while it was nearly zero for both the ACE-based and the ideal planners. At 15\% CFA, the ACE-based planner resulted in 12\% path inefficiency while that of the ideal planner was still nearly zero (the path inefficiency of the state-of-the-art planner was not computed for 15 and 20\% CFA because its success rate there was zero). Finally, at 20\% CFA, the path inefficiency of the ACE-based planner rose to 33\% while that of the ideal planner was 3\%.
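For concreteness, the two metrics used above can be computed as in the following sketch; the winding example path and the outcome list are made-up inputs, not data from the experiments.

\begin{verbatim}
import numpy as np

def path_inefficiency(path_xy):
    """Fractional excess of the traveled length over the start-to-goal
    straight-line distance (0 for a perfectly straight path)."""
    p = np.asarray(path_xy, dtype=float)
    length = np.linalg.norm(np.diff(p, axis=0), axis=1).sum()
    straight = np.linalg.norm(p[-1] - p[0])
    return length / straight - 1.0

def success_rate(outcomes):
    """Fraction of runs in which the planner reached the goal."""
    return sum(outcomes) / len(outcomes)

# Example: a gently winding 20 m drive and 20 run outcomes.
path = [(x, 0.5 * np.sin(x / 3.0)) for x in np.linspace(0.0, 20.0, 81)]
print(f"path inefficiency: {path_inefficiency(path):.3f}")
print(f"success rate:      {success_rate([True] * 19 + [False]):.2f}")
\end{verbatim}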
The CFA of the Mars 2020 rover landing site (Jezero Crater) is typically less than 15\%, and a route can almost surely be found around the fragmented spots with $\ge 15$\% CFA. These results therefore indicate that ACE allows the Mars 2020 rover to be driven autonomously, with confidence, for the majority of its traverses.

\section{Conclusions}
\label{sec:conclusion}
In this paper, we presented an approximate kinematics solver that can quickly, albeit conservatively, evaluate the state bounds of articulated suspension systems. The proposed method provides a tractable way of determining path safety with the limited computational resources available to planetary rovers. \ac{ACE} avoids expensive iterative operations by solving only for the worst-case rover-terrain configurations. The algorithm was validated in simulations and on actual rover testbeds, giving satisfactory results in all experiments, including a 42-day field test campaign. The experimental results indicate that the ACE-based planner successfully navigates the rover in environments of similar complexity to the planned landing site of the Mars 2020 mission. One remaining algorithmic limitation, however, is over-conservatism in the estimated state bounds; the conservatism grows in particular on highly undulating terrain, and excessive conservatism may result in path inefficiency or a failure to find a path to the goal. Mitigating this extra conservatism is left to future work. Although the algorithm is primarily designed for planetary rover applications, the work is applicable to other domains where fast state evaluation is needed but full fidelity is not required; examples include trajectory planning for manipulators and path planning for ground, aerial, and maritime vehicles. The importance of this method lies in how it incorporates environmental uncertainty into the planning problem without redundant computation or unsafe approximation: with proper uncertainty bounds, the robot state is guaranteed to remain within well-defined, safe intervals. The \ac{ACE} algorithm was successfully integrated with the surface navigation software of the \ac{M2020} rover mission, and will enable faster and safer autonomous traverses over more challenging terrain on the Red Planet.